The pros and cons of patenting

Honor Powrie reveals her love-hate relationship with patents

For any company or business, it’s important to recognize and protect intellectual property (IP). In the case of novel inventions, which can include machines, processes and even medicines, a patent offers IP protection and lets firms control how those inventions are used. Patents, which in most countries can be granted for up to 20 years, give the owner exclusive rights so that others can’t directly copy the creation. A patent essentially prevents others from making, using or selling your invention.

But there are more reasons for holding a patent than IP protection alone. In particular, patents go some way to protecting the investment that may have been necessary to generate the IP in the first place, such as the cost of R&D facilities, materials, labour and expertise. Those factors need to be considered when you’re deciding if patenting is the right approach or not.

Patents are tangible assets that can be sold to other businesses or licensed for royalties to provide your company with regular income.

Patents are in effect a form of currency. Counting as tangible assets that add to the overall value of a company, they can be sold to other businesses or licensed for royalties to provide regular income. Some companies, in fact, build up or acquire significant patent portfolios, which can be used for bargaining with competitors, potentially leading to cross-licensing agreements where both parties agree to use each other’s technology.

Patents also say something about the competitive edge of a company, by demonstrating technical expertise and market position through the control of a specific technology. Essentially, patents give credibility to a company’s claims of its technical know-how: a patent shows investors that a firm has a unique, protected asset, making the business more appealing and attractive to further investment.

However, it’s not all one-way traffic and there are obligations on the part of the patentee. Firstly, a patent holder has to reveal to the world exactly how their invention works. Governments favour this kind of public disclosure as it encourages broader participation in innovation. The downside is that whilst your competitors cannot directly copy you, they can enhance and improve upon your invention, provided those changes aren’t covered by the original patent.

It’s also worth bearing in mind that a patent holder is responsible for patent enforcement and any ensuing litigation; a patent office will not do this for you. So you’ll have to monitor what your competitors are up to and decide on what course of action to take if you suspect your patent’s been infringed. Trouble is, it can sometimes be hard to prove or disprove an infringement – and getting the lawyers in can be expensive, even if you win.

Money talks

Probably the biggest consideration of all is the cost and time involved in making a patent application. Filing a patent requires a rigorous understanding of “prior art” – the existing body of relevant knowledge on which novelty is judged. You’ll therefore need to do a lot of work finding out about relevant established patents, any published research and journal articles, along with products or processes publicly disclosed before the patent’s filing date.

Before it can be filed with a patent office, a patent needs to be written as a legal description, which includes an abstract, background, detailed specifications, drawings and the claims of the invention. Once filed, an examiner – an expert in the relevant technical field – will be assigned to assess the application, and must be satisfied that the invention is both novel and “non-obvious” before a patent is granted.

Even when the invention is judged to be technically novel, to be non-obvious it must also involve an “inventive step” that would not be apparent to a person with “ordinary skill” in that technical field at the time of filing. The assessment phase can result in significant to-ing and fro-ing between the examiner and the applicant to determine exactly what is patentable. If the examiner is not satisfied, the application will be refused.

Patents are only ever granted in a particular country or region, such as Europe, and the application process has to be repeated for each new place (although the information required is usually pretty similar). Translations may be required for some countries, there are fees for each application and, even if a patent is granted, you have to pay an additional annual bill to maintain the patent (which in the UK rises year on year).

Patents can take years to process, which is why many companies pay specialized firms to support their applications.

Patent applications, in other words, can be expensive and can take years to process. That’s why many companies pay specialized firms to support their patent applications. Those firms employ patent attorneys – legal experts with a technical background who help inventors and companies manage their IP rights by drafting patent applications, navigating patent office procedures and advising on IP strategy. Attorneys can also represent their clients in disputes or licensing deals, thereby acting as a crucial bridge between science/engineering and law.

Perspiration and aspiration

It’s impossible to write about patents without mentioning the impact that Thomas Edison had as an inventor. He became the world’s most prolific inventor, with a staggering 1093 US patents granted in his lifetime. His record remained unsurpassed until 2003, when it was overtaken by the Japanese inventor Shunpei Yamazaki and then, in 2008, by the Australian “patent titan” Kia Silverbrook.

Edison clearly saw there was a lot of value in patents, but how did he achieve so much? His approach was grounded in systematic problem solving, which he accomplished through his Menlo Park lab in New Jersey. Dedicated to technological development and invention, it was effectively the world’s first corporate R&D lab. And whilst Edison’s name appeared on all the patents, they were often primarily the work of his staff; he was effectively being credited for inventions made by his employees.

I have a love-hate relationship with patents or at least the process of obtaining them.

I will be honest; I have a love-hate relationship with patents or at least the process of obtaining them. As a scientist or engineer, it’s easy to think all the hard work is getting an invention over the line, slogging your guts out in the lab. But applying for a patent can be just as expensive and time-consuming, which is why you need to be clear on what and when to patent. Even Edison grew tired of being hailed a genius, stating that his success was “1% inspiration and 99% perspiration”.

Still, without the sweat of patents, your success might be all but 99% aspiration.

Practical impurity analysis for biogas producers

A centralised sampling and GC‑ICP‑MS method supports reliable siloxane and sulfur measurements

Biogas is a renewable energy source formed when bacteria break down organic materials such as food waste, plant matter, and landfill waste in an oxygen‑free (anaerobic) process. It contains methane and carbon dioxide, along with trace amounts of impurities. Because of its high methane content, biogas can be used to generate electricity and heat, or to power vehicles. It can also be upgraded to almost pure methane, known as biomethane, which can directly replace natural fossil gas.

Strict rules apply to the amount of impurities allowed in biogas and biomethane, as these contaminants can damage engines, turbines, and catalysts during upgrading or combustion. EN 16723 is the European standard that sets maximum allowable levels of siloxanes and sulfur‑containing compounds for biomethane injected into the natural gas grid or used as vehicle fuel. These limits are extremely low, meaning highly sensitive analytical techniques are required. However, most biogas plants do not have the advanced equipment needed to measure these impurities accurately.

Researchers from the Paul Scherrer Institute, Switzerland: Julian Indlekofer (left) and Ayush Agarwal (right), with the Liquid Quench Sampling System

The researchers developed a new, simpler method to sample and analyse biogas using GC‑ICP‑MS. Gas chromatography (GC) separates chemical compounds in a gas mixture based on how quickly they travel through a column. Inductively coupled plasma mass spectrometry (ICP‑MS) then detects the elements within those compounds at very low concentrations. Crucially, this combined method can measure both siloxanes and sulfur compounds simultaneously. It avoids the matrix effects that can limit other detectors and cause biased or ambiguous results. It also achieves the very low detection limits required by EN 16723.

The sampling approach and centralised measurement enable biogas plants to meet regulatory standards using an efficient, less complex and more cost‑effective method with fewer errors. Overall, this research provides a practical, high‑accuracy tool that makes reliable biogas impurity monitoring accessible to plants of all sizes, strengthening biomethane quality, protecting infrastructure and accelerating the transition to cleaner energy systems.

Read the full article

Sampling to analysis: simultaneous quantification of siloxanes and sulfur compounds in biogas for cleaner energy

Ayush Agarwal et al 2026 Prog. Energy 8 015001

Do you want to learn more about this topic?

Household biogas technology in the cold climate of low-income countries: a review of sustainable technologies for accelerating biogas generation Sunil Prasad Lohani et al. (2024)

Cavity-based X-ray laser delivers high-quality pulses

Electron undulator amplifies X-ray beam

Physicists in Germany have created a new type of X-ray laser that uses a resonator cavity to improve the output of a conventional X-ray free electron laser (XFEL). Their proof-of-concept design delivers X-ray pulses that are more monochromatic and coherent than those from existing XFELs.

In recent decades, XFELs have delivered pulses of monochromatic and coherent X-rays for a wide range of science including physics, chemistry, biology and materials science.

Despite their name, XFELs do not work like conventional lasers. In particular, there is no gain medium or resonator cavity. Instead, XFELs rely on the fact that when a free electron is accelerated, it will emit electromagnetic radiation. In an XFEL, pulses of high-energy electrons are sent through an undulator, which deflects the electrons back and forth. These wiggling electrons radiate X-rays at a specific energy. As the X-rays and electrons travel along the undulator, they interact in such a way that the emitted X-ray pulse has a high degree of coherence.
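
That “specific energy” is set by the standard on-axis undulator resonance condition, λ = (λu/2γ²)(1 + K²/2). The following minimal sketch puts numbers into it – the formula is textbook accelerator physics, but the electron energy, undulator period and strength parameter are illustrative assumptions, not the European XFEL’s actual operating point:

```python
# On-axis undulator resonance: lambda = (lambda_u / 2 gamma^2) * (1 + K^2 / 2)
# Illustrative, assumed parameters - not the European XFEL's actual settings.
E_e_eV = 17.5e9     # electron beam energy in eV (assumed)
lambda_u = 0.040    # undulator period in m (assumed)
K = 2.0             # dimensionless undulator strength (assumed)

gamma = E_e_eV / 0.511e6                          # Lorentz factor (m_e c^2 = 0.511 MeV)
lam = lambda_u / (2 * gamma**2) * (1 + K**2 / 2)  # resonant X-ray wavelength in m
E_ph_keV = 1239.84 / (lam * 1e9) / 1e3            # photon energy: E[eV] = 1239.84 / lambda[nm]

print(f"gamma ~ {gamma:.0f}")
print(f"wavelength ~ {lam * 1e10:.2f} angstroms, photon energy ~ {E_ph_keV:.1f} keV")
```

For these assumed numbers the resonant wavelength comes out at about 0.5 Å (roughly 24 keV), which is why multi-GeV electron beams are needed to reach hard X-rays.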

While these XFELs have proven very useful, they do not deliver radiation that is as monochromatic or as coherent as radiation from conventional lasers. One reason why conventional lasers perform better is that the radiation is reflected back and forth many times in a mirrored cavity that is tuned to resonate at a specific frequency – whereas XFEL radiation only makes one pass through an undulator.

Practical X-ray cavities, however, are difficult to create. This is because X-rays penetrate deep into materials, where they are usually absorbed – making reflection with conventional mirrors impossible.

Crucial overlap

Now, researchers working at the European XFEL at DESY in Germany have created a proof-of-concept hybrid system that places an undulator within a mirrored resonator cavity. X-ray pulses that are created in the undulator are directed at a downstream mirror and reflected back to a mirror upstream of the undulator. The X-ray pulses are then reflected back downstream through the undulator. Crucially, a returning X-ray pulse overlaps with a subsequent electron pulse in the undulator, amplifying the X-ray pulse. As a result, the X-ray pulses circulating within the cavity quickly become more monochromatic and more coherent than pulses created by an undulator alone.

The team solved the mirror challenge by using diamond crystals that achieve Bragg reflection of X-rays at a specific frequency. These are used at either end of the cavity in conjunction with Kirkpatrick–Baez mirrors, which help focus the reflected X-rays back into the cavity.

Some of the X-ray radiation circulating in the cavity is allowed to escape downstream, providing a beam of monochromatic and coherent X-ray pulses. The team calls its system an X-ray free-electron laser oscillator (XFELO); its cavity is about 66 m long.
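
That 66 m is a timing constraint as much as an engineering one: for a returning pulse to meet a later electron bunch in the undulator, the round-trip time must match the bunch spacing. A quick back-of-envelope check (my own arithmetic, not a figure from the paper):

```python
C = 299_792_458.0                 # speed of light in m/s

L_cavity = 66.0                   # cavity length quoted in the article, in m
t_round_trip = 2 * L_cavity / C   # down the cavity and back again
f_matched = 1.0 / t_round_trip    # bunch rate the cavity is matched to

print(f"round-trip time ~ {t_round_trip * 1e9:.0f} ns")        # ~440 ns
print(f"matched repetition rate ~ {f_matched / 1e6:.2f} MHz")  # ~2.27 MHz
```

The result, roughly 2.27 MHz, is half of the 4.5 MHz maximum bunch rate within European XFEL’s pulse trains, so a circulating pulse would overlap every other bunch – though the actual synchronization scheme is set by the machine’s timing system, not this simple estimate.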

Narrow frequency range

DESY accelerator scientist Patrick Rauer explains, “With every round trip, the noise in the X-ray pulse gets less and the concentrated light more defined”. Rauer pioneered the design of the cavity in his PhD work and is now the DESY lead on its implementation. “It gets more stable and you start to see this single, clear frequency – this spike.” Indeed, the frequency width of XFELO X-ray pulses is about 1% that of pulses created by the undulators alone.

Ensuring the overlap of electron and X-ray pulses within the cavity was also a significant challenge. This required a high degree of stability within the accelerator that provides electron pulses to the XFELO. “It took years to bring the accelerator to that state, which is now unique in the world of high-repetition-rate accelerators,” explains Rauer.

Team member Harald Sinn says, “The successful demonstration shows that the resonator principle is practical to implement”. Sinn is head of European XFEL’s instrumentation department and he adds, “In comparison with methods used up to now, it delivers X-ray pulses with a very narrow wavelength as well as a much higher stability and coherence.”

The team will now work towards improving the stability of the XFELO so that in future it can be used for experiments by European XFEL’s research community.

XFELO is described in Nature.

The physics of an unethical daycare model that uses illness to maximize profits

The arXiv paper is "not intended as a recipe for unethical daycare centre"

When I had two kids going through daycare, or nursery as we call it in the UK, every day seemed like a constant fight with germs and illness. After all, at such a young age kids still have a developing immune system and are not exactly hot on personal hygiene.

That same dilemma faced mathematician Lauren Smith from the University of Auckland. She has two children at a “wonderful daycare centre” who often fall ill. As many parents juggling work and parenting will understand, Smith is frequently faced with the issue of whether her kids are well enough to attend daycare.

Smith then thought about how an unethical daycare centre might take advantage of this to maximize its profits – under the assumption that if not enough children attend (while their parents still pay), staff are sent home without pay, and that staff don’t get sick pay themselves.

“It occurred to me that a sick kid attending daycare could actually be financially beneficial to the centre, while clearly being a detriment to the wellbeing of the other children as well as the staff and the broader community,” Smith told Physics World.

For a hypothetical daycare centre that is solely focused on making as much money as possible, Smith realized that full attendance of sick children is not optimal financially as this requires maximal staffing at all times, whereas zero attendance of sick children does not give an opportunity for the disease to spread such that other children are then sent home.

But in between these two extremes, Smith thought there should be an optimal attendance rate so that the disease is still able to spread and some children – and staff – are sent home. “As a mathematician I knew I had the tools to find it,” adds Smith.

Model behaviour

Using the so-called susceptible-infected-recovered (SIR) model with 100 children, a teacher-to-child ratio of 1:6 and a recovery time from illness of 10 days, Smith found that the more infectious the disease, the lower the optimal attendance rate for sick children – and so the more savings the unethical daycare centre can make.

In other words, the more infectious a disease, the fewer ill children are required to attend to spread it around, so the centre can keep more of them – and, importantly, staff – at home while still making sure the disease reaches non-infected kids.

For a measles outbreak with a basic reproduction number of 12–18, for example, the model resulted in a potential staff saving of 90 working days, whereas for seasonal flu, with a basic reproduction number of 1.2–1.3, the potential staff saving is 4.4 days.
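
To see where that optimum comes from, here is a minimal sketch of the kind of calculation involved. It is not Smith’s actual code: the 100 children, 1:6 staffing ratio and 10-day recovery time follow the article, but the deterministic daily SIR update, the simulation horizon and the staff-days “savings” bookkeeping are my own assumptions:

```python
import numpy as np

def staff_days_saved(p_sick_attend, R0, N=100, ratio=6,
                     recovery_days=10, horizon_days=120):
    """Toy SIR model of a daycare where a fraction p_sick_attend of
    infected children still attend. Staff not needed on a given day are
    sent home unpaid; those unworked staff-days count as 'savings'."""
    gamma = 1.0 / recovery_days        # recovery rate per day
    beta = R0 * gamma                  # transmission rate per day
    S, I, R = N - 1.0, 1.0, 0.0        # one infected child seeds the outbreak
    full_staff = N / ratio             # staff needed at full attendance
    saved = 0.0
    for _ in range(horizon_days):
        attending = S + R + p_sick_attend * I
        saved += full_staff - attending / ratio
        # only attending sick children can infect the others (daily Euler step)
        new_inf = min(beta * S * (p_sick_attend * I) / N, S)
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    return saved

for R0 in (1.25, 15.0):                # seasonal flu vs measles
    p = np.linspace(0.0, 1.0, 101)
    savings = [staff_days_saved(pi, R0) for pi in p]
    best = int(np.argmax(savings))
    print(f"R0 = {R0:5.2f}: optimal sick-attendance rate ~ {p[best]:.2f}, "
          f"~ {savings[best]:.0f} staff-days saved")
```

Sweeping the attendance rate reproduces the qualitative result: full attendance yields no savings (everyone turns up, so full staffing is needed), zero attendance stops the outbreak, and the optimum sits in between – at a lower attendance rate the more infectious the disease.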

Smith writes in the paper that the work is “not intended as a recipe for unethical daycare centre” but is rather to illustrate the financial incentive that exists for daycare centres to propagate diseases among children, which would lead to more infections of at-risk populations in the wider community.

“I hope that as well as being an interesting topic, it can show that mathematics itself is interesting and is useful for describing the real world,” adds Smith.

Saving the Titanic: the science of icebergs and unsinkable ships

Two studies published this week have an unexpected (if tenuous) link to the 20th century’s most famous maritime disaster

When the Titanic was built, her owners famously described her as “unsinkable”. A few days into her maiden voyage, an iceberg in the North Atlantic proved them wrong. But what if we could make ships that really are unsinkable? And what if we could predict exactly how long a hazardous iceberg will last before it melts?

These are the premises of two separate papers published independently this week by Chunlei Guo and colleagues at the University of Rochester, and by Daisuke Noto and Hugo N Ulloa of the University of Pennsylvania, both in the US. The Rochester group’s paper, which appears in Advanced Functional Materials, describes how applying a superhydrophobic coating to an open-ended metallic tube can make it literally unsinkable – a claim supported by extensive tests in a water tank. Noto and Ulloa’s research, which they describe in Science Advances, likewise involved a water tank. Theirs, however, was equipped with cameras, lasers and thermochromic liquid crystals that enabled them to track a freely floating miniature iceberg as it melted.

Imagine a spherical iceberg

Each study is surprising in its own way. For the iceberg paper, arguably the biggest surprise is that no-one had ever done such experiments before. After all, water and ice are readily available. Fancy tanks, lasers, cameras and temperature-sensitive crystals are less so, yet surely someone, somewhere, must have stuck some ice in a tank and monitored what happened to it?

Noto and Ulloa’s answer is, in effect, no. “Despite the relevance of melting of floating ice in calm and energetic environments…most experimental and numerical efforts to examine this process, even to date, have either fixed or tightly constrained the position and posture of ice,” they write. “Consequently, the relationships between ice dissolution rate and background fluid flow conditions inferred from these studies are meaningful only when a one-way interaction, from the liquid to the solid phase, dominates the melting dynamics.”

The problem, they continue, is that eliminating these approximations “introduces a significant technical challenge for both laboratory experiments and numerical simulations” thanks to a slew of interactions that would otherwise get swept under the rug. These interactions, in turn, lead to complex dynamics such as drifting, spinning and even flipping that must be incorporated into the model. Consequently, they write, “fundamental questions persist: ‘How long does an ice body last?’”

  • Tracking a melting iceberg: this side view of the experiment shows fluid motions as moving particles and temperature distributions as the colours of thermochromic liquid-crystal particles. The melt plume (dark) formed beneath the floating ice plunges down, penetrating the thermally stratified layer (red: cold, blue: warm). (Courtesy: Noto and Ulloa, Science Advances 12 5, DOI: 10.1126/sciadv.ady352)

To answer this question, Noto and Ulloa used their water-tank observations (see video) to develop a model that incorporates the thermodynamics of ice melting and mass balance conservation. Based on this model, they correctly predict both the melting rate and the lifespan of freely floating ice under self-driven convective flows that arise from interactions between the ice and the calm, fresh water surrounding it. Though the behaviour of ice in tempestuous salty seas is, they write, “beyond our scope”, their model nevertheless provides a useful upper bound on iceberg longevity, with applications for climate modelling as well as (presumably) shipping forecasts for otherwise-doomed ocean liners.
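
For a sense of the magnitudes involved, here is a deliberately crude energy-balance estimate for a melting ice sphere. This is emphatically not Noto and Ulloa’s model – their point is precisely that the heat transfer is coupled to the ice’s drift, spin and posture – and the fixed heat-transfer coefficient and temperature difference below are assumed round numbers:

```python
RHO_ICE = 917.0     # density of ice in kg/m^3
L_FUSION = 3.34e5   # latent heat of fusion of ice in J/kg

def lifespan_days(r0, h=100.0, dT=2.0):
    """Lifespan of an ice sphere of initial radius r0 (in m), assuming all
    incoming heat h*A*dT melts ice (h in W m^-2 K^-1, dT in K, both assumed).
    Balance: rho * L * 4*pi*r^2 * |dr/dt| = h * 4*pi*r^2 * dT, so the
    radius shrinks at the constant rate h*dT / (rho*L)."""
    shrink_rate = h * dT / (RHO_ICE * L_FUSION)   # in m/s
    return r0 / shrink_rate / 86400.0             # convert seconds to days

for r0 in (0.05, 0.5, 5.0):   # from tank-scale ice to a small berg
    print(f"initial radius {r0:4.2f} m -> lifespan ~ {lifespan_days(r0):.1f} days")
```

With these assumptions a 5 cm “iceberg” lasts under a day while a 5 m one survives for months; capturing how drifting, spinning and flipping modify such estimates is exactly what the new experiments and model add.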

The tube that would not sink

In the unsinkable tube study, the big surprise is that a metal tube, divided in the middle but open at both ends, can continue to float after being submerged, corroded with salt, tossed about on a turbulent sea and peppered with holes. How is that even possible?

“The inside of the tube is superhydrophobic, so water can’t enter and wet the walls,” Guo explains. “As a result, air remains trapped inside, providing buoyancy.”

Importantly, this buoyancy persists even if the tube is damaged. “When the tube is punctured, you can think of it as becoming two, three, or more smaller sections,” Guo tells Physics World. “Each section will work in the same way of preventing water from entering inside, so no matter how many holes you punch into it, the tube will remain afloat.”

So, is there anything that could make these superhydrophobic structures sink? “I can’t think of any realistic real-world challenges more severe than what we have put them through experimentally,” he says.

We aren’t in unsinkable-ship territory yet: the largest structure in the Rochester study was a decidedly un-Titanic-like raft a few centimetres across. But Guo doesn’t discount the possibility. He points out that the tubes are made from ordinary aluminium, with a simple fabrication process. “If suitable applications call for it, I believe [human-scale versions] could become a reality within a decade,” he concludes.

Scientists quantify behaviour of micro- and nanoplastics in city environments

New insight into the behaviour of atmospheric plastics could be beneficial for investigating their effects on climate, ecosystems and human health

Abundance and composition of atmospheric plastics

Plastic has become a pollutant of global concern over the last couple of decades: it is widespread in society, often not disposed of effectively, and generates both microplastics (1 µm to 5 mm in size) and nanoplastics (smaller than 1 µm) that have infiltrated many ecosystems – they have even been found inside humans and animals.

Over time, bulk plastics break down into micro- and nanoplastics through fragmentation mechanisms that create much smaller particles with a range of shapes and sizes. Their small size has become a problem because they are increasingly finding their way into waterways, polluting the environment and urban areas, and are now even being transported to remote polar and high-altitude regions.

This poses potential health risks around the world. While the behaviour of micro- and nanoplastics in the atmosphere is poorly understood, it’s thought that they are transported by transcontinental and transoceanic winds, spreading plastic through the global carbon cycle.

However, the lack of data on the emission, distribution and deposition of atmospheric micro- and nanoplastic particles makes it difficult to definitively say how they are transported around the world. It is also challenging to quantify their behaviour, because plastic particles can have a range of densities, sizes and shapes that undergo physical changes in clouds, all of which affect how they travel.

A global team of researchers has developed a new semi-automated microanalytical method that can quantify atmospheric plastic particles present in air dustfall, rain, snow and dust resuspension. The research was performed across two Chinese megacities, Guangzhou and Xi’an.

“As atmospheric scientists, we noticed that microplastics in the atmosphere have been the least reported among all environmental compartments in the Earth system due to limitations in detection methods, because atmospheric particles are smaller and more complex to analyse,” explains Yu Huang, from the Institute of Earth Environment of the Chinese Academy of Sciences (IEECAS) and one of the paper’s lead authors. “We therefore set out to develop a reliable detection technique to determine whether microplastics are present in the atmosphere, and if so, in what quantities.”

Quantitative detection

For this new approach, the researchers employed a computer-controlled scanning electron microscopy (CCSEM) system equipped with energy-dispersive X-ray spectroscopy to reduce human bias in the measurements (which is an issue in manual inspections). They located and measured individual micro- and nanoplastic particles – enabling their concentration and physicochemical characteristics to be determined – in aerosols, dry and wet depositions, and resuspended road dust.

“We believe the key contribution of this work lies in the development of a semi‑automated method that identifies the atmosphere as a significant reservoir of microplastics. By avoiding the human bias inherent in visual inspection, our approach provides robust quantitative data,” says Huang. “Importantly, we found that these microplastics often coexist with other atmospheric particles, such as mineral dust and soot – a mixing state that could enhance their potential impacts on climate and the environment.”

The method could detect and quantify plastic particles as small as 200 nm, and revealed airborne concentrations of 1.8 × 10⁵ microplastics/m³ and 4.2 × 10⁴ nanoplastics/m³ in Guangzhou, and 1.4 × 10⁵ microplastics/m³ and 3.0 × 10⁴ nanoplastics/m³ in Xi’an. The corresponding microplastic and nanoplastic fluxes are two to six orders of magnitude higher than those reported previously via visual methods.

The team also found that the deposition samples were more heterogeneously mixed with other particle types (such as dust and other pollution particles) than aerosols and resuspension samples, which showed that particles tend to aggregate in the atmosphere before being removed during atmospheric transport.

The study revealed transport insights that could be beneficial for investigating the climate, ecosystem and human health impacts of plastic particles at all levels. The researchers are now advancing their method in two key directions.

“First, we are refining sampling and CCSEM‑based analytical strategies to detect mixed states between microplastics and biological or water‑soluble components, which remain invisible with current techniques. Understanding these interactions is essential for accurately assessing microplastics’ climate and health effects,” Huang tells Physics World. “Second, we are integrating CCSEM with Raman analysis to not only quantify abundance but also identify polymer types. This dual approach will generate vital evidence to support environmental policy decisions.”

The research was published in Science Advances.

Michele Dougherty steps aside as president of the Institute of Physics

IOP president-elect Paul Howarth will take on Dougherty’s responsibilities with immediate effect

The space physicist Michele Dougherty has stepped aside as president of the Institute of Physics, which publishes Physics World. The move was taken to avoid any conflicts of interest given her position as executive chair of the Science and Technology Facilities Council (STFC) – one of the main funders of physics research in the UK.

Dougherty, who is based at Imperial College London, spent two years as IOP president-elect from October 2023 before becoming president in October 2025. Dougherty was appointed executive chair of the STFC in January 2025 and in July that year was also announced as the next Astronomer Royal – the first woman to hold the position.

The changes at the IOP come in the wake of UK Research and Innovation (UKRI) stating last month that it will be adjusting how it allocates government funding for scientific research and infrastructure. Spending on curiosity-driven research will remain flat from 2026 to 2030, with UKRI prioritising funding in three key areas or “buckets”.

The three buckets are: curiosity-driven research, which will be the largest; strategic government and societal priorities; and supporting innovative companies. There will also be a fourth “cross-cutting” bucket with funding for infrastructure, facilities and talent. In the four years to 2030, UKRI’s budget will be £38.6bn.

While the detailed implications of the funding changes are still to be worked out, the IOP says its “top priority” is understanding and responding to them. With the STFC being one of nine research councils within UKRI, Dougherty is stepping aside as IOP president to ensure the IOP can play what it says is “a leadership role in advocating for physics without any conflict of interest”.

In her role as STFC executive chair, Dougherty yesterday wrote to the UK’s particle physics, astronomy and nuclear physics community, asking researchers to identify by March how their projects would respond to flat cash as well as reductions of 20%, 40% and 60% – and to “identify the funding point at which the project becomes non-viable”. The letter says that a “similar process” will happen for facilities and labs.

In her letter, Dougherty says that the UK’s science minister Lord Vallance and UKRI chief executive Ian Chapman want to protect curiosity-driven research, which they say is vital, and grow it “as the economy allows”. However, she adds, “the STFC will need to focus our efforts on a more concentrated set of priorities, funded at a level that can be maintained over time”.

Tom Grinyer, chief executive officer of the IOP, says that the IOP is “fully focused on ensuring physics is heard clearly as these serious decisions are shaped”. He says the IOP is “gathering insight from across the physics community and engaging closely with government, UKRI and the research councils so that we can represent the sector with authority and evidence”.

Grinyer warns, however, that UKRI’s shift in funding priorities and the subsequent STFC funding cuts will have “severe consequences” for physics. “The promised investment in quantum, AI, semiconductors and green technologies is welcome but these strengths depend on a stable research ecosystem,” he says.

“I want to thank Michele for her leadership, and we look forward to working constructively with her in her capacity at STFC as this important period for physics unfolds,” adds Grinyer.

Next steps

The nuclear physicist Paul Howarth, who has been IOP president-elect since September, will now take on Dougherty’s responsibilities – as prescribed by the IOP’s charter – with immediate effect, with the IOP Council discussing its next steps at its February 2026 meeting.

With a PhD in nuclear physics, Howarth has had a long career in the nuclear sector working on the European Fusion Programme and at British Nuclear Fuels, as well as co-founding the Dalton Nuclear Institute at the University of Manchester.

He was a non-executive board director of the National Physical Laboratory and until his retirement earlier this year was chief executive officer of the National Nuclear Laboratory.

In response to the STFC letter, Howarth says that the projected cuts “are a devastating blow for the foundations of UK physics”.

“Physics isn’t a luxury we can afford to throw away through confusion,” says Howarth. “We urge the government to rethink these cuts, listen to the physics community, and deliver a 10-year strategy to secure physics for the future.”

AI-based tool improves the quality of radiation therapy plans for cancer treatment

Medical physicist Todd McNutt is our podcast guest

This episode of the Physics World Weekly podcast features Todd McNutt, who is a medical physicist at Johns Hopkins University and the founder of Oncospace. In a conversation with Physics World’s Tami Freeman, McNutt explains how an artificial intelligence-based tool called Plan AI can help improve the quality of radiation therapy plans for cancer treatments.

As well as discussing the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres, they examine its evolution from an idea developed by an academic collaboration to a clinical product offered today by Sun Nuclear, a US manufacturer of radiation equipment and software.

This podcast is sponsored by Sun Nuclear.

The Future Circular Collider is unduly risky – CERN needs a ‘Plan B’

Michael Riordan calls on CERN to face financial and geopolitical reality

Last November I visited the CERN particle-physics lab near Geneva to attend the 4th International Symposium on the History of Particle Physics, which focused on advances in particle physics during the 1980s and 1990s. As usual, it was a refreshing, intellectually invigorating visit. I’m always inspired by the great diversity of scientists at CERN – complemented this time by historians, philosophers and other scholars of science.

As noted by historian John Krige in his opening keynote address, “CERN is a European laboratory with a global footprint. Yet for all its success it now faces a turning point.” During the period under examination at the symposium, CERN essentially achieved the “world laboratory” status that various leaders of particle physics had dreamt of for decades.

By building the Large Electron Positron (LEP) collider and then the Large Hadron Collider (LHC), the latter with contributions from Canada, China, India, Japan, Russia, the US and other non-European nations, CERN has attracted researchers from six continents. And as the Cold War ended in 1989–1991, two prescient CERN staff members developed the World Wide Web, helping knit this sprawling international scientific community together and enable extensive global collaboration.

The LHC was funded and built during a unique period of growing globalization and democratization that emerged in the wake of the Cold War’s end. After the US terminated the Superconducting Super Collider in 1993, CERN was the only game in town if one wanted to pursue particle physics at the multi-TeV energy frontier. And many particle physicists wanted to be involved in the search for the Higgs boson, which by the mid-1990s looked as if it should show up at accessible LHC energies.

Having discovered this long-sought particle at the LHC in 2012, CERN is now contemplating an ambitious construction project, the Future Circular Collider (FCC). Over three times larger than the LHC, it would study this all-important, mass-generating boson in greater detail using an electron–positron collider dubbed FCC-ee, estimated to cost $18bn and start operations by 2050.

Later in the century, the FCC-hh, a proton–proton collider, would go in the same tunnel to see what, if anything, may lie at much higher energies. That collider, the cost of which is currently educated guesswork, would not come online until the mid 2070s.

But the steadily worsening geopolitics of a fragmenting world order could make funding and building these colliders dicey affairs. After Russia’s expulsion from CERN, little in the way of its contributions can be expected. Chinese physicists had hoped to build an equivalent collider, but those plans seem to have been put on the backburner for now.

And the “America First” political stance of the current US administration is hardly conducive to the multibillion-dollar contribution likely required from what is today the world’s richest (albeit debt-laden) nation. The ongoing collapse of the rules-based world order was recently put into stark relief by the US invasion of Venezuela and abduction of its president Nicolás Maduro, followed by Donald Trump’s menacing rhetoric over Greenland.

While these shocking events have immediate significance for international relations, they also suggest how difficult it may become to fund gargantuan international scientific projects such as the FCC. Under such circumstances, it is very difficult to imagine non-European nations being able to contribute a hoped-for third of the FCC’s total costs.

But Europe’s ascendant populist right-wing parties are no great friends of physics either, nor of international scientific endeavours. And Europeans face the not-insignificant costs of military rearmament in the face of Russian aggression and a likely US withdrawal from Europe.

So the other two thirds of the FCC’s many billions in costs cannot be taken for granted – especially not during the decades needed to construct its 91 km tunnel, 350 GeV electron–positron collider, the subsequent 100 TeV proton collider, and the massive detectors both machines require.

According to former CERN director-general Chris Llewellyn Smith in his symposium lecture, “The political history of the LHC”, just under 12% of the material project costs of the LHC eventually came from non-member nations. It therefore warps the imagination to believe that a third of the much greater costs of the FCC can come from non-member nations in the current “Wild West” geopolitical climate.

But particle physics desperately needs a Higgs factory. After the 1983 Z boson discovery at the CERN SPS Collider, it took just six years before we had not one but two Z factories – LEP and the Stanford Linear Collider – which proved very productive machines. It’s now been more than 13 years since the Higgs boson discovery. Must we wait another 20 years?

Other options

CERN therefore needs a more modest, realistic, productive new scientific facility – a “Plan B” – to cope with the geopolitical uncertainties of an imperfect, unpredictable world. And I was encouraged to learn that several possible ideas are under consideration, according to outgoing CERN director-general Fabiola Gianotti in her symposium lecture, “CERN today and tomorrow”.

Three of these ideas reflect the European Strategy for Particle Physics, which states that “an electron–positron Higgs factory is the highest-priority next CERN collider”. Two linear electron–positron colliders would require just 11–34 km of tunnelling and could begin construction in the mid-2030s, but would involve a fair amount of technical risk and cost roughly €10bn.

The least costly and risky option, dubbed LEP3, involves installing superconducting radio-frequency cavities in the existing LHC tunnel once the high-luminosity proton run ends. Essentially an upgrade of the 200 GeV LEP2, this approach is based on well-understood technologies and would cost less than €5bn but can reach at most 240 GeV. The linear colliders could attain over twice that energy, enabling research on Higgs-boson decays into top quarks and the triple-Higgs self-interaction.

Other proposed projects involving the LHC tunnel can produce large numbers of Higgs bosons with relatively minor backgrounds, but they can hardly be called “Higgs factories”. One of these, dubbed the LHeC, could only produce a few thousand Higgs bosons annually and would allow other important research on proton structure functions. Another idea is the proposed Gamma Factory, in which laser beams would be backscattered from LHC beams of partially stripped ions. If sufficient photon energies and intensity can be achieved, it will allow research on the γγ → H interaction. These alternatives would cost at most a few billion euros.

As Krige stressed in his keynote address, CERN was meant to be more than a scientific laboratory at which European physicists could compete with their US and Soviet counterparts. As many of its founders intended, he said, it was “a cultural weapon against all forms of bigoted nationalism and anti-science populism that defied Enlightenment values of critical reasoning”. The same logic holds true today.

In planning the next phase in CERN’s estimable history, it is crucial to preserve this cultural vitality, while of course providing unparalleled opportunities to do world-class science – lacking which, the best scientists will turn elsewhere.

I therefore urge CERN planners to be daring but cognizant of financial and political reality in the fracturing world order. Don’t for a nanosecond assume that the future will be a smooth extrapolation from the past. Be fairly certain that whatever new facility you decide to build, there is a solid financial pathway to achieving it in a reasonable time frame.

The future of CERN – and the bracing spirit of CERN – rests in your hands.

Ion-clock transition could benefit quantum computing and nuclear physics

Deformed nucleus makes multi-ion design easier

Schematic showing how the shape of the ytterbium-173 nucleus affects the clock transition

An atomic transition in ytterbium-173 could be used to create an optical multi-ion clock that is both precise and stable. That is the conclusion of researchers in Germany and Thailand who have characterized a clock transition that is enhanced by the non-spherical shape of the ytterbium-173 nucleus. As well as applications in timekeeping, the transition could be used in quantum computing. Furthermore, the interplay between atomic and nuclear effects in the transition could provide insights into the physics of deformed nuclei.

The ticking of an atomic clock is defined by the frequency of the electromagnetic radiation that is absorbed and emitted by a specific transition between atomic energy levels. These clocks play crucial roles in technologies that require precision timing – such as global navigation satellite systems and communications networks. Currently, the international definition of the second is given by the frequency of caesium-based clocks, which deliver microwave time signals.

Today’s best clocks, however, work at higher optical frequencies and are therefore much more precise than microwave clocks. Indeed, at some point in the future metrologists will redefine the second in terms of an optical transition – but the international metrology community has yet to decide which transition will be used.

Broadly speaking, there are two types of optical clock. One uses an ensemble of atoms that are trapped and cooled to ultralow temperatures using lasers; the other involves a single atomic ion (or a few ions) held in an electromagnetic trap. Clocks that use one ion are extremely precise but lack stability, whereas clocks that use many atoms are very stable but sacrifice precision.

Optimizing performance

As a result, some physicists are developing clocks that use multiple ions with the aim of creating a clock that optimizes precision and stability.

Now, researchers at PTB and NIMT (the national metrology institutes of Germany and Thailand respectively) have characterized a clock transition in ions of ytterbium-173, and have shown that the transition could be used to create a multi-ion clock.

“This isotope has a particularly interesting transition,” explains PTB’s Tanja Mehlstäubler – who is a pioneer in the development of multi-ion clocks.

The ytterbium-173 nucleus is highly deformed with a shape that resembles a rugby ball. This deformation affects the electronic properties of the ion, which should make it much easier to use a laser to excite a specific transition that would be very useful for creating a multi-ion clock.

Stark effect

This clock transition can also be excited in ytterbium-171 and has already been used to create a single-ion clock. However, excitation in a ytterbium-171 clock requires an intense laser pulse, which creates a strong electric field that shifts the clock frequency (called the AC Stark effect). This is a particular problem for multi-ion clocks because the intensity of the laser (and hence the clock frequency) can vary across the region in which the ions are trapped.

To show that a much lower laser intensity can be used to excite the clock transition in ytterbium-173, the team studied a “Coulomb crystal” in which three ions were trapped in a line, separated by about 10 microns. They illuminated the ions with laser light whose intensity was not uniform across the crystal. They were able to excite the transition at a relatively low laser intensity, which resulted in very small AC Stark shifts between the frequencies of the three ions.

According to the team, this means that as many as 100 trapped ytterbium-173 ions could be used to create a clock that could serve as a time standard, help redefine the second, and make very precise measurements of the Earth’s gravitational field.

As well as being useful for creating an optical ion clock, this multi-ion capability could also be exploited to create quantum-computing architectures based on multiple trapped ions. And because the observed effect is a result of the shape of the ytterbium-173 nucleus, further studies could provide insights into nuclear physics.

The research is described in Physical Review Letters.
The power of a poster

Kevin McGuigan highlights what a good poster can do for your research

Most researchers know the disappointment of submitting an abstract to give a conference lecture, only to find that it has been accepted as a poster presentation instead. If this has been your experience, I’m here to tell you that you need to rethink the value of a good poster.

For years, I pestered my university to erect a notice board outside my office so that I could showcase my group’s recent research posters. Each time, for reasons of cost, my request was unsuccessful. At the same time, I would see similar boards placed outside the offices of more senior and better-funded researchers in my university. I voiced my frustrations to a mentor, whose advice was, “It’s better to seek forgiveness than permission.” So, since I couldn’t afford to buy a notice board, I simply used drawing pins to mount some unauthorized posters on the wall beside my office door.

Some weeks later, I rounded the corner to my office corridor to find the head porter standing with a group of visitors gathered around my posters. He was telling them all about my research using solar energy to disinfect contaminated drinking water in disadvantaged communities in Sub-Saharan Africa. Unintentionally, my illegal posters had been subsumed into the head porter’s official tour that he frequently gave to visitors.

The group moved on but one man stayed behind, examining the poster very closely. I asked him if he had any questions. “No, thanks,” he said, “I’m not actually with the tour, I’m just waiting to visit someone further up the corridor and they’re not ready for me yet. Your research in Africa is very interesting.” We chatted for a while about the challenges of working in resource-poor environments. He seemed quite knowledgeable on the topic but soon left for his meeting.

A few days later while clearing my e-mail junk folder I spotted an e-mail from an Asian “philanthropist” offering me €20,000 towards my research. To collect the money, all I had to do was send him my bank account details. I paused for a moment to admire the novelty and elegance of this new e-mail scam before deleting it. Two days later I received a second e-mail from the same source asking why I hadn’t responded to their first generous offer. While admiring their persistence, I resisted the urge to respond by asking them to stop wasting their time and mine, and instead just deleted it.

So, you can imagine my surprise when the following Monday morning I received a phone call from the university deputy vice-chancellor inviting me to pop up for a quick chat. On arrival, he wasted no time before asking why I had been so foolish as to ignore repeated offers of research funding from one of the college’s most generous benefactors. And that is how I learned that those e-mails from the Asian philanthropist weren’t bogus.

The gentleman that I’d chatted with outside my office was indeed a wealthy philanthropic funder who had been visiting our university. Having retrieved the e-mails from my deleted items folder, I re-engaged with him and subsequently received €20,000 to install 10,000-litre harvested-rainwater tanks in as many primary schools in rural Uganda as the money would stretch to.

Kevin McGuigan

About six months later, I presented the benefactor with a full report accounting for the funding expenditure, replete with photos of harvested-rainwater tanks installed in 10 primary schools, with their very happy new owners standing in the foreground. Since you miss 100% of the chances you don’t take, I decided I should push my luck and added a “wish list” of other research items that the philanthropist might consider funding.

The list started small and grew steadily ambitious. I asked for funds for more tanks in other schools, a travel bursary, PhD registration fees, student stipends and so on. All told, the list came to a total of several hundred thousand euros, but I emphasized that they had been very generous so I would be delighted to receive funding for any one of the listed items and, even if nothing was funded, I was still very grateful for everything he had already done. The following week my generous patron deposited a six-figure-euro sum into my university research account with instructions that it be used as I saw fit for my research purposes, “under the supervision of your university finance office”.

In my career I have co-ordinated several large-budget, multi-partner, interdisciplinary, international research projects. In each case, that money was hard-earned, needing at least six months and many sleepless nights to prepare the grant submission. It still amuses me that I garnered such a large sum on the back of one research poster, one 10-minute chat and fewer than six e-mails.

So, if you have learned nothing else from this story, please don’t underestimate the power of a strategically placed and impactful poster describing your research. You never know with whom it may resonate and down which road it might lead you.

The post The power of a poster appeared first on Physics World.

]]>
Blog Kevin McGuigan highlights what a good poster can do for your research https://physicsworld.com/wp-content/uploads/2026/01/2026-01-mcguigan-posters.jpg newsletter1
ATLAS narrows the hunt for dark matter https://physicsworld.com/a/atlas-narrows-the-hunt-for-dark-matter/ Wed, 28 Jan 2026 09:04:03 +0000 https://physicsworld.com/?p=125927 A new search for emerging jets at CERN has ruled out key dark‑sector scenarios

The post ATLAS narrows the hunt for dark matter appeared first on Physics World.

]]>
Researchers at the ATLAS collaboration have been searching for signs of new particles in the dark sector of the universe, a hidden realm that could help explain dark matter. In some theories, this sector contains dark quarks (fundamental particles) that undergo a shower and hadronization process, forming long-lived dark mesons (dark quarks and antiquarks bound by a new dark strong force), which eventually decay into ordinary particles. These decays would appear in the detector as unusual “emerging jets”: bursts of particles originating from displaced vertices relative to the primary collision point.

Using 51.8 fb⁻¹ of proton–proton collision data at 13.6 TeV collected in 2022–2023, the ATLAS team looked for events containing two such emerging jets. They explored two possible production mechanisms: a vector mediator (Z′) produced in the s‑channel and a scalar mediator (Φ) exchanged in the t‑channel. The analysis combined two complementary strategies. A cut-based strategy relying on high-level jet observables, including track-, vertex- and jet-substructure-based selections, enables a straightforward reinterpretation for alternative theoretical models. A machine-learning approach employs a per-jet tagger with a transformer architecture trained on low-level tracking variables to discriminate emerging jets from Standard Model jets, maximizing sensitivity for the specific models studied.
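
To make the cut-based idea concrete, here is a minimal sketch in Python of how a displaced-track selection might flag an emerging jet. The threshold values and the helper function are illustrative assumptions for this article, not the selections ATLAS actually applies.

# Toy emerging-jet tag (illustrative thresholds; NOT the ATLAS selections).
# Idea: an emerging jet contains many tracks whose transverse impact
# parameter d0 is large, i.e. tracks that point back to a displaced vertex
# rather than to the primary collision point.
def is_emerging(track_d0s_mm, d0_cut_mm=1.0, min_displaced_fraction=0.5):
    """Flag a jet as 'emerging' if enough of its tracks look displaced."""
    if not track_d0s_mm:
        return False
    n_displaced = sum(1 for d0 in track_d0s_mm if abs(d0) > d0_cut_mm)
    return n_displaced / len(track_d0s_mm) >= min_displaced_fraction

print(is_emerging([0.02, 3.1, 2.4, 0.05, 1.8]))  # True: three of five tracks displaced
print(is_emerging([0.01, 0.03, 0.02]))           # False: prompt, Standard Model-like jet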

No emerging‑jet signal excess was found, but the search set the first direct limits on emerging‑jet production via a Z′ mediator and the first constraints on t‑channel Φ production. Depending on the model assumptions, Z′ masses up to around 2.5 TeV and Φ masses up to about 1.35 TeV are excluded. These results significantly narrow the space in which dark sector particles could exist and form part of a broader ATLAS programme to probe dark quantum chromodynamics. The work sharpens future searches for dark matter and advances our understanding of how a dark sector might behave.

Read the full article

Search for emerging jets in pp collisions at √s = 13.6 TeV with the ATLAS experiment

The ATLAS Collaboration 2025 Rep. Prog. Phys. 88 097801

Do you want to learn more about this topic?

Dark matter and dark energy interactions: theoretical challenges, cosmological implications and observational signatures by B Wang, E Abdalla, F Atrio-Barandela and D Pavón (2016)

The post ATLAS narrows the hunt for dark matter appeared first on Physics World.

]]>
Research highlight A new search for emerging jets at CERN has ruled out key dark‑sector scenarios https://physicsworld.com/wp-content/uploads/2026/01/abstract-collider-square-505516720-istock-koto-feja.jpg
How do bacteria produce entropy? https://physicsworld.com/a/how-do-bacteria-produce-entropy/ Wed, 28 Jan 2026 09:02:07 +0000 https://physicsworld.com/?p=126193 A team of researchers from King’s College and Imperial College London have proposed a new way to calculate entropy production in active matter systems

The post How do bacteria produce entropy? appeared first on Physics World.

]]>
Active matter is matter composed of large numbers of active constituents, each of which consumes chemical energy in order to move or to exert mechanical forces.

This type of matter is commonly found in biology: swimming bacteria and migrating cells are both classic examples. In addition, a wide range of synthetic systems, such as active colloids and robotic swarms, also fall under this umbrella.

Active matter has therefore been the focus of much research over the past decade, unveiling many surprising theoretical features and suggesting a plethora of applications.

Perhaps most importantly, these systems’ ability to perform work leads to sustained non-equilibrium behaviour. This is distinctly different from that of relaxing equilibrium thermodynamic systems, commonly found in other areas of physics.

The concept of entropy production is often used to quantify this difference and to calculate how much useful work can be performed. If we want to harvest and utilise this work, however, we need to understand the small-scale dynamics of the system. And it turns out this is rather complicated.

One way to calculate entropy production is through field theory, the workhorse of statistical mechanics. Traditional field theories simplify the system by smoothing out details, which works well for predicting densities and correlations. However, these approximations often ignore the individual particle nature, leading to incorrect results for entropy production.

The new paper details a substantial improvement on this method. By making use of Doi–Peliti field theory, the authors are able to keep track of microscopic particle dynamics, including reactions and interactions.

The approach starts from the Fokker-Planck equation and provides a systematic way to calculate entropy production from first principles. It can be extended to include interactions between particles and produces general, compact formulas that work for a wide range of systems. These formulas are practical because they can be applied to both simulations and experiments.
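
For a sense of the quantity being calculated, the textbook result for a single overdamped particle with steady drift speed v and diffusion constant D is an entropy production rate of v²/D in units of k_B. The sketch below evaluates it with bacterium-like numbers; the values, and the reduction to a single non-interacting particle, are illustrative assumptions rather than the paper’s Doi–Peliti formulas.

# Drift-diffusion entropy production rate, sigma = v^2 / D, in units of
# k_B per second. The numbers are illustrative, bacterium-like values.
v = 2.0e-6    # drift (swimming) speed, m/s - assumed
D = 4.0e-13   # translational diffusion constant, m^2/s - assumed
sigma = v**2 / D
print(f"entropy production rate ~ {sigma:.0f} k_B per second")  # ~10 k_B/s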

The authors demonstrated their method with numerous examples, including systems of Active Brownian Particles, showing its broad usefulness. The big challenge going forward, though, is to extend the framework to non-Markovian systems – ones where future states depend not only on the present but also on past states.

Read the full article

Field theories of active particle systems and their entropy production

G Pruessner and R Garcia-Millan 2025 Rep. Prog. Phys. 88 097601

The post How do bacteria produce entropy? appeared first on Physics World.

]]>
Research highlight A team of researchers from King’s College and Imperial College London have proposed a new way to calculate entropy production in active matter systems https://physicsworld.com/wp-content/uploads/2026/01/bacteria-22181010-istock-jezperklauzen.jpg
Einstein’s recoiling slit experiment realized at the quantum limit https://physicsworld.com/a/einsteins-recoiling-slit-experiment-realized-at-the-quantum-limit/ Wed, 28 Jan 2026 09:00:52 +0000 https://physicsworld.com/?p=126182 A century-old thought experiment on wave–particle duality is brought into the laboratory using a single trapped atom

The post Einstein’s recoiling slit experiment realized at the quantum limit appeared first on Physics World.

]]>
Quantum mechanics famously limits how much information about a system can be accessed at once in a single experiment. The more precisely a particle’s path can be determined, the less visible its interference pattern becomes. This trade-off, known as Bohr’s complementarity principle, has shaped our understanding of quantum physics for nearly a century. Now, researchers in China have brought one of the most famous thought experiments surrounding this principle to the quantum limit, using a single atom as a movable slit.

The thought experiment dates back to the 1927 Solvay Conference, where Albert Einstein proposed a modification of the double-slit experiment in which one of the slits could recoil. He argued that if a photon caused the slit to recoil as it passed through, then measuring that recoil might reveal which path the photon had taken without destroying the interference pattern. Conversely, Niels Bohr argued that any such recoil would entangle the photon with the slit, washing out the interference fringes.

For decades, this debate remained largely philosophical. The challenge was not about adding a detector or a label to track a photon’s path. Instead, the question was whether the “which-path” information could be stored in the motion of the slit itself. Until now, however, no physical slit had been sensitive enough to register the momentum kick from a single photon.

A slit that kicks back

To detect the recoil from a single photon, the slit’s momentum uncertainty must be comparable to the photon’s momentum. For any ordinary macroscopic slit, its quantum fluctuations are significantly larger than the recoil, washing out the which-path information. To give a sense of scale, the authors note that even a 1 g object modelled as a 100 kHz oscillator (for example, a mirror on a spring) would have a ground-state momentum uncertainty of about 10⁻¹⁶ kg m s⁻¹, roughly 11 orders of magnitude larger than the momentum of an optical photon (approximately 10⁻²⁷ kg m s⁻¹).
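
Those scales are straightforward to verify. The snippet below is a back-of-the-envelope check rather than the paper’s calculation: it evaluates the ground-state momentum uncertainty Δp = √(ħmω/2) of a 1 g, 100 kHz oscillator and compares it with the momentum h/λ of an optical photon, taking λ = 780 nm (close to rubidium’s D2 line) as an assumed wavelength.

# Back-of-the-envelope check of the scale argument above.
import math

hbar = 1.054571817e-34      # reduced Planck constant, J s
h = 6.62607015e-34          # Planck constant, J s

m = 1e-3                    # oscillator mass, kg
omega = 2 * math.pi * 1e5   # 100 kHz oscillator frequency, rad/s
dp_slit = math.sqrt(hbar * m * omega / 2)   # ground-state momentum uncertainty

lam = 780e-9                # assumed optical wavelength, m
p_photon = h / lam

print(f"slit Delta p ~ {dp_slit:.1e} kg m/s")    # ~1.8e-16 kg m/s
print(f"photon p     ~ {p_photon:.1e} kg m/s")   # ~8.5e-28 kg m/s
print(f"ratio ~ 10^{round(math.log10(dp_slit / p_photon))}")  # about 11 orders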

Illustration showing the experimental realization

In their study, published in Physical Review Letters, Yu-Chen Zhang and colleagues from the University of Science and Technology of China overcame this obstacle by replacing the movable slit with a single rubidium atom held in an optical tweezer and cooled to its three-dimensional motional ground state. In this regime, the atom’s momentum uncertainty reaches the quantum limit, making the recoil from a single photon directly measurable.

Rather than using a conventional double-slit geometry, the researchers built an optical interferometer in which photons scattered off the trapped atom. By tuning the depth of this optical trap, the researchers were able to precisely control the atom’s intrinsic momentum uncertainty, effectively adjusting how “movable” the slit was.

Watching interference fade 

As the researchers decreased the atom’s momentum uncertainty, they observed a loss of interference in the scattered photons. Increasing the atom’s momentum uncertainty caused the interference to reappear.

This behaviour directly revealed the trade-off between interference and which-path information at the heart of the Einstein–Bohr debate. The researchers note that the loss of interference arose not from classical noise, but from entanglement between the photon and the atom’s motion.

“The main challenge was matching the slit’s momentum uncertainty to that of a single photon,” says corresponding author Jian-Wei Pan. “For macroscopic objects, momentum fluctuations are far too large – they completely hide the recoil. Using a single atom cooled to its motional ground state allows us to reach the fundamental quantum limit.”

Maintaining interferometric phase stability was equally demanding. The team used active phase stabilization with a reference laser to keep the optical path length stable to within a few nanometres (roughly 3 nm) for over 10 h.

Beyond settling a historical argument, the experiment offers a clean demonstration of how entanglement plays a key role in Bohr’s complementarity principle. As Pan explains, the results suggest that “entanglement in the momentum degree-of-freedom is the deeper reason behind the loss of interference when which-path information becomes available”.

This experiment opens the door to exploring quantum measurement in a new regime. By treating the slit itself as a quantum object, future studies could probe how entanglement emerges between light and matter. Additionally, the same set-up could be used to gradually increase the mass of the slit, providing a new way to study the transition from quantum to classical behaviour.

The post Einstein’s recoiling slit experiment realized at the quantum limit appeared first on Physics World.

]]>
Research update A century-old thought experiment on wave–particle duality is brought into the laboratory using a single trapped atom https://physicsworld.com/wp-content/uploads/2026/01/28-01-26-einstein-slit-featured.jpg newsletter1
European Space Agency unveils first images from Earth-observation ‘sounder’ satellite https://physicsworld.com/a/european-space-agency-unveils-first-images-from-earth-observation-sounder-satellite/ Tue, 27 Jan 2026 18:26:07 +0000 https://physicsworld.com/?p=126204 Data from the mission will help improve the accuracy of weather forecasting

The post European Space Agency unveils first images from Earth-observation ‘sounder’ satellite appeared first on Physics World.

]]>
The European Space Agency (ESA) has released the first images from the Meteosat Third Generation-Sounder (MTG-S) satellite. They show variations in temperature and humidity over Europe and northern Africa in unprecedented detail, with further data from the mission set to improve weather-forecasting models and sharpen measurements of air quality over Europe.

Launched on 1 July 2025 from the Kennedy Space Center in Florida aboard a SpaceX Falcon 9 rocket, MTG-S operates from a geostationary orbit about 36 000 km above Earth’s surface, and is able to provide coverage of Europe and part of northern Africa on a 15-minute repeat cycle.
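
That altitude follows from Kepler’s third law: a satellite whose orbital period matches one sidereal day must orbit at a radius r = (μT²/4π²)^(1/3), where μ is Earth’s gravitational parameter. A quick check with standard constants:

# Verify the quoted ~36 000 km geostationary altitude.
import math

mu = 3.986004418e14   # Earth's gravitational parameter, m^3 s^-2
T = 86164.1           # one sidereal day, s
R_earth = 6.371e6     # mean Earth radius, m

r = (mu * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"altitude ~ {(r - R_earth) / 1e3:.0f} km")  # ~35 790 km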

The satellite carries a hyperspectral sounding instrument that uses interferometry to capture data on temperature and humidity as well as being able to measure wind and trace gases in the atmosphere. It can scan nearly 2,000 thermal infrared wavelengths every 30 minutes.

The data will eventually be used to generate 3D maps of the atmosphere and help improve the accuracy of weather forecasting, especially for rapidly evolving storms.

The “temperature” image, above, was taken in November 2025 and shows heat (red) from the African continent, while a dark blue weather front covers Spain and Portugal.

The “humidity” image, below, was captured using the sounder’s medium-wave infrared channel. Blue colours represent regions in the atmosphere with higher humidity, while red colours correspond to lower humidity.

Whole-Earth image showing cloud formation

“Seeing the first infrared sounder images from MTG-S really brings this mission and its potential to life,” notes Simonetta Cheli, ESA’s director of Earth observation programmes. “We expect data from this mission to change the way we forecast severe storms over Europe – and this is very exciting for communities and citizens, as well as for meteorologists and climatologists.”

ESA is expected to launch a second Meteosat Third Generation-Imaging satellite later this year following the launch of the first one – MTG-I1 – in December 2022.

The post European Space Agency unveils first images from Earth-observation ‘sounder’ satellite appeared first on Physics World.

]]>
News Data from the mission will help improve the accuracy of weather forecasting https://physicsworld.com/wp-content/uploads/2026/01/temperature-esa-27-01-26.jpg newsletter
Uranus and Neptune may be more rocky than icy, say astrophysicists https://physicsworld.com/a/uranus-and-neptune-may-be-more-rocky-than-icy-say-astrophysicists/ Tue, 27 Jan 2026 13:00:47 +0000 https://physicsworld.com/?p=126105 Novel modelling approach suggests that the traditionally ice-rich image of these planets may be skewed

The post Uranus and Neptune may be more rocky than icy, say astrophysicists appeared first on Physics World.

]]>
Our usual picture of Uranus and Neptune as “ice giant” planets may not be entirely correct. According to new work by scientists at the University of Zürich (UZH), Switzerland, the outermost planets in our solar system may in fact be rock-rich worlds with complex internal structures – something that could have major implications for our understanding of how these planets formed and evolved.

Within our solar system, planets fall into three categories based on their internal composition. Mercury, Venus, Earth and Mars are deemed terrestrial rocky planets; Jupiter and Saturn are gas giants; and Uranus and Neptune are ice giants.

An agnostic approach

The new work, which was led by PhD student Luca Morf in UZH’s astrophysics department, challenges this last categorization by numerically simulating the two planets’ interiors as a mixture of rock, water, hydrogen and helium. Morf explains that this modelling framework is initially “agnostic” – meaning unbiased – about what the density profiles of the planets’ interiors should be. “We then calculate the gravitational fields of the planets so that they match with observational measurements to infer a possible composition,” he says.

This process, Morf continues, is then repeated and refined to ensure that each model satisfies several criteria. The first criterion is that the planet should be in hydrostatic equilibrium, meaning that its internal pressure is enough to counteract its gravity and keep it stable. The second is that the planet should have the gravitational moments observed in spacecraft data. These moments describe the gravitational field of a planet, which is complex because planets are not perfect spheres.

The final criterion is that the modelled planets need to be thermodynamically and compositionally consistent with known physics. “For example, a simulation of the planets’ interiors must obey equations of state, which dictate how materials behave under given pressure and temperature conditions,” Morf explains.

After each iteration, the researchers adjust the density profile of each planet and test it to ensure that the model continues to adhere to the three criteria. “We wanted to bridge the gap between existing physics-based models that are overly constrained and empirical approaches that are too simplified,” Morf explains. Avoiding strict initial assumptions about composition, he says, “lets the physics and data guide the solution [and] allows us to probe a larger parameter space.”
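
To illustrate the first criterion, the sketch below integrates the hydrostatic-equilibrium condition dP/dr = −Gm(r)ρ(r)/r² for a crude two-layer toy planet. The density values and layer boundary are invented for illustration only; this is not the UZH model, which must also reproduce the observed gravitational moments and obey realistic equations of state.

# Toy hydrostatic-equilibrium integration for an assumed density profile.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def central_pressure(rho_of_r, R_planet, n=20_000):
    dr = R_planet / n
    # enclosed mass m(r), accumulated outwards from the centre
    m = [0.0]
    for i in range(1, n + 1):
        r = i * dr
        m.append(m[-1] + 4 * math.pi * r**2 * rho_of_r(r) * dr)
    # integrate dP/dr = -G m(r) rho(r) / r^2 inwards from P(surface) = 0
    P = 0.0
    for i in range(n, 0, -1):
        r = i * dr
        P += G * m[i] * rho_of_r(r) / r**2 * dr
    return P

R_u = 2.536e7  # Uranus's radius, m
rho = lambda r: 9000.0 if r < 0.5 * R_u else 1500.0  # kg/m^3, invented two-layer profile
print(f"central pressure ~ {central_pressure(rho, R_u):.1e} Pa")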

A wide range of possible structures

Based on their models, the UZH astrophysicists concluded that the interiors of Uranus and Neptune could have a wide range of possible structures, encompassing both water-rich and rock-rich configurations. More specifically, their calculations yield rock-to-water ratios of between 0.04 and 3.92 for Uranus and between 0.20 and 1.78 for Neptune.

Diagrams showing possible “slices” of Uranus and Neptune: brown areas represent silicon dioxide rock, blue areas water ice, tan areas hydrogen–helium mixtures and (for Neptune only) grey areas iron. Two of the four slices are mostly water-rich, while the other two are largely rock.

The models, which are detailed in Astronomy and Astrophysics, also contain convective regions with ionic water pockets. The presence of such pockets could explain the fact that Uranus and Neptune, unlike Earth, have more than two magnetic poles, as the pockets would generate their own local magnetic dynamos.

Traditional “ice giant” label may be too simple

Overall, the new findings suggest that the traditional “ice giant” label may oversimplify the true nature of Uranus and Neptune, Morf tells Physics World. Instead, these planets could have complex internal structures with compositional gradients and different heat transport mechanisms. Though much uncertainty remains, Morf stresses that Uranus and Neptune – and, by extension, similar intermediate-class planets that may exist in other solar systems – are so poorly understood that any new information about their internal structure is valuable.

A dedicated space mission to these outer planets would yield more accurate measurements of the planets’ gravitational and magnetic fields, enabling scientists to refine the limited existing observational data. In the meantime, the UZH researchers are looking for more solutions for the possible interiors of Uranus and Neptune and improving their models to account for additional constraints, such as atmospheric conditions. “Our work will also guide laboratory and theoretical studies on the way materials behave in general at high temperatures and pressures,” Morf says.

The post Uranus and Neptune may be more rocky than icy, say astrophysicists appeared first on Physics World.

]]>
Research update Novel modelling approach suggests that the traditionally ice-rich image of these planets may be skewed https://physicsworld.com/wp-content/uploads/2026/01/uranus-neptune.jpg newsletter1
String-theory concept boosts understanding of biological networks https://physicsworld.com/a/string-theory-concept-boosts-understanding-of-biological-networks/ Tue, 27 Jan 2026 09:35:45 +0000 https://physicsworld.com/?p=126150 Better imaging data point to surface minimization

The post String-theory concept boosts understanding of biological networks appeared first on Physics World.

]]>
Many biological networks – including blood vessels and plant roots – are not organized to minimize total length, as long assumed. Instead, their geometry follows a principle of surface minimization – a rule that is also prevalent in string theory. That is the conclusion of physicists in the US, who have created a unifying framework that explains structural features long seen in real networks but poorly captured by traditional mathematical models.

Biological transport and communication networks have fascinated scientists for decades. Neurons branch to form synapses, blood vessels split to supply tissues, and plant roots spread through soil. Since the mid-20th century, many researchers believed that evolution favours networks that minimize total length or volume.

“There is a longstanding hypothesis, going back to Cecil Murray from the 1940s, that many biological networks are optimized for their length and volume,” Albert-László Barabási of Northeastern University explains. “That is, biological networks, like the brain and the vascular systems, are built to achieve their goals with the minimal material needs.” Until recently, however, it had been difficult to characterize the complicated nature of biological networks.

Now, advances in imaging have given Barabási and colleagues a detailed 3D picture of real physical networks, from individual neurons to entire vascular systems. With these new data in hand, the researchers found that previous theories are unable to describe real networks in quantitative terms.

From graphs to surfaces

To remedy this, the team defined the problem in terms of physical networks, systems whose nodes and links have finite thickness and occupy space. Rather than treating them as abstract graphs made of idealized edges, the team models them as geometrical objects embedded in 3D space.

To do this, the researchers turned to an unexpected mathematical tool. “Our work relies on the framework of covariant closed string field theory, developed by Barton Zwiebach and others in the 1980s,” says team member Xiangyi Meng at Rensselaer Polytechnic Institute. This framework provides a correspondence between network-like graphs and smooth surfaces.

Unlike string theory, their approach is entirely classical. “These surfaces, obtained in the absence of quantum fluctuations, are precisely the minimal surfaces we seek,” Meng says. No quantum mechanics, supersymmetry, or exotic string-theory ingredients are required. “Those aspects were introduced mainly to make string theory quantum and thus do not apply to our current context.”

Using this framework, the team analysed a wide range of biological systems. “We studied human and fruit fly neurons, blood vessels, trees, corals, and plants like Arabidopsis,” says Meng. Across all these cases, a consistent pattern emerged: the geometry of the networks is better predicted by minimizing surface area rather than total length.

Complex junctions

One of the most striking outcomes of the surface-minimization framework is its ability to explain structural features that previous models cannot. Traditional length-based theories typically predict simple Y-shaped bifurcations, where one branch splits into two. Real networks, however, often display far richer geometries.

“While traditional models are limited to simple bifurcations, our framework predicts the existence of higher-order junctions and ‘orthogonal sprouts’,” explains Meng.

These include three- or four-way splits and perpendicular, dead-end offshoots. Under a surface-based principle, such features arise naturally and allow neurons to form synapses using less membrane material overall and enable plant roots to probe their environment more effectively.

Ginestra Bianconi of the UK’s Queen Mary University of London says that the key result of the new study is the demonstration that “physical networks such as the brain or vascular networks are not wired according to a principle of minimization of edge length, but rather that their geometry follows a principle of surface minimization.”

Bianconi, who was not involved in the study, also highlights the interdisciplinary leap of invoking ideas from string theory: “This is a beautiful demonstration of how basic research works.”

Interdisciplinary leap

The team emphasizes that their work is not immediately technological. “This is fundamental research, but we know that such research may one day lead to practical applications,” Barabási says. In the near term, he expects the strongest impact in neuroscience and vascular biology, where understanding wiring and morphology is essential.

Bianconi agrees that important questions remain. “The next step would be to understand whether this new principle can help us understand brain function or have an impact on our understanding of brain diseases,” she says. Surface optimization could, for example, offer new ways to interpret structural changes observed in neurological disorders.

Looking further ahead, the framework may influence the design of engineered systems. “Physical networks are also relevant for new materials systems, like metamaterials, who are also aiming to achieve functions at minimal cost,” Barabási notes. Meng points to network materials as a particularly promising area, where surface-based optimization could inspire new architectures with tailored mechanical or transport properties.

The research is described in Nature.

The post String-theory concept boosts understanding of biological networks appeared first on Physics World.

]]>
Research update Better imaging data point to surface minimization https://physicsworld.com/wp-content/uploads/2026/01/27-1-26-neurons-10100054-istock-ktsimage.jpg newsletter1
The secret life of TiO₂ in foams https://physicsworld.com/a/the-secret-life-of-tio%e2%82%82-in-foams/ Mon, 26 Jan 2026 16:31:27 +0000 https://physicsworld.com/?p=126116 A detailed look inside carbon foams reveals how TiO₂ coatings form in 3D, offering new control over next‑generation energy materials

The post The secret life of TiO₂ in foams appeared first on Physics World.

]]>
Porous carbon foams are an exciting area of research because they are lightweight, electrically conductive, and have extremely high surface areas. Coating these foams with TiO₂ makes them chemically active, enabling their use in energy storage devices, fuel cells, hydrogen production, CO₂‑reduction catalysts, photocatalysis, and thermal management systems. While many studies have examined the outer surfaces of coated foams, much less is known about how TiO₂ coatings behave deep inside the foam structure.

In this study, researchers deposited TiO₂ thin films onto carbon foams using magnetron sputtering and applied different bias voltages to control ion energy, which in turn affects coating density, crystal structure, thickness, and adhesion. They analysed both the outer surface and the interior of the foam using microscopy, particle‑transport simulations, and X‑ray techniques.

They found that the TiO₂ coating on the outer surface is dense, correctly composed and crystalline (mainly anatase with a small amount of rutile) – ideal for catalytic and energy applications. They also discovered that although fewer particles reach deep inside the foam, those that do retain the same energy: particle quantity decreases with depth, but particle energy does not. Because devices like batteries and supercapacitors rely on uniform coatings, variations in thickness or structure inside the foam can lead to poorer performance and faster degradation.

Overall, this research provides a much clearer understanding of how TiO₂ coatings grow inside complex 3D foams, showing how thickness, density, and crystal structure evolve with depth and how bias voltage can be used to tune these properties. By revealing how plasma particles move through the foam and validating models that predict coating behaviour, it enables the design of more reliable, higher‑performing foam‑based devices for energy and catalytic applications.

Read the full article

A comprehensive multi-scale study on the growth mechanisms of magnetron sputtered coatings on open-cell 3D foams

Loris Chavée et al 2026 Prog. Energy 8 015002

Do you want to learn more about this topic?

Advances in thermal conductivity for energy applications: a review Qiye Zheng et al. (2021)

The post The secret life of TiO₂ in foams appeared first on Physics World.

]]>
Research highlight A detailed look inside carbon foams reveals how TiO₂ coatings form in 3D, offering new control over next‑generation energy materials https://physicsworld.com/wp-content/uploads/2026/01/liquid-patterns-shutterstock-caracolla.jpg
Laser processed thin NiO powder coating for durable anode-free batteries https://physicsworld.com/a/laser-processed-thin-nio-powder-coating-for-durable-anode-free-batteries/ Mon, 26 Jan 2026 16:30:22 +0000 https://physicsworld.com/?p=125971 CO₂ infrared laser-modified NiO powder coating on Cu suppresses dendrite formation and enables reversible lithium plating and stripping over 700 stable cycles

The post Laser processed thin NiO powder coating for durable anode-free batteries appeared first on Physics World.

]]>
Traditional lithium‑ion batteries use a thick graphite anode, where lithium ions move in and out of the graphite during charging and discharging. In an anode‑free lithium metal battery, there is no anode material at the start, only a copper foil. During the first charge, lithium leaves the cathode and deposits onto the copper as pure lithium metal, effectively forming the anode. Removing the anode increases energy density dramatically by reducing weight, and it also simplifies and lowers the cost of manufacturing. Because of this, anode‑free batteries are considered to have major potential for next‑generation energy storage. However, a key challenge is that lithium deposits unevenly on bare copper, forming long needle‑like dendrites that can pierce the separator and cause short circuits. This uneven growth also leads to rapid capacity loss, so anode‑free batteries typically fail after only a few hundred cycles.

In this research, the scientists coated the copper foil with NiO powder and used a CO₂ laser (λ = 10.6 μm) in a rapid scanning mode to heat and transform the coating. The laser‑treated NiO becomes porous and strongly adherent to the copper, helping lithium spread out more evenly. The process is fast, energy‑efficient, and can be done in air. As a result, lithium ions move more easily across the surface, reducing dendrite formation. The exchange current density also doubled compared with bare copper, indicating better charge‑transfer behaviour. Overall, battery performance improved dramatically: the modified cells lasted 400 cycles at room temperature and 700 cycles at 40 °C, compared with only 150 cycles for uncoated copper.

This simple, rapid, and scalable technique offers a powerful way to improve anode‑free lithium metal batteries, one of the most promising next‑generation battery technologies.

Read the full article

Microgradient patterned NiO coating on copper current collector for anode-free lithium metal battery

Supriya Kadam et al 2025 Prog. Energy 7 045003

Do you want to learn more about this topic?

Lithium aluminum alloy anodes in Li-ion rechargeable batteries: past developments, recent progress, and future prospects by Tianye Zheng and Steven T Boles (2023)

The post Laser processed thin NiO powder coating for durable anode-free batteries appeared first on Physics World.

]]>
Research highlight CO₂ infrared laser-modified NiO powder coating on Cu suppresses dendrite formation and enables reversible lithium plating and stripping over 700 stable cycles https://physicsworld.com/wp-content/uploads/2026/01/rechargeable-batteries-1446570564-istock-phonlamai-photo-scaled.jpg
Planning a sustainable water future in the United States https://physicsworld.com/a/planning-a-sustainable-water-future-in-the-united-states/ Mon, 26 Jan 2026 16:28:44 +0000 https://physicsworld.com/?p=125958 Advanced desalination can supply fresh water cheaply while managing brine responsibly

The post Planning a sustainable water future in the United States appeared first on Physics World.

]]>
Within 45 years, water demand in the United States is predicted to double, while climate change is expected to strain freshwater supplies further; 44% of the country is already experiencing some form of drought. One way to expand water resources is desalination, where salt is removed from seawater or brackish groundwater to make clean, usable water. Brackish groundwater contains far less salt than seawater, making it much easier and cheaper to treat, and the United States has vast reserves of it in deep aquifers. The challenge is that desalination traditionally requires a lot of energy and produces a concentrated brine waste stream that is difficult and costly to dispose of. As a result, desalination currently provides only about 1% of the nation’s water supply, even though it is a major source of drinking water in regions such as the Middle East and North Africa.

Researchers Vasilis Fthenakis (left) and Zhuoran Zhang (right) of Columbia University, photographed at Nassau Point in Long Island

In this work, the researchers show how desalination of brackish groundwater can be made genuinely sustainable and economically viable for addressing the United States’ looming water shortages. A key part of the solution is zero‑liquid‑discharge, which avoids brine disposal by extracting more freshwater and recovering salts such as sodium, calcium, and magnesium for reuse. Crucially, the study demonstrates that when desalination is powered by low‑cost solar and wind energy, the overall process becomes far more affordable. By 2040, solar photovoltaics paired with optimised battery storage are projected to produce electricity at lower cost than the grid in the states facing the largest water deficits, making renewable‑powered desalination a competitive option.

The researchers also show that advanced technologies, such as high‑recovery reverse osmosis and crystallisation, can achieve zero‑liquid‑discharge without increasing costs, because the extra water and salt recovery offsets the expense of brine management. Their modelling indicates that a full renewable‑powered zero‑liquid‑discharge pathway can produce freshwater at an affordable cost, while reducing environmental impacts and avoiding brine disposal altogether. Taken together, this work outlines a realistic, sustainable pathway for large‑scale desalination in the United States, offering a credible strategy for securing future water supplies in increasingly water‑stressed regions.

Progress diagram adapted from article

Do you want to learn more about this topic?

Review of solar-enabled desalination and implications for zero-liquid-discharge applications by Vasilis Fthenakis et al. (2024)

 

The post Planning a sustainable water future in the United States appeared first on Physics World.

]]>
Research highlight Advanced desalination can supply fresh water cheaply while managing brine responsibly https://physicsworld.com/wp-content/uploads/2026/01/drought-5723502-istock-clint-spencer.jpg
Could silicon become the bedrock of quantum computers? https://physicsworld.com/a/could-silicon-become-the-bedrock-of-quantum-computers/ Mon, 26 Jan 2026 16:00:06 +0000 https://physicsworld.com/?p=126149 Australian spin-out Silicon Quantum Computing makes the case with a modality-leading 11-qubit processor

The post Could silicon become the bedrock of quantum computers? appeared first on Physics World.

]]>
Silicon, in the form of semiconductors, integrated chips and transistors, is the bedrock of modern classical computers – so much so that it lends its name to technological hubs around the world, beginning with Silicon Valley in the US. For quantum computers, the bedrock is still unknown, but a new platform developed by researchers in Australia suggests that silicon could play a role here, too.

Dubbed the 14|15 platform after the atomic numbers of its elemental constituents, it combines a crystalline silicon substrate with qubits made from phosphorus atoms. By relying on only two types of atoms, team co-leader Michelle Simmons says the device “avoids the interfaces and complexities that plague so many multi-material platforms” while enabling “high-quality qubits with lower noise, simplicity of design and device stability”.

Boarding at platform 14|15

Quantum computers take registers of qubits, which store quantum information, and apply basic operations to them sequentially to execute algorithms. One of the primary challenges they face is scalability – that is, sustaining reliable, or high-fidelity, operations on an increasing number of qubits. Many of today’s platforms use only a small number of qubits, for which operations can be individually tuned for optimal performance. However, as the amount of hardware, complexity and noise increases, this hands-on approach becomes debilitating.

Silicon quantum processors may offer a solution. Writing in Nature, Simmons, Ludwik Kranz, and their team at Silicon Quantum Computing (a spinout from the University of New South Wales in Sydney) describe a system that uses the nuclei of phosphorus atoms as its primary qubit. Each nucleus behaves a little like a bar magnet with an orientation (north/south or up/down) that represents a 0 or 1.

These so-called spin qubits are particularly desirable because they exhibit relatively long coherence times, meaning information can be preserved for long enough to apply the numerous operations of an algorithm. Using monolithic, high-purity silicon as the substrate further benefits coherence since it reduces undesirable charge and magnetic noise arising from impurities and interfaces.

To make their quantum processor, the team deposited phosphorus atoms in small registers a few nanometres across. Within each register, the phosphorus nuclei do not interact enough to generate the entangled states required for a quantum computation. The team remedies this by loading each cluster of phosphorus atoms with an electron that is shared between the atoms. As a result, so-called hyperfine interactions arise, wherein each nuclear spin couples to the electron like a pair of interacting bar magnets, providing the interaction necessary to entangle nuclear spins within each register.

By combining these interactions with control of individual nuclear spins, the researchers showed that they can generate Bell states (maximally entangled two-qubit states) between pairs of nuclei within a register with error rates as low as 0.5% – the lowest to date for semiconductor platforms.

Scaling through repulsion

The team’s next step was to connect multiple processors – a step that exponentially increases their combined capacity. To understand how, consider two quantum processors, one with n qubits and the other with m qubits. Isolated from one another, they can collectively represent at most 2ⁿ + 2ᵐ states. Once they are entangled, however, they can represent 2ⁿ⁺ᵐ states.
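
A two-line illustration of this counting argument, with register sizes chosen to match the four- and five-atom registers described below:

# Isolated vs entangled registers: states they can represent.
n, m = 4, 5
print(2**n + 2**m)   # 48: two isolated registers
print(2**(n + m))    # 512: the same registers once entangled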

Simmons says that silicon quantum processors offer an inherent advantage in scaling, too. Generating numerous registers on a single chip and using “naturally occurring” qubits, she notes, reduces their need for extraneous confinement gates and electronics as they scale.

The researchers showcased these scaling capabilities by entangling a register of four phosphorus atoms with a register of five, separated by 13 nm. The entanglement of these registers is mediated by the electron-exchange interaction, a phenomenon arising from the combination of Pauli’s exclusion principle and Coulomb repulsion when electrons are confined in a small region. By leveraging this alongside the other interactions and controls in their toolkit, the researchers generated entanglement between eight data qubits across the two registers.

Retaining such high-quality qubits and individual control of them despite their high density demonstrates the scaling potential of the platform. Future avenues of exploration include increasing the size of 2D arrays of registers to increase the number of qubits, but Simmons says the rest is “top secret”, adding “the world will know soon enough”.

The post Could silicon become the bedrock of quantum computers? appeared first on Physics World.

]]>
Research update Australian spin-out Silicon Quantum Computing makes the case with a modality-leading 11-qubit processor https://physicsworld.com/wp-content/uploads/2026/01/27-01-2026-silicon-quantum-computing-engineers.jpg newsletter1
Is our embrace of AI naïve and could it lead to an environmental disaster? https://physicsworld.com/a/is-our-embrace-of-ai-naive-and-could-it-lead-to-an-environmental-disaster/ Mon, 26 Jan 2026 11:00:38 +0000 https://physicsworld.com/?p=125839 Johan Hansson says it is dangerous to treat artificial intelligence as a magic wand

The post Is our embrace of AI naïve and could it lead to an environmental disaster? appeared first on Physics World.

]]>
According to today’s leading experts in artificial intelligence (AI), this new technology is a danger to civilization. A statement on AI risk published in 2023 by the US non-profit Center for AI Safety warned that mitigating the risk of extinction from AI must now be “a global priority”, comparing it to other societal-scale dangers such as pandemics and nuclear war. It was signed by more than 600 people, including the winner of the 2024 Nobel Prize for Physics and so-called “Godfather of AI” Geoffrey Hinton. In a speech at the Nobel banquet after being awarded the prize, Hinton noted that AI may be used “to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim”.

Despite signing the letter, Sam Altman of OpenAI, the firm behind ChatGPT, has stated that the company’s explicit ambition is to create artificial general intelligence (AGI) within the next few years, to “win the AI-race”. AGI is predicted to surpass human cognitive capabilities for almost all tasks, but the real danger is if or when AGI is used to generate more powerful versions of itself. Sometimes called “superintelligence”, this would be impossible to control. Companies do not want any regulation of AI and their business model is for AGI to replace most employees at all levels. This is how firms are expected to benefit from AI, since wages are most companies’ biggest expense.

AI, to me, is not about saving the world, but about a handful of people wanting to make enormous amounts of money from it. No-one knows what internal mechanism makes even today’s AI work – just as one cannot find out what you think from how the neurons in your brain are firing. If we don’t even understand today’s AI models, how are we going to understand – and control – the more powerful models that already exist or are planned in the near future?

AI has some practical benefits but too often is put to meaningless, sometimes downright harmful, uses such as cheating your way through school or creating disinformation and fake videos online. What’s more, an online search with the help of AI requires at least 10 times as much energy as a search without AI. AI already uses 5% of all electricity in the US and by 2028 this figure is expected to reach 15%, equivalent to over a quarter of all US households’ electricity consumption. The electricity powering AI data centres is also roughly 50% more carbon intensive than the rest of the US’s electricity supply.

Those energy needs are why some tech companies are building AI data centres – often under confidential, opaque agreements – very quickly for fear of losing market share. Indeed, the vast majority of those centres are powered by fossil-fuel energy sources – completely contrary to the Paris Agreement to limit global warming. We must wisely allocate Earth’s strictly limited resources, with what is wasted on AI instead going towards vital things.

To solve the climate crisis, there is definitely no need for AI. All the solutions have already been known for decades: phasing out fossil fuels, reversing deforestation, reducing energy and resource consumption, regulating global trade, reforming the economic system away from its dependence on growth. The problem is that the solutions are not implemented because of short-term selfish profiteering, which AI only exacerbates.

Playing with fire

AI, like all other technologies, is not a magic wand and, as Hinton says, potentially has many negative consequences. It is not, as the enthusiasts seem to think, a magical free resource that provides output without input (and waste). I believe we must rethink our naïve, uncritical, overly fast, total embrace of AI. Universities are known for wise reflection, but worryingly they seem to be hurrying to jump on the AI bandwagon. The problem is that the bandwagon may be going in the wrong direction or crash and burn entirely.

Why then should universities and organizations send their precious money to greedy, reckless and almost totalitarian tech billionaires? If we are going to use AI, shouldn’t we create our own AI tools that we can hopefully control better? Today, more money and power is transferred to a few AI companies that transcend national borders, which is also a threat to democracy. Democracy only works if citizens are well educated, committed, knowledgeable and have influence.

AI is like using a hammer to crack a nut. Sometimes a hammer may be needed but most of the time it is not and is instead downright harmful. Happy-go-lucky people at universities, companies and throughout society are playing with fire without knowing about the true consequences now, let alone in 10 years’ time. Our mapped-out path towards AGI is like a zebra on the savannah creating an artificial lion that begins to self-replicate, becoming bigger, stronger, more dangerous and more unpredictable with each generation.

Wise reflection today on our relationship with AI is more important than ever.

The post Is our embrace of AI naïve and could it lead to an environmental disaster? appeared first on Physics World.

]]>
Opinion and reviews Johan Hansson says it is dangerous to treat artificial intelligence as a magic wand https://physicsworld.com/wp-content/uploads/2026/01/2026-01-forum-hansson-ai-bad-actor-2622481035-shutterstock-khunkorn-studio.jpg newsletter
New sensor uses topological material to detect helium leaks https://physicsworld.com/a/new-sensor-uses-topological-material-to-detect-helium-leaks/ Mon, 26 Jan 2026 09:00:06 +0000 https://physicsworld.com/?p=126089 Device works by monitoring frequency of sound waves propagating through a kagome material

The post New sensor uses topological material to detect helium leaks appeared first on Physics World.

]]>
A new sensor detects helium leaks by monitoring how sound waves propagate through a topological material – no chemical reactions required. Developed by acoustic scientists at Nanjing University, China, the innovative, physics-based device is compact, stable, accurate and capable of operating at very low temperatures.

Helium is employed in a wide range of fields, including aerospace, semiconductor manufacturing and medical applications as well as physics research. Because it is odourless, colourless, and inert, it is essentially invisible to traditional leak-detection equipment such as adsorption-based sensors. Specialist helium detectors are available, but they are bulky, expensive and highly sensitive to operating conditions.

A two-dimensional acoustic topological material

The new device created by Li Fan and colleagues at Nanjing consists of nine cylinders arranged in three sub-triangles with tubes in between the cylinders. The corners of the sub-triangles touch and the tubes allow air to enter the device. The resulting two-dimensional system has a so-called “kagome” structure and is an example of a topological material – that is, one that contains special, topologically protected, states that remain stable even if the bulk structure contains minor imperfections or defects. In this system, the protected states are the corners.

To test their setup, the researchers placed speakers under the corners to send sound waves into the structure, making the gas within it vibrate at a characteristic frequency (the resonance frequency). When they replaced the air in the device with helium, the sound waves travelled faster, shifting the resonance frequency. Measuring this shift enabled the researchers to calculate the concentration of helium in the device.
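
The underlying physics is easy to sketch: the resonance frequency of a gas-filled cavity scales with the speed of sound c = √(γRT/M), which rises steeply as light helium displaces air. The ideal-gas mixing rule and numbers below are simplifying assumptions of our own, not the Nanjing group’s calibration.

# Minimal model: relative resonance-frequency shift vs helium mole fraction.
import math

R, T = 8.314, 293.0                # gas constant, J/(mol K); temperature, K
M_air, M_he = 28.97e-3, 4.003e-3   # molar masses, kg/mol
Cp_air, Cp_he = 3.5 * R, 2.5 * R   # molar heat capacities (diatomic vs monatomic)

def speed_of_sound(x_he):
    """Ideal-gas speed of sound in an air-helium mixture (helium mole fraction x_he)."""
    M = x_he * M_he + (1 - x_he) * M_air
    Cp = x_he * Cp_he + (1 - x_he) * Cp_air
    gamma = Cp / (Cp - R)
    return math.sqrt(gamma * R * T / M)

c_air = speed_of_sound(0.0)  # ~343 m/s
for x in (0.01, 0.1, 0.5, 1.0):
    print(f"x_He = {x:4.2f}: f/f0 ~ {speed_of_sound(x) / c_air:.3f}")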

Many advantages over traditional gas sensors

Fan explains that the device works because the interface/corner states are impacted by the properties of the gas within it. This mechanism has many advantages over traditional gas sensors. First, it does not rely on chemical reactions, making it ideal for detecting inert gases like helium. Second, the sensor is not affected by external conditions and can therefore work at extremely low temperatures – something that is challenging for conventional sensors that contain sensitive materials. Third, its sensitivity to the presence of helium does not change, meaning it does not need to be recalibrated during operation. Finally, it detects frequency changes quickly and rapidly returns to its baseline once helium levels decrease.

As well as detecting helium, Fan says the device can also pinpoint the direction a gas leak is coming from. This is because when helium begins to fill the device, the corner closest to the source of the gas is impacted first. Each corner thus acts as an independent sensing point, giving the device a spatial sensing capability that most traditional detectors lack.

Other gases could be detected

Detecting helium leaks is important in fields such as semiconductor manufacturing, where the gas is used for cooling, and in medical imaging systems that operate at liquid helium temperatures. “We think our work opens an avenue for inert gas detection using a simple device and is an example of a practical application for two-dimensional acoustic topological materials,” says Fan.

While the new sensor was fabricated to detect helium, the same mechanism could also be employed to detect other gases such as hydrogen, he adds.

Spurred on by these promising preliminary results, which they report in Applied Physics Letters, the researchers plan to extend their fabrication technique to create three-dimensional acoustic topological structures. “These could be used to orientate the corner points so that helium can be detected in 3D space,” says Fan. “Ultimately, we are trying to integrate our system into a portable structure that can be deployed in real-world environments without complex supporting equipment,” he tells Physics World.

The post New sensor uses topological material to detect helium leaks appeared first on Physics World.

]]>
Research update Device works by monitoring frequency of sound waves propagating through a kagome material https://physicsworld.com/wp-content/uploads/2026/01/helium-sensor.jpg
Encrypted qubits can be cloned and stored in multiple locations https://physicsworld.com/a/encrypted-qubits-can-be-cloned-and-stored-in-multiple-locations/ Sat, 24 Jan 2026 15:09:46 +0000 https://physicsworld.com/?p=126133 “Elegant” result has implications for a quantum internet

The post Encrypted qubits can be cloned and stored in multiple locations appeared first on Physics World.

]]>
Encrypted qubits can be cloned and stored in multiple locations without violating the no-cloning theorem of quantum mechanics, researchers in Canada have shown. Their work could potentially allow quantum-secure cloud storage, in which data can be stored on multiple servers, thereby allowing for redundancy without compromising security. The research also has implications for quantum fundamentals.

Heisenberg’s uncertainty principle – which states that it is impossible to measure conjugate variables of a quantum object with less than a combined minimum uncertainty – is one of the central tenets of quantum mechanics. The no-cloning theorem – that it is impossible to create identical clones of unknown quantum states – flows directly from this. Achim Kempf of the University of Waterloo explains, “If you had [clones] you could take half your copies and perform one type of measurement, and the other half of your copies and perform an incompatible measurement, and then you could beat the uncertainty principle.”

No-cloning poses a challenge to those trying to create a quantum internet. On today’s Internet, storage of information on remote servers is common, and multiple copies of this information are usually stored in different locations to preserve data in case of disruption. Users of a quantum cloud server would presumably desire the same degree of information security, but the no-cloning theorem would apparently forbid this.

Signal and noise

In the new work, Kempf and his colleague Koji Yamaguchi, now at Japan’s Kyushu University, show that this is not the case. Their encryption protocol begins with the generation of a set of pairs of entangled qubits. When a qubit, called A, is encrypted, it interacts with one qubit (called a signal qubit) from each pair in turn. In the process of interaction, the signal qubits record information about the state of A, which has been altered by previous interactions. As each signal qubit is entangled with a noise qubit, the state of the noise qubits is also changed.

Another central tenet of quantum mechanics, however, is that quantum entanglement does not allow for information exchange. “The noise qubits don’t know anything about the state of A either classically or quantum mechanically,” says Kempf. “The noise qubits’ role is to serve as a record of noise…We use the noise that is in the signal qubit to encrypt the clone of A. You drown the information in noise, but the noise qubit has a record of exactly what noise has been added because [the signal qubits and noise qubits] are maximally entangled.”

Therefore, a user with all of the noise qubits knows nothing about the signal, but knows all of the noise that was added to it. Possession of just one of the signal qubits, therefore, allows them to recover the unencrypted qubit. This does not violate the uncertainty principle, however, because decrypting one copy of A involves making a measurement of the noise qubits: “At the end of [the measurement], the noise qubits are no longer what they were before, and they can no longer be used for the decryption of another encrypted clone,” explains Kempf.
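
The flavour of this “drown the information in noise, keep a record of the noise” idea can be captured with the textbook quantum one-time pad, sketched below with NumPy. To be clear about the hedge: this classical-key version is only an analogue; the Waterloo protocol instead stores the noise record in entangled qubits, which is what makes cloning possible.

# Quantum one-time pad: an analogue of the encryption idea above (NOT the
# Waterloo protocol, which keeps the "key" in entangled noise qubits).
# A uniformly random Pauli makes a qubit look maximally mixed to anyone
# without the key, while the key holder can invert it exactly.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Z, X @ Z]   # X @ Z is Y up to a phase, which cancels here

rng = np.random.default_rng(7)
psi = np.array([0.6, 0.8], dtype=complex)   # an "unknown" qubit state
rho = np.outer(psi, psi.conj())             # its density matrix

key = rng.integers(4)                       # the recorded "noise"
encrypted = paulis[key] @ rho @ paulis[key].conj().T

# Without the key: averaging over all four Paulis gives the maximally mixed state
avg = sum(P @ rho @ P.conj().T for P in paulis) / 4
print(np.allclose(avg, I2 / 2))             # True - no information leaks

# With the key: decryption is exact
decrypted = paulis[key].conj().T @ encrypted @ paulis[key]
print(np.allclose(decrypted, rho))          # True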

Cloning clones

Kempf says that, working with IBM, they have successfully demonstrated hundreds of steps of iterative quantum cloning (quantum cloning of quantum clones) on a Heron 2 processor, and shown that they could even clone entangled qubits and recover the entanglement after decryption. “We’ll put that on the arXiv this month,” he says.

The research is described in Physical Review Letters, and Barry Sanders at Canada’s University of Calgary is impressed by both the elegance and the generality of the result. He notes it could have significance for topics as distant as information loss from black holes: “It’s not a flash in the pan,” he says. “If I’m doing something that is related to no-cloning, I would look back and say ‘Gee, how do I interpret what I’m doing in this context?’ It’s a paper I won’t forget.”

Seth Lloyd of MIT agrees: “It turns out that there’s still low-hanging fruit out there in the theory of quantum information, which hasn’t been around long,” he says. “It turns out nobody ever thought to look at this before: Achim is a very imaginative guy and it’s no surprise that he did.” Both Lloyd and Sanders agree that quantum cloud storage remains hypothetical, but Lloyd says “I think it’s a very cool and unexpected result and, while it’s unclear what the implications are towards practical uses, I suspect that people will find some very nice applications in the near future.”

The post Encrypted qubits can be cloned and stored in multiple locations appeared first on Physics World.

]]>
Research update “Elegant” result has implications for a quantum internet https://physicsworld.com/wp-content/uploads/2026/01/23-1-26-cloning-illustration.jpg
Cosmic time capsules: the search for pristine comets https://physicsworld.com/a/cosmic-time-capsules-the-search-for-pristine-comets/ Fri, 23 Jan 2026 13:40:59 +0000 https://physicsworld.com/?p=126073 Ancient icy wanderers like comet 3I/ATLAS, spotted in July 2025, can reveal secrets of our cosmic origins

The post Cosmic time capsules: the search for pristine comets appeared first on Physics World.

]]>

In this episode of Physics World Stories, host Andrew Glester explores the fascinating hunt for pristine comets – icy bodies that preserve material from the solar system’s beginnings and even earlier. Unlike more familiar comets that repeatedly swing close to the Sun and transform, these frozen relics act as time capsules, offering unique insights into our cosmic history.

Composite image of interstellar comet 3I/ATLAS – a pale blue circle against red streaks – captured by the Europa Ultraviolet Spectrograph instrument on NASA’s Europa Clipper spacecraft.

The first guest is Tracy Becker, deputy principal investigator for the Ultraviolet Spectrograph on NASA’s Europa Clipper mission. Becker describes how the Jupiter-bound spacecraft recently turned its gaze to 3I/ATLAS, an interstellar visitor that appeared last July. Mission scientists quickly reacted to this unique opportunity, which also enabled them to test the mission’s instruments before it arrives at the icy world of Europa.

Michael Küppers then introduces the upcoming Comet Interceptor mission, set for launch in 2029. This joint ESA–JAXA mission will “park” in space until a suitable comet arrives from the outer reaches of the solar system, then deploy two probes to study it from multiple angles – offering a first-ever close look at material untouched since the solar system’s birth.

From interstellar wanderers to carefully orchestrated intercepts, this episode blends pioneering missions and cosmic detective work. Keep up to date with all the latest space and astronomy developments in the dedicated section of the Physics World website.

The post Cosmic time capsules: the search for pristine comets appeared first on Physics World.

]]>
Physics World Cosmic time capsules: the search for pristine comets full 51:40 Podcasts Ancient icy wanderers like comet 3I/ATLAS, spotted in July 2025, can reveal secrets of our cosmic origins https://physicsworld.com/wp-content/uploads/2026/01/hubble-3i-atlas-scaled.jpg newsletter
Hot ancient galaxy cluster challenges current cosmological models https://physicsworld.com/a/hot-ancient-galaxy-cluster-challenges-current-cosmological-models/ Fri, 23 Jan 2026 11:30:13 +0000 https://physicsworld.com/?p=126130 Observations of the thermal energy in a baby galaxy cluster 12.4 billion light years away suggest much more energetic early cluster growth than current theories assume

The post Hot ancient galaxy cluster challenges current cosmological models appeared first on Physics World.

]]>
As with people, behaviour in cosmology does not always extrapolate neatly with age. An early-career politician may be more likely to win a debate with a student than with a seasoned diplomat, but put all three in a room with a toddler and the toddler will almost certainly get their own way – they are following a different set of rules. A team of global collaborators noticed a similar phenomenon when peering at a cluster of developing galaxies from a time when the universe was just a tenth of its current age.

Cosmological theories suggest that such infant clusters should host much cooler and less abundant gas than more mature clusters. But what the researchers saw was at least five times hotter than expected – apparently not abiding by those rules.

“That’s a massive surprise and forces us to rethink how large structures actually form and evolve in the universe,” says first author Dazhi Zhou, a PhD candidate at the University of British Columbia.

Eyes on the past

Looking into distant outer space allows us to peer into the past. The protocluster of developing galaxies that Zhou and collaborators investigated – known as SPT2349–56 – is 12.4 billion light years away, so the light we observe from it set out when the universe (now about 13.8 billion years old) was just 1.4 billion years old. Light from so far away is faint and hard to detect by the time it reaches us, so the researchers used the Atacama Large Millimeter/submillimeter Array (ALMA) to study SPT2349–56 via a special type of shadow.

As this type of protocluster develops, Zhou explains, the gas around its galaxies becomes so hot that electrons in the gas interact with passing photons, conferring some of their energy upon them. Light passing through the gas therefore emerges with more photons at the higher-energy end of the spectrum and fewer at the lower end. When viewing the cosmic microwave background radiation – the “afterglow” left behind by the Big Bang – this results in a shadow at low energies. This energy shift, discovered by physicists Rashid Sunyaev and Yakov Zeldovich, not only reveals the presence of the protocluster; the strength of the signature also indicates the thermal energy of the gas within it.
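
For readers who want the standard quantitative handle on this (textbook background, not taken from the paper): the strength of the thermal Sunyaev–Zeldovich signature is conventionally expressed through the dimensionless Compton parameter $y$, which integrates the electron pressure along the line of sight, and in the low-frequency (Rayleigh–Jeans) part of the CMB spectrum the effect appears as a fractional dimming:

\[
y = \frac{\sigma_{\mathrm T}}{m_{\mathrm e} c^{2}} \int n_{\mathrm e}\, k_{\mathrm B} T_{\mathrm e}\, \mathrm{d}l ,
\qquad
\frac{\Delta T}{T_{\mathrm{CMB}}} \approx -2y \quad \text{(Rayleigh–Jeans limit)},
\]

where $n_{\mathrm e}$ and $T_{\mathrm e}$ are the electron density and temperature and $\sigma_{\mathrm T}$ is the Thomson cross-section. The depth of the “shadow” thus directly measures the thermal energy content of the gas.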

The team’s observations were not easy. “This shadow is actually pretty tiny,” Zhou explains. In addition, there is thermal emission at radio wavelengths from the dust inside galaxies, estimated to be some 20 times stronger than the Sunyaev–Zeldovich signature. “It really is like finding a needle in a haystack,” he adds. Nonetheless, the team did identify a definite Sunyaev–Zeldovich signature from SPT2349–56, with a thermal energy indicating that the gas was at least five times hotter than expected – thousands of times hotter than the surface of our Sun.

Time to upgrade?

SPT2349–56 has some quirks that may explain its high thermal energy, including three supermassive black holes shooting out jets of high-energy matter – a known but rare phenomenon. However, even simulations that include these outbursts as a heating mechanism – one that is more efficient, and sets in much earlier, than the heating from gravitational collapse assumed in current models – still fail to reproduce the high temperatures observed, perhaps pointing to gaps in our knowledge of the underlying physics.

Eiichiro Komatsu from the Max-Planck-Institut für Astrophysik describes the work as “a wonderful measurement”. Although not directly involved in this research, Komatsu has also looked at what the Sunyaev–Zeldovich effect can reveal about the cosmos. “The amount of thermal energy measured by the authors is staggering, yet its origin is a mystery,” he tells Physics World. He suggests these results will motivate further observations of other systems in the early universe.

“We need to be cautious rather than making any big claim,” adds Zhou. This is the first Sunyaev–Zeldovich detection of a protocluster from the first three billion years of the universe’s existence. Next, he aims to study similar protoclusters, and he hopes others will also work to corroborate the observations.

The research is reported in Nature.

The post Hot ancient galaxy cluster challenges current cosmological models appeared first on Physics World.

]]>
Research update Observations of the thermal energy in a baby galaxy cluster 12.4 billion light years away suggest much more energetic early cluster growth than current theories assume https://physicsworld.com/wp-content/uploads/2026/01/23-01-26-galaxy-cluster.jpg
Laser fusion: Focused Energy charts a course to commercial viability https://physicsworld.com/a/laser-fusion-focused-energy-charts-a-course-to-commercial-viability/ Thu, 22 Jan 2026 15:01:44 +0000 https://physicsworld.com/?p=126112 Plasma physicist Debbie Callahan is our podcast guest

The post Laser fusion: Focused Energy charts a course to commercial viability appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features a conversation with the plasma physicist Debbie Callahan, who is chief strategy officer at Focused Energy – a California- and Germany-based fusion-energy startup. Prior to that, she spent 35 years working at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in the US.

Focused Energy is developing a commercial system for generating energy from the laser-driven fusion of hydrogen isotopes. Callahan describes LightHouse, which is the company’s design for a laser-fusion power plant, and Pearl, which is the firm’s deuterium–tritium fuel capsule.

Callahan talks about the challenges and rewards of working in the fusion industry and also calls on early-career physicists to consider careers in this burgeoning sector.

The post Laser fusion: Focused Energy charts a course to commercial viability appeared first on Physics World.

]]>
Podcasts Plasma physicist Debbie Callahan is our podcast guest https://physicsworld.com/wp-content/uploads/2026/01/22-1-25-debbie-callahan-list.jpg
Fuel cell catalyst requirements for heavy-duty vehicle applications https://physicsworld.com/a/fuel-cell-catalyst-requirements-for-heavy-duty-vehicle-applications/ Thu, 22 Jan 2026 11:25:19 +0000 https://physicsworld.com/?p=125182 Join the audience for a live webinar at 3 p.m. GMT/10 a.m. EST on 18 February 2026

Discover the realities and requirements of catalyst development for fuel cell applications

The post Fuel cell catalyst requirements for heavy-duty vehicle applications appeared first on Physics World.

]]>

Heavy-duty vehicles (HDVs) powered by hydrogen-based proton-exchange membrane (PEM) fuel cells offer a cleaner alternative to diesel-powered internal combustion engines for decarbonizing long-haul transportation sectors. The development path of sub-components for HDV fuel-cell applications is guided by the total cost of ownership (TCO) analysis of the truck.

TCO analysis suggests that the cost of the hydrogen fuel consumed over the lifetime of the HDV dominates the fuel-cell stack capital expense (CapEx), because trucks typically operate over very high mileages (~a million miles). Commercial HDV applications consume more hydrogen and demand higher durability, meaning that TCO is largely governed by fuel-cell efficiency and catalyst durability.
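
To see why the fuel term dominates, a back-of-envelope comparison helps. The sketch below takes the ~million-mile lifetime from the text; the fuel economy, hydrogen price and stack cost are purely hypothetical placeholders chosen for illustration:

```python
# Back-of-envelope TCO comparison with illustrative numbers only
# (mileage from the text; the other figures are hypothetical).
lifetime_miles = 1_000_000        # "~a million miles"
miles_per_kg_h2 = 8.0             # assumed fuel economy, mi per kg H2
h2_price_per_kg = 5.0             # assumed delivered H2 price, $/kg
stack_capex = 30_000.0            # assumed fuel-cell stack cost, $

lifetime_fuel_cost = lifetime_miles / miles_per_kg_h2 * h2_price_per_kg
print(f"lifetime fuel cost: ${lifetime_fuel_cost:,.0f}")   # $625,000
print(f"stack CapEx:        ${stack_capex:,.0f}")

# A 10% efficiency gain trims the dominant fuel bill by far more than
# a comparable percentage saving on stack CapEx ever could.
print(f"value of +10% efficiency: ${0.1 * lifetime_fuel_cost:,.0f}")
```

With numbers anywhere in this ballpark, efficiency and durability gains that stretch each kilogram of hydrogen are worth far more over the vehicle’s life than equivalent percentage cuts to the stack price.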

This article is written to bridge the gap between the industrial requirements and academic activity for advanced cathode catalysts with an emphasis on durability. From a materials perspective, the underlying nature of the carbon support, Pt-alloy crystal structure, stability of the alloying element, cathode ionomer volume fraction, and catalyst–ionomer interface play a critical role in improving performance and durability.

We provide our perspective on four major approaches currently being pursued – namely mesoporous carbon supports, ordered PtCo intermetallic alloys, thrifting of the ionomer volume fraction, and shell-protection strategies. While each approach has its merits and demerits, we highlight their key developmental needs for the future.

Nagappan Ramaswamy joined the Department of Chemical Engineering at IIT Bombay as a faculty member in January 2025. He earned his PhD in 2011 from Northeastern University in Boston, specializing in fuel-cell electrocatalysis.

He then spent 13 years in industrial R&D – two years at Nissan North America in Michigan, USA, focusing on lithium-ion batteries, followed by 11 years at General Motors, also in Michigan, focusing on low-temperature fuel cells and electrolyser technologies. While at GM, he led two multi-million-dollar research projects, funded by the US Department of Energy, on the development of proton-exchange membrane fuel cells for automotive applications.

At IIT Bombay, his primary research interests include low-temperature electrochemical energy-conversion and storage devices such as fuel cells, electrolysers and redox-flow batteries involving materials development, stack design and diagnostics.

The post Fuel cell catalyst requirements for heavy-duty vehicle applications appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar at 3 p.m. GMT/10 a.m. EST on 18 February 2026 Discover the realities and requirements of catalyst development for fuel cell applications https://physicsworld.com/wp-content/uploads/2025/11/2026-02-ecs-wb-feature-image.jpg
Ask me anything: Mažena Mackoit-Sinkevičienė – ‘Above all, curiosity drives everything’ https://physicsworld.com/a/ask-me-anything-mazena-mackoit-sinkeviciene-above-all-curiosity-drives-everything/ Thu, 22 Jan 2026 11:00:23 +0000 https://physicsworld.com/?p=126019 Mažena Mackoit-Sinkevičienė works in quantum optics and technology and is vice-president of the Lithuanian Physical Society

The post Ask me anything: Mažena Mackoit-Sinkevičienė – ‘Above all, curiosity drives everything’ appeared first on Physics World.

]]>
What skills do you use every day in your job?

Much of my time is spent trying to build and refine models in quantum optics, usually with just a pencil, paper and a computer. This requires an ability to sit with difficult concepts for a long time, sometimes far longer than is comfortable, until they finally reveal their structure.

Good communication is equally essential – I teach students; collaborate with colleagues from different subfields; and translate complex ideas into accessible language for the broader public. Modern physics connects with many different fields, so being flexible and open-minded matters as much as knowing the technical details. Above all, curiosity drives everything. When I don’t understand something, that uncertainty becomes my strongest motivation to keep going.

What do you like best and least about your job?

What I like the best is the sense of discovery – the moment when a problem that has evaded understanding for weeks suddenly becomes clear. Those flashes of insight feel like hearing the quiet whisper of nature itself. They are rare, but they bring along a joy that is hard to find elsewhere.

I also value the opportunity to guide the next generation of physicists, whether in the university classroom or through public science communication. Teaching brings a different kind of fulfilment: witnessing students develop confidence, curiosity and a genuine love for physics.

What I like the least is the inherent uncertainty of research. Questions do not promise favourable answers, and progress is rarely linear. Fortunately, I have come to see this lack of balance not as a weakness but as a source of power that forces growth, new perspectives, and ultimately deeper understanding.

What do you know today that you wish you knew when you were starting out in your career?

I wish I had known that feeling lost is not a sign of inadequacy but a natural part of doing physics at a high level. Not understanding something can be the greatest motivator, provided one is willing to invest time and effort. Passion and curiosity matter far more than innate brilliance. If I had realized earlier that steady dedication can carry you farther than talent alone, I would have embraced uncertainty with much more confidence.

The post Ask me anything: Mažena Mackoit-Sinkevičienė – ‘Above all, curiosity drives everything’ appeared first on Physics World.

]]>
Interview Mažena Mackoit-Sinkevičienė works in quantum optics and technology and is vice-president of the Lithuanian Physical Society https://physicsworld.com/wp-content/uploads/2026/01/2026-01-ama-mazena-mackoit-sinkeviciene.jpg newsletter
Modelling wavefunction collapse as a continuous flow yields insights on the nature of measurement https://physicsworld.com/a/modelling-wavefunction-collapse-as-a-continuous-flow-yields-insights-on-the-nature-of-measurement/ Thu, 22 Jan 2026 09:30:19 +0000 https://physicsworld.com/?p=126064 Quantum state diffusion framework makes it possible to characterize quantum measurement in terms of entropy production

The post Modelling wavefunction collapse as a continuous flow yields insights on the nature of measurement appeared first on Physics World.

]]>
“God does not play dice.”

With this famous remark at the 1927 Solvay Conference, Albert Einstein set the tone for one of physics’ most enduring debates. At the heart of his dispute with Niels Bohr lay a question that continues to shape the foundations of physics: does the apparently probabilistic nature of quantum mechanics reflect something fundamental, or is it simply due to lack of information about some “hidden variables” of the system that we cannot access?

Physicists at University College London (UCL) in the UK have now addressed this question via the concept of quantum state diffusion (QSD). In QSD, the wavefunction does not collapse abruptly. Instead, wavefunction collapse is modelled as a continuous interaction with the environment that causes the system to evolve gradually toward a definite state, restoring some degree of intuition to the counterintuitive quantum world.

A quantum coin toss

To appreciate the distinction (and the advantages it might bring), imagine tossing a coin. While the coin is spinning in midair, it is neither fully heads nor fully tails – its state represents a blend of both possibilities. This mirrors a quantum system in superposition.

When the coin eventually lands, the uncertainty disappears and we obtain a definite outcome. In quantum terms, this corresponds to wavefunction collapse: the superposition resolves into a single state upon measurement.

In the standard interpretation of quantum mechanics, wavefunction collapse is considered instantaneous. However, this abrupt transition is challenging from a thermodynamic perspective because uncertainty is closely tied to entropy. Before measurement, a system in superposition carries maximal uncertainty, and thus maximum entropy. After collapse, the outcome is definite and our uncertainty about the system is reduced, thereby reducing the entropy.
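
One way to make this bookkeeping concrete (a minimal single-qubit illustration, not drawn from the paper): a qubit prepared in the equal superposition $(|0\rangle + |1\rangle)/\sqrt{2}$ and measured in the computational basis yields outcomes with probabilities $p_0 = p_1 = 1/2$, so the entropy associated with the outcome is

\[
H = -\sum_i p_i \log_2 p_i = 1\ \text{bit},
\]

which falls to zero the moment the result becomes definite.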

This apparent reduction in entropy immediately raises a deeper question. If the system suddenly becomes more ordered at the moment of measurement, where does the “missing” entropy go?

From instant jumps to continuous flows

Returning to the coin analogy, imagine that instead of landing cleanly and instantly revealing heads or tails, the coin wobbles, leans, slows and gradually settles onto one face. The outcome is the same, but the transition is continuous rather than abrupt.

This gradual settling captures the essence of QSD. Instead of an instantaneous “collapse”, the quantum state unfolds continuously over time. This makes it possible to track various parameters of thermodynamic change, including a quantity called environmental stochastic entropy production that measures how irreversible the process is.
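
For the mathematically inclined, the standard Gisin–Percival form of QSD – quoted here as general background; the detailed equations used by the UCL team may differ – evolves the normalized state $|\psi\rangle$ according to the Itô stochastic equation

\[
|\mathrm{d}\psi\rangle = -\frac{i}{\hbar} H |\psi\rangle\, \mathrm{d}t
+ \sum_j \Big( \langle L_j^{\dagger} \rangle L_j - \tfrac{1}{2} L_j^{\dagger} L_j - \tfrac{1}{2} \langle L_j^{\dagger} \rangle \langle L_j \rangle \Big) |\psi\rangle\, \mathrm{d}t
+ \sum_j \big( L_j - \langle L_j \rangle \big) |\psi\rangle\, \mathrm{d}\xi_j ,
\]

where $H$ is the system Hamiltonian, the $L_j$ are operators describing the environmental coupling, $\langle\cdot\rangle$ denotes expectation values in $|\psi\rangle$, and the $\mathrm{d}\xi_j$ are independent complex Wiener increments with zero mean and $\mathrm{M}(\mathrm{d}\xi_j\, \mathrm{d}\xi_k^{*}) = \delta_{jk}\,\mathrm{d}t$. The first term is the familiar deterministic Schrödinger evolution; the stochastic terms drive the gradual, noisy settling towards a definite state.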

Another benefit is that whereas standard projective measurements describe an abrupt “yes/no” outcome, QSD models a broader class of generalized or “weak” measurements, revealing the subtle ways quantum systems evolve. It also allows physicists to follow individual trajectories rather than just average outcomes, uncovering details that the standard framework smooths over.

“The QSD framework helps us understand how unpredictable environmental influences affect quantum systems,” explains Sophia Walls, a PhD student at UCL and the first author of a paper in Physical Review A on the research. Environmental noise, Walls adds, is particularly important for quantum technologies, making the study’s insights valuable for quantum error correction, control protocols and feedback mechanisms.

Bridging determinism and probability

At first glance, QSD might seem to resemble decoherence, which also arises from system–environment interactions such as noise. But the two differ in scope. “Decoherence explains how a system becomes a classical mixed state,” Walls clarifies, “but not how it ultimately purifies into a single eigenstate.” QSD, with its stochastic term, describes this final purification – the point where the coin’s faint shimmer sharpens into heads or tails.

In this view, measurement is not a single act but a continuous, entropy-producing flow of information between system and environment – a process that gradually results in manifestation of one of the possible quantum states, rather than an abrupt “collapse”.

“Standard quantum mechanics separates two kinds of dynamics – the deterministic Schrödinger evolution and the probabilistic, instantaneous collapse,” Walls notes. “QSD connects both in a single dynamical equation, offering a more unified description of measurement.”

This continuous evolution makes otherwise intractable quantities, such as entropy production, measurable and meaningful. It also breathes life into the wavefunction itself. By simulating individual realizations, QSD distinguishes between two seemingly identical mixed states: one genuinely entangled with its environment, and another that simply represents our ignorance. Only in the first case does the system dynamically evolve – a distinction invisible in the orthodox picture.

A window on quantum gravity?

Could this diffusion-based framework also illuminate other fundamental questions beyond the nature of measurement? Walls thinks it’s possible. Recent work suggests that stochastic processes could provide experimental clues about how gravity behaves at the quantum scale. QSD may one day offer a way to formalize or test such ideas. “If the nature of quantum gravity can be studied through a diffusive or stochastic process, then QSD would be a relevant framework to explore it,” Walls says.

The post Modelling wavefunction collapse as a continuous flow yields insights on the nature of measurement appeared first on Physics World.

]]>
Research update Quantum state diffusion framework makes it possible to characterize quantum measurement in terms of entropy production https://physicsworld.com/wp-content/uploads/2026/01/22-01-2026-spinning-quantum-coin.png newsletter
NPL unveils miniature atomic fountain clock   https://physicsworld.com/a/npl-unveils-miniature-atomic-fountain-clock/ Wed, 21 Jan 2026 17:23:16 +0000 https://physicsworld.com/?p=126087 Precision timekeeper is just 5% the size of a conventional clock

The post NPL unveils miniature atomic fountain clock   appeared first on Physics World.

]]>
A miniature version of an atomic fountain clock has been unveiled by researchers at the UK’s National Physical Laboratory (NPL). Their timekeeper occupies just 5% of the volume of a conventional atomic fountain clock while delivering a time signal with a stability that is on par with a full-sized system. The team is now honing its design to create compact fountain clocks that could be used in portable systems and remote locations.

The ticking of an atomic clock is defined by the frequency of the electromagnetic radiation that is absorbed and emitted by a specific transition between atomic energy levels. Today, the second is defined using a transition in caesium atoms that involves microwave radiation. Caesium atoms are placed in a microwave cavity and a measurement-and-feedback mechanism is used to tune the frequency of the cavity radiation to the atomic transition – creating a source of microwaves with a very narrow frequency range centred at the clock frequency.

The first atomic clocks sent a fast-moving beam of atoms through a microwave cavity. The precision of such a beam clock is limited by the relatively short time that individual atoms spend in the cavity. Also, the speed of the atoms means that the measured frequency peak is shifted and broadened by the Doppler effect.

Launching atoms

These problems were addressed by the development of the fountain clock, in which the atoms are cooled (slowed down) by laser light, which also launches them upwards. The atoms pass through a microwave cavity on the way up, and again as they fall back down. Because they travel much more slowly than in a beam clock, they spend far more time in the cavity, so the time signal from a fountain clock is much more precise than that from a beam clock. However, longer measurement times allow greater thermal spreading of the atomic cloud – which degrades clock performance. Trading off measurement time against thermal spread means that the caesium fountain clocks that currently define the second have drops of about 30 cm.
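
The link between drop height and measurement time is simple ballistics: an atom launched upwards through the cavity returns to it after a flight time set by how high it climbs. A minimal sketch (textbook kinematics, not NPL’s design figures):

```python
# Ramsey time vs fountain "drop": an atom that climbs a height h above
# the microwave cavity passes it again after T = 2*sqrt(2h/g).
from math import sqrt

g = 9.81  # gravitational acceleration, m/s^2

def ramsey_time(h):
    """Time between upward and downward cavity passes for apogee h (m)."""
    return 2 * sqrt(2 * h / g)

for h in (0.05, 0.30, 1.00):
    print(f"h = {h:4.2f} m  ->  T = {ramsey_time(h):.2f} s")
# A 0.30 m drop gives T ~ 0.49 s -- roughly a hundred times longer than
# the millisecond-scale transit of atoms through a thermal beam clock.
```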

Other components are also needed to operate fountain clocks – including a vacuum system and laser and microwave instrumentation. This pushes the height of a typical clock to about 2 m, and makes it a complex and expensive instrument that cannot be easily transported.

Now, Sam Walby and colleagues at NPL have shrunk the overall height of a rubidium-based fountain clock to 80 cm, while retaining the 30 cm drop. The result is an instrument that is 5% the volume of one of NPL’s conventional caesium atomic fountain clocks.

Precise yet portable

“That’s taking it from barely being able to fit though a doorway, to something one could pick up and carry with one arm,” says Walby.

Despite the miniaturization, the mini-fountain achieved a stability of one part in 10¹⁵ after several days of operation – which NPL says is comparable to full-sized clocks.

Walby told Physics World that the NPL team achieved miniaturization by eliminating two conventional components from their clock design. One is a dedicated chamber used to measure the quantum states of the atoms: instead, this measurement is made within the clock’s cooling chamber. Also eliminated is a dedicated state-selection microwave cavity, which puts the atoms into the quantum state from which the clock transition occurs.

“The mini-fountain also does this [state] selection,” explains Walby, “but instead of using a dedicated cavity, we use a coax-to-waveguide adapter that is directed into the cooling chamber, which creates a travelling wave of microwaves at the correct frequency.”

The NPL team also reduced the amount of magnetic shielding used, which meant that the edge effects of the magnetic field had to be considered more carefully. The optics system of the clock was greatly simplified, and the use of commercial components means that the clock is low-maintenance and easy to operate – according to NPL.

Radical simplification

“By radically simplifying and shrinking the atomic fountain, we’re making ultra-precise timing technology available beyond national labs,” said Walby. “This opens new possibilities for resilient infrastructure and next-generation navigation.”

According to Walby, one potential use of a miniature atomic fountain clock is as a holdover clock. These are devices that produce a very stable time signal when not synchronized with other atomic clocks. This is important for creating resilience in infrastructure that relies on precision timing – such as communications networks, global navigation satellite systems (including GPS) and power grids. Synchronization is usually done using GNSS signals but these can be jammed or spoofed to disrupt timing systems.

Holdover clocks require time errors of just a few nanoseconds over a month, which the new NPL clock can deliver. The miniature atomic clock could also be used as a secondary frequency standard for the SI second.
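
Those two figures – nanoseconds per month and parts in 10¹⁵ – are two ways of expressing the same requirement. A constant fractional frequency offset $y$ accumulates a time error $\Delta t = y\tau$ over an interval $\tau$; taking “a few nanoseconds” to be 3 ns for illustration, and a month to be roughly $2.6\times10^{6}$ seconds,

\[
y = \frac{\Delta t}{\tau} \approx \frac{3\times10^{-9}\ \mathrm{s}}{2.6\times10^{6}\ \mathrm{s}} \approx 1.2\times10^{-15},
\]

consistent with the stability of one part in 10¹⁵ quoted above.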

The small size of the clock also lends itself to portable and even mobile applications, according to Walby: “The adaptation of the mini-fountain technology to mobile platforms will be subject of further developments”.

However, the mini-clock is large when compared to more compact or chip-based clocks – which do not perform as well. Therefore, he believes that the technology is more likely to be implemented on ships or ground vehicles than aircraft.

“At a minimum, it should be easily transportable compared to the current solutions of similar performance,” he says.

“Highly innovative”

Atomic-clock expert Elizabeth Donley tells Physics World, “NPL has been highly innovative in recent years in standardizing fountain clock designs and even supplying caesium fountains to other national standards labs and organizations around the world for timekeeping purposes. This new compact rubidium fountain is a continuation of this work and can provide a smaller frequency standard with comparable performance to the larger fountains based on caesium.”

Donley spent more than two decades developing atomic clocks at the US National Institute of Standards and Technology (NIST) and now works as a consultant in the field. She agrees that miniature fountain clocks would be useful for holding-over timing information when time signals are interrupted.

She adds, “Once the international community decides to redefine the second to be based on an optical transition, it won’t matter if you use rubidium or caesium. So I see this work as more of a practical achievement than a ground-breaking one. Practical achievements are what drives progress most of the time.”

The new clock is described in Applied Physics Letters.

The post NPL unveils miniature atomic fountain clock   appeared first on Physics World.

]]>
Research update Precision timekeeper is just 5% the size of a conventional clock https://physicsworld.com/wp-content/uploads/2026/01/21-1-26-miniature-atomic-fountain-clock.jpg newsletter
Shining laser light on a material produces subtle changes in its magnetic properties https://physicsworld.com/a/shining-laser-light-on-a-material-produces-subtle-changes-in-its-magnetic-properties/ Wed, 21 Jan 2026 14:00:49 +0000 https://physicsworld.com/?p=126080 New use for photolithography could have applications for data storage

The post Shining laser light on a material produces subtle changes in its magnetic properties appeared first on Physics World.

]]>
Researchers in Switzerland have found an unexpected new use for an optical technique commonly used in silicon chip manufacturing. By shining a focused laser beam onto a sample of material, a team at the Paul Scherrer Institute (PSI) and ETH Zürich showed that it was possible to change the material’s magnetic properties on a scale of nanometres – essentially “writing” these magnetic properties into the sample in the same way as photolithography etches patterns onto wafers. The discovery could have applications for novel forms of computer memory as well as fundamental research.

In standard photolithography – the workhorse of the modern chip manufacturing industry – a light beam passes through a transmission mask and projects an image of the mask’s light-absorption pattern onto a (usually silicon) wafer. The wafer itself is covered with a photosensitive polymer called a resist. Changing the intensity of the light leads to different exposure levels in the resist-covered material, making it possible to create finely detailed structures.

In the new work, Laura Heyderman and colleagues in PSI-ETH Zürich’s joint Mesoscopic Systems group began by placing a thin film of a magnetic material in a standard photolithography machine, but without a photoresist. They then scanned a focused laser beam with a wavelength of 405 nm over the surface of the sample, modulating its intensity as they went. This process is known as direct-write laser annealing (DWLA), and it makes it possible to heat areas of the sample that measure just 150 nm across.

In each heated area, thermal energy from the laser is deposited at the surface and partially absorbed by the film down to a depth of around 100 nm. The remainder dissipates through a silicon substrate coated with a 300-nm-thick layer of silicon oxide. However, the thermal conductivity of this substrate is low, which maximizes the temperature increase in the film for a given laser fluence. The researchers also sought to keep the temperature increase as uniform as possible by using thin-film heterostructures with a total thickness of less than 20 nm.

Crystallization and interdiffusion effects

Members of the PSI-ETH Zürich team applied this technique to several technologically important magnetic thin-film systems, including ferromagnetic CoFeB/MgO, ferrimagnetic CoGd and synthetic antiferromagnets composed of Co/Cr, Co/Ta or CoFeB/Pt/Ru. They found that DWLA induces both crystallization and interdiffusion effects in these materials. During crystallization, the orientation of the sample’s magnetic moments gradually changes, while interdiffusion alters the magnetic exchange coupling between the layers of the structures.

The researchers say that both phenomena could have interesting applications. The magnetized regions in the structures could be used in data storage, for example, with the direction of the magnetization (“up” or “down”) corresponding to the “1” or “0” of a bit of data. In conventional data-storage systems, these bits are switched with a magnetic field, but team member Jeffrey Brock explains that the new technique allows electric currents to be used instead. This is advantageous because electric currents are easier to produce than magnetic fields, while data storage devices switched with electricity are both faster and capable of packing more data into a given space.

Team member Lauren Riddiford says the new work builds on previous studies by members of the same group, which showed it was possible to make devices suitable for computer memory by locally patterning magnetic properties. “The trick we used here was to locally oxidize the topmost layer in a magnetic multilayer,” she explains. “However, we found that this works only in a few systems and only produces abrupt changes in the material properties. We were therefore brainstorming possible alternative methods to create gradual, smooth gradients in material properties, which would open possibilities to even more exciting applications and realized that we could perform local annealing with a laser originally made for patterning polymer resist layers for photolithography.”

Riddiford adds that the method proved so fast and simple to implement that the team’s main challenge was to investigate all the material changes it produced. Physical characterization methods for ultrathin films can be slow and difficult, she tells Physics World.

The researchers, who describe their technique in Nature Communications, now hope to use it to develop structures that are compatible with current chip-manufacturing technology. “Beyond magnetism, our approach can be used to locally modify the properties of any material that undergoes changes when heated, so we hope researchers using thin films for many different devices – electronic, superconducting, optical, microfluidic and so on – could use this technique to design desired functionalities,” Riddiford says. “We are looking forward to seeing where this method will be implemented next, whether in magnetic or non-magnetic materials, and what kind of applications it might bring.”

The post Shining laser light on a material produces subtle changes in its magnetic properties appeared first on Physics World.

]]>
Research update New use for photolithography could have applications for data storage https://physicsworld.com/wp-content/uploads/2026/01/magnetic-landscapes.jpg newsletter
The obscure physics theory that helped Chinese science emerge from the shadows https://physicsworld.com/a/the-obscure-physics-theory-that-helped-chinese-science-emerge-from-the-shadows/ Wed, 21 Jan 2026 11:00:52 +0000 https://physicsworld.com/?p=125852 Robert P Crease reveals the curious twist in the development of Chinese physics in the 1960s

The post The obscure physics theory that helped Chinese science emerge from the shadows appeared first on Physics World.

]]>
“The Straton Model of elementary particles had very limited influence in the West,” said Jinyan Liu as she sat with me in a quiet corner of the CERN cafeteria. Liu, who I caught up with during a break in a recent conference on the history of particle physics, was referring to a particular model of elementary particle physics first put together in China in the mid-1960s. The Straton Model was, and still largely is, unknown outside that country. “But it was an essential step forward,” Liu added, “for Chinese physicists in joining the international community.”

Liu was at CERN to give a talk on how Chinese theorists redirected their research efforts in the years after the Cultural Revolution, which ended in 1976. They switched from the Straton Model, which was a politically infused theory of matter favoured by Mao Zedong, the founder of the People’s Republic of China, to mainstream particle physics as practised by the rest of the world. It’s easy to portray the move as the long-overdue moment when Chinese scientists resumed their “real” physics research. But, Liu told me, “actually it was much more complicated”.

A physicist by training, Liu received her PhD on contemporary theories of spontaneous charge-parity (CP) violation from the Institute of Theoretical Physics at the Chinese Academy of Sciences (CAS) in 2013. She then switched to the CAS Institute for the History of Natural Sciences, becoming its first member with a physics PhD. Her initial research topic was the history and development of the Straton Model.

The model is essentially a theory of the structure of hadrons – either baryons (such as protons and neutrons) or mesons (such as pions and kaons). But the model’s origins are as improbable as they are labyrinthine. Mao, who had a keen interest in natural science, was convinced that matter was infinitely divisible, and in 1963 he came across an article by the Marxist-inspired Japanese physicist Shoichi Sakata (1911–1970).

First published in Japanese in 1961 and later translated into Russian, Sakata’s paper was entitled “Dialogues concerning a new view of elementary particles”. It restated Sakata’s belief, which he had been working on since the 1950s, that hadrons are made of smaller constituents – “elementary particles are not the ultimate elements of matter” as he put it. With some Chinese scholars back then still paying close attention to publications from the Soviet Union, their former political and ideological ally, that paper was then translated into Chinese.

Mao Zedong was engrossed in Shoichi Sakata’s paper, for it seemed to offer scientific support for his own views.

This version appeared in the Bulletin of the Studies of Dialectics of Nature in 1963. Mao, who received an issue of that bulletin from his son-in-law, was engrossed in Sakata’s paper, for it seemed to offer scientific support for his own views. Sakata’s article – both in the original Japanese and now in Chinese – cited Friedrich Engels’ view that matter has numerous stages of discrete but qualitatively different parts. In addition, it quoted Lenin’s remark that “even the electron is inexhaustible”.

A wider dimension

“International politics now also entered,” Liu told me, as we discussed the issue further at CERN. A split between China and the Soviet Union had begun to open up in the late 1950s, with Mao breaking off relations with the Soviet Union and starting to establish non-governmental science and technology exchanges between China and Japan. Indeed, when China hosted the Peking Symposium of foreign scientists in 1964, Japan brought the biggest delegation, with Sakata as its leader.

At the event, Mao personally congratulated Sakata on his theory. It was, Sakata later recalled, “the most unforgettable moment of my journey to China”. In 1965, Sakata’s paper was retranslated from the Japanese original, with an annotated version published in Red Flag and the newspaper Renmin ribao, or “People’s Daily”, both official organs of the Chinese Communist Party.

Chinese physicists realized that they could capitalize on Mao’s enthusiasm to make elementary particle physics a legitimate research direction.

Chinese physicists, who had been assigned to work on the atomic bomb and other research deemed important by the Communist Party, now started to take note. Uninterested in philosophy, they realized that they could capitalize on Mao’s enthusiasm to make elementary particle physics a legitimate research direction.

As a result, 39 members of CAS, Peking University and the University of Science and Technology of China formed the Beijing Elementary Particle Group. Between 1965 and 1966, they wrote dozens of papers on a model of hadrons inspired by both Sakata’s work and quark theory based on the available experimental data. It was dubbed the Straton Model because it involved layers or “strata” of particles nested in each other.

Liu has interviewed most surviving members of the group and studied details of the model. It differed from the model being developed at the time by the US theorist Murray Gell-Mann, which saw quarks as not physical but mathematical elements. As Liu discovered, Chinese particle physicists were now given resources they’d never had before. In particular, they could use computers, which until then had been devoted to urgent national defence work. “To be honest,” Liu chuckled, “the elementary particle physicists didn’t use computers much, but at least they were made available.”

The high-water mark for the Straton Model occurred in July 1966 when members of the Beijing Elementary Particle Group presented it at a summer physics colloquium organized by the China Association for Science and Technology. The opening ceremony was held in Tiananmen Square, in what was then China’s biggest conference centre, with attendees including Abdus Salam from Imperial College London. The only high-profile figure to be invited from the West, Salam was deemed acceptable because he was science advisor to the president of Pakistan, a country considered outside the western orbit.

The proceedings of the colloquium were later published as “Research on the theory of elementary particles carried out under the brilliant illumination of Mao Tse-Tung’s thought”. Its introduction was what Liu calls a “militant document” – designed to reinforce the idea that the authors were carrying Mao’s thought into scientific research to repudiate “decadent feudal, bourgeois and revisionist ideologies”.

Participants in Beijing had expected to make their advances known internationally by publishing the proceedings in English. But the Cultural Revolution had begun just two months before, and publications in English were forbidden. “As a result,” Liu told me, “the model had very limited influence outside China.” Sakata, however, had an important influence on Japanese theorists, having co-authored the key paper on neutrino flavour oscillation (Prog. Theor. Phys. 28 870).

A resurfaced effort

In recent years Liu has shed new light on the Straton Model, writing a paper in the journal Chinese Annals of History of Science and Technology (2 85). In 2022 she also published a Chinese-language book entitled Constructing a Theory of Hadron Structure: Chinese Physicists’ Straton Model, which describes the downfall of the model after 1966. None of its predicted particles appeared, though a candidate event was once recorded at a cosmic-ray observatory in the south of China.

By 1976, quantum chromodynamics (QCD) had convincingly emerged as the established model of hadrons. The effective end of the Straton Model took place at a conference in January 1980 in Conghua, near Hong Kong. Hung-Yuan Tzu, one of the key leaders of the Beijing Group, gave a paper entitled “Reminiscences of the Straton Model”, signalling that physics had moved on.

During our meeting at CERN, Liu showed me photos of the 1980 event. “It was a very important conference in the history of Chinese physics,” she said, “the first opening to Chinese physicists in the West”. Visits by Chinese expatriates were organized by Tsung-Dao Lee and Chen-Ning Yang, who shared the 1957 Nobel Prize for Physics for their work on parity violation.

The critical point

It is easy for westerners to mock the Straton Model; Sheldon Glashow once quipped that it was about “Maons”. But Liu sees it as significant research that had many unexpected consequences, such as helping to advance physics research in China. “It gave physicists a way to pursue quantum field theory without having to do national defence work.”

The model also trained young researchers in particle physics and honed their research competence. After the post-Cultural Revolution reform and its opening to the West, these physicists could then integrate into the international community. “The story,” Liu said, “shows how ingeniously the Chinese physicists adapted to the political situation.”

The post The obscure physics theory that helped Chinese science emerge from the shadows appeared first on Physics World.

]]>
Opinion and reviews Robert P Crease reveals the curious twist in the development of Chinese physics in the 1960s https://physicsworld.com/wp-content/uploads/2026/01/mao-straton-pic-lighter.jpg newsletter
A surprising critical state emerges in active nematic materials https://physicsworld.com/a/a-surprising-critical-state-emerges-in-active-nematic-materials/ Wed, 21 Jan 2026 07:47:29 +0000 https://physicsworld.com/?p=125804 A transition in active nematics produces slow, strongly interacting defects, a behaviour confirmed in living cells

The post A surprising critical state emerges in active nematic materials appeared first on Physics World.

]]>
Nematics are materials made of rod-like particles that tend to align in the same direction. In active nematics, this alignment is constantly disrupted and renewed because the particles are driven by internal biological or chemical energy. As the orientation field twists and reorganizes, it creates topological defects – points where the alignment breaks down. These defects are central to the collective behaviour of active matter, shaping flows, patterns and self-organization.

In this work, the researchers identify an active topological phase transition that separates two distinct regimes of defect organization. As the system approaches this transition from below, the dynamics slow dramatically: the relaxation of defect density becomes sluggish, fluctuations in the number of defects grow in amplitude and lifetime, and the system becomes increasingly sensitive to small changes in activity. At the critical point, defects begin to interact over long distances, with correlation lengths that grow with system size. This behaviour produces a striking dual-scaling pattern: defect fluctuations appear uniform at small scales but become anti-hyperuniform at larger scales, meaning that the number of defects varies far more than expected for a random distribution.
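
Hyperuniformity has a standard quantitative definition that makes this dual scaling precise (general background, not specific to the paper): counting the number $N(R)$ of defects inside an observation window of radius $R$ in $d$ dimensions, the variance scales as

\[
\sigma_N^{2}(R) \equiv \langle N(R)^{2}\rangle - \langle N(R)\rangle^{2} \propto R^{\alpha},
\]

with $\alpha = d$ for an uncorrelated (Poisson) pattern, $\alpha < d$ for a hyperuniform one and $\alpha > d$ for an anti-hyperuniform one. The critical defect states reported here fall into the last class at large $R$: their number fluctuations grow faster than those of a random point pattern.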

A key finding is that this anti-hyperuniformity originates from defect clustering. Rather than forming ordered structures or undergoing phase separation, defects tend to appear near existing defects, creating multiscale clusters. This distinguishes the transition from well-known defect-unbinding processes such as the Berezinskii–Kosterlitz–Thouless transition in passive nematics or the nematic–isotropic transition in screened active systems. Above the critical activity, the system enters a defect-laden turbulent state where defects are more uniformly distributed and correlations become short-ranged and negative.

The researchers confirm these behaviours experimentally using large-field-of-view measurements of endothelial cell monolayers – the cells that line blood vessels. The same dual-scaling behaviour, long-range correlations and clustering appear in these living tissues, demonstrating that the transition is robust across system sizes, parameter variations, frictional damping and boundary conditions.

Read the full article

Anti-hyperuniform critical states of active topological defects

Simon Guldager Andersen et al 2025 Rep. Prog. Phys. 88 108101

Do you want to learn more about this topic?

Active phase separation: new phenomenology from non-equilibrium physics M E Cates and C Nardini (2025)

The post A surprising critical state emerges in active nematic materials appeared first on Physics World.

]]>
Research highlight A transition in active nematics produces slow, strongly interacting defects, a behaviour confirmed in living cells https://physicsworld.com/wp-content/uploads/2026/01/2026-january-emergenceofantihyperuniformdefectorganization-doostmohammadi.jpg
Non-Abelian anyons: anything but easy https://physicsworld.com/a/non-abelian-anyons-anything-but-easy/ Wed, 21 Jan 2026 07:42:34 +0000 https://physicsworld.com/?p=126067 A team of researchers from the USA have observed spontaneously broken rotational symmetry in fractional quantum Hall states

The post Non-Abelian anyons: anything but easy appeared first on Physics World.

]]>
Topological quantum computing is a proposed approach to building quantum computers that aims to solve one of the biggest challenges in quantum technology: error correction.

In conventional quantum systems, qubits are extremely sensitive to their environment and even tiny disturbances can cause errors. Topological quantum computing addresses this by encoding information in the global properties of a system: the topology of certain quantum states.

These systems rely on the use of non-Abelian anyons, exotic quasiparticles that can exist in two-dimensional materials under special conditions.

The main challenge faced by this approach to quantum computing is the creation and control of these quasiparticles.

One possible source of non-Abelian anyons is the fractional quantum Hall (FQH) state: an exotic state of matter that can exist at very low temperatures and in high magnetic fields.

These states come in two forms: even-denominator and odd-denominator. Here, we’re interested in the even-denominator states – the more interesting but less well understood of the two.
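
For context (standard background, not specific to this paper): FQH states are labelled by the Landau-level filling factor

\[
\nu = \frac{n h}{e B},
\]

where $n$ is the carrier density, $B$ the magnetic field, $h$ Planck’s constant and $e$ the electron charge. Odd-denominator states such as $\nu = 1/3$ are well described by Laughlin-type wavefunctions hosting Abelian anyons, whereas even-denominator states such as $\nu = 5/2$ are the leading candidates for non-Abelian (Moore–Read) physics – hence the interest here.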

In this latest work, researchers have observed this exotic state in gallium arsenide (GaAs) two-dimensional hole systems.

Typically, FQH states are isotropic, showing no preferred direction. Here, however, the team found states that are strongly anisotropic, suggesting that the system spontaneously breaks rotational symmetry.

This means that it forms a nematic phase – similar to liquid crystals – where molecules align along a direction without forming a rigid structure.

This spontaneous symmetry breaking adds complexity to the state and can influence how quasiparticles behave, interact, and move.

This observation of spontaneous nematicity in an even-denominator fractional quantum Hall state is the first of its kind.

Although there are many questions left to be answered, the properties of this system could be hugely important for topological quantum computers as well as other novel quantum technologies.

Read the full article

Even-denominator fractional quantum Hall states with spontaneously broken rotational symmetry

C. Wang et al 2025 Rep. Prog. Phys. 88 100501

The post Non-Abelian anyons: anything but easy appeared first on Physics World.

]]>
Research highlight A team of researchers from the USA have observed spontaneously broken rotational symmetry in fractional quantum Hall states https://physicsworld.com/wp-content/uploads/2026/01/20262001wang.jpg
Physicist Norbert Holtkamp takes over as head of Fermilab https://physicsworld.com/a/physicist-norbert-holtkamp-takes-over-as-head-of-fermilab/ Tue, 20 Jan 2026 17:49:05 +0000 https://physicsworld.com/?p=126071 Holtkamp will oversee the completion of the $5bn Long-Baseline Neutrino Facility-Deep Underground Neutrino Experiment

The post Physicist Norbert Holtkamp takes over as head of Fermilab appeared first on Physics World.

]]>
Norbert Holtkamp

Particle physicist Norbert Holtkamp has been appointed the new director of Fermi National Accelerator Laboratory. He took up the position on 12 January, replacing Young-Kee Kim from the University of Chicago, who held the job on an interim basis following the resignation of Lia Merminga last year.

With a PhD in physics from the Technical University in Darmstadt, Germany, Holtkamp has managed large scientific projects throughout his career.

Holtkamp is the former deputy director of the SLAC National Accelerator Laboratory at Stanford University, where he managed the construction of the Linac Coherent Light Source upgrade – the world’s most powerful X-ray laser – along with more than $2bn of onsite construction projects.

Holtkamp also previously served as the principal deputy director general for the international fusion project ITER, which is currently under construction in Cadarache, France.

Holtkamp worked at Fermilab between 1998 and 2001, where he helped commission the Main Injector and led a study on the feasibility of an intense neutrino source based on a muon storage ring.

One of Holtkamp’s main aims as Fermilab boss will be to oversee the completion of the $5bn Long-Baseline Neutrino Facility-Deep Underground Neutrino Experiment (LBNF-DUNE) at Fermilab, which is expected to come online towards the end of the decade.

LBNF-DUNE will study the properties of neutrinos in unprecedented detail, as well as the differences in behaviour between neutrinos and antineutrinos. The DUNE detector, which lies about 1300 km from Fermilab, will measure the neutrinos that are generated by Fermilab’s accelerator complex, which is just outside Chicago.

In a statement, Holtkamp said he is “deeply honoured” to lead the lab. “Fermilab has done so much to advance our collective understanding of the fundamentals of our universe,” he says. “I am committed to ensuring the laboratory remains the neutrino capital of the world, and the safe and successful completion of LBNF-DUNE is key to that goal. I’m excited to rejoin Fermilab at this pivotal moment to guide this project and our other important modernization efforts to prepare the lab for a bright future.”

Managerial experience

Fermilab has experienced a difficult few years, with questions raised about its internal management and external oversight. In August 2024 a group of anonymous self-styled whistleblowers published a 113-page “white paper” on the arXiv preprint server, asserting that the lab was “doomed without a management overhaul”.

Then in October that year, a new organization – Fermi Forward Discovery Group – was announced to manage the lab for the US Department of Energy. That move came under scrutiny given that the group is dominated by the University of Chicago and the Universities Research Association (URA), a consortium of research universities, which had already been part of the management since 2007. A month later, almost 2.5% of Fermilab’s employees were laid off.

“We’re excited to welcome Norbert, who brings a wealth of scientific and managerial experience to Fermilab,” noted University of Chicago president Paul Alivisatos, who is also chair of the board of directors of the Fermi Forward Discovery Group.

Alivisatos thanked Kim for her “tireless service” as director. “[Kim] played a critical role in strengthening relationships with Fermilab’s leading stakeholders, driving the lab’s modernization efforts, and positioning Fermilab to amplify DOE’s broader goals in areas like quantum science and AI,” added Alivisatos.

The post Physicist Norbert Holtkamp takes over as head of Fermilab appeared first on Physics World.

]]>
News Holtkamp will oversee the completion of the $5bn Long-Baseline Neutrino Facility-Deep Underground Neutrino Experiment https://physicsworld.com/wp-content/uploads/2026/01/holtkamp-lists.png
CERN accepts $1bn in private cash towards Future Circular Collider https://physicsworld.com/a/cern-accepts-1bn-in-private-cash-towards-future-circular-collider/ Mon, 19 Jan 2026 13:00:02 +0000 https://physicsworld.com/?p=126061 Cash comes as Mark Thomson takes the reins at CERN

The post CERN accepts $1bn in private cash towards Future Circular Collider appeared first on Physics World.

]]>
The CERN particle-physics lab near Geneva has received $1bn from private donors towards the construction of the Future Circular Collider (FCC). The cash marks the first time in the lab’s 72-year history that individuals and philanthropic foundations have agreed to support a major CERN project. If built, the FCC would be the successor to the Large Hadron Collider (LHC), where the Higgs boson was discovered.

CERN originally released a four-volume conceptual design report for the FCC in early 2019, with more detail included in a three-volume feasibility study that came out last year. It calls for a giant tunnel some 90.7 km in circumference – roughly three times as long as the LHC – that would be built about 200 m underground on average.

The FCC has been recommended as the preferred option for the next flagship collider at CERN in the ongoing process to update the European Strategy for Particle Physics, which will be handed over to the CERN Council in May 2026. If the plans are given the green light by the CERN Council in 2028, construction of the FCC electron–positron machine, dubbed FCC-ee, would begin in 2030. It would start operations in 2047, a few years after the High Luminosity LHC (HL-LHC) closes down, and run for about 15 years until the early 2060s.

The FCC-ee would focus on creating a million Higgs particles in total to allow physicists to study the boson’s properties with an accuracy an order of magnitude better than is possible with the LHC. The FCC feasibility study then calls for a hadron machine, dubbed FCC-hh, to replace the FCC-ee in the same tunnel. It would be a “discovery machine”, smashing together protons at high energy – about 85 TeV – with the aim of creating new particles. If built, the FCC-hh would begin operation in 2073 and run to the end of the century.

The funding model for the FCC-ee, which is expected to have a price tag of about $18bn, is still a work in progress. But it is estimated that at least two-thirds of the construction costs will come from CERN’s 24 member states with the rest needing to be found elsewhere. One option to plug that gap is private donations and in late December CERN received a significant boost from several organizations including the Breakthrough Prize Foundation, the Eric and Wendy Schmidt Fund for Strategic Innovation, and the entrepreneurs John Elkann and Xavier Niel. Together, they pledged a total of $1bn towards the FCC-ee.
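
In round numbers, the figures quoted above imply a sizeable shortfall. The following back-of-envelope Python sketch uses only the article’s own numbers to show how far the new pledges go towards the non-member-state share:

# Back-of-envelope funding split, using the figures quoted above (US$).
cost = 18e9                    # estimated FCC-ee construction cost
member_share = (2 / 3) * cost  # at least two-thirds from CERN member states
gap = cost - member_share      # up to ~$6bn to be found elsewhere
pledged = 1e9                  # private pledges announced in late December
print(f"gap of ~${gap / 1e9:.0f}bn, of which the pledges cover {pledged / gap:.0%}")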

Costas Fountas, president of the CERN Council, says CERN is “extremely grateful” for the interest. “This once again demonstrates CERN’s relevance and positive impact on society, and the strong interest in CERN’s future that exists well beyond our own particle physics community,” he notes.

Eric Schmidt, the former chief executive of Google, says that he and Wendy Schmidt were “inspired by the ambition of this project and by what it could mean for the future of humanity”. The FCC, he believes, is an instrument that “could push the boundaries of human knowledge and deepen our understanding of the fundamental laws of the Universe” and could lead to technologies that benefit society “in profound ways”, from medicine to computing to sustainable energy.

The cash promised has been welcomed by outgoing CERN director-general Fabiola Gianotti. “It’s the first time in history that private donors wish to partner with CERN to build an extraordinary research instrument that will allow humanity to take major steps forward in our understanding of fundamental physics and the universe,” she said. “I am profoundly grateful to them for their generosity, vision, and unwavering commitment to knowledge and exploration.”

Further boost

The cash comes a few months after the Circular Electron–Positron Collider (CEPC) – a rival to the FCC-ee that also involves building a huge 100 km tunnel to study the Higgs in unprecedented detail – was not considered for inclusion in China’s next five-year plan, which runs from 2026 to 2030. There has been much discussion in China about whether the CEPC is the right project for the country, with the collider facing criticism from the particle physicist and Nobel laureate Chen-Ning Yang, who died last year.

Wang Yifang of the Institute of High Energy Physics (IHEP) in Beijing says the institute will submit the CEPC for consideration again in 2030 unless the FCC is officially approved before then. But for particle theorist John Ellis from King’s College London, China’s decision to effectively put the CEPC on the back burner “certainly simplifies the FCC discussion”. “However, an opportunity for growing the world particle physics community has been lost, or at least deferred [by the decision],” Ellis told Physics World.

Ellis adds, however, that he would welcome China’s participation in the FCC. “Their accelerator and detector [technical design reviews] show that they could bring a lot to the table, if the political obstacles can be overcome,” he says.

If the FCC-ee goes ahead, China could perhaps make significant “in-kind” contributions rather like those made to the ITER experimental fusion reactor, which is currently being built in France. In this arrangement, instead of cash payments, countries provide components, equipment and other materials.

Those considerations and more will now fall to the British physicist Mark Thomson, who took over from Gianotti as CERN director-general on 1 January for a five-year term. As well as working on funding requirements for the FCC-ee, top of his in-tray will be shutting down the LHC in June to make way for further work on the HL-LHC, which involves installing powerful new superconducting magnets and upgrading the detectors.

About 90% of the 27 km LHC accelerator will be affected by the upgrade, a major part of which is to replace the magnets in the final focus systems of the two large experiments, ATLAS and CMS. These magnets will take the incoming beams and focus them down to less than 10 µm in cross section. The upgrade includes the installation of state-of-the-art niobium–tin (Nb₃Sn) superconducting focusing magnets.

The HL-LHC will probably not turn on until 2030, by which time Thomson’s term will be nearly over, but that doesn’t dampen his enthusiasm for leading the world’s foremost particle-physics lab. “It’s an incredibly exciting project,” Thomson told the Guardian. “It’s more interesting than just sitting here with the machine hammering away.”

The post CERN accepts $1bn in private cash towards Future Circular Collider appeared first on Physics World.

]]>
Analysis Cash comes as Mark Thomson takes the reins at CERN https://physicsworld.com/wp-content/uploads/2026/01/cern-19-01-2026.jpg newsletter
Polarization-sensitive photoacoustic microscopy reveals heart tissue health https://physicsworld.com/a/polarization-sensitive-photoacoustic-microscopy-reveals-heart-tissue-health/ Mon, 19 Jan 2026 09:30:58 +0000 https://physicsworld.com/?p=126010 Label-free imaging technique can distinguish diseased cardiac tissue from healthy tissue and identify different types of fibrosis

The post Polarization-sensitive photoacoustic microscopy reveals heart tissue health appeared first on Physics World.

]]>
MIR-DS-PAM images of fibrotic and normal cardiac tissue

Many of the tissues in the human body rely upon highly organized microstructures to function effectively. If the collagen fibres in heart muscle become disordered, for instance, this can lead to or reflect disorders such as fibrosis and cancer. To image and analyse such structural changes, researchers at Pohang University of Science and Technology (POSTECH) in Korea have developed a new label-free microscopy technique and demonstrated its use in engineered heart tissue.

The ability to assess the alignment of microstructures such as protein fibres within a tissue’s extracellular matrix provides a valuable tool for diagnosing disease, monitoring therapy response and evaluating tissue-engineering models. Currently, however, this is achieved using histological imaging methods based on immunofluorescent staining, which can be labour-intensive and sensitive to the imaging conditions and antibodies used.

Instead, a team headed up by Chulhong Kim and Jinah Jang is investigating photoacoustic microscopy (PAM), a label-free imaging modality that relies on light absorption by endogenous tissue chromophores to reveal structural and functional information. In particular, PAM with mid-infrared (MIR) incident light provides bond-selective, high-contrast imaging of proteins, lipids and carbohydrates. The researchers also incorporated dichroism-sensitive (DS) functionality, resulting in a technique referred to as MIR-DS-PAM.

“Dichroism-sensitivity enables the quantitative assessment of fibre alignment by detecting the polarization-dependent absorption of anisotropic materials like collagen,” explains first author Eunwoo Park. “This adds a new contrast mechanism to conventional photoacoustic imaging, allowing simultaneous visualization of molecular content and microstructural organization without any labelling.”

Park and colleagues constructed a MIR-DS-PAM system using a pulsed quantum cascade laser as the light source. They tuned the laser to a centre wavelength of 6.0 µm to correspond with an absorption peak from the C=O stretching vibration in proteins. The laser beam was linearly polarized, modulated by a half-wave plate and used to illuminate the target tissue.

Tissue analysis

To validate the functionality of their MIR-DS-PAM technique, the researchers used it to image a formalin-fixed section of engineered heart tissue (EHT). They obtained images at four incident angles and used the acquired photoacoustic data to calculate the photoacoustic amplitude, which visualizes the protein content, as well as the degree of linear dichroism (DoLD) and the orientation angle of linear dichroism (AoLD), which reveal the extracellular matrix alignment.
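
Although the paper’s exact processing pipeline is not reproduced here, quantities like DoLD and AoLD are conventionally extracted from four-angle measurements using Stokes-style arithmetic. The following Python sketch is an illustrative reconstruction of that step, with a synthetic absorber standing in for real data; the formulas and numbers are assumptions, not the authors’ code:

import numpy as np

# Illustrative per-pixel extraction of dichroism metrics from photoacoustic
# amplitudes measured at four polarization angles (0, 45, 90, 135 degrees).
# This mirrors the standard Stokes-parameter arithmetic for linear
# polarization; it is a sketch of the idea, not the authors' pipeline.

def dichroism_metrics(a0, a45, a90, a135):
    """Return (DoLD, AoLD in degrees) from four-angle amplitude values."""
    s0 = 0.5 * (a0 + a45 + a90 + a135)   # mean (isotropic) amplitude
    s1 = a0 - a90                        # 0/90 degree imbalance
    s2 = a45 - a135                      # 45/135 degree imbalance
    dold = np.sqrt(s1**2 + s2**2) / s0
    aold = 0.5 * np.degrees(np.arctan2(s2, s1))
    return dold, aold

# A perfectly aligned absorber at 30 degrees: amplitude ~ cos^2(theta - 30)
angles = np.radians([0.0, 45.0, 90.0, 135.0])
amps = np.cos(angles - np.radians(30.0)) ** 2 + 0.5  # 0.5 = isotropic offset
dold, aold = dichroism_metrics(*amps)
print(f"DoLD = {dold:.2f}, AoLD = {aold:.1f} deg")   # recovers ~30 deg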

“Cardiac tissue features highly aligned extracellular matrix with complex fibre orientation and layered architecture, which are critical to its mechanical and electrical function,” Park explains. “These properties make it an ideal model for demonstrating the ability of MIR-DS-PAM to detect physiologically relevant histostructural and fibrosis-related changes.”

The researchers also used MIR-DS-PAM to quantify the structural integrity of EHT during development, using specimens cultured for one to five days before fixing. Analysis of the label-free images revealed that as the tissue matured, the DoLD gradually increased, while the standard deviation of the AoLD decreased – indicating increased protein accumulation with more uniform fibre alignment over time. They note that these results agree with those from immunofluorescence-stained confocal fluorescence microscopy.

Next, they examined diseased EHT with two types of fibrosis: cell-induced fibrosis (CIF) and drug-induced fibrosis (DIF). In the CIF sample, the average photoacoustic amplitude and AoLD uniformity were both lower than in normal EHT, indicating reduced protein density and disrupted fibre alignment. The DIF sample exhibited a higher photoacoustic amplitude and lower AoLD uniformity than normal EHT, suggesting extensive extracellular matrix accumulation with disorganized orientation.

Both CIF and DIF showed a slight reduction in DoLD, again signifying a disorganized tissue structure, a common hallmark of fibrosis. The two fibrosis types, however, exhibited diverse biochemical profiles and different levels of mechanical dysfunction. The findings demonstrate the ability of MIR-DS-PAM to distinguish diseased from healthy tissue and identify different types of fibrosis. The researchers also imaged a tissue assembly containing both normal and fibrotic EHT to show that MIR-DS-PAM can capture features in a composite sample.

They conclude that MIR-DS-PAM enables label-free monitoring of both tissue development and fibrotic remodelling. As such, the technique shows potential for use within tissue engineering research, as well as providing a diagnostic tool for assessing tissue fibrosis or remodelling in biopsied samples. “Its ability to visualize both biochemical composition and structural alignment could aid in identifying pathological changes in cardiological, musculoskeletal or ocular tissues,” says Park.

“We are currently expanding the application of MIR-DS-PAM to disease contexts where extracellular matrix remodelling plays a central role,” he adds. “Our goal is to identify label-free histological biomarkers that capture both molecular and structural signatures of fibrosis and degeneration, enabling multiparametric analysis in pathological conditions.”


The post Polarization-sensitive photoacoustic microscopy reveals heart tissue health appeared first on Physics World.

]]>
Research update Label-free imaging technique can distinguish diseased cardiac tissue from healthy tissue and identify different types of fibrosis https://physicsworld.com/wp-content/uploads/2026/01/19-01-26-photoacoustic-microscopy-fig4-featured.jpg newsletter
Astronomer Daniel Jaffe named president of the Giant Magellan Telescope project https://physicsworld.com/a/astronomer-daniel-jaffe-named-president-of-the-giant-magellan-telescope-project/ Fri, 16 Jan 2026 15:30:52 +0000 https://physicsworld.com/?p=126036 Jaffe will be aiming to secure the funding necessary to complete the $2.5bn telescope

The post Astronomer Daniel Jaffe named president of the Giant Magellan Telescope project appeared first on Physics World.

]]>
Daniel Jaffe

Astronomer Daniel Jaffe has been appointed the next president of the Giant Magellan Telescope Corporation – the international consortium building the $2.5bn Giant Magellan Telescope (GMT). He succeeds Robert Shelton, who announced his retirement last year after eight years in the role.

Jaffe was head of astronomy at the University of Texas at Austin from 2011 to 2015 and served as the university’s vice-president for research from 2016 to 2025, as well as interim provost from 2020 to 2021.

Jaffe has sat on the board of directors of the Association of Universities for Research in Astronomy and the Gemini Observatory and played a role in establishing the University of Texas at Austin’s partnership in the GMT.

Under construction in Chile and expected to be complete in the 2030s, the GMT will combine seven 8.4 m mirrors to create a 25.4 m telescope. From the ground it will produce images 4–16 times sharper than the James Webb Space Telescope, and it will investigate the origins of the chemical elements and search for signs of life on distant planets.

“I am honoured to lead the GMT at this exciting stage,” notes Jaffe. “[It] represents a profound leap in our ability to explore the universe and employ a host of new technologies to make fundamental discoveries.”

“[Jaffe] brings decades of leadership in research, astronomy instrumentation, public-private partnerships, and academia,” noted Taft Armandroff, board chair of the GMTO Corporation. “His deep understanding of the Giant Magellan Telescope, combined with his experience leading large research enterprises and cultivating a collaborative environment, make him exceptionally well suited to lead the observatory through its next phase of construction and toward operations.”

Jaffe joins the GMT at a pivotal time, as it aims to secure the funding necessary to complete the telescope, with just over $1bn in private funds having been pledged so far. The collaboration recently added Northwestern University and the Massachusetts Institute of Technology to its international consortium, taking the number of members to 16 universities and research institutions.

In June 2025 the GMT, which is already 40% complete, received NSF approval confirming that the observatory will advance into its “major facilities final design phase”, one of the final steps before becoming eligible for federal construction funding.

Yet it faces competition from another next-generation telescope – the Thirty Meter Telescope (TMT), whose 30 m-diameter segmented primary mirror will consist of 492 elements of zero-expansion glass.

The TMT team chose Hawaii’s Mauna Kea peak as its location. However, protests by indigenous Hawaiians, who regard the site as sacred, have delayed the start of construction, with officials identifying the island of La Palma in Spain’s Canary Islands as an alternative site in 2019.

The post Astronomer Daniel Jaffe named president of the Giant Magellan Telescope project appeared first on Physics World.

]]>
News Jaffe will be aiming to secure the funding necessary to complete the $2.5bn telescope https://physicsworld.com/wp-content/uploads/2026/01/daniel-jaffe-list.jpg newsletter
India turns to small modular nuclear reactors to meet climate targets https://physicsworld.com/a/india-turns-to-small-modular-nuclear-reactors-to-meet-climate-targets/ Fri, 16 Jan 2026 12:30:55 +0000 https://physicsworld.com/?p=126025 While SMRs could help meet climate targets there are concerns over their commercial viability  

The post India turns to small modular nuclear reactors to meet climate targets appeared first on Physics World.

]]>
India has been involved in nuclear energy and power for decades, but now the country is turning to small modular nuclear reactors (SMRs) as part of a new, long-term push towards nuclear and renewable energy. In December 2025 the country’s parliament passed a bill that allows private companies for the first time to participate in India’s nuclear programme, which could see them involved in generating power, operating plants and making equipment.

Some commentators are unconvinced that the move will be enough to help meet India’s climate pledge of achieving 500 GW of non-fossil-fuel-based energy generation by 2030. Interestingly, however, India has now joined other nations, such as Russia and China, in taking an interest in SMRs. They could help stem the overall decline in nuclear power, which now accounts for just 9% of electricity generated around the world – down from 17.5% in 1996.

Last year India’s finance minister Nirmala Sitharaman announced a nuclear energy mission funded with 200 billion Indian rupees ($2.2bn) to develop at least five indigenously designed and operational SMRs by 2033. Unlike huge, conventional nuclear plants, such as pressurized heavy-water reactors (PHWRs), most or all components of an SMR are manufactured in factories before being assembled at the reactor site.

SMRs typically generate less than 300 MW of electrical power but – being modular – additional capacity can be brought online quickly and easily, given their lower capital costs, shorter construction times, ability to work with lower-capacity grids and lower carbon emissions. Despite their promise, there are only two fully operating SMRs in the world – both in Russia – although two further high-temperature gas-cooled SMRs are currently being built in China. In June 2025 Rolls-Royce SMR was selected as the preferred bidder by Great British Nuclear to build the UK’s first fleet of SMRs, with plans to provide 470 MW of low-carbon electricity.

Cost benefit analysis

An official at the Department of Atomic Energy told Physics World that part of that mix of five new SMRs in India could be the 200 MW Bharat small modular reactor, which is based on pressurized-water reactor technology and uses slightly enriched uranium as fuel. Other options are 55 MW small modular reactors, and the Indian government also plans to partner with the private sector to deploy 220 MW Bharat small reactors.

Despite such moves, some are unconvinced that small nuclear reactors could help India scale its nuclear ambitions. “SMRs are still to demonstrate that they can supply electricity at scale,” says Karthik Ganesan, a fellow and director of partnerships at the Council on Energy, Environment and Water (CEEW), a non-profit policy research think-tank based in New Delhi. “SMRs are a great option for captive consumption, where large investment that will take time to start generating is at a premium.”

Ganesan, however, says it is too early to comment on the commercial viability of SMRs as cost reductions from SMRs depend on how much of the technology is produced in a factory and in what quantities. “We are yet to get to that point and any test reactors deployed would certainly not be the ones to benchmark their long-term competitiveness,” he says. “[But] even at a higher tariff, SMRs will still have a use case for industrial consumers who want certainty in long-term tariffs and reliable continuous supply in a world where carbon dioxide emissions will be much smaller than what we see from the power sector today.”

M V Ramana from the University of British Columbia, Vancouver, who works in international security and energy supply, is concerned over the cost efficiency of SMRs compared to their traditional counterparts. “Larger reactors are cheaper on a per-megawatt basis because their material and work requirements do not scale linearly with power capacity,” says Ramana. This, according to Ramana, means that the electricity SMRs produce will be more expensive than nuclear energy from large reactors, which are already far more expensive than renewables such as solar and wind energy.
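
Ramana’s argument is a classic economy-of-scale effect: if construction cost grows as capacity raised to a power below one, the cost per megawatt falls as plants get larger. The Python sketch below illustrates the idea using the textbook 0.6 scaling exponent, which is an assumption for illustration rather than a figure from the article:

# Illustrative economy-of-scale comparison. The 0.6 exponent is a textbook
# engineering rule of thumb, not a value taken from this article.
BETA = 0.6

def relative_cost_per_mw(capacity_mw, reference_mw=1000.0):
    """Cost per MW relative to a 1000 MW reference plant, with cost ~ P**BETA."""
    return (capacity_mw / reference_mw) ** (BETA - 1.0)

for p in (100, 300, 1000):  # SMR-scale vs conventional-scale capacities
    print(f"{p:>5} MW plant: {relative_cost_per_mw(p):.2f}x cost per MW")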

Clean or unclean?

Even if SMRs take over from PHWRs, there is still the question of what do with its nuclear waste. As Ramana points out, all activities linked to the nuclear fuel chain – from mining uranium to dealing with the radioactive wastes produced – have significant health and environmental impacts. “The nuclear fuel chain is polluting, albeit in a different way from that of fossil fuels,” he says, adding that those pollutants remain hazardous for hundreds of thousands of years. “There is no demonstrated solution to managing these radioactive wastes – nor can there be, given the challenge of trying to ensure that these materials do not come into contact with living beings,” says Ramana.

Ganesan, however, thinks that nuclear energy is still clean as it produces electricity with a much lower environmental footprint, especially when it comes to so-called “criteria pollutants”: ozone; particulate matter; carbon monoxide; lead; sulphur dioxide; and nitrogen dioxide. While nuclear waste still needs to be managed, Ganesan says the associated costs are already included in the price of setting up a reactor. “In due course, with technological development, the burn up will [be] significantly higher and waste generated a lot lesser.”

The post India turns to small modular nuclear reactors to meet climate targets appeared first on Physics World.

]]>
Analysis While SMRs could help meet climate targets there are concerns over their commercial viability   https://physicsworld.com/wp-content/uploads/2026/01/nuclear-plant-belgium-1007906419-shutterstock-engel-ac.jpg newsletter
Gravitational lensing sheds new light on Hubble constant controversy https://physicsworld.com/a/gravitational-lensing-sheds-new-light-on-hubble-constant-controversy/ Fri, 16 Jan 2026 10:00:13 +0000 https://physicsworld.com/?p=125984 Astronomers calculate new value for the universe's expansion

The post Gravitational lensing sheds new light on Hubble constant controversy appeared first on Physics World.

]]>
By studying how light from eight distant quasars is gravitationally lensed as it propagates towards Earth, astronomers have calculated a new value for the Hubble constant – a parameter that describes the rate at which the universe is expanding. The result agrees more closely with previous “late-universe” probes of this constant than it does with calculations based on observations of the cosmic microwave background (CMB) in the early universe, strengthening the notion that we may be misunderstanding something fundamental about how the universe works.

The universe has been expanding ever since the Big Bang nearly 14 billion years ago. We know this, in part, because of observations made in the 1920s by the American astronomer Edwin Hubble. By measuring the redshift of various galaxies, Hubble discovered that galaxies further away from Earth are moving away faster than galaxies that are closer to us. The relationship between this speed and the galaxies’ distance is known as the Hubble constant, H0.
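
To make the Hubble law concrete: a galaxy’s recession velocity is its distance multiplied by H0, i.e. v = H0d. The short Python sketch below, using a hypothetical galaxy rather than data from the study, shows how the two disputed values of the constant diverge in practice:

# Recession velocity from the Hubble law: v = H0 * d.
# Illustrative values only; not data from the study.

def recession_velocity(h0_km_s_mpc, distance_mpc):
    """Return recession velocity in km/s for a galaxy at the given distance."""
    return h0_km_s_mpc * distance_mpc

distance = 100.0  # Mpc, a hypothetical galaxy
for h0 in (67.0, 73.0):  # early-universe vs distance-ladder values
    print(f"H0 = {h0} km/s/Mpc -> v = {recession_velocity(h0, distance):.0f} km/s")
# The ~600 km/s difference at 100 Mpc is the Hubble tension in practical terms.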

Astronomers have developed several techniques for measuring H0. The problem is that different techniques deliver different values. According to measurements of the CMB radiation “left over” from the Big Bang made by the European Space Agency’s Planck satellite, the value of H0 is about 67 kilometres per second per megaparsec (km/s/Mpc), where one Mpc is about 3.3 million light-years. In contrast, “distance-ladder” measurements, such as those made by the SH0ES collaboration using observations of type Ia supernovae, yield a value of about 73 km/s/Mpc. This discrepancy is known as the Hubble tension.

Time-delay cosmography

In the latest work, the TDCOSMO collaboration, which includes astronomers Kenneth Wong and Eric Paic of the University of Tokyo, Japan, measured H0 using a technique called time-delay cosmography. This well-established method dates back to 1964 and uses the fact that massive galaxies can act as lenses, deflecting the light from objects behind them so that, from our perspective, these objects appear distorted.

“This is called gravitational lensing, and if the circumstances are right, we’ll actually see multiple distorted images, each of which will have taken a slightly different pathway to get to us, taking different amounts of time,” Wong explains.

By looking for changes in these images that are identical, but slightly out of sync, astronomers can measure the time differences required for the light from the objects to reach Earth. Then, by combining these data with estimates of the distribution of the mass of the distorting galactic lens, they can calculate H0.
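
In outline, the measured delay converts to a “time-delay distance” that scales inversely with H0, which is how the constant is extracted. The following Python sketch illustrates that scaling in a flat ΛCDM universe; the redshifts and the “measured” delay distance are invented for illustration and are not values from the TDCOSMO analysis:

import numpy as np

# Minimal sketch of time-delay cosmography in a flat LCDM universe.
# Redshifts and the "measured" delay distance below are hypothetical,
# chosen only to illustrate the D_dt ~ 1/H0 scaling the method exploits.

C_KM_S = 299792.458   # speed of light in km/s
OMEGA_M = 0.3         # assumed matter density

def comoving_distance(z1, z2, h0):
    """Comoving distance between redshifts z1 and z2 in Mpc (flat LCDM)."""
    z = np.linspace(z1, z2, 2001)
    integrand = 1.0 / np.sqrt(OMEGA_M * (1 + z) ** 3 + (1 - OMEGA_M))
    # trapezoidal integration, written out for portability across numpy versions
    return (C_KM_S / h0) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))

def time_delay_distance(z_lens, z_src, h0):
    """D_dt = (1 + z_l) D_l D_s / D_ls, built from angular-diameter distances."""
    d_l = comoving_distance(0.0, z_lens, h0) / (1 + z_lens)
    d_s = comoving_distance(0.0, z_src, h0) / (1 + z_src)
    d_ls = comoving_distance(z_lens, z_src, h0) / (1 + z_src)
    return (1 + z_lens) * d_l * d_s / d_ls

z_l, z_s = 0.5, 2.0          # hypothetical lens and quasar redshifts
d_dt_measured = 2900.0       # Mpc; a made-up "measured" delay distance

# All distances scale as 1/H0, so the inferred constant follows by rescaling:
h0_ref = 70.0
h0 = h0_ref * time_delay_distance(z_l, z_s, h0_ref) / d_dt_measured
print(f"Inferred H0 = {h0:.1f} km/s/Mpc")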

A real tension, not a measurement artefact

Wong and colleagues measured the light from eight strongly lensed quasars using various telescopes, including the James Webb Space Telescope (JWST), the Keck Telescopes and the Very Large Telescope (VLT). They also made use of observations of the Sloan Lens ACS (SLACS) sample with Keck and of the Strong Lensing Legacy Survey (SL2S) sample.

Based on these measurements, they obtained an H0 value of roughly 71.6 km/s/Mpc, which is more consistent with present-day observations (such as those from SH0ES) than early-universe ones (such as those from Planck). Wong explains that this discrepancy supports the idea that the Hubble tension arises from real physics, not just some unknown error in the various methods. “Our measurement is completely independent of other methods, both early- and late-universe, so if there are any systematic uncertainties in those, we should not be affected by them,” he says.

The astronomers say that the SLACS and SL2S sample data are in excellent agreement with the new TDCOSMO-2025 sample, while the new measurements improve the precision of H0 to 4.6%. However, Paic notes that nailing down the value of H0 to a level that would “definitely confirm” the Hubble tension will require a precision of 1–2%. “This could be possible by increasing the number of objects observed as well as ruling out any systematic errors as yet unaccounted for,” he says.

Wong adds that while the TDCOSMO-2025 dataset contains its own uncertainties, multiple independent measurements should, in principle, strengthen the result. “One of the largest sources of uncertainty is the fact that we don’t know exactly how the mass in the lens galaxies is distributed,” he explains. “It is usually assumed that the mass follows some simple profile that is consistent with observations, but it is hard to be sure and this uncertainty can directly influence the values we calculate.”

The biggest hurdle, Wong adds, will “probably be addressing potential sources of systematic uncertainty, making sure we have thought of all the possible ways that our result could be wrong or biased and figuring out how to handle those uncertainties.”

The study is detailed in Astronomy and Astrophysics.

The post Gravitational lensing sheds new light on Hubble constant controversy appeared first on Physics World.

]]>
Research update Astronomers calculate new value for the universe's expansion https://physicsworld.com/wp-content/uploads/2026/01/gravitational-lenses.jpg newsletter
RFID-tagged drug capsule lets doctors know when it has been swallowed https://physicsworld.com/a/rfid-tagged-drug-capsule-lets-doctors-know-when-it-has-been-swallowed/ Thu, 15 Jan 2026 09:15:21 +0000 https://physicsworld.com/?p=125979 A dissolvable drug capsule uses radiofrequency signals to ensure that people are taking their medication on schedule

The post RFID-tagged drug capsule lets doctors know when it has been swallowed appeared first on Physics World.

]]>
Taking medication as and when prescribed is crucial for it to have the desired effect. But nearly half of people with chronic conditions don’t adhere to their medication regimes, a serious problem that leads to preventable deaths, drug resistance and increased healthcare costs. So how can medical professionals ensure that patients are taking their medicine as prescribed?

A team at Massachusetts Institute of Technology (MIT) has come up with a solution: a drug capsule containing an RFID tag that uses radiofrequency (RF) signals to communicate that it has been swallowed, and then bioresorbs into the body.

“Medication non-adherence remains a major cause of preventable morbidity and cost, but existing ingestible tracking systems rely on non-degradable electronics,” explains project leader Giovanni Traverso. “Our motivation was to create a passive, battery-free adherence sensor that confirms ingestion while fully biodegrading, avoiding long-term safety and environmental concerns associated with persistent electronic devices.”

The device – named SAFARI (smart adherence via Faraday cage and resorbable ingestible) – incorporates an RFID tag with a zinc-foil RF antenna and an RF chip, as well as the drug payload, inside an ingestible gelatin capsule. The capsule is coated with a mixture of cellulose and molybdenum particles, which blocks the transmission of any RF signals.

SAFARI capsules with and without RF-blocking coating

Once swallowed, however, this shielding layer breaks down in the stomach. The RFID tag (which can be preprogrammed with information such as dose metadata, manufacturing details and unique ID) can then be wirelessly queried by an external reader and return a signal from inside the body confirming that the medication has been ingested.

The capsule itself dissolves upon exposure to digestive fluids, releasing the desired medication; the metal antenna components also dissolve completely in the stomach. The use of biodegradable materials is key as it eliminates the need for device retrieval and minimizes the risk of gastrointestinal (GI) blockage. The tiny (0.16 mm²) RFID chip remains intact and should safely leave the body through the GI tract.

Traverso suggests that the first clinical applications for the SAFARI capsule will likely be high-risk settings in which objective ingestion confirmation is particularly valuable. “[This includes] tuberculosis, HIV, transplant immunosuppression or cardiovascular therapies, where missed doses can have serious clinical consequences,” he tells Physics World.

In vivo demonstration

To assess the degradation of the SAFARI capsule and its components in vitro, Traverso and colleagues placed the capsule into simulated gastric fluid at physiological temperature (37 °C). The RF shielding coating dissolved in 10–20 min, while the capsule and the zinc layer in the RFID tag disintegrated into pieces after one day.

Next, the team endoscopically delivered the SAFARI capsules into the stomachs of sedated pigs, chosen as they have a similarly sized GI tract to humans. Once in contact with gastric fluid in the stomach, the capsule coating swelled and then partially dissolved (as seen in endoscopic images), exposing the RFID tag. The researchers found that, in general, the tag and capsule parts disintegrated in the stomach within 24 h.

A panel antenna positioned 10 cm from the animal captured the tag data. Even with the RFID tags immersed in gastric fluid, the external receiver could record signals in the frequency range of 900–925 MHz, with RSSI (received signal strength indicator) values ranging from 65 to 78 dB – demonstrating that the tag could effectively transmit RF signals from inside the stomach.
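
As a rough illustration of the reader-side logic such a system implies, the Python sketch below checks a set of tag reads against the band and signal levels reported above. The function, thresholds and data are hypothetical inventions, not part of the published SAFARI system:

# Hypothetical reader-side ingestion check, loosely modelled on the figures
# reported above (900-925 MHz band, RSSI 65-78 dB). Function names and
# thresholds are illustrative assumptions, not the published SAFARI system.

def ingestion_confirmed(reads, f_range=(900.0, 925.0), rssi_min=60.0, n_min=3):
    """Confirm ingestion if enough in-band reads exceed the RSSI floor."""
    good = [r for r in reads
            if f_range[0] <= r["freq_mhz"] <= f_range[1]
            and r["rssi_db"] >= rssi_min]
    return len(good) >= n_min

reads = [{"freq_mhz": 912.5, "rssi_db": 71.0},
         {"freq_mhz": 905.0, "rssi_db": 66.5},
         {"freq_mhz": 921.0, "rssi_db": 77.2}]
print(ingestion_confirmed(reads))  # True for this made-up set of tag reads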

The researchers conclude that this successful use of SAFARI in swine indicates the potential for translation to clinical research. They note that the device should be safe for human ingestion as its composite materials meet established dietary and biomedical exposure limits, with levels of zinc and molybdenum orders of magnitude below those associated with toxicity.

“We have demonstrated robust performance and safety in large-animal models, which is an important translational milestone,” explains first author Mehmet Girayhan Say. “Before human studies, further work is needed on chronic exposure with characterization of any material accumulation upon repeated dosing, as well as user-centred integration of external readers to support real-world clinical workflows.”

The post RFID-tagged drug capsule lets doctors know when it has been swallowed appeared first on Physics World.

]]>
Research update A dissolvable drug capsule uses radiofrequency signals to ensure that people are taking their medication on schedule https://physicsworld.com/wp-content/uploads/2026/01/15-01-26-mit-medi-adherence-schematic.jpg newsletter1
Quantum state teleported between quantum dots at telecoms wavelengths https://physicsworld.com/a/quantum-state-teleported-between-quantum-dots-at-telecoms-wavelengths/ Wed, 14 Jan 2026 16:00:47 +0000 https://physicsworld.com/?p=125961 Frequency-converted photons are key to proof-of-principle experiment

The post Quantum state teleported between quantum dots at telecoms wavelengths appeared first on Physics World.

]]>
Physicists at the University of Stuttgart, Germany, have teleported a quantum state between photons generated by two different semiconductor quantum dot light sources located several metres apart. Though the distance involved in this proof-of-principle “quantum repeater” experiment is small, members of the team describe the feat as a prerequisite for future long-distance quantum communications networks.

“Our result is particularly exciting because such a quantum Internet will encompass these types of distant quantum nodes and will require quantum states that are transmitted among these different nodes,” explains Tim Strobel, a PhD student at Stuttgart’s Institute of Semiconductor Optics and Functional Interfaces (IHFG) and the lead author of a paper describing the research. “It is therefore an important step in showing that remote sources can be effectively interfaced in this way in quantum teleportation experiments.”

In the Stuttgart study, one of the quantum dots generates a single photon while the other produces a pair of photons that are entangled – meaning that the quantum state of one photon is closely linked to the state of the other, no matter how far apart they are. One of the photons in the entangled pair then travels to the other quantum dot and interferes with the photon there. This process produces a superposition that allows the information encapsulated in the single photon to be transferred to the distant “partner” photon from the pair.
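
The protocol described here is standard quantum-state teleportation. As a minimal numerical sketch (pure-state linear algebra in Python, not the experiment’s actual analysis), the following verifies that a Bell measurement on the input photon and one half of an entangled pair leaves the other half in the input state, up to a known correction:

import numpy as np

# Minimal teleportation sketch: qubit A carries the unknown state, B and C
# share a Bell pair; a Bell measurement on (A, B) leaves C in the input
# state up to a Pauli correction. Pure linear algebra, no physics package.

alpha, beta = 0.6, 0.8j                       # arbitrary normalized amplitudes
psi = np.array([alpha, beta])                 # state to teleport (qubit A)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2) on (B, C)
state = np.kron(psi, bell)                    # joint state on qubits (A, B, C)

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
bell_basis = {                                # Bell state on (A, B) -> fix for C
    "Phi+": (np.array([1, 0, 0, 1]) / np.sqrt(2), I2),
    "Phi-": (np.array([1, 0, 0, -1]) / np.sqrt(2), Z),
    "Psi+": (np.array([0, 1, 1, 0]) / np.sqrt(2), X),
    "Psi-": (np.array([0, 1, -1, 0]) / np.sqrt(2), Z @ X),
}

amps = state.reshape(4, 2)                    # rows: (A, B) basis; columns: C
for name, (proj, fix) in bell_basis.items():
    c_state = proj.conj() @ amps              # project (A, B) onto a Bell state
    c_state = fix @ (c_state / np.linalg.norm(c_state))
    print(f"{name}: |<psi|C>| = {abs(np.vdot(psi, c_state)):.6f}")  # all 1.000000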

Quantum frequency converters

Strobel says the most challenging part of the experiment was making photons from two remote quantum dots interfere with each other. Such interference is only possible if the two particles are indistinguishable, meaning they must be similar in every regard, be it in their temporal shape, spatial shape or wavelength. In contrast, each quantum dot is unique, especially in terms of its spectral properties, and each one emits photons at slightly different wavelengths.

To close the gap, the team used devices called quantum frequency converters to precisely tune the wavelength of the photons and match them spectrally. The researchers also used the converters to shift the original wavelengths of the photons emitted from the quantum dots (around 780 nm) to a wavelength commonly used in telecommunications (1515 nm) without altering the quantum state of the photons. This offers further advantages: “Being at telecommunication wavelengths makes the technology compatible with the existing global optical fibre network, an important step towards real-life applications,” Strobel tells Physics World.

Proof-of-principle experiment

In this work, the quantum dots were separated by an optical fibre just 10 m in length. However, the researchers aim to push this to considerably greater distances in the future. Strobel notes that the Stuttgart study was published in Nature Communications back-to-back with an independent work carried out by researchers led by Rinaldo Trotta of Sapienza University in Rome, Italy. The Rome-based group demonstrated quantum state teleportation across the Sapienza University campus at shorter wavelengths, enabled by the brightness of their quantum-dot source.

“These two papers that we published independently strengthen the measurement outcomes, demonstrating the maturity of quantum dot light sources in this domain,” Strobel says. Semiconducting quantum dots are particularly attractive for this application, he adds, because as well as producing both single and entangled photons on demand, they are also compatible with other semiconductor technologies.

Fundamental research pays off

Simone Luca Portalupi, who leads the quantum optics group at IHFG, notes that “several years of fundamental research and semiconductor technology are converging into these quantum teleportation experiments”. For Peter Michler, who led the study team, the next step is to leverage these advances to bring quantum-dot-based teleportation technology out of a controlled laboratory environment and into the real world.

Strobel points out that there is already some precedent for this, as one of the group’s previous studies showed that they could maintain photon entanglement across a 36-km fibre link deployed across the city of Stuttgart. “The natural next step would be to show that we can teleport the state of a photon across this deployed fibre link,” he says. “Our results will stimulate us to improve each building block of the experiment, from the sample to the setup.”

The post Quantum state teleported between quantum dots at telecoms wavelengths appeared first on Physics World.

]]>
Research update Frequency-converted photons are key to proof-of-principle experiment https://physicsworld.com/wp-content/uploads/2026/01/quantenrepeater-team.jpg newsletter1
Quantum metrology at NPL: we explore the challenges and opportunities https://physicsworld.com/a/quantum-metrology-at-npl-we-explore-the-challenges-and-opportunities/ Wed, 14 Jan 2026 14:02:04 +0000 https://physicsworld.com/?p=125919 This podcast features Tim Prior and John Devaney of the National Physical Laboratory

The post Quantum metrology at NPL: we explore the challenges and opportunities appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features a conversation with Tim Prior and John Devaney of the National Physical Laboratory (NPL), which is the UK’s national metrology institute.

Prior is NPL’s quantum programme manager and Devaney is its quantum standards manager. They talk about NPL’s central role in the recent launch of NMI-Q, which brings together some of the world’s leading national metrology institutes to accelerate the development and adoption of quantum technologies.

Prior and Devaney describe the challenges and opportunities of developing metrology and standards for rapidly evolving technologies including quantum sensors, quantum computing and quantum cryptography. They talk about the importance of NPL’s collaborations with industry and academia and explore the diverse career opportunities for physicists at NPL. Prior and Devaney also talk about their own careers and share their enthusiasm for working in the cutting-edge and fast-paced field of quantum metrology.

This podcast is sponsored by the National Physical Laboratory.

Further reading

Why quantum metrology is the driving force for best practice in quantum standardization

Performance metrics and benchmarks point the way to practical quantum advantage

End note: NPL retains copyright on this article.

The post Quantum metrology at NPL: we explore the challenges and opportunities appeared first on Physics World.

]]>
Podcasts This podcast features Tim Prior and John Devaney of the National Physical Laboratory https://physicsworld.com/wp-content/uploads/2026/01/NPL-podcast-list-image.jpg newsletter
Mapping electron phases in nanotube arrays https://physicsworld.com/a/mapping-electron-phases-in-nanotube-arrays/ Wed, 14 Jan 2026 12:56:38 +0000 https://physicsworld.com/?p=125798 A nanotube lattice reveals how electrons shift between 1D and 2D quantum phases under voltage control

The post Mapping electron phases in nanotube arrays appeared first on Physics World.

]]>
In this work, researchers designed carbon nanotube arrays to investigate the behaviour of electrons in low-dimensional systems. By arranging well-aligned 1D nanotubes into a 2D film, they create a coupled-wire structure that allows them to study how electrons move and interact as the system transitions between different dimensionalities. Using a gate electrode positioned on top of the array, the researchers were able to tune both the carrier density (the number of electrons and holes in a unit area) and the strength of electron–electron interactions, enabling controlled access to three distinct regimes. The nanotubes can behave as weakly coupled 1D channels, in which electrons move along each nanotube; as a 2D Fermi liquid, in which electrons can move between nanotubes and behave like a conventional metal; or, at low carrier densities, as a set of quantum-dot-like islands showing Coulomb blockade, in which sections of the nanotubes become isolated.

The dimensional transitions are set by two key temperatures: T₂D, at which electrons begin to hop between neighbouring nanotubes, and T₁D, at which the system starts to behave as a Luttinger liquid – a 1D state in which electrons cannot easily pass each other and therefore move in a strongly correlated, collective way. Changing the number of holes in the nanotubes changes how strongly the tubes interact with each other. This controls when the system stops acting like separate 1D wires, and when strong interactions make parts of the film break up into isolated regions that show Coulomb blockade.

The researchers built a phase diagram by looking at how the conductance changes with temperature and voltage, and by checking how well it follows power-law behaviour over different energy ranges. This approach allows them to identify the boundaries between the Tomonaga–Luttinger liquid, Fermi liquid and Coulomb blockade phases across a wide range of gate voltages and temperatures.
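
A power-law check of this kind is typically done by fitting conductance against temperature on logarithmic axes, where G ∝ T^α appears as a straight line of slope α. The Python sketch below demonstrates the test on synthetic data; the exponent is arbitrary rather than a value from the paper:

import numpy as np

# Sketch of a power-law check, G ~ T**alpha, on synthetic data.
# A Luttinger-liquid regime shows up as a straight line on log-log axes;
# the exponent used below is arbitrary, not taken from the paper.

rng = np.random.default_rng(0)
T = np.logspace(0.5, 2.0, 25)                              # temperatures, ~3-100 K
alpha_true = 0.45
G = 1e-6 * T**alpha_true * rng.lognormal(0, 0.03, T.size)  # noisy conductance

slope, intercept = np.polyfit(np.log(T), np.log(G), 1)     # slope = exponent
print(f"fitted exponent alpha = {slope:.2f} (true {alpha_true})")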

Overall, the work demonstrates a continuous crossover between 2D, 1D and 0D electronic behaviour in a controllable nanotube array. This provides an experimentally accessible platform for studying correlated low‑dimensional physics and offers insights relevant to the development of nanoscale electronic devices and future carbon nanotube technologies.

Read the full article

Dimensionality and correlation effects in coupled carbon nanotube arrays

Xiaosong Deng et al 2025 Rep. Prog. Phys. 88 088001

Do you want to learn more about this topic?

Structural approach to charge density waves in low-dimensional systems: electronic instability and chemical bonding Jean-Paul Pouget and Enric Canadell (2024)

The post Mapping electron phases in nanotube arrays appeared first on Physics World.

]]>
Research highlight A nanotube lattice reveals how electrons shift between 1D and 2D quantum phases under voltage control https://physicsworld.com/wp-content/uploads/2025/12/atomic-abstract-508089172-istock-pobytov.jpg
CMS spots hints of a new form of top‑quark matter https://physicsworld.com/a/cms-spots-hints-of-a-new-form-of-top-quark-matter/ Wed, 14 Jan 2026 12:54:38 +0000 https://physicsworld.com/?p=125801 A threshold excess in top–antitop production hints at toponium‑like physics and supports non‑relativistic QCD models

The post CMS spots hints of a new form of top‑quark matter appeared first on Physics World.

]]>
The CMS Collaboration investigated in detail events in which a top quark and an anti-top quark are produced together in high-energy proton–proton collisions at √s = 13 TeV, using the full 138 fb⁻¹ dataset collected between 2016 and 2018. The top quark is the heaviest known fundamental particle and decays almost immediately after being produced in high-energy collisions. As a consequence, the formation of a bound top–antitop state was long considered highly unlikely and had never been observed. The anti-top quark has the same mass and lifetime as the top quark but opposite charges; when the two are produced together, they are referred to as a top–antitop pair (tt̄).

Focusing on events with two charged leptons (the pair can decay into two electrons, two muons, or one electron and one muon) and multiple jets (sprays of particles associated with top-quark decay), the analysis examines the invariant mass of the top–antitop system along with two angular observables that directly probe how the spins of the top and anti-top quarks are correlated. These measurements allow the team to compare the data with the prediction for non-resonant tt̄ production based on fixed-order perturbative quantum chromodynamics (QCD), which is what physicists normally use to calculate how quarks behave according to the Standard Model of particle physics.
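
For reference, the invariant mass of the tt̄ system follows from the reconstructed four-momenta of the two top-quark candidates: m² = (E₁ + E₂)² − |p₁ + p₂|² in natural units. A minimal Python sketch, with invented four-momenta rather than CMS data:

import numpy as np

# Invariant mass of a two-particle system from four-momenta (E, px, py, pz),
# in GeV with c = 1. The two vectors below are invented for illustration.

def invariant_mass(p1, p2):
    """m = sqrt((E1+E2)^2 - |p1+p2|^2) for four-vectors (E, px, py, pz)."""
    total = np.asarray(p1) + np.asarray(p2)
    return np.sqrt(total[0] ** 2 - np.sum(total[1:] ** 2))

top = np.array([180.0, 40.0, -25.0, 60.0])       # hypothetical top candidate
antitop = np.array([175.0, -38.0, 30.0, -55.0])  # hypothetical anti-top

print(f"m(ttbar) = {invariant_mass(top, antitop):.1f} GeV")  # near threshold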

Near the kinematic threshold where the top–antitop pair is produced, CMS observes a significant excess of events relative to the QCD prediction. The number of extra events can be translated into a production rate: using a simplified model based on non-relativistic QCD, the team estimates that this excess corresponds to a cross section of about 8.8 picobarns, with an uncertainty of roughly +1.2/–1.4 picobarns. The pattern of the excess, including its spin-correlation features, is consistent with the production of a colour-singlet pseudoscalar (a top–antitop pair in the ¹S₀ state, i.e. the simplest, lowest-energy configuration), and therefore with the prediction of non-relativistic QCD near the tt̄ threshold. The statistical significance of the excess exceeds five standard deviations, indicating that the effect is unlikely to be a statistical fluctuation. Researchers want to find a toponium-like state because it would reveal how the strongest force in nature behaves at the highest energies, test key theories of heavy-quark physics and potentially expose new physics beyond the Standard Model.

The researchers emphasise that modelling the tt̄ threshold region is theoretically challenging, and that alternative explanations remain possible. Nonetheless, the result aligns with long-standing predictions from non-relativistic QCD that heavy quarks could form short-lived bound states near threshold. The analysis also showcases spin correlation as an effective means to discover and characterise such effects, which were previously considered to be beyond the reach of experimental capabilities. Starting with the confirmation by the ATLAS Collaboration last July, this observation has sparked and continues to inspire follow-up theoretical and experimental works, opening up a new field of study involving bound states of heavy quarks and providing new insight into the behaviour of the strong force at high energies.

Read the full article

Observation of a pseudoscalar excess at the top quark pair production threshold

The CMS Collaboration 2025 Rep. Prog. Phys. 88 087801

Do you want to learn more about this topic?

The sea of quarks and antiquarks in the nucleon D F Geesaman and P E Reimer (2019)

The post CMS spots hints of a new form of top‑quark matter appeared first on Physics World.

]]>
Research highlight A threshold excess in top–antitop production hints at toponium‑like physics and supports non‑relativistic QCD models https://physicsworld.com/wp-content/uploads/2025/12/cms-1.jpg
Photonics West explores the future of optical technologies https://physicsworld.com/a/photonics-west-explores-the-future-of-optical-technologies/ Wed, 14 Jan 2026 12:00:17 +0000 https://physicsworld.com/?p=125834 SPIE Photonics West sees the global optics and photonics community come together to present and discuss the latest industry trends and research breakthroughs

The post Photonics West explores the future of optical technologies appeared first on Physics World.

]]>
The 2026 SPIE Photonics West meeting takes place in San Francisco, California, from 17 to 22 January. The premier event for photonics research and technology, Photonics West incorporates more than 100 technical conferences covering topics including lasers, biomedical optics, optoelectronics, quantum technologies and more.

As well as the conferences, Photonics West also offers 60 technical courses and a new Career Hub with a co-located job fair. There are also five world-class exhibitions featuring over 1500 companies and incorporating industry-focused presentations, product launches and live demonstrations. The first of these is the BiOS Expo, which begins on 17 January and examines the latest breakthroughs in biomedical optics and biophotonics technologies.

Then starting on 20 January, the main Photonics West Exhibition will host more than 1200 companies and showcase the latest innovative optics and photonics devices, components, systems and services. Alongside, the Quantum West Expo features the best in quantum-enabling technology advances, the AR | VR | MR Expo brings together leading companies in XR hardware and systems and – new for 2026 – the Vision Tech Expo highlights cutting-edge vision, sensing and imaging technologies.

Here are some of the product innovations on show at this year’s event.

Enabling high-performance photonics assembly with SmarAct

As photonics applications increasingly require systems with high complexity and integration density, manufacturers face a common challenge: how to assemble, align and test optical components with nanometre precision – quickly, reliably and at scale. At Photonics West, SmarAct presents a comprehensive technology portfolio addressing exactly these demands, spanning optical assembly, fast photonics alignment, precision motion and advanced metrology.

SmarAct’s photonics assembly portfolio

A central highlight is SmarAct’s Optical Assembly Solution, presented together with a preview of a powerful new software platform planned for release in late-Q1 2026. This software tool is designed to provide exceptional flexibility for implementing automation routines and process workflows into user-specific control applications, laying the foundation for scalable and future-proof photonics solutions.

For high-throughput applications, SmarAct showcases its Fast Photonics Alignment capabilities. By combining high-dynamic motion systems with real-time feedback and controller-based algorithms, SmarAct enables rapid scanning and active alignment of photonic integrated circuits (PICs) and optical components such as fibres, fibre array units, lenses, beam splitters and more. These solutions significantly reduce alignment time while maintaining sub-micrometre accuracy, making them ideal for demanding photonics packaging and assembly tasks.
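
Active alignment of this sort is, at its core, an optimization loop: move an axis, read the coupled power, keep the move if the signal improves. The Python sketch below shows a generic coordinate hill-climb against a simulated power meter; it is a textbook illustration, not SmarAct’s control algorithm:

import numpy as np

# Generic sketch of active fibre alignment by coordinate hill-climbing:
# step each axis while the measured coupled power improves. This is a
# textbook illustration, not SmarAct's proprietary control algorithm.

def coupled_power(pos, target=np.array([3.2, -1.7])):
    """Stand-in for a power meter: Gaussian coupling peak at `target` (um)."""
    return np.exp(-np.sum((pos - target) ** 2) / (2 * 4.0**2))

def align(pos, step_um=1.0, min_step_um=0.01):
    pos = np.asarray(pos, dtype=float)
    while step_um > min_step_um:
        improved = False
        for axis in (0, 1):
            for sign in (+1, -1):
                trial = pos.copy()
                trial[axis] += sign * step_um
                if coupled_power(trial) > coupled_power(pos):
                    pos, improved = trial, True
        if not improved:
            step_um /= 2          # refine once no neighbour is better
    return pos

print(align([0.0, 0.0]))          # converges near the (3.2, -1.7) um peak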

Both the Optical Assembly Solution and Fast Photonics Alignment are powered by SmarAct’s electromagnetic (EM) positioning axes, which form the dynamic backbone of these systems. The direct-drive EM axes combine high speed, high force and exceptional long-term durability, enabling fast scanning, smooth motion and stable positioning even under demanding duty cycles. Their vibration-free operation and robustness make them ideally suited for high-throughput optical assembly and alignment tasks in both laboratory and industrial environments.

Precision feedback is provided by SmarAct’s advanced METIRIO optical encoder family, designed to deliver high-resolution position feedback for demanding photonics and semiconductor applications. The METIRIO stands out by offering sub-nanometre position feedback in an exceptionally compact and easy-to-integrate form factor. Compatible with linear, rotary and goniometric motion systems – and available in vacuum-compatible designs – the METIRIO is ideally suited for space-constrained photonics setups, semiconductor manufacturing, nanopositioning and scientific instrumentation.

For applications requiring ultimate measurement performance, SmarAct presents the PICOSCALE Interferometer and Vibrometer. These systems provide picometre-level displacement and vibration measurements directly at the point of interest, enabling precise motion tracking, dynamic alignment, and detailed characterization of optical and optoelectronic components. When combined with SmarAct’s precision stages, they form a powerful closed-loop solution for high-yield photonics testing and inspection.

Together, SmarAct’s motion, metrology and automation solutions form a unified platform for next-generation photonics assembly and alignment.

  • Visit SmarAct at booth #3438 at Photonics West and booth #8438 at BiOS to discover how these technologies can accelerate your photonics workflows.

Avantes previews AvaSoftX software platform and new broadband light source

Photonics West 2026 will see Avantes present the first live demonstration of its completely redesigned software platform, AvaSoftX, together with a sneak peek of its new broadband light source, the AvaLight-DH-BAL. The company will also run a series of application-focused live demonstrations, highlighting recent developments in laser-induced breakdown spectroscopy (LIBS), thin-film characterization and biomedical spectroscopy.

AvaSoftX is developed to streamline the path from raw spectra to usable results. The new software platform offers preloaded applications tailored to specific measurement techniques and types, such as irradiance, LIBS, chemometry and Raman. Each application presents the controls and visualizations needed for that workflow, reducing time and the risk of user error.

The new AvaSoftX software platform

Smart wizards guide users step-by-step through the setup of a measurement – from instrument configuration and referencing to data acquisition and evaluation. For more advanced users, AvaSoftX supports customization with scripting and user-defined libraries, enabling the creation of reusable methods and application-specific data handling. The platform also includes integrated instruction videos and online manuals to support the users directly on the platform.

The software features an accessible dark interface optimized for extended use in laboratory and production environments. Improved LIBS functionality will be highlighted through a live demonstration that combines AvaSoftX with the latest Avantes spectrometers and light sources.

Also making its public debut is the AvaLight-DH-BAL, a new and improved deuterium–halogen broadband light source designed to replace the current DH product line. The system delivers continuous broadband output from 215 to 2500 nm and combines a more powerful halogen lamp with a reworked deuterium section for improved optical performance and stability.

A switchable deuterium and halogen optical path is combined with deuterium peak suppression to improve dynamic range and spectral balance. The source is built into a newly developed, more robust housing to improve mechanical and thermal stability. Updated electronics support adjustable halogen output, a built-in filter holder, and both front-panel and remote-controlled shutter operation.

The AvaLight-DH-BAL is intended for applications requiring stable, high-output broadband illumination, including UV–VIS–NIR absorbance spectroscopy, materials research and thin-film analysis. The official launch date for the light source, as well as the software, will be shared in the near future.

Avantes will also run a series of live application demonstrations. These include a LIBS setup for rapid elemental analysis, a thin-film measurement system for optical coating characterization, and a biomedical spectroscopy demonstration focusing on real-time measurement and analysis. Each demo will be operated using the latest Avantes hardware and controlled through AvaSoftX, allowing visitors to assess overall system performance and workflow integration. Avantes’ engineering team will be available throughout the event.

  • For product previews, live demonstrations and more, meet Avantes at booth #1157.

HydraHarp 500: high-performance time tagger redefines precision and scalability

One year after its successful market introduction, the HydraHarp 500 continues to be a standout highlight at PicoQuant’s booth at Photonics West. Designed to meet the growing demands of advanced photonics and quantum optics, the HydraHarp 500 sets benchmarks in timing performance, scalability and flexible interfacing.

At its core, the HydraHarp 500 delivers exceptional timing precision combined with ultrashort jitter and dead time, enabling reliable photon timing measurements even at very high count rates. With support for up to 16 fully independent input channels plus a common sync channel, the system allows true simultaneous multichannel data acquisition without cross-channel dead time, making it ideal for complex correlation experiments and high-throughput applications.
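
Correlation measurements with such a time tagger ultimately reduce to histogramming arrival-time differences between channels. The Python sketch below builds a two-channel coincidence histogram from synthetic timestamps; it illustrates the principle only and does not use PicoQuant’s actual API:

import numpy as np

# Sketch of a two-channel coincidence histogram from photon time tags.
# Timestamps are synthetic; a real system would stream tags from hardware.

def delay_histogram(t1, t2, window_ps=10_000, bin_ps=250):
    """Histogram t2 - t1 delays within +/- window_ps for sorted tag arrays."""
    edges = np.arange(-window_ps, window_ps + bin_ps, bin_ps)
    counts = np.zeros(edges.size - 1, dtype=np.int64)
    lo = np.searchsorted(t2, t1 - window_ps)   # first candidate partner per tag
    hi = np.searchsorted(t2, t1 + window_ps)   # last candidate partner per tag
    for t, a, b in zip(t1, lo, hi):            # few partners per tag: cheap loop
        counts += np.histogram(t2[a:b] - t, bins=edges)[0]
    return edges, counts

rng = np.random.default_rng(1)
t1 = np.sort(rng.integers(0, 10**9, 20_000))   # channel 1 tags (ps)
t2 = np.sort(rng.integers(0, 10**9, 20_000))   # channel 2 tags (ps)
edges, counts = delay_histogram(t1, t2)
print(counts.sum(), "coincidences within the window")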

The HydraHarp 500

A key strength of the HydraHarp 500 is its high flexibility in detector integration. Multiple trigger methods support a wide range of detector technologies, from single-photon avalanche diodes (SPADs) to superconducting nanowire single-photon detectors (SNSPDs). Versatile interfaces, including USB 3.0 and a dedicated FPGA interface, ensure seamless data transfer and easy integration into existing experimental setups. For distributed and synchronized systems, White Rabbit compatibility enables precise cross-device timing coordination.

Engineered for speed and efficiency, the HydraHarp 500 combines ultrashort per-channel dead time with industry-leading timing performance, ensuring complete datasets and excellent statistical accuracy even under demanding experimental conditions.

Looking ahead, PicoQuant is preparing to expand the HydraHarp family with the upcoming HydraHarp 500 L. This new variant will set new standards for data throughput and scalability. With outstanding timing resolution, excellent timing precision and up to 64 flexible channels, the HydraHarp 500 L is engineered for the highest-throughput applications and – for the first time – is powered by USB 3.2 Gen 2×2, making it ideal for rapid, large-volume data acquisition.

With the HydraHarp 500 and the forthcoming HydraHarp 500 L, PicoQuant continues to redefine what is possible in photon timing, delivering precision, scalability and flexibility for today’s and tomorrow’s photonics research. For more information, visit www.picoquant.com or contact us at info@picoquant.com.

  • Meet PicoQuant at BiOS booth #8511 and Photonics West booth #3511.


Innovation showcase SPIE Photonics West sees the global optics and photonics community come together to present and discuss the latest industry trends and research breakthroughs https://physicsworld.com/wp-content/uploads/2026/01/san-francisco-oakland-bay-bridge-1136437406-istockbluejayphoto.jpg newsletter
Mission to Mars: from biological barriers to ethical impediments https://physicsworld.com/a/mission-to-mars-from-biological-barriers-to-ethical-impediments/ Wed, 14 Jan 2026 11:00:14 +0000 https://physicsworld.com/?p=125458 Emma Chapman reviews Becoming Martian: How Living in Space Will Change Our Bodies and Minds by Scott Solomon

“It’s hard to say when exactly sending people to Mars became a goal for humanity,” ponders author Scott Solomon in his new book Becoming Martian: How Living in Space Will Change Our Bodies and Minds – and I think we’d all agree. Ten years ago, I’m not sure any of us thought even returning to the Moon was seriously on the cards. Yet here we are, suddenly living in a second space age, where the first people to purchase one-way tickets to the Red Planet have likely already been born.

The technology required to ship humans to Mars, and the infrastructure required to keep them alive, is well constrained, at least in theory. One could write thousands of words discussing the technical details of reusable rocket boosters and underground architectures. However, Becoming Martian is not that book. Instead, it deals with the effect Martian life will have on the human body – both in the short term, across a single lifetime, and in the long term, on evolutionary timescales.

This book’s strength lies in its authorship: it is not written by a physicist enthralled by the engineering challenge of Mars, nor by an astronomer predisposed to romanticizing space exploration. Instead, Solomon is a research biologist who teaches ecology, evolutionary biology and scientific communication at Rice University in Houston, Texas.

Becoming Martian starts with a whirlwind, stripped-down tour of Mars across mythology, astronomy, culture and modern exploration. This effectively sets out the core issue: Mars is fundamentally different from Earth, and life there is going to be very difficult. Solomon goes on to describe the effects of space travel and microgravity on humans that we know of so far: anaemia, muscle wastage, bone density loss and increased radiation exposure, to name just a few.

Where the book really excels, though, is when Solomon uses his understanding of evolutionary processes to extend these findings and conclude how Martian life would be different. For example, childbirth becomes a very risky business on a planet with about one-third of Earth’s gravity. The loss of bone density translates into a higher risk of pelvic fractures, and the muscle wastage leaves the uterus unable to contract strongly enough. The result? All Martian births will likely need to be C-sections.

Solomon applies his expertise to the whole human body, including our “entourage” of micro-organisms. The indoor life of a Martian is likely to affect the immune system to the degree that contact with an Earthling would be immensely risky. “More than any other factor, the risk of disease transmission may be the wedge that drives the separation between people on the two planets,” he writes. “It will, perhaps inevitably, cause the people on Mars to truly become Martians.” Since many diseases are harboured or spread by animals, there is a compelling argument that Martians would be vegan and – a dealbreaker for some I imagine – unable to have any pets. So no dogs, no cats, no steak and chips on Mars.

Let’s get physical

The most fascinating part of the book for me is how Solomon repeatedly links the biological and psychological research with the more technical aspects of designing a mission to Mars. For example, the first exploratory teams should have odd numbers, to make decisions easier and us-versus-them rifts less likely. The first colonies will also need to number between 10,000 and 11,000 individuals to ensure enough genetic diversity to protect against evolutionary hazards such as genetic drift and population crashes.

Amusingly, the one part of human activity most important for a sustainable colony – procreation – is the most understudied. When a NASA scientist suggested that a colony would need private spaces with soundproof walls, the backlash was so severe that NASA had to reassure Congress that taxpayer dollars were not being “wasted” encouraging sexual activity among astronauts.

Solomon’s writing is concise yet extraordinarily thorough – there is always just enough for you to feel you can understand the importance and nuance of topics ranging from Apollo-era health studies to evolution, and from AI to genetic engineering. The book is impeccably researched, and he presents conflicting ethical viewpoints so deftly, and without apparent judgement, that you are left plenty of space to imprint your own opinions. So much so that when Solomon shares his own stance on the colonization of Mars in the epilogue, it comes as a bit of a surprise.

In essence, this book lays out a convincing argument that it might be our biology, not our technology, that limits humanity’s expansion to Mars. And if we are able to overcome those limitations, either with purposeful genetic engineering or passive evolutionary change, this could mean we have shed our humanity.

Becoming Martian is one of the best popular-science books I have read within the field, and it is an uplifting read, despite dealing with some of the heaviest ethical questions in space sciences. Whether you’re planning your future as a Martian or just wondering if humans can have sex in space, this book should be on your wish list.

  • February 2026 MIT Press 264pp £27hb

Opinion and reviews Emma Chapman reviews Becoming Martian: How Living in Space Will Change Our Bodies and Minds by Scott Solomon https://physicsworld.com/wp-content/uploads/2026/01/2026-01-chapman-mars-colony-family-2190457854-istock-denis-art.jpg newsletter
Solar storms could be forecast by monitoring cosmic rays https://physicsworld.com/a/solar-storms-could-be-forecast-by-monitoring-cosmic-rays/ Wed, 14 Jan 2026 08:33:19 +0000 https://physicsworld.com/?p=125948 Forbush decrease effect is tracked between the Sun and Earth

Using incidental data collected by the BepiColombo mission, an international research team has made the first detailed measurements of how coronal mass ejections (CMEs) reduce cosmic-ray intensity at varying distances from the Sun. Led by Gaku Kinoshita at the University of Tokyo, the team hopes that their approach could help improve the accuracy of space weather forecasts following CMEs.

CMEs are dramatic bursts of plasma originating from the Sun’s outer atmosphere. In particularly violent events, this plasma can travel through interplanetary space, sometimes interacting with Earth’s magnetic field to produce powerful geomagnetic storms. These storms result in vivid aurorae in Earth’s polar regions and can also damage electronics on satellites and spacecraft. Extreme storms can even affect electrical grids on Earth.

To prevent such damage, astronomers aim to predict the path and intensity of CME plasma as accurately as possible – allowing endangered systems to be temporarily shut down with minimal disruption. According to Kinoshita’s team, one source of information has so far been largely unexplored.

Pushing back cosmic rays

Within interplanetary space, CME plasma interacts with cosmic rays, which are energetic charged particles of extrasolar origin that permeate the solar system with a roughly steady flux. When an interplanetary CME (ICME) passes by, it temporarily pushes back these cosmic rays, creating a local decrease in their intensity.

“This phenomenon is known as the Forbush decrease effect,” Kinoshita explains. “It can be detected even with relatively simple particle detectors, and reflects the properties and structure of the passing ICME.”

In principle, cosmic-ray observations can provide detailed insights into the physical profile of a passing ICME. But despite their relative ease of detection, Forbush decreases had not yet been observed simultaneously by detectors at multiple distances from the Sun, leaving astronomers unclear on how propagation distance affects their severity.
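
To see what such a measurement involves at its simplest, consider the sketch below – an illustrative toy with invented numbers, not the team’s analysis code. It estimates the depth of a Forbush decrease by comparing the minimum count rate inside an event window with the quiet-time baseline before it:

```python
import numpy as np

def forbush_depth(rate, t, event_start, event_end):
    """Estimate the percentage depth of a Forbush decrease in a count-rate series."""
    baseline = rate[t < event_start].mean()                  # quiet-time level
    dip = rate[(t >= event_start) & (t <= event_end)].min()  # deepest point of the event
    return 100.0 * (baseline - dip) / baseline

# Toy data: one week of samples with a ~5% dip centred at t = 84 h,
# plus Poisson counting noise
t = np.linspace(0, 168, 1000)                                # time in hours
signal = 100.0 * (1 - 0.05 * np.exp(-0.5 * ((t - 84) / 12) ** 2))
rate = np.random.poisson(signal * 60) / 60.0                 # counts per second

print(f"Forbush decrease depth: {forbush_depth(rate, t, 60, 110):.1f}%")
```

A real analysis must also correct for instrumental effects and, in BepiColombo’s case, the detector’s changing distance from the Sun, but the core observable is this fractional dip.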

Now, Kinoshita’s team have explored this spatial relationship using BepiColombo, a European and Japanese mission that will begin orbiting Mercury in November 2026. While the mission focuses on Mercury’s surface, interior, and magnetosphere, it also carries non-scientific equipment capable of monitoring cosmic rays and solar plasma in its surrounding environment.

“Such radiation monitoring instruments are commonly installed on many spacecraft for engineering purposes,” Kinoshita explains. “We developed a method to observe Forbush decreases using a non-scientific radiation monitor onboard BepiColombo.”

Multiple missions

The team combined these measurements with data from specialized radiation-monitoring missions, including ESA’s Solar Orbiter, which is currently probing the inner heliosphere from inside Mercury’s orbit, as well as a network of near-Earth spacecraft. Together, these instruments allowed the researchers to build a detailed, distance-dependent profile of a week-long ICME that occurred in March 2022.

Just as predicted, the measurements revealed a clear relationship between the Forbush decrease effect and distance from the Sun.

“As the ICME evolved, the depth and gradient of its associated cosmic-ray decrease changed accordingly,” Kinoshita says.

With this method now established, the team hopes it can be applied to non-scientific radiation monitors on other missions throughout the solar system, enabling a more complete picture of the distance dependence of ICME effects.

“An improved understanding of ICME propagation processes could contribute to better forecasting of disturbances such as geomagnetic storms, leading to further advances in space weather prediction,” Kinoshita says. In particular, this approach could help astronomers model the paths and intensities of solar plasma as soon as a CME erupts, improving preparedness for potentially damaging events.

The research is described in The Astrophysical Journal.

Research update Forbush decrease effect is tracked between the Sun and Earth https://physicsworld.com/wp-content/uploads/2026/01/14-1-25-bepicolumbo.jpg
CERN team solves decades-old mystery of light nuclei formation https://physicsworld.com/a/cern-team-solves-decades-old-mystery-of-light-nuclei-formation/ Tue, 13 Jan 2026 14:00:29 +0000 https://physicsworld.com/?p=125933 ALICE collaboration identifies three-phase process that allows particles to form despite energetically unfavourable conditions

When particle colliders smash particles into each other, the resulting debris cloud sometimes contains a puzzling ingredient: light atomic nuclei. Such nuclei have relatively low binding energies, and they would normally break down at temperatures far below those found in high-energy collisions. Somehow, though, their signature remains. This mystery has stumped physicists for decades, but researchers in the ALICE collaboration at CERN have now figured it out. Their experiments showed that light nuclei form via a process called resonance-decay formation – a result that could pave the way towards searches for physics beyond the Standard Model.

Baryon resonance

The ALICE team studied deuterons (a bound proton and neutron) and antideuterons (a bound antiproton and antineutron) that form in experiments at CERN’s Large Hadron Collider. Both deuterons and antideuterons are fragile, and their binding energies of 2.2 MeV would seemingly make it hard for them to form in collisions with energies that can exceed 100 MeV – 100 000 times hotter than the centre of the Sun.

The collaboration found that roughly 90% of the deuterons seen after such collisions form in a three-phase process. In the first phase, an initial collision creates a so-called baryon resonance, which is an excited state of a particle made of three quarks (such as a proton or neutron). This particle is called a Δ baryon and is highly unstable, so it rapidly decays into a pion and a nucleon (a proton or a neutron) during the second phase of the process. Then, in the third (and, crucially, much later) phase, the nucleon cools down to a point where its energy properties allow it to bind with another nucleon to form a deuteron.

Smoking gun

Measuring such a complex process is not easy, especially as everything happens on a length scale of femtometres (10⁻¹⁵ m). To tease out the details, the collaboration performed precision measurements to correlate the momenta of the pions and deuterons. When they analysed the momentum difference between these particle pairs, they observed a peak in the data corresponding to the mass of the Δ baryon. This peak shows that the pion and the deuteron are kinematically linked because they share a common ancestor: the pion came from the same Δ decay that provided one of the deuteron’s nucleons.
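
To get a feel for why a common ancestor shows up as a peak, consider a toy version of the idea (illustrative only – the numbers and smearing below are invented, and the histogrammed variable is a simplified stand-in for ALICE’s correlation observable). Pairs that genuinely descend from a Δ baryon cluster around its mass of roughly 1.232 GeV, while random combinations form a smooth background:

```python
import numpy as np

# Toy resonance hunt: in a real analysis the histogrammed quantity would be
# built from measured four-momenta, e.g. the pair invariant mass
# m^2 = (E1 + E2)^2 - |p1 + p2|^2 (natural units).
rng = np.random.default_rng(1)
m_signal = rng.normal(1.232, 0.06, 2000)     # pairs sharing a Delta ancestor
m_background = rng.uniform(1.0, 1.6, 8000)   # uncorrelated combinations
masses = np.concatenate([m_signal, m_background])

counts, edges = np.histogram(masses, bins=60, range=(1.0, 1.6))
print(f"peak near {edges[np.argmax(counts)]:.2f} GeV")  # expect ~1.23 GeV
```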

Panos Christakoglou, a member of the ALICE collaboration based at Maastricht University in the Netherlands, says the experiment is special because, in contrast to most previous attempts – where results were interpreted in the light of models or phenomenological assumptions – this technique is model-independent. He adds that the results of this study could be used to improve models of high-energy proton-proton collisions in which light nuclei (and maybe hadrons more generally) are formed. Other possibilities include improving our interpretations of cosmic-ray studies that measure the fluxes of (anti)nuclei in the galaxy – a useful probe for astrophysical processes.

The hunt is on

Intriguingly, Christakoglou suggests that the team’s technique could also be used to search for indirect signs of dark matter. Many models predict that dark-matter candidates such as Weakly Interacting Massive Particles (WIMPs) will decay or annihilate in processes that also produce Standard Model particles, including (anti)deuterons. “If for example one measures the flux of (anti)nuclei in cosmic rays being above the ‘Standard Model based’ astrophysical background, then this excess could be attributed to new physics which might be connected to dark matter,” Christakoglou tells Physics World.

Michael Kachelriess, a physicist at the Norwegian University of Science and Technology in Trondheim, Norway, who was not involved in this research, says the debate over the correct formation mechanism for light nuclei (and antinuclei) has divided particle physicists for a long time. In his view, the data collected by the ALICE collaboration decisively resolves this debate by showing that light nuclei form in the late stages of a collision via the coalescence of nucleons. Kachelriess calls this a “great achievement” in itself, and adds that similar approaches could make it possible to address other questions, such as whether thermal plasmas form in proton-proton collisions as well as in collisions between heavy ions.

Research update ALICE collaboration identifies three-phase process that allows particles to form despite energetically unfavourable conditions https://physicsworld.com/wp-content/uploads/2026/01/alice-deuteron-production-web.jpg newsletter1
Anyon physics could explain coexistence of superconductivity and magnetism https://physicsworld.com/a/anyon-physics-could-explain-coexistence-of-superconductivity-and-magnetism/ Tue, 13 Jan 2026 08:45:52 +0000 https://physicsworld.com/?p=125911 Calculations explain curious properties of some 2D materials

New calculations by physicists in the US provide deeper insights into an exotic material in which superconductivity and magnetism can coexist. Using a specialized effective field theory, Zhengyan Shi and Todadri Senthil at the Massachusetts Institute of Technology show how this coexistence can emerge from the collective states of mobile anyons in certain 2D materials.

An anyon is a quasiparticle with statistical properties that lie somewhere between those of bosons and fermions. First observed in 2D electron gases in strong magnetic fields, anyons are known for their fractional electrical charge and fractional exchange statistics, which alter the joint quantum state of a pair of identical anyons when the two are swapped.
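
In textbook terms (standard notation, not taken from the new paper), swapping two identical particles multiplies their joint wavefunction by a phase factor:

```latex
\psi(\mathbf{r}_2, \mathbf{r}_1) = e^{i\theta}\,\psi(\mathbf{r}_1, \mathbf{r}_2),
\qquad \theta = 0 \ \text{(bosons)}, \quad \theta = \pi \ \text{(fermions)}
```

Anyons take intermediate values of θ – an option that is mathematically consistent only for particles confined to two dimensions.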

Unlike ordinary electrons, anyons produced in these early experiments could not move freely, preventing them from forming complex collective states. Yet in 2023, experiments with a twisted bilayer of molybdenum ditelluride provided the first evidence for mobile anyons through observations of fractional quantum anomalous Hall (FQAH) insulators. This effect appears as fractionally quantized electrical resistance in 2D electron systems at zero applied magnetic field.

Remarkably, these experiments revealed that molybdenum ditelluride can exhibit superconductivity and magnetism at the same time. Since superconductivity usually relies on electron pairing that can be disrupted by magnetism, this coexistence was previously thought impossible.

Anyonic quantum matter

“This then raises a new set of theoretical questions,” explains Shi. “What happens when a large number of mobile anyons are assembled together? What kind of novel ‘anyonic quantum matter’ can emerge?”

In their study, Shi and Senthil explored these questions using a new effective field theory for an FQAH insulator. Effective field theories are widely used in physics to approximate complex phenomena without modelling every microscopic detail. In this case, the duo’s model captured the competition between anyon mobility, interactions, and fractional exchange statistics in a many-body system of mobile anyons.

To test their model, the researchers considered the doping of an FQAH insulator – adding mobile anyons beyond the plateau in Hall resistance, where the existing anyons were effectively locked in place. This allowed the quasiparticles to move freely and form new collective phases.

“Crucially, we recognized that the fate of the doped state depends on the energetic hierarchy of different types of anyons,” Shi explains. “This observation allowed us to develop a powerful heuristic for predicting whether the doped state becomes a superconductor without any detailed calculations.”

In their model, Shi and Senthil focused on a specific FQAH insulator called a Jain state, which hosts two types of anyon excitations: one carries an electrical charge one-third that of an electron, the other two-thirds. In a perfectly clean system, doping the insulator with 2/3-charge anyons produced a chiral topological superconductor, a phase that is robust against disorder and features edge currents flowing in only one direction. In contrast, doping with 1/3-charge anyons produced a metal with broken translation symmetry – still conducting, but with non-uniform patterns in its electron density.

Anomalous vortex glass

“In the presence of impurities, we showed that the chiral superconductor near the superconductor–insulator transition is a novel phase of matter dubbed the ‘anomalous vortex glass’, in which patches of swirling supercurrents are sprinkled randomly across the sample,” Shi describes. “Observing this vortex glass phase would be smoking-gun evidence for the anyonic mechanism for superconductivity.”

The results suggest that even when adding the simplest kind of anyons – like those in the Jain state – the collective behaviour of these quasiparticles can enable the coexistence of magnetism and superconductivity. In future studies, the duo hopes that more advanced methods for introducing mobile anyons could reveal even more exotic phases.

“Remarkably, our theory provides a qualitative account of the phase diagram of a particular 2D material (twisted molybdenum ditelluride), although many more tests are needed to rule out other possible explanations,” Shi says. “Overall, these findings highlight the vast potential of anyonic quantum matter, suggesting a fertile ground for future discoveries.”

The research is described in PNAS.

Research update Calculations explain curious properties of some 2D materials https://physicsworld.com/wp-content/uploads/2026/01/13-1-26-stock-pic-for-anyon-story.jpg newsletter1
Can entrepreneurship be taught? An engineer’s viewpoint https://physicsworld.com/a/can-entrepreneurship-be-taught-an-engineers-viewpoint/ Mon, 12 Jan 2026 11:00:54 +0000 https://physicsworld.com/?p=125395 Honor Powrie wonders what skills she’d need to be an entrepreneur

I am intrigued by entrepreneurship. Is it something we all innately possess – or can entrepreneurship be taught to anyone (myself included) for whom it doesn’t come naturally? Could we all – with enough time, training and support – become the next Jeff Bezos, Richard Branson or Martha Lane Fox?

In my professional life as an engineer in industry, we often talk about the importance of invention and innovation. Without them, products will become dated and firms will lose their competitive edge. However, inventions don’t necessarily sell themselves, which is where entrepreneurs have a key influence.

So what’s the difference between inventors, innovators and entrepreneurs? An inventor, to me, is someone who creates a new process, application or machine. An innovator is a person who introduces something new or does something for the first time. An entrepreneur, however, is someone who sets up a business or takes on a venture, embracing financial risks with the aim of profit.

Scientists and engineers are naturally good inventors and innovators. We like to solve problems, improve how we do things, and make the world more ordered and efficient. In fact, many of the greatest inventors and innovators of all time were scientists and engineers – think James Watt, George Stephenson and Frank Whittle.

But entrepreneurship requires different, additional qualities. Entrepreneurs come from a variety of backgrounds – not just science and engineering – and tend to have finance in their blood. They embrace risk and have seemingly unlimited courage and business acumen – skills I’d need to pick up if I wanted to be an entrepreneur myself.

Risk and reward

Engineers are encouraged to take risks, exploring new technologies and designs; in fact, it’s critical for companies seeking to stay competitive. But we take risks in a calculated and professional manner that prioritizes safety, quality, regulations and ethics, and project success. We balance risk taking with risk management, spotting and assessing potential risks – and mitigating or removing them if they’re big.

Courage is not something I’ve always had professionally. Over time, I have learned to speak up if I feel I have something to say that’s important to the situation or contributes to our overall understanding. Still, there’s always a fear of saying something silly in front of other people or being unable to articulate a view adequately. But entrepreneurs have courage in their DNA.

So can entrepreneurship be taught? Specifically, can it be taught to people like me with a technical background – and, if so, how? Some of the most famous innovators, like Henry Ford, Thomas Edison, Steve Jobs, James Dyson and Benjamin Franklin, had scientific or engineering backgrounds, so is there a formula for making more people like them?

Skill sets and gaps

Let’s start by listing the skills that most engineers have that could be beneficial for entrepreneurship. In no particular order, these include:

  • problem-solving ability: essential for designing effective solutions and identifying market gaps;
  • innovative mindset: critical for building a successful business venture;
  • analytical thinking: engineers make decisions based on data and logic, which is vital for business planning and decision making;
  • persistence: a pre-requisite for delivering engineering projects and needed to overcome the challenges of starting a business;
  • technical expertise: a significant competitive advantage that provides credibility, especially relevant for tech start-ups.

However, there are mindset differences between engineers and entrepreneurs that any training would need to overcome. These include:

  • risk tolerance: engineers typically focus on improving reliability and reducing risk, whilst entrepreneurs are more comfortable with embracing greater uncertainty;
  • focus: engineers concentrate on delivering to requirements, whilst entrepreneurs focus on consumer needs and speed to market;
  • business acumen: a typical engineering education doesn’t cover essential business skills such as marketing, sales and finance, all of which are vital for running a company.

Such skills may not always come naturally to engineers and scientists, but they can be incorporated into our teaching and learning. Some great examples of how to do this were covered in Physics World last year. In addition, there is a growing number of UK universities offering science and engineering degrees combined with entrepreneurship.

The message is that whilst some scientists and engineers become entrepreneurs, not all do. Simply having a science or engineering background is no guarantee of becoming an entrepreneur, nor is it a requirement. Nevertheless, the problem-solving and technical skills developed by scientists and engineers are powerful assets that, when combined with business acumen and entrepreneurial drive, can lead to business success.

Of course, entrepreneurship may not suit everybody – and that’s perfectly fine. No-one should be forced to become an entrepreneur if they don’t want to. We all need to play to our core strengths and interests and build well-rounded teams with complementary skillsets – something that every successful business needs. But surely there’s a way of teaching entrepreneurship too?

Opinion and reviews Honor Powrie wonders what skills she’d need to be an entrepreneur https://physicsworld.com/wp-content/uploads/2026/01/2026-01-Transactions-teaching-entrepreneurship-2406937943-shutterstock-Lightspring.jpg newsletter1
Shapiro steps spotted in ultracold bosonic and fermionic gases https://physicsworld.com/a/shapiro-steps-spotted-in-ultracold-bosonic-and-fermionic-gases/ Mon, 12 Jan 2026 08:00:00 +0000 https://physicsworld.com/?p=125900 Research could lead to a standard for chemical potential

Shapiro steps – a series of abrupt jumps in the voltage–current characteristic of a Josephson junction that is exposed to microwave radiation – have been observed for the first time in ultracold gases by groups in Germany and Italy. Their work on atomic Josephson junctions provides new insights into the phenomenon, and could lead to a standard for chemical potential.

In 1962 Brian Josephson of the University of Cambridge calculated that, if two superconductors were separated by a thin insulating barrier, the phase difference between the wavefunctions on either side should induce quantum tunnelling, leading to a current at zero potential difference.

A year later, Sidney Shapiro and colleagues at the consultancy Arthur D. Little showed that inducing an alternating electric current using a microwave field causes the phase of the wavefunction on either side of a Josephson junction to evolve at different rates, leading to quantized increases in potential difference across the junction. The height of these “Shapiro steps” depends only on the applied frequency of the field and the electrical charge. This effect is now used as a reference standard for the volt.
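
The textbook relation (quoted here as background) states that under a microwave drive of frequency f, the DC voltage across the junction locks to discrete values

```latex
V_n = n\,\frac{h f}{2e}, \qquad n = 0, 1, 2, \ldots
```

where h is Planck’s constant and 2e is the charge of a Cooper pair. In the atomic junctions described below, the voltage is replaced by a chemical-potential difference, so the steps are expected – schematically – at Δμ_n = n h f.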

Researchers have subsequently developed analogues of Josephson junctions in other systems such as liquid helium and ultracold atomic gases. In the new work, two groups have independently observed Shapiro steps in ultracold quantum gases. Instead of placing a fixed insulator in the centre and driving the system with a field, the researchers used focused laser beams to create potential barriers that divided the traps into two. Then they moved the positions of the barriers to alter the potentials of the atoms on either side.

Current emulation

“If we move the atoms with a constant velocity, that means there’s a constant velocity of atoms through the barrier,” says Herwig Ott of RPTU University Kaiserslautern-Landau in Germany, who led one of the groups. “This is how we emulate a DC current. Now for the Shapiro protocol you have to apply an AC current, and the AC current you simply get by modulating your barrier in time.”

Ott and colleagues in Kaiserslautern, in collaboration with researchers in Hamburg and the United Arab Emirates (UAE), used a Bose–Einstein condensate (BEC) of rubidium-87 atoms. Meanwhile in Italy, Giulia Del Pace of the European Laboratory for Nonlinear Spectroscopy at the University of Florence and colleagues (including the same UAE collaborators) studied ultracold lithium-6 atoms, which are fermions.

Both groups observed the theoretically-predicted Shapiro steps, but Ott and Del Pace explain that these observations do not simply confirm predictions. “The message is that no matter what your microscopic mechanism is, the phenomenon of Shapiro steps is universal,” says Ott. In superconductors, the Shapiro steps are caused by the breaking of Cooper pairs; in ultracold atomic gases, vortex rings are created. Nevertheless, the same mathematics applies. “This is really quite remarkable,” says Ott.

Del Pace says it was unclear whether Shapiro steps would be seen in strongly-interacting fermions, which are “way more interacting than the electrons in superconductors”. She asks, “Is it a limitation to have strong interactions or is it something that actually helps the dynamics to happen? It turns out it’s the latter.”

Magnetic tuning

Del Pace’s group applied a variable magnetic field to tune their system between a BEC of molecules, a system dominated by Cooper pairs, and a unitary Fermi gas in which the particles interact as strongly as quantum mechanics permits. The size of the Shapiro steps depended on the strength of the interparticle interaction.

Ott and Del Pace both suggest that this effect could be used to create a reference standard for chemical potential – a measure of the strength of the atomic interaction (or equation of state) in a system.

“This equation of state is very well known for a BEC or for a strongly interacting Fermi gas…but there is a range of interaction strengths where the equation of state is completely unknown, so one can imagine taking inspiration from the way Josephson junctions are used in superconductors and using atomic Josephson junctions to study the equation of state in systems where the equation of state is not known,” explains Del Pace.

The two papers are published side by side in Science: Del Pace and Ott.

Rocío Jáuregui Renaud of the Autonomous University of Mexico is impressed, especially by the demonstration in both bosons and fermions. “The two papers are important, and they are congruent in their results, but the platform is different,” she says. “At this point, the idea is not to give more information directly about superconductivity, but to learn more about phenomena that sometimes you are not able to see in electronic systems but you would probably see in neutral atoms.”

Research update Research could lead to a standard for chemical potential https://physicsworld.com/wp-content/uploads/2026/01/12-1-26-shapiro-steps.jpg
Watching how grasshoppers glide inspires new flying robot design https://physicsworld.com/a/watching-how-grasshoppers-glide-inspires-new-flying-robot-design/ Fri, 09 Jan 2026 14:43:34 +0000 https://physicsworld.com/?p=125901 The findings could result in flying robots with smaller batteries

While much insight has been gleaned from how grasshoppers hop, their gliding prowess has mostly been overlooked. Now researchers at Princeton University have studied how these gangly insects deploy and retract their wings to inspire a new approach to flying robots.

Insect-inspired robots are often modelled on bees and flies, featuring a constant flapping motion. Yet flapping requires a lot of power, so such robots either carry heavy batteries or are tethered to a power supply.

Grasshoppers, however, are able to jump and glide as well as flap their wings. And while they are not the best gliders among insects, they have another trick: they can retract and unfurl their wings.

Grasshoppers have two sets of wings, the forewings and hindwings. The front wing is mainly used for protection and camouflage while the hindwing is used for flight. The hindwing is corrugated, which allows it to fold in neatly like an accordion.

A team of engineers, biologists and entomologists analysed the wings of the American grasshopper – also known as the bird grasshopper for its superior flying skills. They took CT scans of the insects and then used the findings to 3D-print model wings. They attached these wings to small frames to create grasshopper-inspired gliders, finding that their performance was on par with that of actual grasshoppers.

The team also tweaked certain wing features such as the shape, camber and corrugation, finding that a smooth wing produced gliding that was more efficient and repeatable than one with corrugations. “This showed us that these corrugations might have evolved for other reasons,” notes Princeton engineer Aimy Wissa, who adds that “very little” is known about how grasshoppers deploy their wings.

The researchers say that further work could result in new ways to extend the flight time of insect-sized robots without the need for heavy batteries or tethering. “This grasshopper research opens up new possibilities not only for flight, but also for multimodal locomotion,” adds team member Lee. “By combining biology with engineering, we’re able to build and ideate on something completely new.”

Blog The findings could result in flying robots with smaller batteries https://physicsworld.com/wp-content/uploads/2026/01/flying-grasshoppers-09-01-2026.jpg newsletter
Cracking the limits of clocks: a new uncertainty relation for time itself https://physicsworld.com/a/cracking-the-limits-of-clocks-a-new-uncertainty-relation-for-time-itself/ Fri, 09 Jan 2026 12:00:03 +0000 https://physicsworld.com/?p=125771 Physicists uncover a universal limit on timekeeping precision, proving that anything from heartbeats to ocean waves can be a clock – but none escape the noise

What if a chemical reaction, ocean waves or even your heartbeat could all be used as clocks? That’s the starting point of a new study by Kacper Prech, Gabriel Landi and collaborators, who uncovered a fundamental, universal limit to how precisely time can be measured in noisy, fluctuating systems. Their discovery – the clock uncertainty relation (CUR) – doesn’t just refine existing theory; it reframes timekeeping as an information problem embedded in the dynamics of physical processes, from nanoscale biology to engineered devices.

The foundation of this work is a simple but powerful reframing: anything that “clicks” regularly is a clock. In the research paper’s opening analogy, a castaway tries to cook a fish without a wristwatch. They could count bird calls, ocean waves, or heartbeats – each a potential timekeeper with different cadence and regularity. But questions remain: given real-world fluctuations, what’s the best way to estimate time, and what are the inescapable limits?

The authors answer both. They show that for a huge class of systems – those described by classical, Markovian jump processes (systems where the future depends only on the present state, not the past history – a standard model across statistical physics and biophysics) – there is a tight, achievable bound on timekeeping precision. The bound is controlled not by how often the system jumps on average (the traditional “dynamical activity”), but by a subtler quantity: the mean residual time, or the average time you’d wait for the next event if you started observing at a random moment. That distinction matters.
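
For a renewal process – one whose waiting times τ between events are independent draws from the same distribution – the mean residual time has a standard closed form, included here as background:

```latex
\tau_R = \frac{\langle \tau^2 \rangle}{2\,\langle \tau \rangle} \;\ge\; \frac{\langle \tau \rangle}{2}
```

Equality holds only for perfectly regular ticking; the more variable the waiting times, the longer the expected wait from a randomly chosen starting moment, which is why raw jump frequency alone can overstate a clock’s quality.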

The inspection paradox

The study introduces the CUR, a universal, tight bound on timekeeping precision that – unlike earlier bounds – can be saturated, and the researchers identify the exact observables that achieve this limit. Surprisingly, the optimal strategy for estimating time from a noisy process is remarkably simple: sum the expected waiting times of each observed state along the trajectory, rather than relying on complex fitting methods. The work also reveals that the true limiting factor for precision isn’t the traditional dynamical activity, but rather the inverse of the mean residual time. This makes the CUR provably tighter than the earlier kinetic uncertainty relation, especially in systems far from equilibrium.
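
A minimal simulation makes that estimator concrete. In the sketch below (a toy construction for illustration, not the authors’ code), a two-state Markov jump process plays the role of the clock, and elapsed time is estimated by adding up the mean dwell time of each state visit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov jump "clock": exit rates out of each state (events per second)
rates = np.array([1.0, 3.0])
mean_wait = 1.0 / rates              # expected dwell time in each state

def run_clock(t_total=1000.0):
    """Simulate one trajectory; return (true elapsed time, estimated time)."""
    t, estimate, state = 0.0, 0.0, 0
    while t < t_total:
        t += rng.exponential(mean_wait[state])  # actual, noisy dwell time
        estimate += mean_wait[state]            # add the *expected* dwell instead
        state = 1 - state                       # jump to the other state
    return t, estimate

runs = np.array([run_clock() for _ in range(500)])
errors = runs[:, 1] - runs[:, 0]
print(f"mean error {errors.mean():+.2f} s, rms error {errors.std():.2f} s")
```

Averaged over many runs the estimate is unbiased, and its run-to-run scatter is exactly the kind of fluctuation the CUR bounds.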

The team also connects precision to two practical clock metrics: resolution (how often a clock ticks) and accuracy (how many ticks before it drifts by one tick). In other words, achieving steadier ticks comes at the cost of accepting fewer of them per unit of time.

This framework offers practical tools across several domains. It can serve as a diagnostic for detecting hidden states in complex biological or chemical systems: if measured event statistics violate the CUR, that signals the presence of hidden transitions or memory effects. For nanoscale and molecular clocks – like biomolecular oscillators (cellular circuits that produce rhythmic chemical signals) and molecular motors (protein machines that walk along cellular tracks) – the CUR sets fundamental performance limits and guides the design of optimal estimators. Finally, while this work focuses on classical systems, it establishes a benchmark for quantum clocks, pointing toward potential quantum advantages and opening new questions about what trade-offs emerge in the quantum regime.

Landi, an associate professor of theoretical quantum physics at the University of Rochester, emphasizes the conceptual shift: that clocks aren’t just pendulums and quartz crystals. “Anything is a clock,” he notes. The team’s framework “gives the recipe for constructing the best possible clock from whatever fluctuations you have,” and tells you “what the best noise-to-signal ratio” can be. In everyday terms, the Sun is accurate but low-resolution for cooking; ocean waves are higher resolution but noisier. The CUR puts that intuition on firm mathematical ground.

Looking forward, the group is exploring quantum generalizations and leveraging CUR-violations to infer hidden structure in biological data. A tantalizing foundational question lingers: can robust biological timekeeping emerge from many bad, noisy clocks, synchronizing into a good one?

Ultimately, this research doesn’t just sharpen a bound; it reframes timekeeping as a universal inference task grounded in the flow of events. Whether you’re a cell sensing a chemical signal, a molecular motor stepping along a track or an engineer building a nanoscale device, the message is clear: to tell time well, count cleverly – and respect the gaps.

The research is detailed in Physical Review X.

Research update Physicists uncover a universal limit on timekeeping precision, proving that anything from heartbeats to ocean waves can be a clock – but none escape the noise https://physicsworld.com/wp-content/uploads/2025/12/time-estimation.jpg newsletter1
Bidirectional scattering microscope detects micro- and nanoscale structures simultaneously https://physicsworld.com/a/bidirectional-scattering-microscope-detects-micro-and-nanoscale-structures-simultaneously/ Fri, 09 Jan 2026 10:00:59 +0000 https://physicsworld.com/?p=125812 New device could be used to observe structures as small as individual proteins, as well as the environment in which they move

A new microscope that can simultaneously measure both forward- and backward-scattered light from a sample could allow researchers to image both micro- and nanoscale objects at the same time. The device could be used to observe structures as small as individual proteins, as well as the environment in which they move, say the researchers at the University of Tokyo who developed it.

“Our technique could help us link cell structures with the motion of tiny particles inside and outside cells,” explains Kohki Horie of the University of Tokyo’s department of physics, who led this research effort. “Because it is label-free, it is gentler on cells and better for long observations. In the future, it could help quantify cell states, holding potential for drug testing and quality checks in the biotechnology and pharmaceutical industries.”

Detecting forward and backward scattered light at the same time

The new device combines two powerful imaging techniques routinely employed in biomedical applications: quantitative phase microscopy (QPM) and interferometric scattering (iSCAT).

QPM measures forward-scattered (FS) light – that is, light waves that travel in the same direction as before they were scattered. This technique is excellent at imaging structures in the Mie scattering region (greater than 100 nm, referred to as microscale in this study). This makes it ideal for visualizing complex structures such as biological cells. It falls short, however, when it comes to imaging structures in the Rayleigh scattering region (smaller than 100 nm, referred to as nanoscale in this study).

The second technique, iSCAT, detects backward-scattered (BS) light. This is light that’s reflected back towards the direction from which it came and which predominantly contains Rayleigh scattering. As such, iSCAT exhibits high sensitivity for detecting nanoscale objects. Indeed, the technique has recently been used to image single proteins, intracellular vesicles and viruses. It cannot, however, image microscale structures because of its limited ability to detect in the Mie scattering region.
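
The sensitivity of iSCAT is usually explained with a standard interference argument (general background, not specific to this paper): the detector records the weak scattered field E_s mixed with a strong reference field E_r,

```latex
I = |E_r + E_s|^2 = |E_r|^2 + 2\,|E_r||E_s|\cos\phi + |E_s|^2
```

Because a Rayleigh scatterer’s field amplitude scales as the cube of the particle diameter, the interference (middle) term falls off as d³ rather than the d⁶ of the pure scattered intensity – which is what makes objects as small as single proteins detectable.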

The team’s new bidirectional quantitative scattering microscope (BiQSM) is able to detect both FS and BS light at the same time, thereby overcoming these previous limitations.

Cleanly separating the signals from FS and BS

The BiQSM system illuminates a sample through an objective lens from two opposite directions and detects both the FS and BS light using a single image sensor. The researchers use the spatial-frequency multiplexing method of off-axis digital holography to capture both images simultaneously. The biggest challenge, says Horie, was to cleanly separate the signals from FS and BS light in the images while keeping noise low and avoiding mixing between them.

Horie – together with colleagues Keiichiro Toda and Takuma Nakamura and team leader Takuro Ideguchi – tested the technique by imaging live cells. They were able to visualize micron-sized cell structures, including the nucleus, nucleoli and lipid droplets, as well as nanoscale particles. They compared the FS and BS results using the scattering-field amplitude (SA), defined as the amplitude ratio between the scattered wave and the incident illumination wave.

“SA characterizes the light scattered in both the forward and backward directions within a unified framework,” says Horie, “so allowing for a direct comparison between FS and BS light images.”

Spurred on by their findings, which are detailed in Nature Communications, the researchers say they now plan to study even smaller particles such as exosomes and viruses.

Research update New device could be used to observe structures as small as individual proteins, as well as the environment in which they move https://physicsworld.com/wp-content/uploads/2026/01/bqsm.jpeg
Quantum information theory sheds light on quantum gravity https://physicsworld.com/a/quantum-information-theory-sheds-light-on-quantum-gravity/ Thu, 08 Jan 2026 14:34:50 +0000 https://physicsworld.com/?p=125883 Our podcast guest is Alex May of the Perimeter Institute for Theoretical Physics

This episode of the Physics World Weekly podcast features Alex May, whose research explores the intersection of quantum gravity and quantum information theory. Based at Canada’s Perimeter Institute for Theoretical Physics, May explains how ideas being developed in the burgeoning field of quantum information theory could help solve one of the most enduring mysteries in physics – how to reconcile quantum mechanics with Einstein’s general theory of relativity, creating a viable theory of quantum gravity.

This interview was recorded in autumn 2025 when I had the pleasure of visiting the Perimeter Institute and speaking to four physicists about their research. This is the last of those conversations to appear on the podcast.

The first interview in this series from the Perimeter Institute was with Javier Toledo-Marín, “Quantum computing and AI join forces for particle physics”; the second was with Bianca Dittrich, “Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge“; and the third was with Tim Hsieh, “Building a quantum future using topological phases of matter and error correction”.

This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March 2026 in Denver, Colorado, and online.

Podcasts Our podcast guest is Alex May of the Perimeter Institute for Theoretical Physics https://physicsworld.com/wp-content/uploads/2026/01/alex-may-list-2.jpg
Chess960 still results in white having an advantage, finds study https://physicsworld.com/a/chess960-still-results-in-white-having-an-advantage-finds-study/ Thu, 08 Jan 2026 13:43:26 +0000 https://physicsworld.com/?p=125885 Research suggests that the standard game, and other permutations, can be unfair to players who go second

Chess is a seemingly simple game, but one that hides incredible complexity. In the standard game, the starting positions of the pieces are fixed so top players rely on memorizing a plethora of opening moves, which can sometimes result in boring, predictable games. It’s also the case that playing as white, and therefore going first, offers an advantage.

In the 1990s, former chess world champion Bobby Fischer proposed another way to play chess to encourage more creative play.

This form of the game – dubbed Chess960 – keeps the pawns in their usual positions but randomizes where the pieces on the back rank – the knights, bishops, rooks, king and queen – are placed at the start, leaving the rest of the rules unchanged. It is named after the 960 possible starting positions that result from shuffling the back rank.

It was thought that Chess960 could allow for more permutations that would make the game fairer for both players. Yet research by physicist Marc Barthelemy at Paris-Saclay University suggests it’s not as simple as this.

Initial advantage

He used Stockfish, an open-source chess engine, to analyse each of the 960 starting positions, and developed a statistical method to measure decision-making complexity by calculating how much “information” a player needs to identify the best moves.
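
Readers can reproduce the flavour of this analysis themselves. The sketch below is a minimal illustration – it assumes the python-chess package and a Stockfish binary on the system path, and note that the paper’s position numbering may not match python-chess’s 0–959 indexing. It asks the engine to score white’s opening advantage for a few Chess960 start positions:

```python
import chess
import chess.engine

# Score white's opening advantage for a few Chess960 start positions.
# python-chess switches the engine into Chess960 mode automatically for
# boards built this way; position indices run from 0 to 959.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

for pos_id in (198, 518, 959):            # 518 is the standard chess setup
    board = chess.Board.from_chess960_pos(pos_id)
    info = engine.analyse(board, chess.engine.Limit(depth=18))
    cp = info["score"].white().score(mate_score=100000)
    print(f"position #{pos_id}: white ahead by ~{cp} centipawns")

engine.quit()
```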

He found that the standard game can be unfair, as the player with the black pieces, who goes second, is forced to keep up with the moves of the player with white.

Yet regardless of the starting positions on the back rank, Barthelemy discovered that white retains an advantage in almost all – 99.6% – of the 960 positions. He also found that the standard set-up – rook, knight, bishop, queen, king, bishop, knight, rook – is nothing special: it is presumably an historical accident, perhaps favoured because the visually symmetrical starting positions are easy to remember.

“Standard chess, despite centuries of cultural evolution, does not occupy an exceptional location in this landscape: it exhibits a typical initial advantage and moderate total complexity, while displaying above-average asymmetry in decision difficulty,” writes Barthelemy.

For a fairer and more balanced match, Barthelemy suggests playing position #198, which starts with the back rank arranged as queen, knight, bishop, rook, king, bishop, knight and rook.

Blog Research suggests that the standard game, and other permutations, can be unfair to players who go second https://physicsworld.com/wp-content/uploads/2026/01/chess-692798186-istock-gedzun.jpg
Tetraquark measurements could shed more light on the strong nuclear force https://physicsworld.com/a/tetraquark-measurements-could-shed-more-light-on-the-strong-nuclear-force/ Thu, 08 Jan 2026 10:00:25 +0000 https://physicsworld.com/?p=125814 CMS Collaboration focuses on a family of three all-charm exotic hadrons

The Compact Muon Solenoid (CMS) Collaboration has made the first measurements of the quantum properties of a family of three “all-charm” tetraquarks that was recently discovered at the Large Hadron Collider (LHC) at CERN. The findings could shed more light on the strong nuclear force, which holds protons and neutrons together in nuclei – and, in turn, help us better understand how ordinary matter forms.

In recent years, the LHC has discovered dozens of massive particles called hadrons, which are made of quarks bound together by the strong force. Quarks come in six types: up, down, charm, strange, top and bottom. Most observed hadrons comprise two or three quarks (called mesons and baryons, respectively). Physicists have also observed exotic hadrons that comprise four or five quarks: the tetraquarks and pentaquarks, respectively. Those seen so far usually contain a charm quark and its antimatter counterpart (a charm antiquark), with the remaining two or three quarks being up, down or strange quarks, or their antiquarks.

Identifying and studying tetraquarks and pentaquarks helps physicists to better understand how the strong force binds quarks together. This force also binds protons and neutrons in atomic nuclei.

Physicists are still divided as to the nature of these exotic hadrons. Some models suggest that their quarks are tightly bound via the strong force, so making these hadrons compact objects. Others say that the quarks are only loosely bound. To confuse things further, there is evidence that in some exotic hadrons, the quarks might be both tightly and loosely bound at the same time.

Now, new findings from the CMS Collaboration suggest that tetraquarks are tightly bound, but they do not completely rule out other models.

Measuring quantum numbers

In their work, which is detailed in Nature, CMS physicists studied all-charm tetraquarks. These comprise two charm quarks and two charm antiquarks and were produced by colliding protons at high energies at the LHC. Three states of this tetraquark have been identified at the LHC: X(6900), X(6600) and X(7100), where the numbers denote their approximate masses in millions of electronvolts. The team measured the fundamental properties of these tetraquarks, including their quantum numbers: parity (P), charge conjugation (C) and spin (J). P determines whether a particle has the same properties as its spatial mirror image; C whether it has the same properties as its antiparticle; and J, the total angular momentum of the hadron. These numbers provide information on the internal structure of a tetraquark.
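
Hadron physicists bundle these three numbers into the shorthand J^PC. The parity and charge-conjugation operators each square to the identity, so their eigenvalues can only be ±1 (standard definitions, given here for orientation):

```latex
\hat{P}\,|X\rangle = P\,|X\rangle, \qquad \hat{C}\,|X\rangle = C\,|X\rangle,
\qquad P,\,C \in \{+1, -1\}
```

A state labelled 2⁺⁺ therefore has spin J = 2 with even parity and even charge conjugation – the assignment the CMS measurement favours, as described below.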

The researchers used a version of a well-known technique called angular analysis, which is similar to the technique used to characterize the Higgs boson. This approach focuses on the angles at which the decay products of the all-charm tetraquarks are scattered.

“We call this technique quantum state tomography,” says CMS team member Chiara Mariotti of INFN Torino in Italy. “Here, we deduce the quantum state of an exotic state X from the analysis of its decay products. In particular, the angular distributions in the decay X → J/ψJ/ψ, followed by J/ψ decays into two muons, serve as analysers of polarization of the two J/ψ particles,” she explains.

The researchers analysed all-charm tetraquarks produced at the CMS experiment between 2016 and 2018. They calculated that J is likely to be 2 and that P and C are both +1. This combination of properties is expressed as 2++.

Result favours tightly-bound quarks

“This result favours models in which all four quarks are tightly bound,” says particle physicist Timothy Gershon of the UK’s University of Warwick, who was not involved in this study. “However, the question is not completely put to bed. The sample size in the CMS analysis is not sufficient to exclude fully other possibilities, and additionally certain assumptions are made that will require further testing in future.”

Gershon adds, “These include assumptions that all three states have the same quantum numbers, and that all correspond to tetraquark decays to two J/ψ mesons with no additional particles not included in the reconstruction (for example there could be missing photons that have been radiated in the decay).”

Further studies with larger data samples are warranted, he adds. “Fortunately, CMS as well as both the LHCb and the ATLAS collaborations [at CERN] already have larger samples in hand, so we should not have to wait too long for updates.”

Indeed, the CMS Collaboration is now gathering more data and exploring additional decay modes of these exotic tetraquarks. “This will ultimately improve our understanding of how this matter forms, which, in turn, could help refine our theories of how ordinary matter comes into being,” Mariotti tells Physics World.

The post Tetraquark measurements could shed more light on the strong nuclear force appeared first on Physics World.

]]>
Research update CMS Collaboration focuses on a family of three all-charm exotic hadrons https://physicsworld.com/wp-content/uploads/2026/01/collider-inside-view.jpeg newsletter1
Reinforcement learning could help airborne wind energy take off https://physicsworld.com/a/reinforcement-learning-could-help-airborne-wind-energy-take-off/ Wed, 07 Jan 2026 16:00:12 +0000 https://physicsworld.com/?p=125861 Machine learning technique teaches power-generating kites to extract energy from turbulent airflows more effectively, boosting their efficiency

The post Reinforcement learning could help airborne wind energy take off appeared first on Physics World.

]]>
When people think of wind energy, they usually think of windmill-like turbines dotted among hills or lined up on offshore platforms. But there is also another kind of wind energy, one that replaces stationary, earthbound generators with tethered kites that harvest energy as they soar through the sky.

This airborne form of wind energy, or AWE, is not as well-developed as the terrestrial version, but in principle it has several advantages. Power-generating kites are much less massive than ground-based turbines, which reduces both their production costs and their impact on the landscape. They are also far easier to install in areas that lack well-developed road infrastructure. Finally, and perhaps most importantly, wind speeds are many times greater at high altitudes than they are near the ground, significantly enhancing the power densities available for kites to harvest.
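
To see why this matters quantitatively, recall the standard wind-power relation (a general result, not specific to this article): the kinetic power carried by wind of speed $v$ and density $\rho$ through an area $A$ is

$$P = \tfrac{1}{2}\,\rho A v^{3},$$

so, at comparable air density, doubling the wind speed at altitude multiplies the available power density by roughly a factor of eight.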

There is, however, one major technical challenge for AWE, and it can be summed up in a single word: control. AWE technology is operationally more complex than conventional turbines, and the traditional method of controlling kites (known as model-predictive control) struggles to adapt to turbulent wind conditions. At best, this reduces the efficiency of energy generation. At worst, it makes it challenging to keep devices safe, stable and airborne.

In a paper published in EPL, Antonio Celani and his colleagues Lorenzo Basile and Maria Grazia Berni of the University of Trieste, Italy, and the Abdus Salam International Centre for Theoretical Physics (ICTP) propose an alternative control method based on reinforcement learning. In this form of machine learning, an agent learns to make decisions by interacting with its environment and receiving feedback in the form of “rewards” for good performance. This form of control, they say, should be better at adapting to the variable and uncertain conditions that power-generating kites encounter while airborne.

What was your motivation for doing this work?

Our interest originated from some previous work where we studied a fascinating bird behaviour called thermal soaring. Many birds, from the humble seagull to birds of prey and frigatebirds, exploit atmospheric currents to rise in the sky without flapping their wings, and then glide or swoop down. They then repeat this cycle of ascent and descent for hours, or even for weeks if they are migratory birds. They can do this because birds are very effective at extracting energy from the atmosphere and converting it into potential energy, even though the atmospheric flow is turbulent, and hence very dynamic and unpredictable.

Photo of Antonio Celani at a blackboard

In those works, we showed that we could use reinforcement learning to train virtual birds and also real toy gliders to soar. That got us wondering whether this same approach could be exported to AWE.

When we started looking at the literature, we saw that in most cases, the goal was to control the kite to follow a predetermined path, irrespective of the changing wind conditions. These cases typically used only simple models of atmospheric flow, and almost invariably ignored turbulence.

This is very different from what we see in birds, which adapt their trajectories on the fly depending on the strength and direction of the fluctuating wind they experience. This led us to ask: can a reinforcement learning (RL) algorithm discover efficient, adaptive ways of controlling a kite in a turbulent environment to extract energy for human consumption?

What is the most important advance in the paper?

We offer a proof of principle that it is indeed possible to do this using a minimal set of sensor inputs and control variables, plus an appropriately designed reward/punishment structure that guides trial-and-error learning. The algorithm we deploy finds a way to manoeuvre the kite such that it generates net energy over one cycle of operation. Most importantly, this strategy autonomously adapts to the ever-fluctuating conditions induced by turbulence.

Photo of Lorenzo Basile

The main point of RL is that it can learn to control a system just by interacting with the environment, without requiring any a priori knowledge of the dynamical laws that rule its behaviour. This is extremely useful when the systems are very complex, like the turbulent atmosphere and the aerodynamics of a kite.
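
To give a flavour of how such an agent learns from interaction alone, here is a minimal tabular Q-learning sketch in Python. Everything in it – the state and action spaces, the toy environment and the reward – is a stand-in for illustration, not the algorithm variant, kite dynamics or reward structure used in the EPL paper.

```python
import numpy as np

# Minimal tabular Q-learning sketch (illustrative only; hypothetical setup).
# States might stand for discretized wind/attitude readings; actions for
# small adjustments to, say, the kite's angle of attack.
n_states, n_actions = 50, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(state, action):
    """Toy stand-in for the environment: returns (next_state, reward).
    A real AWE simulator would integrate kite dynamics in turbulent wind
    and reward the net energy generated over each control interval."""
    next_state = (state + action - 1) % n_states
    reward = rng.normal(loc=np.sin(2 * np.pi * state / n_states))
    return next_state, reward

state = 0
for _ in range(100_000):
    # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning update: nudge Q(s, a) towards the bootstrapped return,
    # using only the observed transition - no model of the dynamics needed.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state
```

The key point mirrors the quote above: the update rule never consults a model of the dynamics – it improves the value table purely from observed (state, action, reward, next state) transitions.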

What are the barriers to implementing RL in real AWE kites, and how might these barriers be overcome?

The virtual environment that we use in our paper to train the kite controller is very simplified, and in general the gap between simulations and reality is wide. We therefore regard the present work mostly as a stimulus for the AWE community to look deeper into alternatives to model-predictive control, like RL.

On the physics side, we found that some phases of an AWE generating cycle are very difficult for our system to learn, and they require a painful fine-tuning of the reward structure. This is especially true when the kite is close to the ground, where winds are weaker and errors are the most punishing. In those cases, it might be a wise choice to use other heuristic, hard-wired control strategies rather than RL.

Finally, in a virtual environment like the one we used to do the RL training in this work, it is possible to perform many trials. In real power kites, this approach is not feasible – it would take too long. However, techniques like offline RL might resolve this issue by interleaving a few field experiments, where data are collected, with extensive offline optimization of the strategy. We successfully used this approach in our previous work to train real gliders for soaring.

What do you plan to do next?

We would like to explore the use of offline RL to optimize energy production for a small, real AWE system. In our opinion, the application to low-power systems is particularly relevant in contexts where access to the power grid is limited or uncertain. A lightweight, easily portable device that can produce even small amounts of energy might make a big difference in the everyday life of remote, rural communities, and more generally in the global south.

The post Reinforcement learning could help airborne wind energy take off appeared first on Physics World.

]]>
Research update Machine learning technique teaches power-generating kites to extract energy from turbulent airflows more effectively, boosting their efficiency https://physicsworld.com/wp-content/uploads/2026/01/red-kite-wind-turbines-2243729779-istock-david-steinbrede.jpg newsletter
Organic LED can electrically switch the handedness of emitted light https://physicsworld.com/a/organic-led-can-electrically-switch-the-handedness-of-emitted-light/ Wed, 07 Jan 2026 13:00:19 +0000 https://physicsworld.com/?p=125738 A new type of OLED can generate left- or right-handed circularly polarized light from just one form of light-emitting molecule

The post Organic LED can electrically switch the handedness of emitted light appeared first on Physics World.

]]>
Circularly polarized (CP) light is encoded with information through its photon spin and can be utilized in applications such as low-power displays, encrypted communications and quantum technologies. Organic light emitting diodes (OLEDs) produce CP light with a left or right “handedness”, depending on the chirality of the light-emitting molecules used to create the device.

While OLEDs usually only emit either left- or right-handed CP light, researchers have now developed OLEDs that can electrically switch between emitting left- or right-handed CP light – without needing different molecules for each handedness.

“We had recently identified an alternative mechanism for the emission of circularly polarized light in OLEDs, using our chiral polymer materials, which we called anomalous circularly polarized electroluminescence,” says lead author Matthew Fuchter from the University of Oxford. “We set about trying to better understand the interplay between this new mechanism and the generally established mechanism for circularly polarized emission in the same chiral materials”.

Light handedness controlled by molecular chirality

The CP light handedness of an organic emissive molecule is controlled by its chirality. A chiral molecule is one that exists in two mirror-image forms that cannot be superimposed on each other. Each of these non-superimposable forms is called an enantiomer, and will absorb, emit and refract CP light with a defined spin angular momentum. Each enantiomer produces CP light with a different handedness, through an optical mechanism called normal circularly polarized electroluminescence (NCPE).

OLED designs typically require access to both enantiomers, but most chemical synthesis processes will produce racemic mixtures (equal amounts of the two enantiomers) that are difficult to separate. Extracting each enantiomer so that they can be used individually is complex and expensive, but the research at Oxford has simplified this process by using a molecule that can switch between emitting left- and right-handed CP light.

The molecule in question is a helical molecule called (P)-aza[6]helicene – the right-handed enantiomer. Even though this is a single-handed form, the researchers found a way to control the handedness of the OLED’s emission, enabling it to switch between left- and right-handed output.

Switching handedness without changing the structure

The researchers designed the helicene molecules so that the handedness of the light could be switched electrically, without needing to change the structure of the material itself. “Our work shows that either handedness can be accessed from a single-handed chiral material without changing the composition or thickness of the emissive layer,” says Fuchter. “From a practical standpoint, this approach could have advantages in future circularly polarized OLED technologies.”

Instead of making a structural change, the researchers changed the way that electric charges recombine in the device, using interlayers to alter the recombination position and the charge-carrier mobility. Depending on where the recombination zone is located, the charge transport is either balanced or unbalanced, which in turn determines the handedness of the CP light the device emits.

When the recombination zone is located in the centre of the emissive layer, the charge transport is balanced, giving rise to the NCPE mechanism. In this situation, the helicene emits with its normal (right) handedness.

However, when the recombination zone is located close to one of the transport layers, the resulting unbalanced charge transport produces a mechanism called anomalous circularly polarized electroluminescence (ACPE). The ACPE overrides the NCPE mechanism and inverts the handedness of the emitted light to left-handed, by altering the balance of induced orbital angular momentum in electrons versus holes. The presence of these two electroluminescence mechanisms in the same device means the handedness can be controlled electrically, by tuning the charge-carrier mobility and the position of the recombination zone.

The research allows the creation of OLEDs with controllable spin angular momentum information using a single emissive enantiomer, while probing the fundamental physics of chiral optoelectronics. “This work contributes to the growing body of evidence suggesting further rich physics at the intersection of chirality, charge and spin. We have many ongoing projects to try and understand and exploit such interplay,” Fuchter concludes.

The researchers describe their findings in Nature Photonics.

The post Organic LED can electrically switch the handedness of emitted light appeared first on Physics World.

]]>
Research update A new type of OLED can generate left- or right-handed circularly polarized light from just one form of light-emitting molecule https://physicsworld.com/wp-content/uploads/0202/01/07-01-26-oled-device.jpg
Francis Crick: a life of twists and turns https://physicsworld.com/a/francis-crick-a-life-of-twists-and-turns/ Wed, 07 Jan 2026 11:00:48 +0000 https://physicsworld.com/?p=125533 Andrew Robinson reviews Crick: a Mind in Motion – from DNA to the Brain by Matthew Cobb

The post Francis Crick: a life of twists and turns appeared first on Physics World.

]]>
Physicist, molecular biologist, neuroscientist: Francis Crick’s scientific career took many turns. And now, he is the subject of zoologist Matthew Cobb’s new book, Crick: a Mind in Motion – from DNA to the Brain.

Born in 1916, Crick studied physics at University College London in the mid-1930s, before working for the Admiralty Research Laboratory during the Second World War. But after reading physicist Erwin Schrödinger’s 1944 book What Is Life? The Physical Aspect of the Living Cell, and a 1946 article on the structure of biological molecules by chemist Linus Pauling, Crick left his career in physics and switched to molecular biology in 1947.

Six years later, while working at the University of Cambridge, he played a key role in decoding the double-helix structure of DNA, working in collaboration with biologist James Watson, biophysicist Maurice Wilkins and other researchers including chemist and X-ray crystallographer Rosalind Franklin. Crick, alongside Watson and Wilkins, went on to receive the 1962 Nobel Prize in Physiology or Medicine for the discovery.

Finally, Crick’s career took one more turn in the mid-1970s. After experiencing a mental health crisis, Crick left Britain and moved to California. He took up neuroscience in an attempt to understand the roots of human consciousness, as discussed in his 1994 book, The Astonishing Hypothesis: the Scientific Search for the Soul.

Parallel lives

When he died in 2004, Crick’s office wall at the Salk Institute in La Jolla, US, carried portraits of Charles Darwin and Albert Einstein, as Cobb notes on the final page of his deeply researched and intellectually fascinating biography. But curiously, there is not a single other reference to Einstein in Cobb’s massive book. Furthermore, there is no reference at all to Einstein in the equally large 2009 biography of Crick, Francis Crick: Hunter of Life’s Secrets, by historian of science Robert Olby, who – unlike Cobb – knew Crick personally.

Nevertheless, a comparison of Crick and Einstein is illuminating. Crick’s family background (in the shoe industry), and his childhood and youth are in some ways reminiscent of Einstein’s. Both physicists came from provincial business families of limited financial success, with some interest in science yet little intellectual distinction. Both did moderately well at school and college, but were not academic stars. And both were exposed to established religion, but rejected it in their teens; they had little intrinsic respect for authority, without being open rebels until later in life.

The similarities continue into adulthood, with the two men following unconventional early scientific careers. Both of them were extroverts who loved to debate ideas with fellow scientists (at times devastatingly), although they were equally capable of long, solitary periods of concentration throughout their careers. In middle age, they migrated from their home countries – Germany (Einstein) and Britain (Crick) – to take up academic positions in the US, where they were much admired and inspiring to other scientists, but failed to match their earlier scientific achievements.

In their personal lives, both Crick and Einstein had a complicated history with women. Having divorced their first wives, they had a variety of extramarital affairs – as discussed by Cobb without revealing the names of these women – while remaining married to their second wives. Interestingly, Crick’s second wife, Odile Crick (whom he was married to for 55 years) was an artist, and drew the famous schematic drawing of the double helix published in Nature in 1953.

Stories of friendships

Although Cobb misses this fascinating comparison with Einstein, many other vivid stories light up his book. For example, he recounts Watson’s claim that just after their success with DNA in 1953, “Francis winged into the Eagle [their local pub in Cambridge] to tell everyone within hearing distance that we had found the secret of life” – a story that later appeared on a plaque outside the pub.

“Francis always denied he said anything of the sort,” notes Cobb, “and in 2016, at a celebration of the centenary of Crick’s birth, Watson publicly admitted that he had made it up for dramatic effect (a few years earlier, he had confessed as much to Kindra Crick, Francis’s granddaughter).” No wonder Watson’s much-read 1968 book The Double Helix caused a furious reaction from Crick and a temporary breakdown in their friendship, as Cobb dissects in excoriating detail.

Watson’s deprecatory comments on Franklin helped to provoke the current widespread belief that Crick and Watson succeeded by stealing Franklin’s data. After an extensive analysis of the available evidence, however, Cobb argues that the data was willingly shared with them by Franklin, but that they should have formally asked her permission to use it in their published work – “Ambition, or thoughtlessness, stayed their hand.”

In fact, it seems Crick and Franklin were friends in 1953, and remained so – with Franklin asking Crick for his advice on her draft scientific papers – until her premature death from ovarian cancer in 1958. Indeed, after her first surgery in 1956, Franklin went to stay with Crick and his wife at their house in Cambridge, and then returned to them after her second operation. There certainly appears to be no breakdown in trust between the two. When Crick was nominated for the Nobel prize in 1961, he openly stated, “The data which really helped us obtain the structure was mainly obtained by Rosalind Franklin.”

As for Crick’s later study of consciousness, Cobb comments, “It would be easy to dismiss Crick’s switch to studying the brain as the quixotic project of an ageing scientist who did not know his limits. After all, he did not make any decisive breakthrough in understanding the brain – nothing like the double helix… But then again, nobody else did, in Crick’s lifetime or since.” One is perhaps reminded once again of Einstein, and his preoccupation during later life with his unified field theory, which remains an open line of research today.

  • 2025 Profile Books £30.00hb 595pp

The post Francis Crick: a life of twists and turns appeared first on Physics World.

]]>
Opinion and reviews Andrew Robinson reviews Crick: a Mind in Motion – from DNA to the Brain by Matthew Cobb https://physicsworld.com/wp-content/uploads/2025/12/2025-12-robinson-crick-1981.jpg
Physicists overcome ‘acoustic collapse’ to levitate multiple objects with sound https://physicsworld.com/a/physicists-overcome-acoustic-collapse-to-levitate-multiple-objects-with-sound/ Wed, 07 Jan 2026 09:00:35 +0000 https://physicsworld.com/?p=125718 Finding could have applications in acoustic-levitation-assisted 3D printing, mid-air chemical synthesis and micro-robotics

The post Physicists overcome ‘acoustic collapse’ to levitate multiple objects with sound appeared first on Physics World.

]]>
Sound waves can make small objects hover in the air, but applying this acoustic levitation technique to an array of objects is difficult because the objects tend to clump together. Physicists at the Institute of Science and Technology Austria (ISTA) have now overcome this problem thanks to hybrid structures that emerge from the interplay between attractive acoustic forces and repulsive electrostatic ones. By proving that it is possible to levitate many particles while keeping them separated, the finding could pave the way for advances in acoustic-levitation-assisted 3D printing, mid-air chemical synthesis and micro-robotics.

In acoustic levitation, particles ranging in size from tens of microns to millimetres are drawn up into the air and confined by an acoustic force. The origins of this force lie in the momentum that the applied acoustic field transfers to a particle as sound waves scatter off its surface. While the technique works well for single particles, multiple particles tend to aggregate into a single dense object in mid-air because the sound waves they scatter can, collectively, create an attractive interaction between them.

Keeping particles separated

Led by Scott Waitukaitis, the ISTA researchers found a way to avoid this so-called “acoustic collapse” by using a tuneable repulsive electrostatic force to counteract the attractive acoustic one. They began by levitating a single silver-coated poly(methyl methacrylate) (PMMA) microsphere 250‒300 µm in diameter above a reflector plate coated with a transparent and conductive layer of indium tin oxide (ITO). They then imbued the particle with a precisely controlled amount of electrical charge by letting it rest on the ITO plate with the acoustic field off, but with a high-voltage DC potential applied between the plate and a transducer. This produces a capacitive build-up of charge on the particle, and the amount of charge can be estimated from Maxwell’s solutions for two contacting conductive spheres (assuming, in the calculations, that the lower plate acts like a sphere with infinite radius).
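
For reference, the classical Maxwell-type result usually quoted for this geometry – a conducting sphere of radius $R$ in contact with a plane electrode in a uniform field $E$ – is (our paraphrase of the standard formula, not the paper’s exact expression)

$$Q = \frac{2\pi^{3}}{3}\,\varepsilon_{0}\,R^{2}E,$$

with $E$ set here by the applied DC potential and the plate–transducer separation.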

The next step in the process is to switch on the acoustic field and, after just 10 ms, add the electric field to it. During the short period in which both fields are on, and provided the electric field is strong enough, either field is capable of launching the particle towards the centre of the levitation setup. The electric field is then switched off. A few seconds later, the particle levitates stably in the trap, with a charge given, in principle, by Maxwell’s approximations.

A visually mesmerizing dance of particles

This charging method works equally well for multiple particles, allowing the researchers to load particles into the trap with high efficiency and virtually any charge they want, limited only by the breakdown voltage of the surrounding air. Indeed, the physicists found they could tune the charge to levitate particles separately or collapse them into a single, dense object. They could even create hybrid states that mix separated and collapsed particles.

And that wasn’t all. According to team member Sue Shi, a PhD student at ISTA and the lead author of a paper in PNAS about the research, the most exciting moment came when they saw the compact parts of the hybrid structures spontaneously begin to rotate, while the expanded parts remained in place, oscillating in response to the rotation. The result was “a visually mesmerizing dance,” Shi says, adding that “this is the first time that such acoustically and electrostatically coupled interactions have been observed in an acoustically levitated system.”

As well as having applications in areas such as materials science and micro-robotics, Shi says the technique developed in this work could be used to study non-reciprocal effects that lead to the particles rotating or oscillating. “This would pave the way for understanding more elusive and complex non-reciprocal forces and many-body interactions that likely influence the behaviours of our system,” Shi tells Physics World.

The post Physicists overcome ‘acoustic collapse’ to levitate multiple objects with sound appeared first on Physics World.

]]>
Research update Finding could have applications in acoustic-levitation-assisted 3D printing, mid-air chemical synthesis and micro-robotics https://physicsworld.com/wp-content/uploads/2025/12/waitkaitis.jpg
When heat moves sideways https://physicsworld.com/a/when-heat-moves-sideways/ Wed, 07 Jan 2026 08:30:17 +0000 https://physicsworld.com/?p=124971 MnPS₃ shows an unexpectedly strong thermal Hall effect, challenging current theories of quantum heat transport

The post When heat moves sideways appeared first on Physics World.

]]>
Heat travels across a metal through the movement of electrons. In an insulator, however, there are no free charge carriers; instead, vibrations of the atomic lattice (phonons) move heat from hot regions to cool regions in a straight path. In some materials, when a magnetic field is applied, the phonons begin to move sideways – a phenomenon known as the phonon Hall effect. Quantized collective excitations of the spin structure, called magnons, can do the same via the magnon Hall effect. A combined effect occurs when magnons and phonons interact strongly and traverse sideways together, in the magnon–polaron Hall effect.

This transverse heat flow is generally explained by a quantum mechanical property known as Berry curvature. Yet in some materials, the effect is greater than Berry curvature alone can account for. In this research, an exceptionally large thermal Hall effect is recorded in MnPS₃, an insulating antiferromagnetic material with strong magnetoelastic coupling and a spin-flop transition. The thermal Hall angle remains large down to 4 K and cannot be explained by standard Berry curvature-based models.
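
For context, the thermal Hall angle mentioned here is conventionally defined through the ratio of the transverse and longitudinal thermal conductivities,

$$\tan\theta_{\mathrm{H}} = \frac{\kappa_{xy}}{\kappa_{xx}},$$

so a “large” Hall angle means the sideways heat current is an appreciable fraction of the ordinary, longitudinal one.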

This work provides an in-depth analysis of the role of the spin-flop transition in MnPS₃’s thermal properties and highlights the need for new theoretical approaches to understand magnon–phonon coupling and scattering. Materials with large thermal Hall effects could be used to control heat in nanoscale devices such as thermal diodes and transistors.

Read the full article

Large thermal Hall effect in MnPS3

Mohamed Nawwar et al 2025 Rep. Prog. Phys. 88 080503

Do you want to learn more about this topic?

Quantum-Hall physics and three dimensions by Johannes Gooth, Stanislaw Galeski and Tobias Meng (2023)

The post When heat moves sideways appeared first on Physics World.

]]>
Research highlight MnPS₃ shows an unexpectedly strong thermal Hall effect, challenging current theories of quantum heat transport https://physicsworld.com/wp-content/uploads/2025/11/waveform-668938388-istock-swillklitch.jpg
Symmetry‑preserving route to higher‑order insulators https://physicsworld.com/a/symmetry%e2%80%91preserving-route-to-higher%e2%80%91order-insulators/ Wed, 07 Jan 2026 08:29:08 +0000 https://physicsworld.com/?p=125426 A simple boundary repositioning technique lets materials host infinitely many robust topological states useful for electronics, photonics, and phononics, with a Matryoshka-doll-like hierarchy

The post Symmetry‑preserving route to higher‑order insulators appeared first on Physics World.

]]>
Topological insulators are materials that are insulating in the bulk within the bandgap, yet exhibit conductive states on their surface at frequencies within that same bandgap. These surface states are topologically protected, meaning they cannot be easily disrupted by local perturbations. In general, an n-dimensional material can host (n−1)-dimensional topological boundary states. If the symmetry protecting these states is further broken, a bandgap can open between the (n−1)-dimensional states, enabling the emergence of (n−2)-dimensional topological states. For example, a 3D material can host protected 2D surface states, and breaking additional symmetry can create a bandgap between these surface states, allowing for protected 1D edge states. A material undergoing such a process is known as a higher-order topological insulator. In general, higher-order topological states appear in dimensions one lower than the parent topological phase, owing to the further reduction of unit-cell symmetry. This requires at least a 2D lattice for second-order states, with the maximal order in 3D systems being three.

The researchers here introduce a new method for repeatedly opening the bandgap between topological states and generating new states within those gaps in an unbounded manner – without breaking symmetries or reducing dimensions. Their approach creates hierarchical topological insulators by repositioning domain walls between different topological regions. This process opens bandgaps between the original topological states while preserving symmetry, enabling the formation of new hierarchical states within the gaps. Using one- and two-dimensional Su–Schrieffer–Heeger models, they show that this procedure can be repeated to generate multiple, even infinitely many, hierarchical levels of topological states, exhibiting fractal-like behaviour reminiscent of a Matryoshka doll. These higher-level states are characterized by a generalized winding number that extends the conventional topological classification and maintains bulk–edge correspondence across hierarchies.
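
For readers unfamiliar with the underlying model, the sketch below computes the conventional (first-level) winding number of the one-dimensional Su–Schrieffer–Heeger chain. It is a textbook illustration only – it does not implement the paper’s generalized winding number or the domain-wall hierarchy construction.

```python
import numpy as np

# Standard 1D SSH model: Bloch Hamiltonian
#   h(k) = (v + w cos k) sigma_x + (w sin k) sigma_y,
# with intra-cell hopping v and inter-cell hopping w.
def winding_number(v, w, nk=4001):
    k = np.linspace(-np.pi, np.pi, nk)
    hx = v + w * np.cos(k)
    hy = w * np.sin(k)
    # Count how many times the vector (hx, hy) winds around the origin
    # as k sweeps the Brillouin zone.
    phase = np.unwrap(np.arctan2(hy, hx))
    return round((phase[-1] - phase[0]) / (2 * np.pi))

print(winding_number(v=0.5, w=1.0))  # 1 -> topological phase (edge states)
print(winding_number(v=1.5, w=1.0))  # 0 -> trivial phase
```

In this model a non-zero winding number signals protected states at the chain ends – the bulk–edge correspondence that the paper generalizes across hierarchical levels.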

The researchers confirm the existence of second‑ and third-level domain‑wall and edge states and demonstrate that these states remain robust against perturbations. Their approach is scalable to higher dimensions and applicable not only to quantum systems but also to classical waves such as phononics. This broadens the definition of topological insulators and provides a flexible way to design complex networks of protected states. Such networks could enable advances in electronics, photonics, and phonon‑based quantum information processing, as well as engineered structures for vibration control. The ability to design complex, robust, and tunable hierarchical topological states could lead to new types of waveguides, sensors, and quantum devices that are more fault-tolerant and programmable.

Read the full article

Hierarchical topological states without dimension reduction

Joel R Pyfrom et al 2025 Rep. Prog. Phys. 88 118003

Do you want to learn more about this topic?

Interacting topological insulators: a review by Stephan Rachel (2018)

The post Symmetry‑preserving route to higher‑order insulators appeared first on Physics World.

]]>
Research highlight A simple boundary repositioning technique lets materials host infinitely many robust topological states useful for electronics, photonics, and phononics, with a Matryoshka-doll-like hierarchy https://physicsworld.com/wp-content/uploads/2025/12/gold-and-light-particles-abstract-1146280754-shutterstock-kawephoto.jpg
New hybrid state of matter is a mix of solid and liquid https://physicsworld.com/a/new-hybrid-state-of-matter-is-a-mix-of-solid-and-liquid/ Tue, 06 Jan 2026 15:00:44 +0000 https://physicsworld.com/?p=125716 Finding could be important for catalysis and other thermally-activated industrial processes

The post New hybrid state of matter is a mix of solid and liquid appeared first on Physics World.

]]>
The boundary between a substance’s liquid and solid phases may not be as clear-cut as previously believed. A new state of matter that is a hybrid of both has emerged in research by scientists at the University of Nottingham, UK, and the University of Ulm, Germany, who say the discovery could have applications in catalysis and other thermally-activated processes.

In liquids, atoms move rapidly, sliding over and around each other in a random fashion. In solids, they are fixed in place. The transition between the two states, solidification, occurs when random atomic motion transitions to an ordered crystalline structure.

At least, that’s what we thought. Thanks to a specialist microscopy technique, researchers led by Nottingham’s Andrei Khlobystov found that this simple picture isn’t entirely accurate. In fact, liquid metal nanoparticles can contain stationary atoms – and as the liquid cools, their number and position play a significant role in solidification.

Some atoms remain stationary

The team used a method called spherical and chromatic aberration-corrected high-resolution transmission electron microscopy (Cc/Cs-corrected HRTEM) at the low-voltage SALVE instrument at Ulm to study melted metal nanoparticles (such as platinum, gold and palladium) deposited on an atomically thin layer of graphene. This carbon-based material acted as a sort of “hob” for heating the particles, says team member Christopher Leist, who was in charge of the HRTEM experiments. “As they melted, the atoms in the nanoparticles began to move rapidly, as expected,” Leist says. “To our surprise, however, we found that some atoms remained stationary.”

At high temperatures, these static atoms bind strongly to point defects in the graphene support. When the researchers used the electron beam from the transmission microscope to increase the number of these defects, the number of stationary atoms within the liquid increased, too. Khlobystov says that this had a knock-on effect on how the liquid solidified: when the stationary atoms are few in number, a crystal forms directly from the liquid and continues to grow until the entire particle has solidified. When their numbers increase, the crystallization process cannot take place and no crystals form.

“The effect is particularly striking when stationary atoms create a ring (corral) that surrounds and confines the liquid,” he says. “In this unique state, the atoms within the liquid droplet are in motion, while the atoms forming the corral remain motionless, even at temperatures well below the freezing point of the liquid.”

Unprecedented level of detail

The researchers chose to use Cc/Cs-corrected HRTEM in their study because minimizing spherical and chromatic aberrations through specialized hardware installed on the microscope enabled them to resolve single atoms in their images.

“Additionally, we can control both the energy of the electron beam and the sample temperature (the latter using MEMS-heated chip technology),” Khlobystov explains. “As a result, we can study metal samples at temperatures of up to 800 °C, even in a molten state, without sacrificing atomic resolution. We can therefore observe atomic behaviour during crystallization while actively manipulating the environment around the metal particles using the electron beam or by cooling the particles. This level of detail under such extreme conditions is unprecedented.”

Effect could be harnessed for catalysis

The Nottingham-Ulm researchers, who report their work in ACS Nano, say they obtained their results by chance while working on an EPSRC-funded project on 1-2 nm metal particles for catalysis applications. “Our approach involves assembling catalysts from individual metal atoms, utilizing on-surface phenomena to control their assembly and dynamics,” explains Khlobystov. “To gain this control, we needed to investigate the behaviour of metal atoms at varying temperatures and within different local environments on a support material.

“We suspected that the interplay between vacancy defects in the support and the sample temperature creates a powerful mechanism for controlling the size and structure of the metal particles,” he tells Physics World. “Indeed, this study revealed the fundamental mechanisms behind this process with atomic precision.”

The experiments were far from easy, he recalls, with one of the key challenges being to identify a thin, robust and thermally conductive support material for the metal. Happily, graphene meets all these criteria.

“Another significant hurdle to overcome was to be able to control the number of defect sites surrounding each particle,” he adds. “We successfully accomplished this by using the TEM’s electron beam not just as an imaging tool, but also as a means to modify the environment around the particles by creating defects.”

The researchers say they would now like to explore whether the effect can be harnessed for catalysis. To do this, Khlobystov says it will be essential to improve control over defect production and its scale. “We also want to image the corralled particles in a gas environment to understand how the phenomenon is influenced by reaction conditions, since our present measurements were conducted in a vacuum,” he adds.

The post New hybrid state of matter is a mix of solid and liquid appeared first on Physics World.

]]>
Research update Finding could be important for catalysis and other thermally-activated industrial processes https://physicsworld.com/wp-content/uploads/2025/12/atomic-dynamics.jpg
A theoretical physicist’s journey through the food and drink industry https://physicsworld.com/a/a-theoretical-physicists-journey-through-the-food-and-drink-industry/ Tue, 06 Jan 2026 11:00:16 +0000 https://physicsworld.com/?p=125629 Theoretical physicist Rob Farr retraces his career journey in the food industry

The post A theoretical physicist’s journey through the food and drink industry appeared first on Physics World.

]]>
Rob Farr is a theorist and computer modeller whose career has taken him down an unconventional path. He studied physics at the University of Cambridge, UK, from 1991 to 1994, staying on to do a PhD in statistical physics. But while many of his contemporaries then went into traditional research fields – such as quantum science, high-energy physics and photonic technologies – Farr got a taste for the food and drink manufacturing industry. It’s a multidisciplinary field in which Farr has worked for more than 25 years.

After leaving academia in 1998, his first stop was Unilever’s €13bn foods division. For two decades, latterly as a senior scientist, Farr guided R&D teams working across diverse lines of enquiry – “doing the science, doing the modelling”, as he puts it. Along the way, Farr worked on all manner of consumer products including ice-cream, margarine and non-dairy spreads, as well as “dry” goods such as bouillon cubes. There was also the occasional foray into cosmetics, skin creams and other non-food products.

As a theoretical physicist working in industrial-scale food production, Farr’s focus has always been on the materials science of the end-product and how it gets processed. “Put simply,” says Farr, “that means making production as efficient as possible – regarding both energy and materials use – while developing ‘new customer experiences’ in terms of food taste, texture and appearance.” 

Ice-cream physics

One tasty multiphysics problem that preoccupied Farr for a good chunk of his time at Unilever is ice cream. It is a hugely complex material that Farr likens to a high-temperature ceramic, in the sense that the crystalline part of it is stored very near to the melting point of ice. “Equally, the non-ice phase contains fats,” he says, “so there’s all sorts of emulsion physics and surface science to take into consideration.”

Ice cream also has polymers in the mix, so theoretical modelling needs to incorporate the complex physics of polymer–polymer phase separation as well as polymer flow, or “rheology”, which contributes to the product’s texture and material properties. “Air is another significant component of ice cream,” adds Farr, “which means it’s a foam as well as an emulsion.”

As well as trying to understand how all these subcomponents interact, there’s also the thorny issue of storage. After it’s produced, ice cream is typically kept at low temperatures of about –25 °C – first in the factory, then in transit and finally in a supermarket freezer. But once that tub of salted-caramel or mint choc chip reaches a consumer’s home, it’s likely to be popped in the ice compartment of a fridge freezer at a much milder –6 or –7 °C.

Manufacturers therefore need to control how those temperature transitions affect the recrystallization of ice. This unwanted outcome can lead to phenomena like “sintering” (which makes a harder product) and “ripening” (which can lead to big ice crystals that can be detected in the mouth and detract from the creamy texture).

“Basically, the whole panoply of soft-matter physics comes into play across the production, transport and storage of ice cream,” says Farr. “Figuring out what sort of materials systems will lead to better storage stability or a more consistent product texture are non-trivial questions given that the global market for ice cream is worth in excess of €100bn annually.”

A shot of coffee?

After almost 20 years working at Unilever, in 2017 Farr took up a role as coffee science expert at JDE Peet’s, the Dutch multinational coffee and tea company. Switching from the chilly depths of ice cream science to the dark arts of coffee production and brewing might seem like a steep career phase change, but the physics of the former provides a solid bridge to the latter.

The overlap is evident, for example, in how instant coffee gets freeze-dried – a low-temperature dehydration process that manufacturers use to extend the shelf-life of perishable materials and make them easier to transport. In the case of coffee, freeze drying (or lyophilization, as it’s commonly known) also helps to retain flavour and aromas.

If you want to study a parameter space that’s not been explored before, the only way to do that is to simulate the core processes using fundamental physics

After roasting and grinding the raw coffee beans, manufacturers extract a coffee concentrate using high pressure and water. This extract is then frozen, ground up and placed in a vacuum well below 0 °C. A small amount of heat is applied to sublime the ice away and remove the remaining water from the non-ice phase.

The quality of the resulting freeze-dried instant coffee is better than ordinary instant coffee. However, freeze-drying is also a complex and expensive process, which manufacturers seek to fine-tune by implementing statistical methods to optimize, for example, the amount of energy consumed during production.

Such approaches involve interpolating the gaps between existing experimental data sets, which is where a physics mind-set comes in. “If you want to study a parameter space that’s not been explored before,” says Farr, “the only way to do that is to simulate the core processes using fundamental physics.”

Beyond the production line, Farr has also sought to make coffee more stable when it’s stored at home. Sustainability is the big driver here: JDE Peet’s has committed to make all its packaging compostable, recyclable or reusable by 2030. “Shelf-life prediction has been a big part of this R&D initiative,” he explains. “The work entails using materials science and the physics of mass transfer to develop next-generation packaging and container systems.”

Line of sight

After eight years unpacking the secrets of coffee physics at JDE Peet’s, Farr was given the option to relocate to the Netherlands in mid-2025 as part of a wider reorganization of the manufacturer’s corporate R&D function. However, he decided to stay put in Oxford and is now weighing up another role in the food-manufacturing sector against a move into a new area, such as nuclear energy or even education.

Rob Farr stood in front of a blackboard

Farr believes he gained a lot from his time at JDE Peet’s. As well as studying a wide range of physics problems, he also benefited from the company’s rigorous approach to R&D, whereby projects are regularly assessed for profitability and quickly killed off if they don’t make the cut. Such prioritization avoids wasted effort and investment, but it also demands agility from staff scientists, who have to build long-term research strategies against a project landscape in constant flux.

A senior scientist needs to be someone who colleagues come to informally to discuss their technical challenges

To thrive in that setting, Farr says collaboration and an open mind are essential. “A senior scientist needs to be someone who colleagues come to informally to discuss their technical challenges,” he says. “You can then find the scientific question which underpins seemingly disparate problems and work with colleagues to deliver commercially useful solutions.” For Farr, it’s a self-reinforcing dynamic. “As more people come to you, the more helpful you become – and I love that way of working.”

What Farr calls “line-of-sight” is another unique feature of industrial R&D in food materials. “Maybe you’re only building one span of a really long bridge,” he notes, “but when you can see the process end-to-end, as well as your part in it, that is a fantastic motivator.” Indeed, Farr believes that for physicists who want a job doing something useful, the physics of food materials makes a great career. “There are,” he concludes, “no end of intriguing and challenging research questions.”

The post A theoretical physicist’s journey through the food and drink industry appeared first on Physics World.

]]>
Feature Theoretical physicist Rob Farr retraces his career journey in the food industry https://physicsworld.com/wp-content/uploads/2026/01/2026-01-careers-coffee-beans-and-espresso-in-cup-1342135224-istock-fabiomax.jpg newsletter1
Quantum photonics network passes a scaling-up milestone https://physicsworld.com/a/quantum-photonics-network-passes-a-scaling-up-milestone/ Tue, 06 Jan 2026 09:00:15 +0000 https://physicsworld.com/?p=125809 Fibre-based circuit functions as a programmable router for entangled light

The post Quantum photonics network passes a scaling-up milestone appeared first on Physics World.

]]>
Physicists in the UK have succeeded in routing and teleporting entangled states of light between two four-user quantum networks – an important milestone in the development of scalable quantum communications. Led by Mehul Malik and Natalia Herrera Valencia of Heriot-Watt University in Edinburgh, Scotland, the team achieved this milestone thanks to a new method that uses light-scattering processes in an ordinary optical fibre to program a circuit. This approach, which is radically different from conventional methods based on photonic chips, allows the circuit to function as a programmable entanglement router that can implement several different network configurations on demand.

The team performed the experiments using commercially available optical fibres, which are multi-mode structures that scatter light via random linear optical processes. In simple terms, Herrera Valencia explains that this means the light tends to ricochet chaotically through the fibres along hundreds of internal pathways. While this effect can scramble entanglement, researchers at the Institut Langevin in Paris, France, had previously found that the scrambling can be calculated by analysing how the fibre transmits light. What is more, the light-scattering processes in such a medium can be harnessed to make programmable optical circuits – which is exactly what Malik, Herrera Valencia and colleagues did.
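
The principle of computing – and then undoing – the scrambling can be illustrated with a toy transmission-matrix calculation. The sketch below is purely illustrative (a random unitary stands in for the fibre, and the mode count is invented); it is not the team’s calibration procedure.

```python
import numpy as np

# Toy transmission-matrix picture of a multi-mode fibre (illustrative only).
rng = np.random.default_rng(0)
n = 64  # assumed number of guided modes
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T, _ = np.linalg.qr(A)  # random unitary ~ an idealized lossless fibre

# Target: all the light emerging in output mode 0.
target = np.zeros(n, dtype=complex)
target[0] = 1.0

# Because T is unitary, its conjugate transpose undoes the scrambling:
# launching T^dagger @ target into the fibre reproduces the target output.
x_in = T.conj().T @ target
out = T @ x_in
print(abs(out[0])**2)  # ~1.0: the "scrambled" fibre acts as a router
```

Once the transmission matrix is known, different choices of input wavefront programme different effective circuits through the same fibre – the kind of reconfigurability the Heriot-Watt team exploits.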

“Top-down” approach

The researchers explain that this “top-down” approach simplifies the circuit’s architecture because it separates the layer where the light is controlled from the layer in which it is mixed. Using waveguides for transporting and manipulating the quantum states of light also reduces optical losses. The result is a reconfigurable multi-port device that can distribute quantum entanglement between many users simultaneously in multiple patterns, switching between different channels (local connections, global connections or both) as required.

A further benefit is that the channels can be multiplexed, allowing many quantum processors to access the system at the same time. The researchers say this is similar to multiplexing in classical telecommunications networks, which makes it possible to send huge amounts of data through a single optical fibre using different wavelengths of light.

Access to a large number of modes

Although controlling and distributing entangled states of light is key for quantum networks, Malik says it comes with several challenges. One of these is that conventional methods based on photonic chips cannot be scaled up easily. They are also very sensitive to imperfections in how they’re made. In contrast, the waveguide-based approach developed by the Heriot-Watt team “opens up access to a large number of modes, providing significant improvements in terms of achievable circuit size, quality and loss,” Malik tells Physics World, adding that the approach also fits naturally with existing optical fibre infrastructure.

Gaining control over the complex scattering process inside a waveguide was not easy, though. “The main challenge was the learning curve and understanding how to control quantum states of light inside such a complex medium,” Herrera Valencia recalls. “It took time and iteration, but we now have the precise and reconfigurable control required for reliable entanglement distribution, and even more so for entanglement swapping, which is essential for scalable networks.”

While the Heriot-Watt team used the technique to demonstrate flexible quantum networking, Malik and Herrera Valencia say it might also be used for implementing large-scale photonic circuits. Such circuits could have many applications, ranging from machine learning to quantum computing and networking, they add.

Looking ahead, the researchers, who report their work in Nature Photonics, say they are now aiming to explore larger-scale circuits that can operate on more photons and light modes. “We would also like to take some of our network technology out of the laboratory and into the real world,” says Malik, adding that Herrera Valencia is leading a commercialization effort in that direction.

The post Quantum photonics network passes a scaling-up milestone appeared first on Physics World.

]]>
Research update Fibre-based circuit functions as a programmable router for entangled light https://physicsworld.com/wp-content/uploads/2026/01/heriot-entanglement.jpeg
Band-aid like wearable sensor continuously monitors foetal movement https://physicsworld.com/a/band-aid-like-wearable-sensor-continuously-monitors-foetal-movement/ Mon, 05 Jan 2026 14:00:58 +0000 https://physicsworld.com/?p=125755 An adhesive pressure–strain sensor system that can detect reduced foetal movement could provide a promising new tool for maternity care

The post Band-aid like wearable sensor continuously monitors foetal movement appeared first on Physics World.

]]>
Pressure and strain sensors on a clinical trial volunteer

The ability to continuously monitor and interpret foetal movement patterns in the third trimester of a pregnancy could help detect any potential complications and improve foetal wellbeing. Currently, however, such assessment of foetal movement is performed only periodically, with an ultrasound exam at a hospital or clinic.

A lightweight, easily wearable, adhesive patch-based sensor developed by engineers and obstetricians at Monash University in Australia may change this. The two patches, worn on the abdomen, can detect foetal movements such as kicking, waving, hiccups, breathing, twitching, and head and trunk motion.

Reduced foetal movement can be associated with potential impairment in the central nervous system and musculoskeletal system, and is a common feature observed in pregnancies that end in foetal death and stillbirth. A foetus compromised in utero may reduce movements as a compensatory strategy to lower oxygen consumption and conserve energy.

To help identify foetuses at risk of complications, the Monash team developed an artificial intelligence (AI)-powered wearable pressure–strain combo sensor system that continuously and accurately detects foetal movement-induced motion in the mother’s abdominal skin. As reported in Science Advances, the “band-aid”-like sensors can discriminate between foetal and non-foetal movement with over 90% accuracy.

The system comprises two soft, thin and flexible patches designed to conform to the abdomen of a pregnant woman. One patch incorporates an octagonal gold nanowire-based strain sensor (the “Octa” sensor), the other is an interdigitated electrode-based pressure sensor.

Pressure and strain combo sensor system

The patches feature a soft polyimide-based flexible printed circuit (FPC) that integrates a thin lithium polymer battery and various integrated circuit chips, including a Bluetooth radiofrequency system for reading the sensor’s electrical resistance, storing data and communicating with a smartphone app. Each patch is encapsulated with kinesiology tape and sticks to the abdomen using a medical double-sided silicone adhesive.

The Octa sensor is attached to a separate FPC connector on the primary device, enabling easy replacement after each study. The pressure sensor is mounted on the silicone adhesive, connecting with the interdigitated electrode beneath the primary device. The Octa and pressure sensor patches are lightweight (about 3 g) and compact, measuring 63 × 30 × 4 mm and 62 × 28 × 2 mm, respectively.

Trialling the device

The researchers validated their foetal movement monitoring system via comparison with simultaneous ultrasound exams, examining 59 healthy pregnant women at Monash Health. Each participant had the pressure sensor attached to the area of their abdomen where they felt the most vigorous foetal movements, typically in the lower quadrant, while the strain sensor was attached to the region closest to foetal limbs. An accelerometer placed on the participant’s chest captured non-foetal movement data for signal denoising and training the machine-learning model.

Principal investigator Wenlong Cheng, now at the University of Sydney, and colleagues report that “the wearable strain sensor featured isotropic omnidirectional sensitivity, enabling detection of maternal abdominal [motion] over a large area, whereas the wearable pressure sensor offered high sensitivity with a small domain, advantageous for accurate localized foetal movement detection”.

The researchers note that the pressure sensor demonstrated higher sensitivity to movements directly beneath it compared with motion farther away, while the Octa sensor performed consistently across a wider sensing area. “The combination of both sensor types resulted in a substantial performance enhancement, yielding an overall AUROC [area under the receiver operating characteristic curve] accuracy of 92.18% in binary detection of foetal movement, illustrating the potential of combining diverse sensing modalities to achieve more accurate and reliable monitoring outcomes,” they write.
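
To make the headline metric concrete, here is a minimal synthetic-data sketch of an AUROC evaluation for a two-channel (pressure plus strain) binary detector. The features, classifier and numbers are invented for illustration – this is not the study’s machine-learning pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 1 = foetal movement epoch, 0 = no movement.
rng = np.random.default_rng(0)
n = 2000
labels = rng.integers(0, 2, size=n)
# Pretend per-epoch summaries of the pressure and strain signals.
pressure = labels + rng.normal(scale=1.5, size=n)
strain = labels + rng.normal(scale=1.5, size=n)

# Combining both modalities gives the classifier more to work with.
X = np.column_stack([pressure, strain])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUROC: {auroc:.3f}")  # 0.5 = chance, 1.0 = perfect separation
```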

In a press statement, co-author Fae Marzbanrad explains that the device’s strength lies in a combination of soft sensing materials, intelligent signal processing and AI. “Different foetal movements create distinct strain patterns on the abdominal surface, and these are captured by the two sensors,” she says. “The machine-learning system uses the signals to detect when movement occurs while cancelling maternal movements.”

The lightweight and flexible device can be worn by pregnant women for long periods without disrupting daily life. “By integrating sensor data with AI, the system automatically captures a wider range of foetal movements than existing wearable concepts while staying compact and comfortable,” Marzbanrad adds.

The next steps towards commercialization of the sensors will include large-scale clinical studies in out-of-hospital settings, to evaluate foetal movements and investigate the relationship between movement patterns and pregnancy complications.

The post Band-aid like wearable sensor continuously monitors foetal movement appeared first on Physics World.

]]>
Research update An adhesive pressure–strain sensor system that can detect reduced foetal movement could provide a promising new tool for maternity care https://physicsworld.com/wp-content/uploads/2026/01/05-01-26-foetal-sensor-fig4-featured.jpg
Unlocking novel radiation beams for cancer treatment with upright patient positioning https://physicsworld.com/a/unlocking-novel-radiation-beams-for-cancer-treatment-with-upright-patient-positioning/ Mon, 05 Jan 2026 12:27:49 +0000 https://physicsworld.com/?p=125746 Join the audience for a live webinar at 4 p.m. GMT on 17 February 2026

Upright patient positioning opens new pathways for radiation therapy in cancer care

Since the beginning of radiation therapy, almost all treatments have been delivered with the patient lying on a table while the beam rotates around them. But a resurgence in upright patient positioning is changing that paradigm. The accelerators behind novel treatments such as proton therapy, very high energy electron (VHEE) therapy and FLASH therapy are often too large to rotate around the patient, limiting access to these techniques. By instead rotating the patient, these previously hard-to-access beams could become mainstream.

Join leading clinicians and experts as they discuss how this shift in patient positioning is enabling exploration of new treatment geometries and supporting the development of advanced future cancer therapies.

Novel beams covered and their representative speakers

Serdar Charyyev – Proton Therapy – Clinical Assistant Professor at Stanford University School of Medicine
Eric Deutsch – VHEE FLASH – Head of Radiotherapy at Gustave Roussy
Bill Loo – FLASH Photons – Professor of Radiation Oncology at Stanford Medicine
Rock Mackie – Emeritus Professor at University of Wisconsin and Co-Founder and Chairman of Leo Cancer Care

Ask me anything: Andrew Lamb – ‘Being flexible and curious matters far more than having everything mapped out from the beginning’ https://physicsworld.com/a/ask-me-anything-andrew-lamb-being-flexible-and-curious-matters-far-more-than-having-everything-mapped-out-from-the-beginning/ Mon, 05 Jan 2026 11:00:09 +0000 https://physicsworld.com/?p=125544 Andrew Lamb is the co-founder of Delta.g – a quantum gravity sensor company

[Image: Andrew Lamb]

What skills do you use every day in your job?

A quantum sensor is a combination of lots of different parts working together in harmony: a sensor head containing the atoms and isolating them from the environment; a laser system to probe the quantum structure and manipulate atomic states; electronics to drive the power and timing of a device; and software to control everything and interpret the data. As the person building, developing and maintaining these devices you need to have expertise across all these areas. In addition to these skills, as the CTO my role also requires me to set the company’s technical priorities, determine the focus of R&D activities and act as the top technical authority in the firm.

In a developing field like quantum metrology, evidence-based decision making is crucial as you critically assess information, disregarding what is irrelevant and making an informed choice – especially when the “right answer” may not be obvious for months or even years. Challenges arise that may never have been solved before, and the best way to tackle them is to dive deep into the “why and how” of what is happening. Once the root cause is identified, a creative solution needs to be found – whether that is something brand new or an approach borrowed from an entirely different discipline.

What do you like best and least about your job?

The best thing about my job is the way in which it enables me to grow my knowledge and understanding of a wide variety of fields, while also providing me with opportunities for creative problem solving. When you surround yourself with people who are experts in their field, there is no end to the opportunities to learn. Before co-founding Delta.g I was a researcher at the University of Birmingham, where I learnt my technical skills. Moving into a start-up, we built a multidisciplinary team to address the operational, regulatory and technical barriers to establishing a disruptive product in the marketplace. The diversity created within our company has afforded a greater pool of experts to learn from.

As the CTO, my role sits at the intersection of the technical and the commercial within the business. That means it is my responsibility to translate commercial milestones into a scientific plan, while also explaining our progress to non-experts. This can be challenging and quite stressful at times – particularly when I need to describe our scientific achievements in a way that truly reflects our advances, while still being accessible.

What do you know today that you wish you knew when you were starting out in your career?

For a long time, I didn’t know what direction I wanted to take, and I used to worry that the lack of a clear purpose would hold me back. Today I know that it doesn’t. Instead of fixating on finding a perfect path early on, it’s far more valuable to focus on developing skills that open doors. Whether those skills are technical, managerial or commercial, no knowledge is ever wasted. I’m still surprised by how often something I learned as far back as GCSE ends up being useful in my work now.

I also wish I had understood just how important it is to stay open to new opportunities. Looking back, every pivotal point in my career – switching from civil engineering to a physics degree, choosing certain undergraduate modules, applying for unexpected roles, even co-founding Delta.g – came from being willing to make a shift when an opportunity appeared. Being flexible and curious matters far more than having everything mapped out from the beginning.

The environmental and climate cost of war https://physicsworld.com/a/the-environmental-and-climate-cost-of-war/ Fri, 02 Jan 2026 11:00:13 +0000 https://physicsworld.com/?p=125354 Researchers and policymakers need a fuller view of the environmental and climate cost of war, to rebuild after the dust settles, as Benjamin Skuse finds

Despite not being close to the frontline of Russia’s military assault on Ukraine, life at the Ivano-Frankivsk National Technical University of Oil and Gas is far from peaceful. “While we continue teaching and research, we operate under constant uncertainty – air raid alerts, electricity outages – and the emotional toll on staff and students,” says Lidiia Davybida, an associate professor of geodesy and land management.

Last year, the university became the target of a Russian missile strike, causing extensive damage to buildings that still has not been fully repaired – although, fortunately, no casualties were reported. The university also continues to lose staff and students to the war effort – some of whom will tragically never return – while new student numbers dwindle as many school graduates leave Ukraine to study abroad.

Despite these major challenges, Davybida and her colleagues remain resolute. “We adapt – moving lectures online when needed, adjusting schedules, and finding ways to keep research going despite limited opportunities and reduced funding,” she says.

Resolute research

Davybida’s research focuses on environmental monitoring using geographic information systems (GIS), geospatial analysis and remote sensing. She has been using these techniques to monitor the devastating impact that the war is having on the environment and its significant contribution to climate change.

In 2023 she published results from using Sentinel-5P satellite data and Google Earth Engine to monitor the air quality impacts of war on Ukraine (IOP Conf. Ser.: Earth Environ. Sci. 1254 012112). As with the COVID-19 lockdowns worldwide, her results reveal that levels of common pollutants such as carbon monoxide, nitrogen dioxide and sulphur dioxide were, on average, down from pre-invasion levels. This reflects the temporary disruption to economic activity that war has brought on the country.
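For a flavour of how such satellite monitoring is scripted, here is a minimal sketch using the Google Earth Engine Python API (this is not Davybida’s actual workflow; the bounding box is a rough guess at Ukraine’s extent, and the snippet assumes an authenticated Earth Engine account):

import ee

# Assumes ee.Authenticate() has already been run for this account
ee.Initialize()

# Rough bounding box around Ukraine (illustrative, not the study's exact region)
aoi = ee.Geometry.Rectangle([22.1, 44.3, 40.2, 52.4])

# Mean Sentinel-5P NO2 column over the first months of the full-scale invasion
no2 = (ee.ImageCollection('COPERNICUS/S5P/OFFL/L3_NO2')
       .select('NO2_column_number_density')
       .filterDate('2022-03-01', '2022-06-01')
       .mean())

# Spatial average over the area of interest, in mol/m^2
stats = no2.reduceRegion(reducer=ee.Reducer.mean(), geometry=aoi,
                         scale=10000, maxPixels=1e9)
print(stats.getInfo())

Comparing such averages for pre- and post-invasion windows is the kind of before-and-after analysis the study describes.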

[Image: rescue workers lift an elderly person on a stretcher out of flood water]

More worrying, from an environmental and climate perspective, were the huge concentrations of aerosols, smoke and dust in the atmosphere. “High ozone concentrations damage sensitive vegetation and crops,” Davybida explains. “Aerosols generated by explosions and fires may carry harmful substances such as heavy metals and toxic chemicals, further increasing environmental contamination.” She adds that these pollutants can alter sunlight absorption and scattering, potentially disrupting local climate and weather patterns, and contributing to long-term ecological imbalances.

A significant toll has been wrought by individual military events too. A prime example is Russia’s destruction of the Kakhovka Dam in southern Ukraine in June 2023. An international team – including Ukrainian researchers – recently attempted to quantify this damage by combining on-the-ground field surveys, remote-sensing data and hydrodynamic modelling – a tool for predicting water flow and pollutant dispersion.

The results of this work are sobering (Science 387 1181). Though 80% of the ecosystem is expected to re-establish itself within five years, the dam’s destruction released as much as 1.7 cubic kilometres of sediment contaminated by a host of persistent pollutants, including nitrogen, phosphorus and 83,000 tonnes of heavy metals. Discharging this toxic sludge across the land and waterways will have unknown long-term environmental consequences for the region, as the contaminants could be spread by future floods, the researchers concluded (figure 1).

1 Dam destruction

[Image: map of Ukraine with a large area of coastline highlighted in orange and smaller inland areas highlighted in green]

This map shows areas of Ukraine affected or threatened by dam destruction in military operations. Arabic numerals 1 to 6 indicate rivers: Irpen, Oskil, Inhulets, Dnipro, Dnipro-Bug Estuary and Dniester, respectively. Roman numerals I to VII indicate large reservoir facilities: Kyiv, Kaniv, Kremenchuk, Kaminske, Dnipro, Kakhovka and Dniester, respectively. Letters A to C indicate nuclear power plants: Chornobyl, Zaporizhzhia and South Ukraine, respectively.

Dangerous data

A large part of the reason for the researchers’ uncertainty, and indeed more general uncertainty in environmental and climate impacts of war, stems from data scarcity. It is near-impossible for scientists to enter an active warzone to collect samples and conduct surveys and experiments. Environmental monitoring stations also get damaged and destroyed during conflict, explains Davybida – a wrong she is attempting to right in her current work. Many efforts to monitor, measure and hopefully mitigate the environmental and climate impact of the war in Ukraine are therefore less direct.

In 2022, for example, climate-policy researcher Mathijs Harmsen from the PBL Netherlands Environmental Assessment Agency and international collaborators decided to study the global energy crisis (which was sparked by Russia’s invasion of Ukraine) to look at how the war will alter climate policy (Environ. Res. Lett. 19 124088).

They did this by plugging the most recent energy price, trade and policy data (up to May 2023) into an integrated assessment model that simulates the environmental consequences of human activities worldwide. They then imposed different potential scenarios and outcomes and let the model run to 2030 and 2050. Surprisingly, all scenarios led to a global reduction of 1–5% in carbon dioxide emissions by 2030, largely due to trade barriers increasing fossil fuel prices, which in turn would lead to increased uptake of renewables.

But even though the sophisticated model represents the global energy system in detail, some factors are hard to incorporate and some actions can transform the picture completely, argues Harmsen. “Despite our results, I think the net effect of this whole war is a negative one, because it doesn’t really build trust or add to any global collaboration, which is what we need to move to a more renewable world,” he says. “Also, the recent intensification of Ukraine’s ‘kinetic sanctions’ [attacks on refineries and other fossil fuel infrastructure] will likely have a larger effect than anything we explored in our paper.”

Elsewhere, Toru Kobayakawa was, until recently, working for the Japan International Cooperation Agency (JICA), leading the Ukraine support team. Kobayakawa used a non-standard method to more realistically estimate the carbon footprint of reconstructing Ukraine when the war ends (Environ. Res.: Infrastruct. Sustain. 5 015015). The Intergovernmental Panel on Climate Change (IPCC) and other international bodies only account for carbon emissions within the territorial country. “The consumption-based model I use accounts for the concealed carbon dioxide from the production of construction materials like concrete and steel imported from outside of the country,” he says.

Using Eora26, an open-source database that tracks financial flows between countries’ major economic sectors in simple input–output tables, Kobayakawa calculated that Ukraine’s post-war reconstruction will generate 741 million tonnes of carbon dioxide equivalent over 10 years. This is 4.1 times Ukraine’s pre-war annual carbon-dioxide emissions, or the combined annual emissions of Germany and Austria.
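The nuts and bolts of consumption-based input–output accounting can be sketched in a few lines. This toy Python example (not Kobayakawa’s actual Eora26 calculation; the two-sector economy and every number in it are invented for illustration) uses the standard Leontief relation, in which the total output x needed to satisfy a final demand y is x = (I − A)⁻¹y for a technical-coefficient matrix A, and the footprint follows from a vector f of direct emissions per unit of output:

import numpy as np

# Hypothetical 2-sector economy: construction and building materials
A = np.array([[0.10, 0.30],   # inputs each sector buys per unit of output
              [0.20, 0.05]])
y = np.array([100.0, 20.0])   # final demand, e.g. reconstruction spending (invented)
f = np.array([0.5, 2.0])      # direct tCO2e emitted per unit of output (invented)

# Leontief inverse gives the total output, direct plus indirect,
# required to satisfy the final demand
x = np.linalg.solve(np.eye(2) - A, y)

# Consumption-based footprint: emissions embodied in that demand,
# wherever in the supply chain they occur
print(f"footprint = {f @ x:.1f} tCO2e")

The point of the consumption-based view is that f counts emissions in whichever country the production happens, so imported concrete and steel are no longer invisible.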

However, as with most war-related findings, these figures come with a caveat. “Our input–output model doesn’t take into account the current situation,” notes Kobayakawa. “It is the worst-case scenario.” Nevertheless, the research has provided useful insights, such as that the Ukrainian construction industry will account for 77% of total emissions.

“Their construction industry is notorious for inefficiency, needing frequent rework, which incurs additional costs, as well as additional carbon-dioxide emissions,” he says. “So, if they can improve efficiency by modernizing construction processes and implementing large-scale recycling of construction materials, that will contribute to reducing emissions during the reconstruction phase and ensure that they build back better.”

Military emissions gap

As the experiences of Davybida, Harmsen and Kobayakawa show, cobbling together relevant and reliable data in the midst of war is a significant challenge, from which only limited conclusions can be drawn. Researchers and policymakers need a fuller view of the environmental and climate cost of war if they are to improve matters once a conflict ends.

That’s certainly the view of Benjamin Neimark, who studies geopolitical ecology at Queen Mary University of London. For some time he has been trying to tackle the biggest data gap preventing accurate estimates of the climate and environmental cost of war: military emissions. During the 2021 United Nations Climate Change Conference (COP26), for example, he and colleagues partnered with the Conflict and Environment Observatory (CEOBS) to launch The Military Emissions Gap, a website to track and trace what a country accounts for as its military emissions to the United Nations Framework Convention on Climate Change (UNFCCC).

At present, reporting military emissions is voluntary, so data are often absent or incomplete – but gathering such data is vital. According to a 2022 estimate extrapolated from the small number of nations that do share their data, the total military carbon footprint is approximately 5.5% of global emissions. This would make the world’s militaries the fourth biggest carbon emitter if they were a nation.

The website is an attempt to fill this gap. “We hope that the UNFCCC picks up on this and mandates transparent and visible reporting of military emissions,” Neimark says (figure 2).

2 Closing the data gap

[Image: five sets of icons indicating categories of military and conflict-related carbon emissions]

Current United Nations Framework Convention on Climate Change (UNFCCC) greenhouse-gas emissions reporting obligations do not include all the possible types of conflict emissions, and there is no commonly agreed methodology or scope on how different countries collect emissions data. In a recent publication, War on the Climate: a Multitemporal Study of Greenhouse Gas Emissions of the Israel-Gaza Conflict, Benjamin Neimark et al. came up with this framework, using the UNFCCC’s existing protocols. These reporting categories cover militaries and armed conflicts, and aim to highlight previously “hidden” emissions.

Measuring the destruction

Beyond plugging the military emissions gap, Neimark is also involved in developing and testing methods that he and other researchers can use to estimate the overall climate impact of war. Building on foundational work from his collaborator, Dutch climate specialist Lennard de Klerk – who developed a methodology for identifying, classifying and providing ways of estimating the various sources of emissions associated with the Russia–Ukraine war – Neimark and colleagues are trying to estimate the greenhouse-gas emissions from the Israel–Gaza conflict.

Their studies encompass pre-conflict preparation, the conflict itself and post-conflict reconstruction. “We were working with colleagues who were doing similar work in Ukraine, but every war is different,” says Neimark. “In Ukraine, they don’t have large tunnel networks, or they didn’t, and they don’t have this intensive, incessant onslaught of air strikes from carbon-intensive F-16 fighter aircraft.” Some of these factors, like the carbon impact of Hamas’ underground maze of tunnels under Gaza, seem unquantifiable, but Neimark has found a way.

“There’s some pretty good data for how big these are in terms of height, the amount of concrete, how far down they’re dug and how thick they are,” says Neimark. “It’s just the length we had to work out based on reported documentation.” Finding the total amount of concrete and steel used in these tunnels involved triangulating open-source information with media reports to finalize an estimate of the dimensions of these structures. Standard emission factors could then be applied to obtain the total carbon emissions. According to data from Neimark’s Confronting Military Greenhouse Gas Emissions report, the carbon emissions from construction of concrete infrastructure by both Israel and Hamas were more than the annual emissions of 33 individual countries and territories (figure 3).
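The final step of that chain is simple arithmetic once the dimensions are pinned down: volume of concrete, times density, times an emission factor. A back-of-envelope Python sketch (every number below is hypothetical and chosen only to show the shape of the calculation, not Neimark’s figures):

# Hypothetical tunnel-network dimensions
tunnel_length_m = 500_000        # assumed total length of the network
lining_area_m2 = 0.5             # assumed concrete cross-section per metre of tunnel
concrete_density_t_m3 = 2.4      # typical density of concrete

# Indicative cradle-to-gate emission factor for concrete (varies with the mix)
emission_factor_tco2_per_t = 0.15

concrete_tonnes = tunnel_length_m * lining_area_m2 * concrete_density_t_m3
emissions_tco2 = concrete_tonnes * emission_factor_tco2_per_t
print(f"{concrete_tonnes:,.0f} t concrete -> {emissions_tco2:,.0f} tCO2")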

3 Climate change and the Gaza war

[Image: three lists of headline facts and figures about carbon emissions from the Israel-Gaza war, split into direct military actions, large war-related infrastructure and future rebuilding]

Data from Benjamin Neimark, Patrick Bigger, Frederick Otu-Larbi and Reuben Larbi’s Confronting Military Greenhouse Gas Emissions report estimates the carbon emissions of the war in Gaza for three distinct periods: direct war activities; large-scale war infrastructure; and future reconstruction.

Hamas’ tunnels and Israel’s “iron wall” border fence are just two of many pre-war activities that must be factored into any estimate of the Israel–Gaza conflict’s climate impact. Then, the huge carbon cost of the conflict itself must be calculated, including, for example, bombing raids, reconnaissance flights, tanks and other vehicles, cargo flights and munitions production.

Gaza’s eventual reconstruction must also be included; it makes up a big proportion of the total impact of the war, as Kobayakawa’s Ukraine reconstruction calculations showed. The United Nations Environment Programme (UNEP) has been systematically studying and reporting on “Sustainable debris management in Gaza”, tracking debris from damaged buildings and infrastructure since the outbreak of the conflict in October 2023. Alongside estimating the amounts of debris, UNEP also models different management scenarios – ranging from disposal to recycling – to evaluate the time, resource needs and environmental impacts of each option.

Visa restrictions and the security situation have prevented UNEP staff from entering the Gaza strip to undertake environmental field assessments to date. “While remote sensing can provide a valuable overview of the situation … findings should be verified on the ground for greater accuracy, particularly for designing and implementing remedial interventions,” says a UNEP spokesperson. They add that when it comes to the issue of contamination, UNEP needs “confirmation through field sampling and laboratory analysis” and that UNEP “intends to undertake such field assessments once conditions allow”.

The main risk from hazardous debris – which is likely to make up about 10–20% of the total debris – arises when it is mixed with and contaminates the rest of the debris stock. “This underlines the importance of preventing such mixing and ensuring debris is systematically sorted at source,” adds the UNEP spokesperson.

The ultimate cost

With all these estimates, and adopting a Monte Carlo analysis to account for uncertainties, Neimark and colleagues concluded that, from the first 15 months of the Israel–Gaza conflict, total carbon emissions were 32 million tonnes, which is huge given that the territory has a total area of just 365 km². The number also continues to rise.
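The Monte Carlo element treats each emissions category as an uncertain quantity rather than a single number, sampling and summing many times to obtain a distribution for the total. A minimal Python sketch of the idea (the categories, means and uncertainties are invented, not the study’s inputs):

import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000  # number of Monte Carlo samples

# Hypothetical categories: (mean in MtCO2e, relative uncertainty)
categories = {"air operations": (5.0, 0.3),
              "ground vehicles": (3.0, 0.4),
              "war infrastructure": (15.0, 0.5),
              "reconstruction": (9.0, 0.6)}

# Sample every category and sum, propagating the uncertainties to the total
total = sum(rng.normal(mu, mu * rel, n) for mu, rel in categories.values())
low, mid, high = np.percentile(total, [5, 50, 95])
print(f"total = {mid:.0f} MtCO2e (90% interval: {low:.0f}-{high:.0f})")

Quoting a median with an interval, rather than a bare point estimate, is what makes such war-time accounting defensible despite the patchy data.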

[Image: Khan Younis in ruins]

Why does this number matter? When lives are being lost in Gaza, Ukraine, and across Sudan, Myanmar and other regions of the world, calculating the environmental and climate cost of war might seem like something only worth bothering about when the fighting stops.

But doing so even while conflicts are taking place can help protect important infrastructure and land, avoid environmentally disastrous events, and ensure that the long rebuild, wherever the conflict may be happening, is informed by science. The UNEP spokesperson says that it is important to “systematically integrate environmental considerations into humanitarian and early recovery planning from the outset” rather than treating the environment as an afterthought. They highlight that governments should “embed it within response plans – particularly in areas where it can directly impact life-saving activities, such as debris clearance and management”.

With Ukraine still in the midst of war, it seems right to leave the final word to Davybida. “Armed conflicts cause profound and often overlooked environmental damage that persists long after the fighting stops,” she says. “Recognizing and monitoring these impacts is vital to guide practical recovery efforts, protect public health, prevent irreversible harm to ecosystems and ensure a sustainable future.”

Happy new year: what’s happening in physics in 2026? https://physicsworld.com/a/happy-new-year-so-what-will-happen-in-physics-in-2026/ Thu, 01 Jan 2026 00:15:57 +0000 https://physicsworld.com/?p=125685 Predicting the future is hard, but here’s a flavour of what’s hot in physics right now

I used to set myself the challenge every December of predicting what might happen in physics over the following year. Gazing into my imaginary crystal ball, I tried to speculate on the potential discoveries, the likely trends, and the people who might make the news over the coming year. It soon dawned on me that making predictions in physics is a difficult, if not futile, task.

Apart from space missions pencilled in for launch on set dates, or particle colliders or light sources due to open, so much in science is simply unknown. That uncertainty of science is, of course, also its beauty; if you knew what was out there, looking for it wouldn’t be quite as much fun. So if you’re wondering what’s in store for 2026, I don’t know – you’ll just have to read Physics World to find out.

Having said that – and setting aside the insane upheaval going on in US science – this year’s Physics World Live series will give you some sense of what’s hot in physics right now, at least as far as we here at Physics World headquarters are concerned.

The first online panel discussion will be on quantum metrology – a burgeoning field that seeks to ensure companies and academics can test, validate and commercialize new quantum tech. Yes, the International Year of Quantum Science and Technology officially ends with a closing ceremony in Ghana in February, but the impact of quantum physics will continue to reverberate throughout 2026.

You can also look forward to an online event on nuclear fusion, which offers a path to limitless energy and a potential solution to the climate crisis. But it’s a complex challenge and the route to commercialization is uncertain, despite lots of private firms being active in the area as a counterweight to the huge ITER experiment that’s being built in southern France. Among them is Tokamak Energy, which this year won a Business Award from the Institute of Physics (IOP).

Another of our online panels will be on medical physics, bringing together the current and two past editors-in-chief of Physics in Medicine & Biology. Published by IOP Publishing on behalf of the Institute of Physics and Engineering in Medicine, the journal turns 70 this year. The speakers will be reflecting on the vital role of medical-physics research in medicine and biology and examining how the field has evolved since the journal was set up.

Medical physics will also be the focus of a new “impact project” in 2026 from the IOP, which will be starting another on artificial intelligence (AI) as well. The IOP will in addition be continuing its existing impact work on metamaterials, which were of course pioneered by – among others – the Imperial College theorist John Pendry. I wonder if a Nobel prize could be in store for him this year? That’s one prediction I’ll make that would be great if it came true.

Until then, on behalf of everyone at Physics World, I wish all readers – wherever you are – a happy and successful 2026. Your continued support is greatly valued.

The quirkiest stories from the world of physics in 2025 https://physicsworld.com/a/the-quirkiest-stories-from-the-world-of-physics-in-2025/ Wed, 31 Dec 2025 10:00:50 +0000 https://physicsworld.com/?p=125626 Michael Banks picks his favourite articles this year from the world of everyday physics

From cutting onions to a LEGO Jodrell Bank, physics has had its fair share of quirky stories this year. Here is our pick of the best, not in any particular order.

Flight of the nematode

Researchers in the US this year discovered that a tiny jumping worm uses static electricity to increase its chances of attaching to unsuspecting prey. The parasitic roundworm Steinernema carpocapsae can leap some 25 times its body length by curling into a loop and springing into the air. If the nematode lands successfully on a victim, it releases bacteria that kill the insect within a couple of days, upon which the worm feasts and lays its eggs. To investigate whether static electricity aids their flight, a team at Emory University and the University of California, Berkeley, used high-speed microscopy to film the worms as they leapt onto a fruit fly that was tethered with a copper wire connected to a high-voltage power supply. The researchers found that a potential of a few hundred volts – similar to that generated in the wild by an insect’s wings rubbing against ions in the air – induces a negative charge on the worm, creating an attractive force with the positively charged fly. They discovered that without any electrostatics, only 1 in 19 worm trajectories successfully reached their target. The greater the voltage, however, the greater the chance of landing, with 880 V resulting in an 80% probability of success. “We’re helping to pioneer the emerging field of electrostatic ecology,” notes Emory physicist Ranjiangshang Ran.

Tear-jerking result

While it is known that volatile chemicals released from onions irritate the nerves in the cornea to produce tears, how such chemical-laden droplets reach the eyes and whether they are influenced by the knife or cutting technique remain less clear. To investigate, Sunghwan Jung from Cornell University and colleagues built a guillotine-like apparatus and used high-speed video to observe the droplets released from onions as they were cut by steel blades. They found that the droplets, which can reach heights of up to 60 cm, were released in two stages – a fast, mist-like outburst followed by threads of liquid fragmenting into many droplets. The most energetic droplets were released during the initial contact between the blade and the onion’s skin. When the team varied the sharpness of the blade and the cutting speed, they discovered that blunter blades and faster cutting speeds released a greater number of droplets. “That was even more surprising,” notes Jung. “Blunter blades and faster cuts – up to 40 m/s – produced significantly more droplets with higher kinetic energy.” Another surprise was that refrigerating the onions prior to cutting also produced an increased number of droplets of similar velocity, compared with room-temperature vegetables.

LEGO telescope

Students at the University of Manchester in the UK created a 30 500-piece LEGO model of the iconic Lovell Telescope to mark the 80th anniversary of the Jodrell Bank Observatory, which was founded in December 1945. Built in 1957, the 76.2 m diameter telescope was the largest steerable-dish radio telescope in the world at the time. The LEGO model was designed by Manchester’s undergraduate physics society and is based on the telescope’s original engineering blueprints. Student James Ruxton spent six months perfecting the design, which even involved producing custom-designed LEGO bricks with a 3D printer. Ruxton and fellow students began construction in April and the end result is a model weighing 30 kg with 30 500 pieces and a whopping 4000-page instruction manual. “It’s definitely the biggest and most challenging build I’ve ever done, but also the most fun,” says Ruxton. “I’ve been a big fan of LEGO since I was younger, and I’ve always loved creating my own models, so recreating something as iconic as the Lovell is like taking that to the next level!” The model has gone on display in a “specially modified cabinet” at the university’s Schuster building, taking pride of place alongside a decade-old LEGO model of CERN’s ATLAS detector.

Petal physics

The curves and curls of leaves and flower petals arise from the interplay between their natural growth and geometry. Uneven growth in a flat sheet, in which the edges grow more quickly than the interior, gives rise to strain; in plant leaves and petals this can result in a variety of forms, such as saddles and ripples. Yet when it comes to rose petals, the sharply pointed cusps – points where two curves meet – that form at the petals’ edges set them apart from the soft, wavy patterns seen in many other plants.

To investigate this intriguing difference, researchers from the Hebrew University of Jerusalem carried out theoretical modelling and conducted a series of experiments with synthetic disc “petals”. They found that the pointed cusps that form at the edge of rose petals are due to a type of geometric frustration called a Mainardi–Codazzi–Peterson (MCP) incompatibility. This type of mechanism results in stress concentrating in a specific area, which goes on to form cusps to avoid tearing or forming unnatural folding. When the researchers suppressed the formation of cusps, they found that the discs revert to being smooth and concave. The researchers say that the findings could be used for applications in soft robotics and even in the deployment of spacecraft components.

Wild Card physics

The Wild Cards universe is a series of novels set largely in an alternate history of the US following the Second World War. The series follows events after an extraterrestrial virus, known as the Wild Card virus, has spread worldwide. It mutates human DNA, causing profound changes in human physiology. The virus follows a fixed statistical distribution: 90% of those infected die, 9% become physically mutated (referred to as “jokers”) and 1% gain superhuman abilities (known as “aces”). Such capabilities include flight and the ability to move between dimensions. George R R Martin, the author who co-edits the Wild Cards series, co-authored a paper examining the complex dynamics of the Wild Card virus together with Los Alamos National Laboratory theoretical physicist Ian Tregillis, who is also a science-fiction author. The model takes into consideration the severity of the changes (for the 10% that don’t instantly die) and the mix of joker/ace traits. The result is a dynamical system in which a carrier’s state vector constantly evolves through the model space – until their “card” turns. At that point the state vector becomes fixed and its permanent location determines the fate of the carrier. “The fictional virus is really just an excuse to justify the world of Wild Cards, the characters who inhabit it, and the plot lines that spin out from their actions,” says Tregillis.

[Image: glass of beer with a foamy top]

Foamy top

And finally, a clear sign of a good brew is a big head of foam at the top of a poured glass. Beer foam is made of many small bubbles of air, separated from each other by thin films of liquid. These thin films must remain stable, or the bubbles will pop and the foam will collapse. What holds these thin films together is not completely understood, but likely candidates include conglomerates of proteins, surface viscosity and the presence of surfactants – molecules that reduce surface tension and are found in soaps and detergents. To find out more, researchers from ETH Zurich and Eindhoven University of Technology investigated beer-foam stability for different types of beers at varying stages of the fermentation process. They found that for single-fermentation beers, the foams are mostly held together by the surface viscosity of the beer. This is mostly influenced by the proteins in the beer – the more protein it contains, the more viscous the film and the more stable the foam will be. However, for double-fermented beers, the proteins are slightly denatured by the yeast cells and come together to form a two-dimensional membrane that keeps the foam intact for longer. The head was found to be even more stable for triple-fermented beers, which include Trappist beers. The team says that the work could be used to identify ways to increase or decrease the amount of foam so that everyone can pour a perfect glass of beer every time. Cheers!

You can be sure that 2026 will throw up its fair share of quirky stories from the world of physics. See you next year!

Winning the popularity contest: the 10 most-read physics stories of 2025 https://physicsworld.com/a/winning-the-popularity-contest-the-10-most-read-physics-stories-of-2025/ Tue, 30 Dec 2025 15:00:15 +0000 https://physicsworld.com/?p=125702 Here's a second chance to catch up with the most popular stories Physics World published in 2025

Popularity isn’t everything. But it is something, so for the second year running, we’re finishing our trip around the Sun by looking back at the physics stories that got the most attention over the past 12 months. Here, in ascending order of popularity, are the 10 most-read stories published on the Physics World website in 2025.

10. Quantum on the brain

We’ve had quantum science on our minds all year long, courtesy of 2025 being UNESCO’s International Year of Quantum Science and Technology. But according to theoretical work by Partha Ghose and Dimitris Pinotsis, it’s possible that the internal workings of our brains could also literally be driven by quantum processes.

Though neurons are generally regarded as too big to display quantum effects, Ghose and Pinotsis established that the equations describing the classical physics of brain responses are mathematically equivalent to the equations describing quantum mechanics. They also derived a Schrödinger-like equation specifically for neurons. So if you’re struggling to wrap your head around complex quantum concepts, take heart: it’s possible that your brain is ahead of you.

9. Could an extra time dimension reconcile quantum entanglement with local causality?

[Image: illustration of time]

Einstein famously disliked the idea of quantum entanglement, dismissing its effects as “spooky action at a distance”. But would he have liked the idea of an extra time dimension any better? We’re not sure he would, but that is the solution proposed by theoretical physicist Marco Pettini, who suggests that wavefunction collapse could propagate through a second time dimension. Pettini got the idea from discussions with the Nobel laureate Roger Penrose and from reading old papers by David Bohm, but not everyone is impressed by these distinguished intellectual antecedents. In this article, Bohm’s former student and frequent collaborator Jeffrey Bub went on the record to say he “wouldn’t put any money on” Pettini’s theory being correct. Ouch.

8. And now for something completely different

Continuing the theme of intriguing, blue-sky theoretical research, the eighth-most-read article of 2025 describes how two theoretical physicists, Kaden Hazzard and Zhiyuan Wang, proposed a new class of quasiparticles called paraparticles. Based on their calculations, these paraparticles exhibit quantum properties that are fundamentally different from those of bosons and fermions. Notably, paraparticles strike a balance between the exclusivity of fermions and the clustering tendency of bosons, with up to two paraparticles allowed to occupy the same quantum state (rather than one for fermions or infinitely many for bosons). But do they really exist? No-one knows yet, but Hazzard and Wang say that experimental studies of ultracold atoms could hold the answer.

7. Shining a light on obscure Nobel prizes

[Image: a photo of bright red flowers in a vase, the colours very vivid]

The list of early Nobel laureates in physics is full of famous names – Roentgen, Curie, Becquerel, Rayleigh and so on. But if you go down the list a little further, you’ll find that the 1908 prize went to a now mostly forgotten physicist by the name of Gabriel Lippmann, for a version of colour photography that almost nobody uses (though it’s rather beautiful, as the photo shows). This article tells the story of how and why this happened. A companion piece on the similarly obscure 1912 laureate, Gustaf Dalén, fell just outside this year’s top 10; if you’re a member of the Institute of Physics, you can read both of them together in the November issue of Physics World.

6. How to teach quantum physics to everyone

Why should physicists have all the fun of learning about the quantum world? This episode of the Physics World Weekly podcast focuses on the outreach work of Aleks Kissinger and Bob Coecke, who developed a picture-driven way of teaching quantum physics to a group of 15-17-year-old students. One of the students in the original pilot programme, Arjan Dhawan, is now studying mathematics at the University of Durham, and he joined his former mentors on the podcast to answer the crucial question: did it work?

5. A great physicist’s Nobel-prize-winning mistake

[Image: Albert Einstein and Niels Bohr]

Niels Bohr had many good ideas in his long and distinguished career. But he also had a few that didn’t turn out so well, and this article by science writer Phil Ball focuses on one of them. Known as the Bohr-Kramers-Slater (BKS) theory, it was developed in 1923 with help from two of the assistants/students/acolytes who flocked to Bohr’s institute in Copenhagen. Several notable physicists hated it because it violated both causality and the conservation of energy, and within two years, experiments by Walther Bothe and Hans Geiger proved them right. The twist, though, is that Bothe went on to win a share of the 1954 Nobel Prize for Physics for this work – making Bohr surely one of the only scientists who won himself a Nobel Prize for his good ideas, and someone else a Nobel Prize for a bad one.

4. Reconciling the ideas of Einstein and Newton

Black holes are fascinating objects in their own right. Who doesn’t love the idea of matter-swallowing cosmic maws floating through the universe? For some theoretical physicists, though, they’re also a way of exploring – and even extending – Einstein’s general theory of relativity. This article describes how thinking about black hole collisions inspired Jiaxi Wu, Siddharth Boyeneni and Elias Most to develop a new formulation of general relativity that mirrors the equations that describe electromagnetic interactions. According to this formulation, general relativity behaves in the same way as the gravity described by Isaac Newton more than 300 years ago, with the “gravito-electric” field fading with the inverse square of distance.

3. A list of the century’s best Nobel Prizes for Physics – so far

“Best of” lists are a real win-win. If you agree with the author’s selections, you go away feeling confirmed in your mutual wisdom. If you disagree, you get to have a good old moan about how foolish the author was for forgetting your favourites or including something you deem unworthy. Either way, it’s a success – as this very popular list of the top 5 Nobel Prizes for Physics awarded since the year 2000 (as chosen by Physics World editor-in-chief Matin Durrani) demonstrates.

2. Building bridges between gravity and quantum information theory

We’re back to black holes again for the year’s second-most-read story, which focuses on a possible link between gravity and quantum information theory via the concept of entropy. Such a link could help explain the so-called black hole information paradox – the still-unresolved question of whether information that falls into a black hole is retained in some form or lost as the black hole evaporates via Hawking radiation. Fleshing out this connection could also shed light on quantum information theory itself, and the theorist who’s proposing it, Ginestra Bianconi, says that experimental measurements of the cosmological constant could one day verify or disprove it.

1. The simplest double-slit experiment

[Image: graphic showing a red laser beam illuminating a pair of atoms; a screen behind the atoms shows red and black interference fringes]

Back in 2002, readers of Physics World voted the double-slit experiment with single electrons “the most beautiful experiment in physics”. More than 20 years later, it continues to fascinate the physics community, as this, the most widely read article of any that Physics World published in 2025, shows.

The electron version of Thomas Young’s original experiment demonstrated the wave-like nature of matter by sending electrons through a pair of slits and showing that they create an interference pattern on a screen even when they pass through the slits one by one. In this modern update, physicists at the Massachusetts Institute of Technology (MIT), US, stripped the experiment back to the barest possible bones.

Using two single atoms as the slits, they inferred the path of photons by measuring subtle changes in the atoms’ properties after photon scattering. Their results matched the predictions of quantum theory: interference fringes when they didn’t observe the photons’ path, and two bright spots when they did.

It’s an elegant result, and the fact that the MIT team performed the experiment specifically to celebrate the International Year of Quantum Science and Technology 2025 makes its popularity with Physics World readers especially gratifying.

So here’s to another year full of elegant experiments and the theories that inspire them. Long may they both continue, and thank you, as always, for taking the time to read about them.

Exploring the icy moons of the solar system https://physicsworld.com/a/exploring-the-icy-moons-of-the-solar-system/ Tue, 30 Dec 2025 11:00:45 +0000 https://physicsworld.com/?p=125440 Could the icy moons of our solar system hold life beyond our planet? Keith Cooper looks at how planetary scientists intend to find out

Our blue planet is a Goldilocks world. We’re at just the right distance from the Sun that Earth – like Baby Bear’s porridge – is not too hot or too cold, allowing our planet to be bathed in oceans of liquid water. But further out in our solar system are icy moons that eschew the Goldilocks principle, maintaining oceans and possibly even life far from the Sun.

We call them icy moons because their surface, and part of their interior, is made of solid water-ice. There are over 400 icy moons in the solar system – most are teeny moonlets just a few kilometres across, but a handful are quite sizeable, from hundreds to thousands of kilometres in diameter. Of the big ones, the best known are Jupiter’s moons, Europa, Ganymede and Callisto, and Saturn’s Titan and Enceladus.

Yet these moons are more than just ice. Deep beneath their frozen shells – which sit at –160 to –200 °C and are bathed in radiation – lie oceans of water, kept liquid thanks to tidal heating as their interiors flex in the strong gravitational grip of their parent planets. With water being a prerequisite for life as we know it, these frigid systems are our best chance of finding life beyond Earth.

The first hints that these icy moons could harbour oceans of liquid water came when NASA’s Voyager 1 and 2 missions flew past Jupiter in 1979. On Europa they saw a broken and geologically youthful-looking surface, just millions of years old, featuring dark cracks that seemed to have slushy material welling up from below. Those hints turned into certainty when NASA’s Galileo mission visited Jupiter between 1995 and 2003. Gravity and magnetometer experiments proved that not only does Europa contain a liquid layer, but so do Ganymede and Callisto.

Meanwhile at Saturn, NASA’s Cassini spacecraft (which arrived in 2004) encountered disturbances in the ringed planet’s magnetic field. They turned out to be caused by plumes of water vapour erupting out of giant fractures splitting the surface of Enceladus, and it is believed that this vapour originates from an ocean beneath the moon’s ice shell. Evidence for an ocean on Titan is a little less certain, but gravity and radio measurements performed by Cassini and its European-built lander Huygens point towards the possibility of some liquid or slushy water beneath the surface.

Water, ice and JUICE

“All of these ocean worlds are going to be different, and we have to go to all of them to understand the whole spectrum of icy moons,” says Amanda Hendrix, director of the Planetary Science Institute in Arizona, US. “Understanding what their oceans are like can tell us about habitability in the solar system and where life can take hold and evolve.”

To that end, an armada of spacecraft will soon be on their way to the icy moons of the outer planets, building on the successes of their predecessors Voyager, Galileo and Cassini–Huygens. Leading the charge is NASA’s Europa Clipper, which is already heading to Jupiter. Clipper will reach its destination in 2030, with the Jupiter Icy moons Explorer (JUICE) from the European Space Agency (ESA) just a year behind it. Europa is the primary target of scientists because it is possibly Jupiter’s most interesting moon as a result of its “astrobiological potential”. That’s the view of Olivier Witasse, who is JUICE project scientist at ESA, and it’s why Europa Clipper will perform nearly 50 fly-bys of the icy moon, some as low as 25 km above the surface. JUICE will also visit Europa twice on its tour of the Jovian system.

The challenge at Europa is that it’s close enough to Jupiter to be deep inside the giant planet’s magnetosphere, which is loaded with high-energy charged particles that bathe the moon’s surface in radiation. That’s why Clipper and JUICE are limited to fly-bys; the radiation dose in orbit around Europa would be too great to linger. Clipper’s looping orbit will take it back out to safety each time. Meanwhile, JUICE will focus more on Callisto and Ganymede – which are both farther out from Jupiter than Europa is – and will eventually go into orbit around Ganymede.

“Ganymede is a super-interesting moon,” says Witasse. For one thing, at 5262 km across it is larger than Mercury, a planet. It also has its own intrinsic magnetic field – one of only three solid bodies in the solar system to do so (the others being Mercury and Earth).

Beneath the icy exterior

It’s the interiors of these moons that are of the most interest to JUICE and Clipper. That’s where the oceans are, hidden beneath many kilometres of ice. While the missions won’t be landing on the Jovian moons, these internal structures aren’t as inaccessible as we might at first think. In fact, there are three independent methods for probing them.

[Image: a cross-section of Europa]

If a moon’s ocean contains salts or other electrically conductive contaminants, interesting things happen when the moon passes through the parent planet’s variable magnetic field. “The liquid is a conductive layer within a varying magnetic field and that induces a magnetic field in the ocean that we can measure with a magnetometer using Faraday’s law,” says Witasse. The amount of salty contaminants, plus the depth of the ocean, influences the magnetometer readings.
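In the simplest textbook picture (a standard relation in induction sounding, not one spelled out in the article), the ocean responds to a time-varying primary field of amplitude $B_{\mathrm{prim}}$ with an induced dipole field whose amplitude at the moon’s surface is roughly

$$B_{\mathrm{ind}} \approx A\left(\frac{r_{\mathrm{ocean}}}{r_{\mathrm{moon}}}\right)^{3} B_{\mathrm{prim}},$$

where the response amplitude $A$ (between 0 and 1) and an accompanying phase lag depend on the ocean’s conductivity, its thickness and the frequency of the driving field, with $A$ approaching 1 for a thick, highly conducting ocean. Fitting the measured response therefore constrains how salty and how deep the ocean is.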

Then there’s radio science – the way that an icy moon’s mass bends a radio signal from a spacecraft to Earth. By making multiple fly-bys with different trajectories during different points in a moon’s orbit around its planet, the moon’s gravity field can be measured. Once that is known to exacting detail, it can be applied to models of that moon’s internal structure.

Perhaps the most remarkable method, however, is using a laser altimeter to search for a tidal bulge in the surface of a moon. This is exactly what JUICE will be doing when in orbit around Ganymede. Its laser altimeter will map the shape of the surface – such as hills and crevasses – but gravitational tidal forces from Jupiter are expected to cause a bulge on the surface, deforming it by 1–10 m. How large the bulge is depends upon how deep the ocean is.

“If the surface ice is sitting above a liquid layer then the tide will be much bigger because if you sit on liquid, you are not attached to the rest of the moon,” says Witasse. “Whereas if Ganymede were solid the tide would be quite small because it is difficult to move one big, solid body.”
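One way to see why (a textbook relation, not taken from the article) is through the tidal Love number $h_2$, which measures how much the surface moves in response to the tide-raising potential $V_T$ evaluated at the surface radius $R$:

$$u_r = h_2\,\frac{V_T(R)}{g},$$

where $g$ is the moon’s surface gravity and $u_r$ is the radial tidal displacement. If an ocean mechanically decouples the ice shell from the deep interior, $h_2$ is of order unity and the tide is large; for a fully solid moon it is far smaller. Measuring the bulge with JUICE’s laser altimeter therefore amounts to measuring $h_2$.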

As for what’s below the oceans, those same gravity and radio-science experiments during previous missions have given us a general idea about the inner structures of Jupiter’s Europa, Ganymede and Callisto. All three have a rocky core. Inside Europa, the ocean surrounds the core, with a ceiling of ice above it. The rock–ocean interface potentially provides a source of chemical energy and nutrients for the ocean and any life there.

Ganymede’s interior structure is more complex. Separating the 3400 km-wide rocky core and the ocean is a layer, or perhaps several layers, of high-pressure ice, and there is another ice layer above the ocean. Without that rock–ocean interface, Ganymede is less interesting from an astrobiological perspective.

Meanwhile, Callisto, being the farthest from Jupiter, receives the least tidal heating of the three. This is reflected in Callisto’s lack of evolution, with its interior having not differentiated into layers as distinct as Europa and Ganymede. “Callisto looks very old,” says Witasse. “We’re seeing it more or less as it was at the beginning of the solar system.”

Crazy cryovolcanism

Tidal forces don’t just keep the interiors of the icy moons warm. They can also drive dramatic activity, such as cryovolcanoes – icy eruptions that spew out gases and volatile materials like liquid water (which quickly freezes in space), ammonia and hydrocarbons. The most obvious example of this is found on Saturn’s Enceladus, where giant water plumes squirt out through “tiger stripe” cracks at the moon’s south pole.

But there’s also growing evidence of cryovolcanism on Europa. In 2012 the Hubble Space Telescope caught sight of what looked like a water plume jetting out 200 km from the moon. But the discovery is controversial despite more data from Hubble and even supporting evidence found in archive data from the Galileo mission. What’s missing is cast-iron proof for Europa’s plumes. That’s where Clipper comes in.

[Image: three of Jupiter’s moons]

“We need to find out if the plumes are real,” says Hendrix. “What we do know is if there is plume activity happening on Europa then it’s not as consistent or ongoing as is clearly happening at Enceladus.”

At Enceladus, the plumes are driven by tidal forces from Saturn, which squeeze and flex the 500 km-wide moon’s innards, forcing out water from an underground ocean through the tiger stripes. If there are plumes at Europa then they would be produced the same way, and would provide access to material from an ocean that’s dozens of kilometres below the icy crust. “I think we have a lot of evidence that something is happening at Europa,” says Hendrix.

These plumes could therefore be the key to characterizing the hidden oceans. One instrument on Clipper that will play an important role in investigating the plumes at Europa is an ultraviolet spectrometer – a type of instrument that proved very useful on the Cassini mission.

Because Enceladus’ plumes were not known until Cassini discovered them, the spacecraft’s instruments had not been designed to study them. However, scientists were able to use the mission’s ultraviolet imaging spectrometer to analyse the vapour when it was between Cassini and the Sun. The resulting absorption lines in the spectrum showed the plumes to be mostly pure water, ejected into space at a rate of 200 kg per second.

[Image: black and white image of liquid eruptions from a moon’s surface]

The erupted vapour freezes as it reaches space and some of it snows back down onto the surface. Cassini’s ultraviolet spectrometer was again used, this time to detect solar ultraviolet light reflected and scattered off these icy particles in the uppermost layers of Enceladus’ surface. Scientists found that any freshly deposited snow from the plumes has a different chemistry from older surface material that has been weathered and chemically altered by micrometeoroids and radiation, and therefore a different ultraviolet spectrum.

Icy moon landing

Another two instruments that Cassini’s scientists adapted to study the plumes were the cosmic dust analyser, and the ion and neutral mass spectrometer. When Cassini flew through the fresh plumes and Saturn’s E-ring, which is formed from older plume ejections, it could “taste” the material by sampling it directly. Recent findings from these data indicate that the plumes are rich in salt as well as organic molecules, including aliphatic and cyclic esters and ethers – oxygen-bearing organic compounds, some related to fatty acids (Nature Astron. 9 1662). Scientists also found nitrogen- and oxygen-bearing compounds that play a role in basic biochemistry and which could therefore potentially be building blocks of prebiotic molecules or even life in Enceladus’ ocean.

Direct image of Enceladus showing blue stripes

While Cassini could only observe Enceladus’ plumes and fresh snow from orbit, astronomers are planning a lander that could let them directly inspect the surface snow. Currently in the technology development phase, it would be launched by ESA sometime in the 2040s to arrive at the moon in 2054, when winter at Enceladus’ southern, tiger stripe-adorned pole turns to spring and daylight returns.

“What makes the mission so exciting to me is that although it looks like every large icy moon has an ocean, Enceladus is one where there is a very high chance of actually sampling ocean water,” says Jörn Helbert, head of the solar system section at ESA, and the science lead on the prospective mission.

The planned spacecraft will fly through the plumes with more sophisticated instruments than Cassini’s, designed specifically to sample the vapour (like Clipper will do at Europa). Yet adding a lander could get us even closer to the plume material. By landing close to the edge of a tiger stripe, a lander would dramatically increase the mission’s ability to analyse the material from the ocean in the form of fresh snow. In particular, it would look for biosignatures – evidence of the ocean being habitable, or perhaps even inhabited by microbes.

However, new research urges caution in drawing hasty conclusions about organic molecules present in the plumes and snow. While not as powerful as Jupiter’s, Saturn also has a magnetosphere filled with high-energy ions that bombard Enceladus. A recent laboratory study, led by Grace Richards of the Istituto Nazionale di Astrofisica e Planetologia Spaziale (IAPS-INAF) in Rome, found that when these ions hit surface ice they trigger chemical reactions that produce organic molecules, including some that are precursors to amino acids, similar to what Cassini tasted in the plumes.

So how can we be sure that the organics in Enceladus’ plumes originate from the ocean, and not from radiation-driven chemistry on the surface? It is the same quandary for dark patches around cracks on the surface of Europa, which seem to be rich with organic molecules that could either originate via upwelling from the ocean below, or just from radiation triggering organic chemistry. A lander on Enceladus might solve not just the mystery of that particular moon, but provide important pointers to explain what we’re seeing on Europa too.

More icy companions

Enceladus is not Saturn’s only icy moon; there’s Titan too. As the ringed planet’s largest moon at 5150 km across, Titan (like Ganymede) is larger than Mercury. However, unlike any other moon in the solar system, Titan has a thick atmosphere rich in nitrogen and methane. The atmosphere is opaque, hiding the surface from spacecraft in orbit except at infrared and radar wavelengths, which means that getting below the smoggy atmosphere is a must.

ESA did this in 2005 with the Huygens lander, which, as it parachuted down to Titan’s frozen surface, revealed it to be a land of hills and dune plains with river channels, lakes and seas of flowing liquid hydrocarbons. These organic molecules originate from the methane in its atmosphere reacting with solar ultraviolet light.

Until recently, it was thought that Titan has a core of rock, surrounded by a shell of high-pressure ice, above which sits a layer of salty liquid water and then an outer crust of water ice. However, new evidence from re-analysing Cassini’s data suggests that rather than oceans of liquid water, Titan has “slush” below the frozen exterior, with pockets of liquid water (Nature 648 556). The team, led by Flavio Petricca from NASA’s Jet Propulsion Laboratory, looked at how Titan’s shape morphs as it orbits Saturn. There is a several-hour lag between the moon passing the peak of Saturn’s gravitational pull and its shape shifting. This implies that while there must be some form of non-solid substance below Titan’s surface to allow for deformation, more energy is lost or dissipated than would be the case if it were liquid water. Instead, the researchers found that a layer of high-pressure ice close to its melting point – or slush – better fits the data.
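As a toy illustration of that reasoning (the lag time below is an assumed, purely illustrative value standing in for the reported “several-hour” lag), the time lag can be converted into a tidal phase angle, with larger angles implying more energy dissipated per orbit.

```python
# Toy calculation: convert a time lag in Titan's tidal response into a
# phase angle. In simple tidal models the quality factor Q, which measures
# how weakly the interior dissipates energy, goes roughly as 1/phase_lag.
import numpy as np

period_hours = 15.945 * 24   # Titan's orbital period around Saturn
lag_hours = 3.0              # assumed lag, for illustration only

phase_lag = 2 * np.pi * lag_hours / period_hours   # radians
print(f"Phase lag ~ {phase_lag:.3f} rad ({np.degrees(phase_lag):.1f} deg)")
# ~0.05 rad, suggesting an effective Q of order 20 - a fairly lossy interior
```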

Titan's atmosphere

To find out more about Titan, NASA is planning to follow in Huygens’ footsteps with the Dragonfly mission – but in an excitingly different way. Set to launch in 2028, Dragonfly should arrive at Titan in 2034, where the rotorcraft will fly over the moon’s surface, beneath the smog, occasionally touching down to take readings. Scientists intend to use Dragonfly to sample surface material with a mass spectrometer to identify organic compounds and therefore better assess Titan’s biological potential. It will also perform atmospheric and geological measurements, even listening for seismic tremors while landed, which could provide further clues about Titan’s interior.

Jupiter and Saturn are also not the only planets to possess icy moons. We find them around Uranus and Neptune too. Even the dwarf planet Pluto and its largest moon Charon have strong similarities to icy moons. Whether any of these bodies, so far out from the Sun, can maintain an ocean is unclear, however.

Recent findings point to an ocean inside Uranus’ moon Ariel that may once have been 170 km deep, kept warm by tidal heating (Icarus 444 116822). But over time Ariel’s orbit around Uranus has become increasingly circular, weakening the tidal forces acting on it, and the ocean has partly frozen. Another of Uranus’ moons, Miranda, has a chaotic surface that appears to have melted and refrozen, and the pattern of cracks on its surface strongly suggests that the moon also contains an ocean, or at least did 150 million years ago. A new mission to Uranus was named a top priority in the most recent US planetary science decadal survey.

It’s becoming clear that icy ocean moons could far outnumber more traditional habitable planets like Earth, not just in our solar system, but across the galaxy (although no such exomoons have been confirmed yet). Understanding the internal structures of the icy moons in our solar system, and characterizing their oceans, is vital if we are to expand the search for life beyond Earth.

The post Exploring the icy moons of the solar system appeared first on Physics World.

]]>
Feature Could the icy moons of our solar system hold life beyond our planet? Keith Cooper looks at how planetary scientists intend to find out https://physicsworld.com/wp-content/uploads/2025/12/2025-12-cooper-jupiter-moons-montage-featured.jpg newsletter
Particle and nuclear physics: quirky favourites from 2025 https://physicsworld.com/a/particle-and-nuclear-physics-quirky-favourites-from-2025/ Mon, 29 Dec 2025 11:19:22 +0000 https://physicsworld.com/?p=125784 Astrophysics, archaeology, neutrino lasers and more

The post Particle and nuclear physics: quirky favourites from 2025 appeared first on Physics World.

]]>
Particle and nuclear physics evokes images of huge accelerators probing the extremes of matter. But in this round-up of my favourite research of 2025, I have chosen five stories in which particle and nuclear physics forms the basis of quirky and fascinating work ranging from astrophysics to archaeology.

CERN experiment sheds light on missing blazar radiation

The Fireball experiment installed in the HiRadMat irradiation area at CERN

My first pick involves simulating the vast cosmic plasma in the lab. Blazars are extremely bright galaxies that are powered by supermassive black holes. They emit intense jets of radiation, including teraelectronvolt gamma rays – which can be detected by astronomers if a jet happens to point at Earth. As these high-energy photons travel through intergalactic space, they interact with background starlight, producing numerous electron–positron pairs. These pairs should, in theory, generate gigaelectronvolt gamma rays – but this secondary radiation has never been observed. One explanation is that intergalactic magnetic fields deflect these pairs and the resulting gamma rays away from our line of sight. However, there is no conclusive evidence for such fields. Another theory is that plasma instabilities in the sparse intergalactic medium could dissipate the energy of the pair beams. Now, physicists working on the Fireball experiment at CERN have simulated the effect of plasma instabilities by firing a beam of electron–positron pairs through a metre-long argon plasma. They found that plasma instabilities are too weak to account for the missing gamma radiation – strengthening the case for the existence of primordial intergalactic magnetic fields.

Portable source could produce high-energy muon beams

A compact source of muons could soon be discovering hidden chambers in ancient pyramids. Muons are subatomic particles similar to electrons but 200 times heavier. They are produced in copious amounts in the atmosphere by cosmic rays. These cosmic muons can penetrate long distances into materials and are finding increasing use in “muon tomography” – a technique that has imaged the interiors of huge objects such as volcanoes, pyramids and nuclear reactors. One downside of muon tomography is that cosmic muons always arrive from above, limiting opportunities for imaging. While beams of muons can be made in accelerators, these are large and expensive facilities – and the direction of such beams is also fixed. Now, physicists at Lawrence Berkeley National Laboratory have demonstrated a compact and potentially portable method for generating high-energy muon beams using laser plasma acceleration. It uses an ultra-intense, tightly focused laser pulse to accelerate electrons in a short plasma channel. These electrons then strike a metal target, creating a muon beam. With more work, compact and portable muon sources could be developed, leading to new possibilities for non-destructive imaging in archaeology, geology and nuclear safety.

Radioactive BEC could be a ‘superradiant neutrino laser’

Could a “superradiant neutrino laser” be created using radioactive atoms in an ultracold Bose–Einstein condensate (BEC)? The answer is “maybe”, according to theoretical work by two physicists in the US. Their proposal involves creating a BEC of rubidium-83, which undergoes beta decay involving the emission of neutrinos. Unlike photons, neutrinos are fermions and therefore cannot form the basis of a conventional laser. However, if the atoms in the BEC are close enough together, quantum interactions between the atomic nuclei could accelerate beta decay and create a coherent, laser-like burst of neutrinos. This is a well-known phenomenon called superradiance. While the idea could be tested using existing technologies for making BECs, it would be a challenge to deploy radioactive rubidium in a conventional atomic physics lab. Another drawback is that there are no obvious applications for a neutrino laser – at least for now. However, the very idea of a neutrino laser is so cool that I am hoping that someone will try to build one soon!

Antimatter could be transported by road

Photo of the BASE-STEP system being transported by overhead crane through the experimental hall of the Antimatter Factory at CERN. The system is an irregularly-shaped gray box and it's suspended from a large, bright yellow crane below the hall ceiling. A hard-hatted physicist, Marcel Leonhardt, looks on while holding a tablet displaying a dashboard of parameters.

If you happen to be driving between Geneva and Düsseldorf in the future, you might just overtake a shipment of antimatter. It will be on its way to an experiment that could solve some of the biggest mysteries in physics – including why there is much more matter than antimatter in the universe. While antielectrons (positrons) can be created in a small lab, antiprotons can only be created at large and expensive accelerators. This limits where antimatter experiments can be done. But now, physicists on the BASE collaboration at CERN have shown that it should be possible to transport antiprotons by road. Protons stood in for antiprotons in their demonstration and the particles were held in an electromagnetic trap at cryogenic temperatures and ultralow pressure. By transporting their BASE-STEP system around CERN’s Meyrin site, they showed it was stable and robust enough to handle the rigours of road travel. The system will now be re-configured to transport antiprotons about 700 km to Germany’s Heinrich Heine University. There, physicists hope to search for charge–parity–time (CPT) violations in protons and antiprotons with a precision at least 100 times higher than is currently possible at CERN. The BASE collaboration is also cited in our Top 10 Breakthroughs of 2025 for their quantum control of a single antiproton.

Solid-state nuclear clock ticks ever closer

Solid quartz crystals revolutionized timekeeping in the 20th century, so could solid-state nuclear clocks soon do the same? Today, the best timekeepers use the light emitted in atomic transitions. In principle, even better clocks could be made using very-low-energy gamma rays emitted in some nuclear transitions. Nuclei are much smaller than atoms and these transitions are governed by the strong force. This means that such nuclear clocks would be far less susceptible to performance-degrading electromagnetic noise. And unlike atomic clocks, the nuclei could be embedded in solids – which would greatly simplify clock design. Thorium-229 shows great promise as a clock nucleus, but it has two practical shortcomings: it is radioactive and extremely expensive. The solution to both of these problems is a clock design that uses only a tiny amount of thorium-229. Now researchers in the US have shown that physical vapour deposition can be used to create extremely thin films of thorium tetrafluoride. Characterization using a vacuum ultraviolet laser confirmed the accessibility of the clock transition – but its lifetime was shorter and the signal less intense than measured in thorium-doped crystals. However, the researchers believe that these unexpected results should not dissuade those aiming to build nuclear clocks.

 

The post Particle and nuclear physics: quirky favourites from 2025 appeared first on Physics World.

]]>
Blog Astrophysics, archaeology, neutrino lasers and more https://physicsworld.com/wp-content/uploads/2025/11/24-11-25-cosmic-fireball.jpg
Quantum science and technology: highlights of 2025 https://physicsworld.com/a/quantum-science-and-technology-highlights-of-2025/ Sun, 28 Dec 2025 14:00:13 +0000 https://physicsworld.com/?p=125561 As the International Year of Quantum Science and Technology draws to a close, Margaret Harris revisits some of the year’s best work in this ever-popular field

The post Quantum science and technology: highlights of 2025 appeared first on Physics World.

]]>
There are only a few days left in the International Year of Quantum Science and Technology, but we’re still finding plenty to celebrate here at Physics World HQ thanks to a long list of groundbreaking work by quantum physicists in 2025. Here are a few of our favourite stories from the past 12 months.

Observing negative time in atom-photon interactions

By this point in 2025, “negative time” may sound like the answer to the question “How long have I got left to buy holiday presents for my loved ones?” Earlier in the year, though, physicists led by experimentalist Aephraim Steinberg of the University of Toronto, Canada and theorist Howard Wiseman of Griffith University in Australia showed that the concept can also describe the average amount of time a photon spends in an excited atomic state. While experts have cautioned against interpreting “negative time” too literally – we aren’t in time machine territory here – it does seem like there’s something interesting going on in this system of ultracold rubidium atoms.

Creating an operating system for quantum networks

It is a truth universally acknowledged that any sufficiently advanced technology must be in want of a simple system to operate it. In April, the quantum world passed this milestone thanks to Stephanie Wehner and colleagues at Delft University of Technology in the Netherlands. Their operating system is called QNodeOS, and they developed it with the aim of improving access to quantum computing for the 99.99999% of people who aren’t (and mostly don’t need to be) intimately familiar with how quantum information processors work. Another advantage of QNodeOS is that it makes it easier for classical and quantum machines (and quantum devices built with different qubit architectures) to communicate with each other.

Pushing the boundary between the quantum and classical worlds

How big does an object have to be before it stops being quantum and starts behaving like the billiard-ball-like solids familiar from introductory classical mechanics courses? It’s a question that featured in our annual “Breakthrough of the Year” back in 2021, when two independent teams demonstrated quantum entanglement in pairs of 10-micron drumheads, and we’re returning to it this year in a different system: levitated nanoparticles around 100 nm in diameter.

In one boundary-pushing experiment, Massimiliano Rossi and colleagues at ETH Zurich, Switzerland and the Institute of Photonic Sciences in Barcelona, Spain cooled silica nanoparticles enough to extend their wave-like behaviour to 73 pm. In another study, Kiyotaka Aikawa and colleagues at the University of Tokyo, Japan performed the first quantum mechanical squeezing on a nanoparticle, narrowing its velocity distribution at the expense of its position distribution. We may not know exactly where the quantum-classical boundary is yet, but the list of quantum behaviours we’ve observed in usually-not-quantum objects keeps getting longer.

Using a quantum computer to generate quantum random numbers

What’s the best way to generate random numbers? In part, the answer depends on how random those numbers really need to be. For many applications, the pseudorandom numbers generated by classical computers, or the random-but-with-systematic-biases numbers found in, say, radio static, are good enough. But if you really, really need those numbers to be random, you need a quantum source – and thanks to work published this year by Scott Aaronson, Shi-Han Hung, Marco Pistoia and colleagues, that quantum source can now be a quantum computer. Which is a neat way of tying things together, don’t you think?

Giving Schrödinger’s cats a nuclear option

Left to right: UNSW researchers Benjamin Wilhelm, Xi Yu, Prof Andrea Morello, Dr Danielle Holmes

Finally, we would be remiss not to mention the work of Andrea Morello and colleagues at the University of New South Wales, Australia. This year, they became the first to create quantum superpositions known as Schrödinger’s cat states in a heavy atom, antimony, that has a large nuclear spin. They also created what is certainly the year’s best scientific team photo, posing with cats on their laps and deadpan expressions more usually associated with too-cool-for-school indie musicians.

So congratulations to them, and to all the other teams in this list, for setting the bar high in a year that offered plenty for the quantum community to celebrate. We hope you enjoyed the International Year of Quantum Science and Technology, and we look forward to many more exciting discoveries in 2026.

The post Quantum science and technology: highlights of 2025 appeared first on Physics World.

]]>
Blog As the International Year of Quantum Science and Technology draws to a close, Margaret Harris revisits some of the year’s best work in this ever-popular field https://physicsworld.com/wp-content/uploads/2025/12/quantum-entanglement-2148151388-istock-jian-fan.jpg
Medical physics and biotechnology: highlights of 2025 https://physicsworld.com/a/medical-physics-and-biotechnology-highlights-of-2025/ Sat, 27 Dec 2025 10:00:29 +0000 https://physicsworld.com/?p=125592 From vision-restoring implants to quantum-based diagnostics, Tami Freeman looks back at some of this year’s healthcare innovations

The post Medical physics and biotechnology: highlights of 2025 appeared first on Physics World.

]]>
This year saw Physics World report on a raft of innovative and exciting developments in the worlds of medical physics and biotech. These included novel cancer therapies using low-temperature plasma or laser ablation, intriguing new devices such as biodegradable bone screws and a pacemaker smaller than a grain of rice, and neural engineering breakthroughs including an ultrathin bioelectric implant that improves movement in rats with spinal cord injuries and a tiny brain sensor that enables thought control of external devices. Here are a few more research highlights that caught my eye.

Vision transformed

One remarkable device introduced in 2025 was an eye implant that restored vision to patients with incurable sight loss. In a clinical study headed up at the University of Bonn, participants with sight loss due to age-related macular degeneration had a tiny wireless implant inserted under their retina. Used in combination with specialized glasses, the system restored the ability to read in 27 of 32 participants followed up a year later.

Study participant training with the PRIMA device

We also described a contact lens that enables wearers to see near-infrared light without night vision goggles, reported on a fascinating retinal stimulation technique that enabled volunteers to see colours never before seen by the human eye, and chatted with researchers in Hungary about how a tiny dissolvable eye insert they are developing could help astronauts suffering from eye conditions.

Radiation therapy advances

2025 saw several firsts in the field of radiation therapy. Researchers in Germany performed the first cancer treatment using a radioactive carbon ion beam, on a mouse with a bone tumour close to the spine. And a team at the Trento Proton Therapy Centre in Italy delivered the first clinical treatments using proton arc therapy – a development that made it onto our top 10 Breakthroughs of the Year.

Meanwhile, the ASTRO meeting saw Leo Cancer Care introduce its first upright photon therapy system, called Grace, which will deliver X-ray radiation to patients in an upright position. This new take on radiation delivery is also under investigation by a team at RaySearch Laboratories, who showed that combining static arcs and shoot-through beams could increase plan quality and reduce delivery times in upright proton therapy.

Among other new developments, there’s a low-cost, dual-robot radiotherapy system built by a team in Canada and targeted for use in low-resource settings, a study from Australia showing that combining microbeam radiation therapy with targeted radiosensitizers can optimize brain cancer treatment, and an investigation at Moffitt Cancer Center examining how skin luminance imaging improves Cherenkov-based radiotherapy dosimetry.

The impact of AI

It’s particularly interesting to examine how the rapid evolution of artificial intelligence (AI) is impacting healthcare, especially considering its potential for use in data-intensive tasks. Earlier this year, a team at Northwestern Medicine integrated a generative AI tool into a live clinical workflow for the first time, using it to draft radiology reports on X-ray images. In routine use, the AI model increased documentation efficiency by an average of 15.5%, while maintaining diagnostic accuracy.

Samir Abboud from Northwestern Medicine

Other promising applications include identifying hidden heart disease from electrocardiogram traces, contouring targets for brachytherapy treatment planning and detecting abnormalities in blood smear samples.

When introducing AI into the clinic, however, it’s essential that any AI-driven software is accurate, safe and trustworthy. To help assess these factors, a multinational research team identified potential pitfalls in the evaluation of algorithmic bias in AI radiology models, suggesting best practices to mitigate such bias.

A quantum focus

Finally, with 2025 being the International Year of Quantum Science and Technology, Physics World examined how quantum physics looks set to play a key role in medicine and healthcare. Many quantum-based companies and institutions are already working in the healthcare sector, with quantum sensors, in particular, close to being commercialized. As detailed in this feature on quantum sensing, such technologies are being applied in areas ranging from lab and point-of-care diagnostics to consumer wearables for medical monitoring, body scanning and microscopy.

Alongside this, scientists at Jagiellonian University are applying quantum entanglement to cancer diagnostics and developing the world’s first whole-body quantum PET scanner, while researchers at the University of Warwick have created an ultrasensitive magnetometer based on nitrogen-vacancy centres in diamond that could detect small cancer metastases via keyhole surgery. There’s even a team designing a protein qubit that can be produced directly inside living cells and used as a magnetic field sensor (which also featured in this year’s top 10 breakthroughs).

And in September, we ran a Physics World Live event examining how quantum optics, quantum sensors and quantum entanglement can enable advanced disease diagnostics and transform medical imaging. The recording is available to watch here.

The post Medical physics and biotechnology: highlights of 2025 appeared first on Physics World.

]]>
Blog From vision-restoring implants to quantum-based diagnostics, Tami Freeman looks back at some of this year’s healthcare innovations https://physicsworld.com/wp-content/uploads/2025/06/2025-06-jones-quantum-healthcare-illustration-stock-pix-combined.jpg
Check your physics knowledge with our bumper end-of-year quiz https://physicsworld.com/a/check-your-physics-knowledge-with-our-bumper-end-of-year-quiz/ Wed, 24 Dec 2025 10:00:26 +0000 https://physicsworld.com/?p=125410 Try our new interactive end-of-year quiz compiled by Colin White

The post Check your physics knowledge with our bumper end-of-year quiz appeared first on Physics World.

]]>
How well have you been following events in physics? There are 20 questions in total: blue is your current question and white means unanswered, with green and red being right and wrong.

16–20 Top quark – congratulations, you’ve hit Einstein level

11–15 Strong force – good but not quite Nobel standard

6–10 Weak force – better interaction needed

0–5 Bottom quark – not even wrong

The post Check your physics knowledge with our bumper end-of-year quiz appeared first on Physics World.

]]>
Puzzle Try our new interactive end-of-year quiz compiled by Colin White https://physicsworld.com/wp-content/uploads/2025/12/question-marks-in-speech-bubbles-2180030511-istock-microstockhub.jpg
ZAP-X radiosurgery and ZAP-Axon SRS planning: technology overview, workflow and complex case insights from a leading SRS centre https://physicsworld.com/a/zap-x-radiosurgery-and-zap-axon-srs-planning-technology-overview-workflow-and-complex-case-insights-from-a-leading-srs-centre/ Wed, 24 Dec 2025 09:11:06 +0000 https://physicsworld.com/?p=125595 Join the audience for a live webinar at 4 p.m. GMT/8 a.m. PST on 19 February 2026

ZAP-X represents the second cranial radiosurgery revolution, setting new standards in treatment quality and pioneering new frontiers in advanced SRS

The post ZAP-X radiosurgery and ZAP-Axon SRS planning: technology overview, workflow and complex case insights from a leading SRS centre appeared first on Physics World.

]]>
ZAP-X is a next-generation, cobalt-free, vault-free stereotactic radiosurgery system purpose-built for the brain. Delivering highly precise, non-invasive treatments with exceptionally low whole-brain and whole-body dose, ZAP-X’s gyroscopic beam delivery, refined beam geometry and fully integrated workflow enable state-of-the-art SRS without the burdens of radioactive sources or traditional radiation bunkers.

Theresa Hofman headshot

Theresa Hofman is deputy head of medical physics at the European Radiosurgery Center Munich (ERCM), specializing in stereotactic radiosurgery with the CyberKnife and ZAP‑X systems. She has been part of the ERCM team since 2018 and has extensive clinical experience with ZAP‑X, one of the first centres worldwide to implement the technology in 2021. Since then, the team has treated more than 900 patients with ZAP‑X, and she is deeply involved in both clinical use and evaluation of its planning software.

She holds a master’s degree in physics from Ludwig Maximilian University of Munich, where she authored two first‑author publications on range verification in carbon‑ion therapy. At ERCM, she has published additional first‑author studies on CyberKnife kidney‑treatment accuracy and on comparative planning between ZAP‑X and CyberKnife. She is currently conducting further research on the latest ZAP‑X planning software. Her work is driven by the goal of advancing high‑quality radiosurgery and ensuring the best possible treatment for every patient.

The post ZAP-X radiosurgery and ZAP-Axon SRS planning: technology overview, workflow and complex case insights from a leading SRS centre appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar at 4 p.m. GMT/8 a.m. PST on 19 February 2026 ZAP-X represents the second cranial radiosurgery revolution, setting new standards in treatment quality and pioneering new frontiers in advanced SRS https://physicsworld.com/wp-content/uploads/2025/12/2026-feb-19-zap-feature-image.jpg
Oscar-winning computer scientist on the physics of computer animation https://physicsworld.com/a/oscar-winning-computer-scientist-on-the-physics-of-computer-animation/ Tue, 23 Dec 2025 14:03:03 +0000 https://physicsworld.com/?p=125743 CGI pioneer Pat Hanrahan is our podcast guest

The post Oscar-winning computer scientist on the physics of computer animation appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features Pat Hanrahan, who studied nuclear engineering and biophysics before becoming a founding employee of Pixar Animation Studios. As well as winning three Academy Awards for his work on computer animation, Hanrahan won the Association for Computing Machinery’s A M Turing Award for his contributions to 3D computer graphics, or CGI.

Earlier this year, Hanrahan spoke to Physics World’s Margaret Harris at the Heidelberg Laureate Forum in Germany. He explains how he was introduced to computer graphics by his need to visualize the results of computer simulations of nervous systems. That initial interest led him to Pixar and his development of physically-based rendering, which uses the principles of physics to create realistic images.

Hanrahan explains that light interacts with different materials in very different ways, making detailed animations very challenging. Indeed, he says that creating realistic looking skin is particularly difficult – comparing it to the quest for a grand unified theory in physics.

He also talks about how having a background in physics has helped his career – citing his physicist’s knack for creating good models and then using them to solve problems.

The post Oscar-winning computer scientist on the physics of computer animation appeared first on Physics World.

]]>
Podcasts CGI pioneer Pat Hanrahan is our podcast guest https://physicsworld.com/wp-content/uploads/2025/12/23-12-25-pat-hanrahan-list.jpg newsletter
Physics-based battery model parameterization from impedance data https://physicsworld.com/a/physics-based-battery-model-parameterization-from-impedance-data/ Tue, 23 Dec 2025 07:18:44 +0000 https://physicsworld.com/?p=125024 Discover the role of impedance analysis in advancing battery-model development

The post Physics-based battery model parameterization from impedance data appeared first on Physics World.

]]>

Electrochemical impedance spectroscopy (EIS) provides valuable insights into the physical processes within batteries – but how can these measurements directly inform physics-based models? In this webinar, we present recent work showing how impedance data can be used to extract grouped parameters for physics-based models such as the Doyle–Fuller–Newman (DFN) model or the reduced-order single-particle model with electrolyte (SPMe).

We will introduce PyBaMM (Python Battery Mathematical Modelling), an open-source framework for flexible and efficient battery simulation, and show how our extension, PyBaMM-EIS, enables fast numerical impedance computation for any implemented model at any operating point. We also demonstrate how PyBOP, another open-source tool, performs automated parameter fitting of models using measured impedance data across multiple states of charge.
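For readers unfamiliar with PyBaMM, the minimal sketch below shows its standard simulation interface with the default parameter set. It is a flavour of the workflow rather than the webinar’s own code: the PyBaMM-EIS and PyBOP steps are not shown, and output variable names can differ between PyBaMM versions.

```python
# Minimal PyBaMM sketch: solve a one-hour simulation with the full DFN model
# and the reduced-order SPMe, then compare their terminal voltages.
import pybamm

models = {"DFN": pybamm.lithium_ion.DFN(), "SPMe": pybamm.lithium_ion.SPMe()}

for name, model in models.items():
    sim = pybamm.Simulation(model)        # default parameter values
    solution = sim.solve([0, 3600])       # simulate t = 0 to 3600 s
    voltage = solution["Terminal voltage [V]"].entries
    print(f"{name}: voltage after 1 h = {voltage[-1]:.3f} V")
```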

Battery modelling is challenging, and obtaining accurate fits can be difficult. Our technique offers a flexible way to update model equations and parameterize models using impedance data.

Join us to see how our tools create a smooth path from measurement to model to simulation.

An interactive Q&A session follows the presentation.

Noël Hallemans headshot

Noël Hallemans is a postdoctoral research assistant in engineering science at the University of Oxford, where he previously lectured in mathematics at St Hugh’s College. He earned his PhD in 2023 from the Vrije Universiteit Brussel and the University of Warwick, focusing on frequency-domain, data-driven modelling of electrochemical systems.

His research at the Battery Intelligence Lab, led by Professor David Howey, integrates electrochemical impedance spectroscopy (EIS) with physics-based modelling to improve understanding and prediction of battery behaviour. He also develops multisine EIS techniques for battery characterization during operation (for example, charging or relaxation).

 

The Electrochemical Society, Gamry Instruments, BioLogic, EL-Cell logos

The post Physics-based battery model parameterization from impedance data appeared first on Physics World.

]]>
Webinar Discover the role of impedance analysis in advancing battery-model development https://physicsworld.com/wp-content/uploads/2025/11/2026-01-ecs-wb-feature-image.jpg
Higgs decay to muon–antimuon pairs sheds light on the origin of mass https://physicsworld.com/a/higgs-decay-to-muon-antimuon-pairs-sheds-light-on-the-origin-of-mass/ Mon, 22 Dec 2025 15:51:44 +0000 https://physicsworld.com/?p=125733 CERN’s ATLAS experiment confirms previous observation by CMS

The post Higgs decay to muon–antimuon pairs sheds light on the origin of mass appeared first on Physics World.

]]>
A new measurement by CERN’s ATLAS Collaboration has strengthened evidence that the masses of fundamental particles originate through their interaction with the Higgs field. Building on earlier results from CERN’s CMS Collaboration, the observations suggest that muon–antimuon pairs (dimuons) can be created by the decay of Higgs bosons.

In the Standard Model of particle physics, the fermionic particles are organized into three different generations, broadly in terms of their masses. The first generation comprises the two lightest quarks (up and down), the lightest lepton (the electron) and the electron neutrino. The second includes the strange and charm quarks, the muon and its neutrino; and the third generation the bottom and top quarks, the tau and its neutrino. In terms of the charged fermions, the top quark is nearly 340,000 times heavier than the lightest – the electron.

All of the quarks and leptons have both right- and left-handed components, which relate to the direction of a particle’s spin relative to its direction of motion (right-handed if both directions are aligned; left-handed if they are anti-aligned).

Right- and left-handed particles are treated the same by the strong and electromagnetic forces, regardless of their generation in the Standard Model. The weak force, however, only acts on left-handed particles.

Flipping handedness

In the 1960s, Steven Weinberg uncovered a theoretical solution to this seemingly bizarre asymmetry. He proposed that the Higgs field acts as a bridge between each particle’s left- and right-handed components, in a way that respects the Standard Model’s underlying symmetry. This interaction causes the particle to constantly flip between its two components, creating a resistance to motion that can be perceived as mass.

However, this deepens the mystery. According to Weinberg’s theory, higher-mass particles must interact more strongly with the Higgs field – but the strong and electromagnetic forces can only differentiate between these particles according to their charges (colour and electrical). The question is how the Higgs field can distinguish between particles in different generations if their charges are identical.

Key to solving this mystery will be to observe the decay products of Higgs bosons with different interaction strengths. For stronger interactions, corresponding to heavier generations, these decays should become far more likely.

In 2022, both the ATLAS and CMS collaborations did just this. Through proton–proton collision experiments at CERN’s Large Hadron Collider (LHC), the groups independently observed Higgs bosons decaying to tau–antitau pairs. This relatively common process occurred at the same rate as predicted by theory.

Rare decay

A year earlier, similar experiments by the CMS collaboration probed the second generation by observing muon–antimuon pairs from the decays of Higgs bosons. This rarer event occurs in just 1 in 5000 Higgs decays.

In their latest study, the ATLAS collaboration have now reproduced this CMS result independently. They collided protons at about 13 TeV and observed muon–antimuon pairs in the energy range predicted by theory.

Thanks to improvements over the earlier CMS analysis, these new results bring dimuon observations to a statistical significance of 3.4σ. This is still below the 5σ standard required for the observation to be considered a discovery, so more work is needed.
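For context, a significance quoted in σ corresponds to a p-value – the probability that background fluctuations alone would produce a signal at least this strong. A quick sketch of the standard one-sided conversion (textbook statistics, not taken from the paper):

```python
# Convert a significance in sigma to the corresponding one-sided p-value.
from scipy.stats import norm

for sigma in (3.4, 5.0):
    p = norm.sf(sigma)   # survival function: P(Z > sigma)
    print(f"{sigma} sigma -> p = {p:.1e}")
# 3.4 sigma -> p ~ 3.4e-04, while the 5 sigma discovery standard
# corresponds to p ~ 2.9e-07, roughly one chance in 3.5 million
```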

The research could also provide guidance in the search for much rarer Higgs interactions that involve first-generation particles. These include decays to electron–positron pairs, which occur in just 1 in 200 million Higgs decays.

The research is described in Physical Review Letters.

The post Higgs decay to muon–antimuon pairs sheds light on the origin of mass appeared first on Physics World.

]]>
Research update CERN’s ATLAS experiment confirms previous observation by CMS https://physicsworld.com/wp-content/uploads/2025/12/22-12-25-dimuon-atlas.jpg
Russia plans to revive abandoned Soviet-era particle accelerator https://physicsworld.com/a/russia-plans-to-revive-abandoned-soviet-era-particle-accelerator/ Mon, 22 Dec 2025 10:00:01 +0000 https://physicsworld.com/?p=125709 Accelerator could be used to generate an intense beam of neutrinos

The post Russia plans to revive abandoned Soviet-era particle accelerator appeared first on Physics World.

]]>
Russia wants to revive a Soviet-era particle accelerator that has been abandoned since the 1990s. The Kurchatov Institute for High Energy Physics has allocated 176 million rubles (about $2m) to assess the current condition of the unfinished 600 GeV Proton Accelerator and Storage Complex (UNK) in Protvino near Moscow. The move is part of plans to strengthen Russia’s technological sovereignty and its activity in high-energy physics.

Although work on the UNK was officially halted in the 1990s, construction only ceased in 2013. At that time, a 21 km tunnel had been built at a depth of 60 m along with underground experimental hall lighting and ventilation systems.

In February 2025, physicist Mikhail Kovalchuk, president of the Kurchatov Institute National Research Center, noted in Russia’s Kommersant newspaper that enormous intellectual and material resources had been invested in the UNK’s design and development before it was cancelled.

According to Kovalchuk, Western sanctions provided an additional impetus to restore the project, as scientists that had previously worked in CERN projects could no longer do so.

“By participating in [CERN] projects, we not only preserved our scientific potential and survived a difficult period, but also enriched ourselves intellectually and technologically,” added Kovalchuk. “Today we are self-sufficient.”

Anatoli Romaniouk, a Russian particle physicist who has worked at CERN since 1990, told Physics World that a revival of the UNK will at least maintain fundamental physics research in Russia.

“If this project is realized, then there is hope that it will be possible to at least somewhat slow down the scientific lag of Russian physics with global science,” says Romaniouk.

While official plans for the accelerator have not been disclosed, it is thought that the proton beam energy could be upgraded to reach 3 TeV. Romaniouk says it is also unclear what kind of science will be done with the accelerator, which will depend on what ideas come forward.

Yet some Russian scientists say that it could be used to produce neutrinos. This would involve putting a neutrino detector nearby to characterize the beam before it is sent some 4000 km towards Lake Baikal, where a neutrino detector – the Baikal Deep Underwater Neutrino Telescope – is already installed about 1 km underwater.

“I think it’s possible to find an area of high-energy physics where the research with the help of this collider could be beneficial,” adds Romaniouk.

The post Russia plans to revive abandoned Soviet-era particle accelerator appeared first on Physics World.

]]>
News Accelerator could be used to generate an intense beam of neutrinos https://physicsworld.com/wp-content/uploads/2025/12/baikal-19-12-2025.jpg newsletter
Real-world quantum entanglement is far from an unlimited resource https://physicsworld.com/a/real-world-quantum-entanglement-is-far-from-an-unlimited-resource/ Fri, 19 Dec 2025 12:00:34 +0000 https://physicsworld.com/?p=125667 New study shows that operational limits redefine the cost and convertibility of entanglement

The post Real-world quantum entanglement is far from an unlimited resource appeared first on Physics World.

]]>
Achieving a profound understanding of any subject is hard. When that subject is quantum mechanics, it’s even harder. And when one departs from ideal theoretical scenarios and enters the real world of experimental limitations, it becomes more challenging still – yet that is what physicists at the Freie Universität Berlin (FU-Berlin), Germany recently did by exploring what happens to entanglement theory in real quantum computers. In doing so, they created a bridge between two fields that have so far largely developed in parallel: entanglement theory (rooted in physics) and computational complexity (rooted in computer science).

Ebits, the standard currency of entanglement

In quantum mechanics, a composite system is said to be entangled when its total wavefunction cannot be written as a product of the states of its individual subsystems. This leads to correlations between subsystems that arise from the structure of the quantum state, not from any shared classical information. Many speed-ups achieved in quantum computing, quantum cryptography and quantum metrology rely heavily on entanglement, but not every form of entanglement is equally useful. Only specific kinds of entanglement will enable a given computational or communication task.

To make quantum technologies practical, the available entangled resources must therefore often be converted into forms suitable for specific applications. One major conversion process involves transforming partially entangled states into, or extracting them from, the maximally entangled bit (ebit) that acts as the standard unit of entanglement. High-fidelity ebits – entangled pairs that are extremely close to the ideal perfectly entangled state – can be distilled from noisy or imperfect entangled states through entanglement distillation, while entanglement dilution allows one to reconstruct the desired entangled states from purified ebits.

In an idealized setting, with an infinite number of copies of entangled states and unlimited computational power, a single quantity called the von Neumann entropy fully determines how many ebits can be extracted or are required. But reality is far less forgiving: just as we don’t have an infinite amount of gold on Earth, we never have infinite resources, and computational power is always limited.

Entanglement under finite resources

In the present work, which is published in Nature Physics, the FU-Berlin team of Lorenzo Leone, Jacopo Rizzo, Jens Eisert and Sofiene Jerbi asked what happens when these ideal assumptions break down. They studied the case in which only a finite number of entangled states – scaling at most polynomially with the number of quantum bits (qubits) in the system – is considered, and all local operations and classical communication (LOCC) are performed in a finite, polynomial time.

They found that the simple correspondence between von Neumann entropy and extractable or required ebits no longer holds: even when a state has a large von Neumann entropy, the number of ebits that can be efficiently extracted may be much lower. In these cases, the number is bounded instead by the min-entropy of the reduced state – an operational measure, determined solely by the state’s largest eigenvalue, that captures how much entanglement can be reliably distilled from a single copy of the state without averaging over many uses. On the other hand, even a state with negligible von Neumann entropy may require a maximal ebit budget for efficient dilution.
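To make the two quantities concrete, the short sketch below (an illustration of the definitions, not code from the paper) compares the von Neumann entropy and the min-entropy of a reduced state for a weakly entangled pair of qubits; the min-entropy never exceeds the von Neumann entropy.

```python
# Compare von Neumann entropy and min-entropy for a partially entangled
# two-qubit pure state |psi> = sqrt(0.9)|00> + sqrt(0.1)|11>.
import numpy as np

psi = np.zeros(4)
psi[0], psi[3] = np.sqrt(0.9), np.sqrt(0.1)

# reduced density matrix of the first qubit
M = psi.reshape(2, 2)
rho = M @ M.conj().T
eigvals = np.linalg.eigvalsh(rho)
eigvals = eigvals[eigvals > 1e-12]          # drop numerical zeros

S_vn = -np.sum(eigvals * np.log2(eigvals))  # von Neumann entropy (~0.47)
S_min = -np.log2(eigvals.max())             # min-entropy (~0.15)

print(f"von Neumann entropy: {S_vn:.3f} ebits per copy")
print(f"min-entropy:         {S_min:.3f} ebits")
```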

Leone and Eisert say they were inspired to perform this study by recent work on so-called pseudo-entangled states, which are states that look a lot more entangled than they actually are to computationally bounded observers. The construction of pseudo-entangled states highlights a dramatic worst-case scenario: a state that appears almost unentangled by conventional measures may still require a large number of ebits to create efficiently. The takeaway is that computability matters, and quantum resources you might have thought were available may be, in effect, locked away simply because they cannot be processed efficiently. In other words, practical limitations make the line between a “resource” and a “usable resource” even sharper.

Quantum resources in a limited world

The researchers say that their study raises multiple questions for future exploration. One such question concerns whether a similar computational‐efficiency gap exists for other quantum resources such as magic and coherence. Another is whether one can build a full resource theory with complexity constraints, where quantities reflect not just what can be converted, but how efficient that conversion is.

Regardless of the answers, the era of entanglement under infinite book‐keeping is giving way to an era of entanglement under limited books, limited clocks and limited gates. And in this more realistic space, quantum technologies may still shine, but the calculus of what can be done and what can be harnessed needs a serious retooling.

The post Real-world quantum entanglement is far from an unlimited resource appeared first on Physics World.

]]>
Research update New study shows that operational limits redefine the cost and convertibility of entanglement https://physicsworld.com/wp-content/uploads/2025/12/information-theoretic-and-computational-entanglement-manipulation.webp newsletter1
Hybrid deep-learning model eases brachytherapy planning https://physicsworld.com/a/hybrid-deep-learning-model-eases-brachytherapy-planning/ Fri, 19 Dec 2025 09:30:42 +0000 https://physicsworld.com/?p=125603 The BCTVNet neural network provides accurate and rapid target volume delineation for cervical cancer brachytherapy

The post Hybrid deep-learning model eases brachytherapy planning appeared first on Physics World.

]]>
CT scan slices and target contours

Brachytherapy – a cancer treatment that destroys tumours using small radioactive sources implanted inside the body – plays a critical role in treating cervical cancer, offering an important option for patients with inoperable locally advanced disease. Brachytherapy can deliver high radiation doses directly to the tumour while ensuring nearby healthy tissues receive minimal dose; but its effectiveness relies on accurate delineation of the treatment target. A research team in China is using a hybrid deep-learning model to help with this task.

Planning brachytherapy treatments requires accurate contouring of the clinical target volume (CTV) on a CT scan, a task that’s traditionally performed manually. The limited soft-tissue contrast of CT, however, can result in unclear target boundaries, while applicator or needle insertion (used to deliver the radioactive sources) can deform and displace nearby organs. This makes manual contouring a time-consuming and subjective task that requires a high level of operator expertise.

Automating this process could reduce reliance on operator experience, increase workflow efficiency and improve contouring consistency. With this aim, the research team – headed up by He Ma from Northeastern University and Lin Zhang from Shanghai University of International Business and Economics – developed a 3D hybrid neural network called BCTVNet.

Currently, most brachytherapy segmentation models are based on convolutional neural networks (CNNs). CNNs effectively capture local structural features and can model fine anatomical details but struggle with long-range dependencies, which can cause problems if the target extends across multiple CT slices. Another option is to use transformer-based models that can integrate spatial information across distant regions and slices; but these are less effective at capturing fine-grained local detail.

To combine the strengths of both, BCTVNet integrates CNN with transformer branches to provide strong local detail extraction along with global information integration. BCTVNet performs 3D segmentation directly on post-insertion CT images, enabling the CTV to be defined based on the actual treatment geometry.

Model comparisons

Zhang, Ma and colleagues assessed the performance of BCTVNet using a private CT dataset from 95 patients diagnosed with locally advanced cervical cancer and treated with CT-guided 3D brachytherapy (76 in the training set, 19 in the test set). The scans had an average of 96 slices per patient and a slice thickness of 3 mm.

CT scans used to plan cervical cancer brachytherapy often exhibit unclear target boundaries. To enhance the local soft-tissue contrast and improve boundary recognition, the researchers pre-processed the CT volumes with a 3D version of the CLAHE (contrast-limited adaptive histogram equalization) algorithm, which processes the entire CT scan as a volumetric input. They then normalized the intensity values to standardize the input for the segmentation models.
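The team’s own volumetric CLAHE implementation isn’t reproduced here, but a slice-wise stand-in using scikit-image’s equalize_adapthist conveys the idea (the function and its parameters are standard scikit-image; treating the volume slice by slice is a simplification of the 3D version described above).

```python
# Rough stand-in for the 3D CLAHE pre-processing step: contrast-enhance each
# CT slice, then normalize intensities for the segmentation network.
import numpy as np
from skimage import exposure

def preprocess_ct(volume: np.ndarray, clip_limit: float = 0.01) -> np.ndarray:
    """Enhance and normalize a CT volume of shape (slices, height, width)."""
    v = (volume - volume.min()) / (volume.max() - volume.min())  # to [0, 1]
    enhanced = np.stack(
        [exposure.equalize_adapthist(s, clip_limit=clip_limit) for s in v]
    )
    return (enhanced - enhanced.mean()) / enhanced.std()  # zero mean, unit std
```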

The researchers compared BCTVNet with 12 popular CNN- and transformer-based segmentation models, evaluating segmentation performance via a series of metrics, including Dice similarity coefficient (DSC), Jaccard index, Hausdorff distance 95th percentile (HD95) and average surface distance.

Contours generated by BCTVNet were closest to the ground truth, reaching a DSC of 83.24% and a HD95 (maximum distance from ground truth excluding the worst 5%) of 3.53 mm. BCTVNet consistently outperformed the other models across all evaluation metrics. It also demonstrated strong classification accuracy, with a precision of 82.10% and a recall of 85.84%, implying fewer false detections and successful capture of target regions.
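As a reminder of what the headline overlap metrics measure, here is a minimal sketch (illustrative, not the study’s code) of the Dice and Jaccard calculations for binary segmentation masks.

```python
# Dice and Jaccard overlap metrics for boolean masks of any dimensionality.
import numpy as np

def dice(pred, truth):
    """DSC = 2|A n B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    """Jaccard = |A n B| / |A u B|."""
    inter = np.logical_and(pred, truth).sum()
    return inter / np.logical_or(pred, truth).sum()

rng = np.random.default_rng(0)
truth = rng.random((96, 128, 128)) > 0.7   # toy "ground truth" volume
pred = truth.copy()
flip = rng.random(truth.shape) < 0.05      # corrupt 5% of voxels
pred[flip] = ~pred[flip]

print(f"DSC = {dice(pred, truth):.3f}, Jaccard = {jaccard(pred, truth):.3f}")
```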

To evaluate the model’s generalizability, the team conducted additional experiments on the public dataset SegTHOR, which contains 60 thoracic 3D CT scans (40 for training, 20 for testing) from patients with oesophageal cancer. Here again, BCTVNet achieved the best scores among all the segmentation models, with the highest average DSC of 87.09% and the lowest average HD95 of 7.39 mm.

“BCTVNet effectively overcomes key challenges in CTV segmentation and achieves superior performance compared to existing methods,” the team concludes. “The proposed approach provides an effective and reliable solution for automatic CTV delineation and can serve as a supportive tool in clinical workflows.”

The researchers report their findings in Machine Learning: Science and Technology.

The post Hybrid deep-learning model eases brachytherapy planning appeared first on Physics World.

]]>
Research update The BCTVNet neural network provides accurate and rapid target volume delineation for cervical cancer brachytherapy https://physicsworld.com/wp-content/uploads/2025/12/19-12-25-bctvnet-perfomance-comparison-featured.jpg newsletter1
Pioneers of 2D metals win the Physics World 2025 Breakthrough of the Year https://physicsworld.com/a/pioneers-of-2d-metals-win-the-physics-world-2025-breakthrough-of-the-year/ Thu, 18 Dec 2025 14:15:42 +0000 https://physicsworld.com/?p=125684 Researchers describe their success with five metals as the “tip of the iceberg”

The post Pioneers of 2D metals win the <em>Physics World</em> 2025 Breakthrough of the Year appeared first on Physics World.

]]>
Photograph of the apparatus used to create 2D metals

The Physics World 2025 Breakthrough of the Year is awarded to Guangyu Zhang, Luojun Du and colleagues at the Institute of Physics of the Chinese Academy of Sciences for producing the first 2D sheets of metal. The team produced five atomically thin 2D metals – bismuth, tin, lead, indium and gallium – with the thinnest being around 6.3 Å. The researchers say their work is just the “tip of the iceberg” and now aim to use their new materials to probe the fundamentals of physics. Their breakthrough could also lead to the development of new technologies.

Since the discovery of graphene – a sheet of carbon just one atom thick – in 2004, hundreds of other 2D materials have been fabricated and studied. In most of these, layers of covalently bonded atoms are separated by gaps where neighbouring layers are held together only by weak van der Waals (vdW) interactions, making it relatively easy to “shave off” single layers to make 2D sheets. Many thought that making atomically thin metals would be impossible given that each atom in a metal is strongly bonded to surrounding atoms in all directions.

The technique developed by Zhang, Du and colleagues involves heating powders of pure metals between two monolayer-MoS2/sapphire vdW anvils. Once the metal powders had melted into a droplet, the researchers applied a pressure of 200 MPa and continued this “vdW squeezing” until the opposite sides of the anvils cooled to room temperature and 2D sheets of metal were formed.

“Right now, we have reported five single element metals, but actually we can do more because of the 88 metals in the periodic table,” Zhang explains in today’s episode of the Physics World Weekly podcast. In the podcast, he also talks about the team’s motivation for creating 2D metals and some of the possible technological applications of the materials.

The Breakthrough of the Year was chosen by the Physics World editorial team. We looked back at all the scientific discoveries we have reported on since 1 January and picked the most important. In addition to being reported in Physics World in 2025, the breakthrough must meet the following criteria:

  • Significant advance in knowledge or understanding
  • Importance of work for scientific progress and/or development of real-world applications
  • Of general interest to Physics World readers

Before we picked our winners, we released the Physics World Top 10 Breakthroughs for 2025, which served as our shortlist. The other nine breakthroughs are listed below in no particular order.

Finding the stuff of life on an asteroid

Tim McCoy and Cari Corrigan

To Tim McCoy, Sara Russell, Danny Glavin, Jason Dworkin, Yoshihiro Furukawa, Ann Nguyen, Scott Sandford, Zack Gainsforth and an international team of collaborators for identifying salt, ammonia, sugar, nitrogen- and oxygen-rich organic materials, and traces of metal-rich supernova dust, in samples returned from the near-Earth asteroid 101955 Bennu. The incredible chemical richness of this asteroid, which NASA’s OSIRIS-REx spacecraft visited in 2020, lends support to the longstanding hypothesis that asteroid impacts could have “seeded” the early Earth with the raw ingredients needed for life to form. The discoveries also enhance our understanding of how Bennu and other objects in the solar system formed out of the disc of material that coalesced around the young Sun.

The first superfluid molecule

To Takamasa Momose of the University of British Columbia, Canada, and Susumu Kuma of the RIKEN Atomic, Molecular and Optical Physics Laboratory, Japan for observing superfluidity in a molecule for the first time. Molecular hydrogen is the simplest and lightest of all molecules, and theorists predicted that it would enter a superfluid state at a temperature between 1‒2 K. But this is well below the molecule’s freezing point of 13.8 K, so Momose, Kuma and colleagues first had to develop a way to keep the hydrogen in a liquid state. Once they did that, they then had to work out how to detect the onset of superfluidity. It took them nearly 20 years, but by confining clusters of hydrogen molecules inside helium nanodroplets, embedding a methane molecule within the clusters, and monitoring the methane’s rotation, they were finally able to do it. They now plan to study larger clusters of hydrogen, with the aim of exploring the boundary between classical and quantum behaviour in this system.

Hollow-core fibres break 40-year limit on light transmission

To researchers at the University of Southampton and Microsoft Azure Fiber in the UK, for developing a new type of optical fibre that reduces signal loss, boosts bandwidth and promises faster, greener communications. The team, led by Francesco Poletti, achieved this feat by replacing the glass core of a conventional fibre with air and using glass membranes that reflect light at certain frequencies back into the core to trap the light and keep it moving through the fibre’s hollow centre. Their results show that the hollow-core fibres exhibit 35% less attenuation than standard glass fibres – implying that fewer amplifiers would be needed in long cables – and increase transmission speeds by 45%. Microsoft has begun testing the new fibres in real systems, installing segments in its network and sending live traffic through them. These trials open the door to gradual rollout and Poletti suggests that the hollow-core fibres could one day replace existing undersea cables.
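The practical pay-off of lower attenuation is longer spans between amplifiers. The sketch below illustrates the idea with assumed numbers – the 0.2 dB/km baseline loss and 50 dB amplifier budget are our own illustrative values, with only the 35% improvement taken from the article.

```python
# Illustrative span calculation (our numbers, except the 35% figure):
# lower attenuation lets a signal travel further before needing re-amplification.

BASELINE_LOSS_DB_PER_KM = 0.20                                     # assumed conventional-fibre loss
HOLLOW_CORE_LOSS_DB_PER_KM = BASELINE_LOSS_DB_PER_KM * (1 - 0.35)  # 35% less attenuation
AMPLIFIER_BUDGET_DB = 50.0                                         # assumed tolerable loss per span

def max_span_km(loss_db_per_km: float, budget_db: float) -> float:
    """Longest fibre span before the accumulated loss exhausts the budget."""
    return budget_db / loss_db_per_km

print(f"Conventional fibre: {max_span_km(BASELINE_LOSS_DB_PER_KM, AMPLIFIER_BUDGET_DB):.0f} km per span")
print(f"Hollow-core fibre:  {max_span_km(HOLLOW_CORE_LOSS_DB_PER_KM, AMPLIFIER_BUDGET_DB):.0f} km per span")
```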

First patient treatments delivered with proton arc therapy

Trento Proton Therapy Centre researchers

To Francesco Fracchiolla and colleagues at the Trento Proton Therapy Centre in Italy for delivering the first clinical treatments using proton arc therapy (PAT). Proton therapy – a precision cancer treatment – is usually performed using pencil-beam scanning to precisely paint the dose onto the tumour. But this approach can be limited by the small number of beam directions deliverable in an acceptable treatment time. PAT overcomes this by moving to an arc trajectory with protons delivered over a large number of beam angles and the potential to optimize the number of energies used for each beam direction. Working with researchers at RaySearch Laboratories in Sweden, the team performed successful dosimetric comparisons with clinical proton therapy plans. Following a feasibility test that confirmed the viability of clinical PAT delivery, the researchers used PAT to treat nine cancer patients. Importantly, all treatments were performed using the centre’s existing proton therapy system and clinical workflow.

A protein qubit for quantum biosensing

To Peter Maurer and David Awschalom at the University of Chicago Pritzker School of Molecular Engineering and colleagues for designing a protein quantum bit (qubit) that can be produced directly inside living cells and used as a magnetic field sensor. While many of today’s quantum sensors are based on nitrogen–vacancy (NV) centres in diamond, they are large and hard to position inside living cells. Instead, the team used fluorescent proteins, which are just 3 nm in diameter and can be produced by cells at a desired location with atomic precision. These proteins possess similar optical and spin properties to those of NV centre-based qubits – namely that they have a metastable triplet state. The researchers used a near-infrared laser pulse to optically address a yellow fluorescent protein and read out its triplet spin state with up to 20% spin contrast. They then genetically modified the protein to be expressed in bacterial cells and measured signals with a contrast of up to 8%. They note that although this performance does not match that of NV quantum sensors, it could enable magnetic resonance measurements directly inside living cells, which NV centres cannot do.

Highest-resolution images ever taken of a single atom

To the team led by Yichao Zhang at the University of Maryland and Pinshane Huang of the University of Illinois at Urbana-Champaign for capturing the highest-resolution images ever taken of individual atoms in a material. The team used an electron-microscopy technique called electron ptychography to achieve a resolution of 15 pm, which is about 10 times smaller than the size of an atom. They studied a stack of two atomically-thin layers of tungsten diselenide, which were rotated relative to each other to create a moiré superlattice. These twisted 2D materials are of great interest to physicists because their electronic properties can change dramatically with small changes in rotation angle. The extraordinary resolution of their microscope allowed them to visualize collective vibrations in the material called moiré phasons. These are similar to phonons, but had never been observed directly until now. The team’s observations align with theoretical predictions for moiré phasons. Their microscopy technique should boost our understanding of the role that moiré phasons and other lattice vibrations play in the physics of solids. This could lead to the engineering of new and useful materials.
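The extreme twist-angle sensitivity of such moiré superlattices follows from simple geometry: two identical lattices with lattice constant a, twisted by a small angle θ, produce a moiré period of roughly a/(2 sin(θ/2)). The Python sketch below illustrates this using a textbook-level lattice constant for WSe2; the numbers are our illustration, not the team’s.

```python
import math

# Moire period for two identical lattices twisted by a small angle theta:
# L = a / (2 * sin(theta / 2)). The lattice constant is a textbook-level
# value for WSe2; the twist angles are our illustrative choices.

A_NM = 0.328  # approximate WSe2 lattice constant in nanometres

for theta_deg in (0.5, 1.0, 2.0, 5.0):
    theta = math.radians(theta_deg)
    period_nm = A_NM / (2 * math.sin(theta / 2))
    print(f"twist {theta_deg:3.1f} deg -> moire period {period_nm:5.1f} nm")
```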

Quantum control of individual antiprotons

Photo of a physicist working at the BASE experiment

To CERN’s BASE collaboration for being the first to perform coherent spin spectroscopy on a single antiproton – the antimatter counterpart of the proton. Their breakthrough is the most precise measurement yet of the antiproton’s magnetic properties, and could be used to test the Standard Model of particle physics. The experiment begins with the creation of high-energy antiprotons in an accelerator. These must be cooled (slowed down) to cryogenic temperatures without being lost to annihilation. Then, a single antiproton is held in an ultracold electromagnetic trap, where microwave pulses manipulate its spin state. The resulting resonance peak was 16 times narrower than previous measurements, enabling a significant leap in precision. This level of quantum control opens the door to highly sensitive comparisons of the properties of matter (protons) and antimatter (antiprotons). Unexpected differences could point to new physics beyond the Standard Model and may also reveal why there is much more matter than antimatter in the visible universe.

A smartphone-based early warning system for earthquakes

To Richard Allen, director of the Berkeley Seismological Laboratory at the University of California, Berkeley, and Google’s Marc Stogaitis and colleagues for creating a global network of Android smartphones that acts as an earthquake early warning system. Traditional early warning systems use networks of seismic sensors that rapidly detect earthquakes in areas close to the epicentre and issue warnings across the affected region. Building such seismic networks, however, is expensive, and many earthquake-prone regions do not have them. The researchers instead utilized the accelerometers in millions of phones across 98 countries to create the Android Earthquake Alert (AEA) system. Testing the system between 2021 and 2024 led to the detection of an average of 312 earthquakes a month, with magnitudes ranging from 1.9 to 7.8. For earthquakes of magnitude 4.5 or higher, the system issued “TakeAction” alerts around 60 times per month, delivering an average of 18 million individual alerts per month. The system also delivered less urgent “BeAware” alerts to regions expected to experience a shaking intensity of 3 or 4. The team now aims to produce maps of ground shaking, which could assist the emergency response services following an earthquake.
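The article does not spell out the detection logic, but the core idea – flagging when a phone’s acceleration departs from steady gravity – can be sketched in a few lines of Python. The threshold and sample readings below are entirely illustrative, not Google’s algorithm.

```python
import math

# Toy detector (our illustration, not Google's algorithm): flag candidate shaking
# when the measured acceleration magnitude departs from steady gravity.

GRAVITY = 9.81      # m/s^2
THRESHOLD = 0.3     # hypothetical trigger level, m/s^2

def shaking_detected(samples) -> bool:
    """samples: iterable of (ax, ay, az) accelerometer readings in m/s^2."""
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax**2 + ay**2 + az**2)
        if abs(magnitude - GRAVITY) > THRESHOLD:
            return True  # a real system corroborates detections across many phones
    return False

quiet = [(0.0, 0.0, 9.81)] * 5
shaky = quiet + [(0.5, 0.2, 10.4)]
print(shaking_detected(quiet), shaking_detected(shaky))  # False True
```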

A “weather map” for a gas giant exoplanet

To Lisa Nortmann at Germany’s University of Göttingen and colleagues for creating the first detailed “weather map” of an exoplanet. The forecast for exoplanet WASP-127b is brutal, with winds reaching 33,000 km/h – much faster than winds found anywhere in the Solar System. WASP-127b is a gas giant located about 520 light-years from Earth, and the team used the CRIRES+ instrument on the European Southern Observatory’s Very Large Telescope to observe the exoplanet as it transited its star in less than 7 h. Spectral analysis of the starlight that filtered through WASP-127b’s atmosphere revealed Doppler shifts caused by supersonic equatorial winds. By analysing the range of Doppler shifts, the team created a rough weather map of WASP-127b, even though they could not resolve light coming from specific locations on the exoplanet. Nortmann and colleagues concluded that the exoplanet’s poles are cooler than the rest of WASP-127b, where temperatures can exceed 1000 °C. Water vapour was also detected in the atmosphere, raising the possibility of exotic forms of rain.
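For a sense of the measurement, a short back-of-envelope script shows the size of the Doppler shift such winds imprint on a spectral line via Δλ/λ = v/c. The 2300 nm line used below is our own illustrative choice of a near-infrared wavelength, not one from the study.

```python
# Back-of-envelope Doppler shift (our numbers): delta_lambda / lambda = v / c.

C_M_PER_S = 299_792_458.0
wind_m_per_s = 33_000 / 3.6   # 33,000 km/h converted to m/s (~9.2 km/s)
line_nm = 2300.0              # hypothetical near-infrared line; CRIRES+ works in the IR

shift_nm = line_nm * wind_m_per_s / C_M_PER_S
print(f"Wind speed: {wind_m_per_s / 1000:.1f} km/s")
print(f"Shift on a {line_nm:.0f} nm line: {shift_nm:.3f} nm")
```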


Physics World‘s coverage of the Breakthrough of the Year is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research.

The post Pioneers of 2D metals win the <em>Physics World</em> 2025 Breakthrough of the Year appeared first on Physics World.

]]>
News Researchers describe their success with five metals as the “tip of the iceberg” https://physicsworld.com/wp-content/uploads/2025/12/pw-breakthrough-oty-2025-featured-1.png
How to make 2D metals: Guangyu Zhang on his team’s award-winning research https://physicsworld.com/a/how-to-make-2d-metals-guangyu-zhang-on-his-teams-award-winning-research/ Thu, 18 Dec 2025 14:13:52 +0000 https://physicsworld.com/?p=125696 This podcast features a winner of the Physics World 2025 Breakthrough of the Year

The post How to make 2D metals: Guangyu Zhang on his team’s award-winning research appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features Guangyu Zhang. Along with his colleagues at the Institute of Physics of the Chinese Academy of Sciences, Zhang has bagged the 2025 Physics World Breakthrough of the Year award for creating the first 2D metals.

In a wide-ranging conversation, we chat about the motivation behind the team’s research; the challenges in making 2D metals and how these were overcome; and how 2D metals could be used to boost our understanding of condensed-matter physics and create new technologies.

I am also joined by my Physics World colleague Matin Durrani to talk about some of the exciting physics that we will be showcasing in 2026.


Physics World‘s coverage of the Breakthrough of the Year is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research.

The post How to make 2D metals: Guangyu Zhang on his team’s award-winning research appeared first on Physics World.

]]>
Podcasts This podcast features a winner of the Physics World 2025 Breakthrough of the Year https://physicsworld.com/wp-content/uploads/2025/12/18-12-25-guangyu-zhang-list.jpg
Quantum cluster targets business growth https://physicsworld.com/a/quantum-cluster-targets-business-growth/ Thu, 18 Dec 2025 12:52:22 +0000 https://physicsworld.com/?p=125645 The Harwell Quantum Cluster is building on the success of the National Quantum Computing Centre to catalyse innovation and enable UK-based companies to thrive in a global marketplace

The post Quantum cluster targets business growth appeared first on Physics World.

]]>
Julia Sutcliffe (second from the left), Chief Scientific Advisor for the UK's Department for Business and Trade, visits the NQCC's experimental facilities on the Harwell Cluster (Courtesy: NQCC)

Ever since the National Quantum Computing Centre was launched five years ago, its core mission has been to accelerate the pathway towards practical adoption of the technology. That has required technical innovation to scale up hardware platforms and create the software tools and algorithms needed to tackle real-world applications, but there has also been a strong focus on engaging with companies to build connections, provide access to quantum resources, and identify opportunities for deriving near-term value from quantum computing.

It makes sense, then, that the NQCC should form the cornerstone of a new Quantum Cluster at the Harwell Campus of Science and Innovation in Oxfordshire. The hope is that the NQCC’s technical expertise and infrastructure, combined with the services and facilities available on the wider Harwell Campus, will provide a magnet for new quantum start-ups as well as overseas companies that are seeking to establish a presence within the UK’s quantum ecosystem.

By accelerating collaboration across government, industry and academia, we will turn research excellence into industrial strength.

“We want to leverage the public investment that has been made into the NQCC to catalyse business growth and attract more investment into the UK’s quantum sector,” said Najwa Sidqi, manager of the Harwell Quantum Cluster, at the official launch event in November. “By accelerating collaboration across government, industry and academia, we will turn research excellence into industrial strength.”

The cluster, which has been ramping up its activities over the last year, is working to ambitious targets. Over the next decade the aim is to incubate at least 100 quantum companies on the Harwell site, create more than 1000 skilled jobs, and generate more than £1 billion of private and public investment. “Our aim is to build the foundations of a globally competitive quantum economy that delivers impact far beyond science and research,” added Sidqi.

Tangible evidence that the approach works is offered by the previous clustering activities on the Harwell Campus, notably the Space Cluster that has expanded rapidly since its launch in 2010. Anchored by the RAL Space national laboratory and bolstered by the presence of ESA and the UK Space Agency, the Space Cluster now comprises more than 100 organizations that range from small start-ups to the UK technology hubs of global heavyweights such as Airbus and Lockheed Martin.

More generally, the survival rate of start-up companies operating on the Harwell site is around 95%, compared with an average of around 50%. “At Harwell there is a high density of innovators sharing the same space, which generates more connections and more ideas,” said Julia Sutcliffe, Chief Scientific Advisor for the UK’s Department for Business and Trade. “It provides an incredible combination of world-class infrastructure and expertise, accelerating the innovation pathway and helping to create a low-risk environment for early-stage businesses and investors.”

The NQCC has already seeded that innovation activity through its early engagement with both quantum companies and end users of the technology. One major initiative has been the testbed programme, which has enabled the NQCC to invest £30m in seven hardware companies to deploy prototype quantum computers on the Harwell Campus. As well as providing access to operational systems based on all of the leading qubit modalities, the testbed programme has also provided an impetus for inward investment and job creation.

One clear example is provided by QuEra Computing, a US-based spin-off from Harvard University and the Massachusetts Institute of Technology that is developing a hardware platform based on neutral atoms. QuEra was one of the companies to win funding through the testbed programme, with the firm setting up a UK-based team to deploy its prototype system on the Harwell Campus. But the company could soon see the benefits of establishing a UK centre for technology development on the site. “Harwell is immensely helpful to us,” said Ed Durking, Corporate Director of QuEra Computing UK. “It’s a nucleus where we enjoy access to world-class talent, vendors, customers, and suppliers.”

On a practical level, establishing its UK headquarters on the Harwell Campus has provided QuEra with easy access to specialist contractors and services for fitting out its laboratories. In June the company moved into a building that is fully equipped with flexible lab space for R&D and manufacturing, and since then the UK-based team has started to build the company’s most powerful quantum computer at the facility. Longer term, establishing a base within the UK could open the door to new collaborations and funding opportunities for QuEra to further develop its technology, with the company now focused on integrating full error correction into its neutral-atom platform by 2026.

Access to the world-class infrastructure on the Harwell Campus has benefitted the other testbed providers in different ways. For ORCA Computing, a UK company developing and manufacturing photonic quantum computers, the goal was to install a testbed system within Harwell’s high-performance computing centre rather than the NQCC’s experimental labs. “Our focus is to build commercial photonic quantum systems that can be integrated into conventional datacentres, enabling hybrid quantum-classical workflows for real-world applications,” explained Geoff Barnes, Head of Customer Success at ORCA. “Having the NQCC as an expert customer enabled us to demonstrate and validate our capabilities, building the system in our own facility and then deploying it within an operational environment.”

This process provided a valuable learning experience for the ORCA engineers. The experts at Harwell helped them to navigate the constraints of installing equipment within a live datacentre, while also providing practical assistance with the networking infrastructure. Now that the system is up and running, the Harwell site also provides ORCA with an open environment for showcasing its technology to prospective customers. “As well as delivering a testbed system to the NQCC, we can now demonstrate our platform to clients within a real-world setting,” added Barnes. “It has also been a critical step toward commercial deployment on our roadmap, enabling our partners to access our systems remotely for applications development.”

Michael Cuthbert (left), Director of the NQCC, takes Sutcliffe and other visitors on a tour of the national lab (Courtesy: NQCC)

While the NQCC has already played a vital role in supporting companies as they make the transition towards commercialization, the Quantum Cluster has a wider remit to extend those efforts into other quantum technologies, such as sensing and communications, that are already finding real-world applications. It will also have a more specific focus on attracting new investment into the UK, and supporting the growth of companies that are transitioning from the start-up phase to establish larger scale commercial operations.

“In the UK we have traditionally been successful in creating spin-off activities from our strong research base, but it has been more challenging to generate the large capital investments needed to scale businesses in the technology sector,” commented Sidqi. “We want to strengthen that pipeline to ensure that the UK can translate its leadership in quantum research and early-stage innovation into long-term prosperity.”

To accelerate that process the Quantum Cluster announced a strategic partnership with Quantum Exponential, the first UK-based venture capital fund to be entirely focused on quantum technologies. Ian Pearson, the non-executive chairman of Quantum Exponential, explained that the company is working to generate an investment fund of £100m by the end of 2027 that will support quantum companies as they commercialize their technologies and scale up their businesses. “Now is the time for investment into the quantum sector,” said Pearson. “A specialist quantum fund with the expertise needed to analyse and price deals, and to do all the necessary due diligence, will attract more private investment that will help UK companies to grow and scale.”

Around two-thirds of the investments will be directed towards UK-based companies, and as part of the partnership Quantum Exponential will work with the Quantum Cluster to identify and support high-potential quantum businesses within the Harwell Campus. The Quantum Cluster will also play a crucial role in boosting investor confidence – particularly in the unique ability of the Harwell Campus to nurture successful technology businesses – and making connections with international innovation networks to provide UK-based companies with improved access to global markets.

“This new cluster strengthens our national capability and sends a clear signal to global investors that the UK is the place to develop and scale quantum technologies,” commented Michael Cuthbert, Director of the NQCC. “It will help to ensure that quantum innovation delivers benefits not just for science and industry, but for the economy and society as a whole.”

The post Quantum cluster targets business growth appeared first on Physics World.

]]>
Analysis The Harwell Quantum Cluster is building on the success of the National Quantum Computing Centre to catalyse innovation and enable UK-based companies to thrive in a global marketplace https://physicsworld.com/wp-content/uploads/2025/12/2025-12-nqcc-na-feature-image.jpg newsletter
Transparent and insulating aerogel could boost energy efficiency of windows https://physicsworld.com/a/transparent-and-insulating-aerogel-could-boost-energy-efficiency-of-windows/ Thu, 18 Dec 2025 12:07:46 +0000 https://physicsworld.com/?p=125635 Tiny pores block heat while transmitting light

The post Transparent and insulating aerogel could boost energy efficiency of windows appeared first on Physics World.

]]>
An aerogel material that is more than 99% transparent to light and is an excellent thermal insulator has been developed by Ivan Smalyukh and colleagues at the University of Colorado Boulder in the US. Called MOCHI, the material can be manufactured in large slabs and could herald a major advance in energy-efficient windows.

While the insulating properties of building materials have steadily improved over the past decades, windows have consistently lagged behind. The problem is that current materials used in windows – mostly glass – have an inherent trade-off between insulating ability and optical transparency. This is addressed to some extent by using two or three layers of glass in double- and triple-glazed windows. However, windows remain the largest source of heat loss from most buildings.

A solution to the window problem could lie with aerogels, in which the liquid component of a regular gel is replaced with air. The result is a network of pores that makes aerogels the lightest solid materials ever produced. If the solid component is a poor conductor of heat, then the aerogel will be an extremely good thermal insulator.

“Conventional aerogels, like the silica- and cellulose-based ones, are common candidates for transparent, thermally insulating materials,” Smalyukh explains. “However, their visible-range optical transparency is intrinsically limited by the scattering induced by their polydisperse pores – which can range from nanometres to micrometres in scale.”

Hazy appearance

While this problem can be overcome fairly easily in thin aerogel films, creating appropriately sized pores in slabs large enough for practical windows has so far proven much more difficult, leading to a hazy, translucent appearance.

Now, Smalyukh’s team has developed a new fabrication technique involving a removable template. Their approach hinges on the tendency of surfactant molecules called CPCL to self-assemble in water. Under carefully controlled conditions, the molecules spontaneously form networks of cylindrical tubes, called micelles. Once assembled, the aerogel precursor – a silicone material called polysiloxane – condenses around the micelles, freezing their structure in place.

“The ensuing networks of micelle-templated polysiloxane tubes could be then preserved upon the removal of surfactant, and replacing the fluid solvent with air,” Smalyukh describes. The end result was a consistent mesoporous structure, with pores ranging from 2–50 nm in diameter. This is too small to scatter visible light, but large enough to interfere with heat transport.
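Why such small pores stay invisible can be estimated from Rayleigh scattering, in which scattering strength grows as the fourth power of the ratio of scatterer size to wavelength. The sketch below naively applies this small-particle scaling to both pore sizes to show the rough magnitude of the effect; the micrometre-scale comparison pore is our illustrative choice.

```python
# Naive Rayleigh-limit estimate (our illustration): for scatterers much smaller
# than the wavelength, scattered intensity scales as (d / wavelength)**4.

def relative_scattering(d_nm: float, wavelength_nm: float = 550.0) -> float:
    """Relative scattering strength in the small-particle (Rayleigh) limit."""
    return (d_nm / wavelength_nm) ** 4

hazy_pore = relative_scattering(1000.0)  # micrometre-scale pore in a conventional aerogel
mochi_pore = relative_scattering(50.0)   # upper end of MOCHI's 2-50 nm pore range
print(f"Suppression factor: {hazy_pore / mochi_pore:,.0f}x")  # ~160,000x
```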

As a result, the mesoporous, optically clear heat insulator (MOCHI) maintains its transparency even when fabricated in slabs over 3 cm thick and a square metre in area. This suggests that it could be used to create practical windows.

High thermal performance

“We demonstrated thermal conductivity lower than that of still air, as well as an average light transmission above 99%,” Smalyukh says. “Therefore, MOCHI glass units can provide a similar rate of heat transfer to high-performing building roofs and walls, with thicknesses comparable to double pane windows.”
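A steady-state conduction estimate, Q = kAΔT/d, gives a feel for what “lower than still air” means for heat loss. The values below are illustrative assumptions for equal-thickness panes, not numbers from the paper.

```python
# Steady-state conduction estimate, Q = k * A * dT / d, for equal-thickness panes.
# All numbers are illustrative assumptions, not values from the paper.

AREA_M2 = 1.0       # pane area
DELTA_T_K = 20.0    # indoor-outdoor temperature difference
THICKNESS_M = 0.03  # comparable to the >3 cm MOCHI slabs

conductivities_w_per_mk = {
    "window glass":    1.0,    # typical textbook value
    "still air":       0.026,
    "MOCHI (assumed)": 0.02,   # hypothetical: "lower than still air" per the article
}

for material, k in conductivities_w_per_mk.items():
    q_watts = k * AREA_M2 * DELTA_T_K / THICKNESS_M
    print(f"{material:16s} {q_watts:7.1f} W")
```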

If rolled out on commercial scales, this could lead to entirely new ways to manage interior heating and cooling. According to the team’s calculations, a building retrofitted with MOCHI windows could boost its energy efficiency from around 6% (a typical value in current buildings) to over 30%, while reducing the heat energy passing through by around 50%.

With its ability to admit light while blocking heat transport, the researchers suggest that MOCHI could unlock entirely new functionalities for conventional windows. “Such transparent insulation also allows for efficient harnessing of thermal energy from unconcentrated solar radiation in different climate zones, promising the use of parts of opaque building envelopes as solar thermal energy generating panels,” Smalyukh adds.

The new material is described in Science.

The post Transparent and insulating aerogel could boost energy efficiency of windows appeared first on Physics World.

]]>
Research update Tiny pores block heat while transmitting light https://physicsworld.com/wp-content/uploads/2025/12/17-12-25-aerogel-glass-smaller.jpg
Qubit ‘recycling’ gives neutral-atom quantum computing a boost https://physicsworld.com/a/qubit-recycling-gives-neutral-atom-quantum-computing-a-boost/ Thu, 18 Dec 2025 09:00:25 +0000 https://physicsworld.com/?p=125628 Reducing atom loss and re-using already-measured atoms enables more complex quantum computations

The post Qubit ‘recycling’ gives neutral-atom quantum computing a boost appeared first on Physics World.

]]>
Errors are the bugbear of quantum computing, and they’re hard to avoid. While quantum computers derive their computational clout from the fact that their qubits can simultaneously take on multiple values, the fragility of qubit states ramps up their error rates. Many research groups are therefore seeking to reduce or manage errors so they can increase the number of qubits without reducing the whole enterprise to gibberish.

A team at the US-based firm Atom Computing is now reporting substantial success in this area thanks to a multi-part strategy for keeping large numbers of qubits operational in quantum processors based on neutral atoms. “These capabilities allow for the execution of more complex, longer circuits that are not possible without them,” says Matt Norcia, one of the Atom Computing researchers behind this work.

While neutral atoms offer several advantages over other qubit types, they traditionally have significant drawbacks for one of the most common approaches to error correction. In this approach, some of the entangled qubits are set aside as so-called “ancillaries”, used for mid-circuit measurements that can indicate how a computation is going and what error correction interventions may be necessary.
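The ancilla idea can be made concrete with a tiny statevector simulation: two data qubits hold a superposition whose Z-parity is copied onto an ancilla by CNOT gates, so measuring the ancilla reveals error information without collapsing the data. The minimal numpy sketch below is our generic illustration of the principle, not Atom Computing’s implementation.

```python
import numpy as np

# Minimal sketch (our illustration) of an ancilla-based parity check:
# two data qubits hold alpha|00> + beta|11>; CNOTs copy their Z-parity onto an
# ancilla, which can then be measured without collapsing the data superposition.

def cnot(control: int, target: int, n: int = 3) -> np.ndarray:
    """CNOT on an n-qubit register (qubit 0 is the most significant bit)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1.0
    return U

alpha, beta = 0.6, 0.8
state = np.zeros(8)
state[0b000] = alpha  # data |00>, ancilla |0>
state[0b110] = beta   # data |11>, ancilla |0>

# Map the data qubits' ZZ parity onto the ancilla (qubit 2).
state = cnot(1, 2) @ cnot(0, 2) @ state

# Probability that the ancilla reads 0 (even parity): here it is certain, so
# measuring it reveals the parity but leaves alpha|00> + beta|11> intact.
p_even = sum(abs(state[i]) ** 2 for i in range(8) if (i & 1) == 0)
print(f"P(ancilla = 0) = {p_even:.2f}")  # 1.00
```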

In neutral-atom quantum computing, however, such interventions are generally destructive. Atoms that are not in their designated state are simply binned off – a profligate approach that makes it challenging to scale up atom-based computers. The tendency to discard atoms is particularly awkward because the traps that confine atoms are already prone to losing atoms, which introduces additional errors while reducing the number of atoms available for computations.

Reduce, re-use, replenish

As well as demonstrating protocols for performing measurements to detect errors in quantum circuits with little atom loss, the researchers at Atom Computing also showed they could re-use ancillary atoms – a double-pronged way of retaining more atoms for calculations. In addition, they demonstrated that they could replenish the register of atoms for the computation from a spatially separated stash in a magneto-optic trap without compromising the quantum state of the atoms already in the register.

Norcia says that these achievements — replacing atoms from a continuous source, while reducing the number of atoms needing replacement to begin with — are key to running computations without running out of atoms. “To our knowledge, any useful quantum computations will require the execution of many layers of gates, which will not be possible unless the atom number can be maintained at a steady-state level throughout the computation,” he tells Physics World.

Cool and spaced out

Norcia and his collaborators at Microsoft Quantum, the Colorado School of Mines and Stanford University worked with ytterbium (Yb) atoms, which he describes as “natural qubits” since they have two ground states. A further advantage is that the transitions between these qubit states and other states used for imaging and cooling are weak, meaning the researchers could couple just one qubit state to these other states at a time. The team also leveraged a previously-developed approach for mid-circuit measurement that scatters light from only one qubit state and does not disturb the other, making it less destructive.

Still, Norcia tells Physics World, “the challenge was to re-use atoms, and key to this was cooling and performance.” To this end, they first had to shift the atoms undergoing mid-circuit measurements away from the atoms in the computational register, to avoid scattering laser light off the latter. They further avoided laser-related collateral damage by designing the register such that the measurement and cooling light was not at the resonant wavelength of the register atoms. Next, they demonstrated they could cool already-measured atoms for re-use in the calculation. Finally, they showed they could non-disruptively replenish these atoms with others from a magneto-optical trap positioned 300 nm below the tweezer arrays that held atoms for the computational register.

Mikhail Lukin, a physicist at Harvard University in the US, who has also worked on the challenges of atom loss and re-use in scalable, fault-tolerant neutral atom computing, has likewise recently reported successful atom re-use and diminished atom loss. Although Lukin’s work differs from that of the Atom Computing team in various ways – using rubidium instead of ytterbium atoms and a different approach for low atom loss mid-circuit measurements, for starters – he says that the work by Norcia and his team “represents an important technical advance for the Yb quantum computing platform, complementing major progress in the neutral atom quantum computing community in 2025”.

The research appears in Physical Review X.

The post Qubit ‘recycling’ gives neutral-atom quantum computing a boost appeared first on Physics World.

]]>
Research update Reducing atom loss and re-using already-measured atoms enables more complex quantum computations https://physicsworld.com/wp-content/uploads/2025/12/18-12-2025-recycling-cold-atoms.png newsletter1
Forging a more inclusive new generation of physicists https://physicsworld.com/a/forging-a-more-inclusive-new-generation-of-physicists/ Wed, 17 Dec 2025 18:00:50 +0000 https://physicsworld.com/?p=125575 Learn about CUWiP+, a special event to build connection and confidence among women and non-binary students

The post Forging a more inclusive new generation of physicists appeared first on Physics World.

]]>
The latest episode of Physics World Stories takes you inside CUWiP+, the Conference for Undergraduate Women and Non-Binary Physicists, and the role the annual event plays in shaping early experiences of studying physics.

The episode features June McCombie from the University of Nottingham, who discusses what happens at CUWiP+ events and why they are so important for improving the retention of women and non-binary students in STEM. She reflects on how the conferences create space for students to explore career paths, build confidence and see themselves as part of the physics community.

Reflections and tips from CUWiP+ 2025

University of Birmingham students Tanshpreet Kaur and Harriett McCormick share their experiences of attending the 2025 CUWiP+ event at the University of Warwick and explain why they are excited for the next event, set for Birmingham, 19–22 March 2026. They describe standout moments from 2025, including being starstruck at meeting Dame Jocelyn Bell Burnell, who discovered radio pulsars in 1967.

The episode provides practical advice to get the most out of the event. Organizers design the programme to cater for all personalities – whether you thrive in lively, social situations, or prefer time to step back and reflect. Either way, CUWiP+ offers opportunities to be inspired and to make meaningful connections.

Hosted by Andrew Glester, the episode highlights how shared experiences and supportive networks can balance the often-solitary nature of studying physics, especially when you feel excluded from the majority group.

The post Forging a more inclusive new generation of physicists appeared first on Physics World.

]]>
Physics World Forging a more inclusive new generation of physicists full 48:28 Podcasts Learn about CUWiP+, a special event to build connection and confidence among women and non-binary students https://physicsworld.com/wp-content/uploads/2025/12/diverse-crowd-140073805-shutterstock-natalia-sheinkin.jpg newsletter
Learning through laughter at Quantum Carousel  https://physicsworld.com/a/learning-through-laughter-at-quantum-carousel/ Wed, 17 Dec 2025 16:00:24 +0000 https://physicsworld.com/?p=125610 Zulekha Samiullah and Hugh Barrett talk about the highlights from this year’s Quantum Carousel, a variety show for quantum physicists 

The post Learning through laughter at Quantum Carousel  appeared first on Physics World.

]]>
Quantum physics, kung-fu, LEGO and singing are probably not things you would normally put together. But that’s exactly what happened at this year’s Quantum Carousel.

The event is a free variety show where incredible performers from across academia and industry converge for an evening of science communication. Held in Bristol, UK, on 14 November 2025, this was the second year the event was run – and once again it was entirely sold out.

As organizers, our goal was to bring together those involved in quantum and adjacent fields for an evening of learning and laughter. Each act was only seven minutes long and audience participation was encouraged, with questions saved for the dinner and drinks intervals.

Photo of participants at Quantum Carousel on stage.

The evening kicked off with a rousing speech and song from Chris Stewart, championing the promotion of science communication and public understanding. Felix Flicker related electron spin rotations to armlocks, with a terrific demonstration on volunteer Tony Short, while Michael Berry entertained us all with his eye-opening talk on how quantum physics has democratized music.

PhD student double act Eesa Ali and Sebastien Bisdee then welcomed volunteers to the stage to see who could align a laser fastest. Maria Violaris expertly taught us the fundamentals of quantum error correction using LEGO.

Mike Shubrook explained the quantum thermodynamics of beer through stand-up comedy. And finally, John Rarity and his assistant Hugh Barrett (event co-organizer and co-author of this article) rounded off the night by demonstrating the magic of entanglement.  

Our event sponsors introduced the food and drinks portions of the evening, with Antonia Seymour (chief executive of IOP Publishing) and Matin Durrani (editor-in-chief of Physics World) opening the dinner interval, while Josh Silverstone (founder and chief executive of Hartley Ultrafast) kickstarted the networking drinks reception.  

Singing praises

Whether it was singing along to an acoustic guitar or rotating hands to emulate electron spin, everyone got involved, and feedback cited audience participation as a highlight.

“The event ran very smoothly, it was lots of fun and a great chance to network in a relaxed atmosphere,” said one attendee. Another added: “The atmosphere was really fun, and it was a really nice event to get loads of the quantum community together in an enjoyable setting.”

Appreciation of the atmosphere went both ways, with one speaker saying that their favourite part of the night was that “the audience was very inviting and easy to perform to”.  

Audience members also enjoyed developing a better understanding of the science that drives their industry. “I understood it and I don’t have any background in physics,” said one attendee. “I feel a marker of being a good scientist is being able to explain it in layperson’s terms.”

Reaching out

With the quantum community rapidly expanding, it needs people from a wide range of backgrounds, such as computer science, engineering and business. Quantum Carousel was designed to strike a balance between high-level academic discussion and entertainment through entry-level talks, such as explaining error correction with props or relating research to real-world impact, from stimulated emission to CDs.

By focusing on real-world analogies, these talks can help newcomers to develop an intuitive and memorable understanding. Meanwhile, those already in the field can equip themselves with new ways of communicating elements of their research. 

We look forward to hosting Quantum Carousel again in the future. We want to make it bigger and better, with an even greater range of diverse acts.

But if you’re interested in organizing a similar outreach event of your own, it helps to consider how you can create an environment that can best spark connections between both speakers and attendees. Consider your audience and how your event can attract different people for different reasons. In our case, this included the chance to network, engage with the performances, and enjoy the food and drink. 

  • Quantum Carousel was founded by Zulekha Samiullah in 2024, and she and Hugh Barrett now co-lead the event. Quantum Carousel 2025 was sponsored by the QE-CDT, IOP Publishing and Hartley Ultrafast.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post Learning through laughter at Quantum Carousel  appeared first on Physics World.

]]>
Blog Zulekha Samiullah and Hugh Barrett talk about the highlights from this year’s Quantum Carousel, a variety show for quantum physicists  https://physicsworld.com/wp-content/uploads/2025/12/quantum-caroussel-maria-violaris.jpg
Korea’s long-term strategy for 2D materials: fundamental science is the secret of success https://physicsworld.com/a/koreas-long-term-strategy-for-2d-materials-fundamental-science-is-the-secret-of-success/ Wed, 17 Dec 2025 15:03:17 +0000 https://physicsworld.com/?p=125565 An interview with Moon-Ho Jo, director of the IBS Center for Van der Waals Quantum Solids in Korea

The post Korea’s long-term strategy for 2D materials: fundamental science is the secret of success appeared first on Physics World.

]]>
Image of the IBS Center for Van der Waals Quantum Solids

What’s the research mission of the IBS Center for Van der Waals Quantum Solids (IBS-VdWQS)?

Our multidisciplinary team aims to create heteroepitaxial van der Waals quantum solids at system scales, where the crystal lattices and symmetries of these novel 2D materials are artificially moulded to atomic precision via epitaxial growth. Over time, we also hope to translate these new solids into quantum device platforms.

Clearly there’s all sorts of exotic materials physics within that remit.

Correct. We form van der Waals heterostructures by epitaxial manipulation of the crystal lattice in diverse, atomically thin 2D materials – for example, 2D heterostructures incorporating graphene, boron nitride or transition-metal dichalcogenides (such as MoS2, WSe2, NbSe2, TaSe2 and so on). Crucially, the material layers are held in place only by weak van der Waals forces and with no dangling chemical bonds in the direction normal to the layers.

These 2D layers can also be laterally “stitched” into hexagonal or honeycomb lattices, with the electronic and atomic motions confined into the atomic layers. Using state-of-the-art epitaxial techniques, our team can then artificially stack these lattices to form a new class of condensed matter with exotic interlayer couplings and emergent electronic, optical and magnetic properties – properties that, we hope, will find applications in next-generation quantum devices.

The IBS-VdWQS is part of Korea’s Institute for Basic Science (IBS). How does this arrangement work?

Photograph of Moon-Ho Jo

The IBS headquarters was established in 2011 as Korea’s first dedicated institute for fundamental science. It’s an umbrella organization coordinating the activity of 38 centres of excellence across the physical sciences and life sciences, as well as mathematics and data science. In this way, IBS specializes in long-range initiatives that require large groups of researchers from Korea and abroad.

Our IBS-VdWQS is a catalyst for advances in fundamental materials science and condensed-matter physics, essentially positioned as a central-government-funded research institution in a research-oriented university. Particularly important in this regard is our colocation on the campus of Pohang University of Science and Technology (POSTECH), one of Korea’s leading academic centres, and our adjacency to large-scale facilities like the Pohang Synchrotron Radiation Facility (PAL) and Pohang X-ray free-electron laser (PAL-XFEL). It’s worth noting as well that all the principal investigators (PIs) in our centre hold dual positions as IBS researchers and POSTECH professors.

So IBS is majoring on strategic research initiatives?

Absolutely – and that perspective also underpins our funding model. The IBS-VdWQS was launched in 2022 and is funded by IBS for an initial period through to 2032 (with a series of six-year extensions subject to the originality and impact of our research). As such, we are able to encourage autonomy across our 2D materials programme, giving scientists the academic freedom to pursue questions in basic research without the bureaucracy and overhead of endless grant proposals. Team members know that, with plenty of hard work and creativity, they have everything they need here to do great science and build their careers.

Your core remit is fundamental science, but what technologies could eventually emerge from the IBS-VdWQS research programme?

While the focus is very much on basic science, epitaxial scalability is hard-wired into all our lines of enquiry. In short: we are creating new 2D materials via epitaxial growth and this ultimately opens a pathway to wafer-scale industrial production of van der Waals materials with commercially interesting semiconducting, superconducting or emergent properties in general.

Right now, we are investigating van der Waals semiconductors and the potential integration of MoS2 and WSe2 with silicon for new generations of low-power logic circuitry. On a longer timeline, we are developing new types of high-Tc (around 10 K) van der Waals superconductors for applications in Josephson junctions, which are core building blocks in superconducting quantum computers.

There’s a parallel opportunity in photonic quantum computing, with van der Waals materials shaping up as promising candidates for quantum light-emitters that generate on-demand (deterministic) and highly coherent (indistinguishable) single-photon streams.

Establishing a new research centre from scratch can’t have been easy. How are things progressing?

It’s been a busy three years since the launch of the IBS-VdWQS. The most important task at the outset was centralization – pulling together previously scattered resources, equipment and staff from around the POSTECH campus. We completed the move into our purpose-built facility, next door to the PAL synchrotron light source, at the end of last year and have now established dedicated laboratory areas for the van der Waals Epitaxy Division; Quantum Device and Optics Division; Quantum Device Fabrication Division; and the Imaging and Spectroscopy Division.

One of our front-line research efforts is building a van der Waals Quantum Solid Cluster, an integrated system of multiple instruments connected by ultra-high-vacuum lines to maintain atomically clean surfaces. We believe this advanced capability will allow us to reliably study air-sensitive van der Waals materials and open up opportunities to discover new physics in previously inaccessible van der Waals platforms.

Are there plans to scale the IBS-VdWQS work programme?

Right now, my priority is to promote opportunities for graduate students, postdoctoral researchers and research fellows to accelerate the centre’s expanding research brief. Diversity is strength, so I’m especially keen to encourage more in-bound applications from talented experimental and theoretical physicists in Europe and North America. Our current research cohort comprises 30+ PhD students, seven postdocs (from the US, India, China and Korea) and seven PIs.

Over the next five years, we aim to scale up to 25+ postdocs and research fellows and push out in new directions such as scalable quantum devices. In particular, we are looking for scientists with specialist know-how and expertise in areas like materials synthesis, quantum transport, optical spectroscopy and scanning probe microscopy (SPM) to accelerate our materials research.

How do you support your early-career researchers at IBS-VdWQS?

We are committed to nurturing global early-career talent and provide a clear development pathway from PhD through postdoctoral studies to student research fellow and research fellow/PI. Our current staff PIs have diverse academic backgrounds – materials science, physics, electronic engineering and chemistry – and we therefore allow early-career scientists to have a nominated co-adviser alongside their main PI. This model means research students learn in an integrated fashion that encourages a “multidisciplinarian” mindset – majoring in epitaxial growth, low-temperature electronic devices and optical spectroscopy, say, while also maintaining a watching brief (through their co-adviser) on the latest advances in materials characterization and analysis.

What does success look like at the end of the current funding cycle?

With 2032 as the first milestone year in this budget cycle, we are working to establish a global hub for van der Waals materials science – a highly collaborative and integrated research programme spanning advanced fabrication, materials characterization/analysis and theoretical studies. More capacity, more research infrastructure, more international scientists are all key to delivering our development roadmap for 2D semiconductor and superconductor integration towards scalable, next-generation low-power electronics and quantum computing devices.

Building a scientific career in 2D materials

Photograph of Myungchul Oh

Myungchul Oh joined the IBS-VdWQS in 2023 after a five-year stint as a postdoctoral physicist at Princeton University in the US, where he studied strongly correlated phenomena, superconductivity and topological properties in “twisted” graphene systems.

Recruited as an IBS-POSTECH research fellow, Oh holds dual academic positions: team leader for the quantum-device microscopy investigations at IBS-VdWQS and assistant professor in the semiconductor engineering department at POSTECH.

Van der Waals heterostructures, assembled layer-by-layer from 2D materials, enable precise engineering of quantum properties through the interaction between different atomic layers. By extension, Oh and his colleagues are focused on the development of novel van der Waals systems; their integration into devices via nanofabrication; and the study of electrical, magnetic and topological properties under extreme conditions, where quantum-mechanical effects dominate.

“We are exploring the microscopic nature of quantum materials and their device applications,” Oh explains. “Our research combines novel 2D van der Waals heterostructure device fabrication techniques with cryogenic scanning probe microscopy (SPM) measurements – the latter to access the atomic-scale electronic structure and local physical properties of quantum phases in 2D materials.”


The post Korea’s long-term strategy for 2D materials: fundamental science is the secret of success appeared first on Physics World.

]]>
Interview An interview with Moon-Ho Jo, director of the IBS Center for Van der Waals Quantum Solids in Korea https://physicsworld.com/wp-content/uploads/2025/12/2025-ibs-na-figure01.jpg newsletter
Atomic system acts like a quantum Newton’s cradle https://physicsworld.com/a/atomic-system-acts-like-a-quantum-newtons-cradle/ Wed, 17 Dec 2025 11:15:48 +0000 https://physicsworld.com/?p=125625 Quantum simulator enables scientists to test laws of transport phenomena at the quantum level

The post Atomic system acts like a quantum Newton’s cradle appeared first on Physics World.

]]>
Atoms in a one-dimensional quantum gas behave like a Newton’s cradle toy, transferring energy from atom to atom without dissipation. Developed by researchers at TU Wien in Austria, this quantum fluid of ultracold, confined rubidium atoms can be used to simulate more complex solid-state systems. By measuring transport quantities within this “perfect” atomic system, the team hope to obtain a deeper understanding of how transport phenomena and thermodynamics behave at the quantum level.

Physical systems transport energy, charge and mass in various ways. Electrical currents streaming along a wire, heat flowing through a solid and light travelling down an optical fibre are just three examples. How easily these quantities move inside a material depends on the resistance they experience, with collisions and friction slowing them down and making them fade away. This level of resistance largely determines whether the material is classed as an insulator, a conductor or a superconductor.

The mechanisms behind such transport fall into two main categories. The first is ballistic transport, which features linear movement without loss, like a bullet travelling in a straight line. The second is diffusive transport, where the quantity is subject to many random collisions. A good example is heat conduction, where the heat moves through a material gradually, travelling in many directions at once.
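The difference between the two regimes is easy to see in a toy model: a ballistic particle’s displacement grows linearly with time, while a randomly scattered one spreads only as the square root of time. The Monte Carlo sketch below (our illustration, in arbitrary units) makes the contrast explicit.

```python
import random

# Toy 1D transport comparison (our illustration): a ballistic particle keeps its
# velocity, a diffusive one is randomly scattered each step, so displacement grows
# like t for ballistic motion but only like sqrt(t) for diffusion.

def mean_displacement(steps: int, diffusive: bool, trials: int = 2000) -> float:
    total = 0.0
    for _ in range(trials):
        x, v = 0, 1
        for _ in range(steps):
            if diffusive:
                v = random.choice([-1, 1])  # scattering randomizes the velocity
            x += v
        total += abs(x)
    return total / trials

for steps in (100, 400):
    print(f"t={steps}: ballistic {mean_displacement(steps, False):.0f}, "
          f"diffusive {mean_displacement(steps, True):.1f}")
```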

Breaking the rules

Most systems are strongly affected by diffusion, which makes it surprising that the TU Wien researchers could build an atomic system where mass and energy flowed freely without it. According to study leader Frederik Møller, the key to this unusual behaviour is the magnetic and optical fields that keep the rubidium atoms confined to one dimension, “freezing out” interactions in the atoms’ two transverse directions.

Because the atoms can only move along a single direction, Møller explains, they transfer momentum perfectly, without scattering their energy as would be the case in normal matter. Consequently, the 1D atomic system does not thermalize despite being subject to thousands of collisions.

To quantify the transport of mass (charge) and energy within this system, the researchers measured quantities known as Drude weights, which are fundamental parameters that describe ballistic, dissipationless transport in solid-state environments. According to these measurements, the single-dimensional interacting bosonic atoms do indeed demonstrate perfect dissipationless transport. The results also agree with the generalized hydrodynamics (GHD) theoretical framework, which describes the large-scale, inhomogeneous dynamics of one-dimensional integrable quantum many-body systems such as ultracold atomic gases or specific spin chains.
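For readers who want the formal definition: in one common convention (prefactors vary between references), the Drude weight D is the strength of the zero-frequency delta peak in the conductivity, and it can be extracted from the long-time average of the current autocorrelation function, as sketched below.

```latex
% One common convention (prefactors differ between references):
% the Drude weight D is the weight of the dissipationless delta peak
% in the real part of the conductivity,
\mathrm{Re}\,\sigma(\omega) = 2\pi D\,\delta(\omega) + \sigma_{\mathrm{reg}}(\omega),
% and it can be obtained from the long-time average of the
% current-current correlation function at inverse temperature beta:
D = \lim_{T\to\infty} \frac{\beta}{2LT} \int_0^{T} \mathrm{d}t\,
    \langle \hat{J}(t)\,\hat{J}(0) \rangle .
```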

A Newton’s cradle for atoms

According to team leader Jörg Schmiedmayer, the experiment is analogous to a Newton’s cradle toy, which consists of a row of metal balls suspended on wires. When the ball at one end of the row is made to collide with the one next to it, its momentum transfers straight through the other balls to the ball at the opposite end, which swings out. Schmiedmayer adds that the system makes it possible to study transport under perfectly controlled conditions and could open new ways of understanding how resistance emerges, or disappears, at the quantum level. “Our next steps are applying the method to strongly correlated transport and to transport in a topological fluid,” he tells Physics World.

 

Karèn Kheruntsyan, a theoretical physicist at the University of Queensland, Australia, who was not involved in this research, calls it “a significant step for studying quantum transport”. He says the team’s work clearly demonstrates ballistic (dissipationless) transport at a finite temperature, providing an experimental benchmark for theories of integrability and disorder. The work also validates the thermodynamic meaning of Drude weights, while confirming that linear-response theory and GHD accurately describe transport in quantum systems.

In Kheruntsyan’s view, though, the team’s biggest achievement is the quantitative extraction of Drude weights that characterize atomic and energy currents, with “excellent agreement” between experiment and theory. This, he says, shows truly ballistic transport in an interacting many-body system. One caveat, though, is that the system’s limited spatial resolution and near-ideal integrability prevent it from being used to explore diffusive regimes or stronger interaction effects, leaving microscopic dynamics such as dispersive shock waves unresolved.

The study is published in Science.

The post Atomic system acts like a quantum Newton’s cradle appeared first on Physics World.

]]>
Research update Quantum simulator enables scientists to test laws of transport phenomena at the quantum level https://physicsworld.com/wp-content/uploads/2025/12/17-12-2025-quantum-gas-team-at-tu-wien-web.jpg newsletter1
Want a strong future for physics? Here’s why we must focus on students from under-represented groups https://physicsworld.com/a/want-a-strong-future-for-physics-heres-why-we-must-focus-on-students-from-under-represented-groups/ Wed, 17 Dec 2025 11:00:50 +0000 https://physicsworld.com/?p=125316 Jenna Padgett says fostering a sense of belonging can help boost physics

The post Want a strong future for physics? Here’s why we must focus on students from under-represented groups appeared first on Physics World.

]]>
Physics students from under-represented groups consistently report a lower sense of belonging at university than their over-represented peers. These students experience specific challenges that make them feel undervalued and excluded. Yet a strong sense of belonging has been shown to lead to improved academic performance, greater engagement in courses and better mental wellbeing. It is vital, then, that universities make changes to help eliminate these challenges.

Students are uniquely placed to describe the issues when it comes to belonging in physics. With this in mind, as an undergraduate physics student with a passion for making the discipline more diverse and inclusive, I conducted focus groups with current and former physics students, interviewed experts and analysed the current literature. This work was part of a summer project funded by the Royal Institution and is currently being finalized for publication.

From this work it became clear that under-represented groups face many challenges to developing a strong sense of belonging in physics, but, at the same time, there are ways to improve the everyday experiences of students. When it comes to barriers, one is the widely held belief – reflected in the way physicists are depicted in the media and textbooks – that you need to be a “natural genius” to succeed in university physics. This notion hampers students from under-represented groups, who see peers from the over-represented majority appearing to grasp concepts more quickly and lecturers suggesting certain topics are “easy”.

The feeling that physics demands natural ability also arises from the so-called “weed-out” culture, in which courses are intentionally designed to filter students out, reduce class sizes and diminish the sense of belonging. The students we surveyed believe that the high fail rate is caused by a disconnect between the teaching and workshops on a course and the final exam.

A third cause of this perception that you need some innate ability to succeed in physics is the attitudes and behaviour of some professors, lecturers and demonstrators. This includes casual sexist and racist behaviour; belittling students who ask for help; and acting as if they’re uninterested in teaching. Students from under-represented groups report significantly lower levels of respect and recognition from instructors, which damages their resilience and weakens sense of belonging.

Students from under-represented groups are also more likely to be isolated from their classmates and to feel socially excluded. This means they lack a support network, leaving them with no-one to turn to when they encounter challenges. Outside the lecture theatre, students from under-represented groups typically face many microaggressions in their day-to-day university experience. These are subtle indignities or insults directed, consciously or unconsciously, at minorities – such as people of colour being told they “speak English very well”, male students refusing to accept women’s ideas, and the assumption that gender minorities will take on administrative roles in group projects.

Focus on the future

So what can be done? The good news is that there are many ways to mitigate these issues and improve a sense of belonging. First, institutions should place more emphasis on small-group “active learning”, which includes discussions, problem solving and peer-based learning. These pedagogical strategies have been shown to boost belonging, particularly for female students. After these active-learning sessions, non-academic, culturally sensitive social lunches can help turn “course friends” into “real friends” who choose to meet socially and can become a support network. This helps build connections within and between degree cohorts.

Another solution is for universities to invite former students to speak about their sense of belonging and how it evolved or improved throughout their degree. Hearing about struggles and learning tried-and-tested strategies to improve resilience can help students better prepare for stressful situations. Alumni are more relatable than generic messaging from the university wellbeing team.

Building closer links between students and staff also enhances a sense of belonging. It helps humanize lecturers and demonstrates that staff care about student wellbeing and success. Institutions can support this by formally recognizing and professionalizing the service roles of faculty members.

Universities should also focus on hiring more diverse teaching staff, who can serve as role models, using their experiences to relate to and engage with under-represented students. Students will end up feeling more embedded within the physics community, improving both their sense of belonging and performance.

One practical way to increase diversity in hiring is for institutions to re-evaluate what they value. While securing large grants is valuable, so is advocating for equality, diversity and inclusion; public engagement; and the ability to inspire the next generation of physicists.

Another approach is to establish “departmental action teams” that unite undergraduates, postgraduates, and teaching and research staff in finding tailored solutions. Such teams identify issues specific to their particular university, and they can gather data through departmental surveys to spot trends and recommend practical changes to boost belonging.

Implementing these measures will not only improve the sense of belonging for students from under-represented groups but also cultivate a more inclusive, diverse physics workforce. That in turn will boost the overall research culture, opening up research directions that may have previously been overlooked, and yielding stronger scientific outputs. It is crucial that we do more to support physics students from under-represented groups to create a more diverse physics community. Ultimately, it will benefit physics and society as a whole.

The post Want a strong future for physics? Here’s why we must focus on students from under-represented groups appeared first on Physics World.

]]>
Opinion and reviews Jenna Padgett says fostering a sense of belonging can help boost physics https://physicsworld.com/wp-content/uploads/2025/12/2025-12-forum-padgett-students-diverse-group-1417921867-istock-seventyfour.jpg newsletter
Improving precision in muon g-2 calculations https://physicsworld.com/a/improving-precision-in-muon-g-2-calculations/ Wed, 17 Dec 2025 08:16:58 +0000 https://physicsworld.com/?p=124969 A model-independent approach reduces uncertainty in hadronic light-by-light scattering, strengthening Standard Model tests

The post Improving precision in muon g-2 calculations appeared first on Physics World.

]]>
The gyromagnetic ratio is the ratio of a particle’s magnetic moment to its angular momentum, and it determines how the particle responds to a magnetic field. According to the Dirac equation, the muon’s g-factor should be exactly 2. However, owing to quantum fluctuations, there is a small difference between this expected value and the observed value. This discrepancy is known as the anomalous magnetic moment.
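
In symbols – and this is standard textbook notation rather than anything specific to the work described here – the anomaly is defined from the g-factor as follows:

```latex
% The muon anomaly: Dirac theory gives g = 2 exactly, so a_mu quantifies
% the small shift produced by quantum fluctuations (roughly 0.00116).
a_{\mu} = \frac{g_{\mu} - 2}{2}
```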

The anomalous magnetic moment is incredibly sensitive to quantum fluctuations. It can therefore be used to test the Standard Model of particle physics, and persistent discrepancies between prediction and experiment have hinted at new physics beyond the Standard Model. Indeed, measuring the anomalous magnetic moment is one of the most precise tests in modern physics.

To measure the anomalous magnetic moment, experiments such as Fermilab’s Muon g-2 track the muon’s “wobble” (precession) frequency in a magnetic field, which is set by its magnetic moment. On the theory side, effects such as hadronic vacuum polarization and hadronic light-by-light scattering dominate the uncertainty in the Standard Model prediction. Unlike hadronic vacuum polarization, hadronic light-by-light scattering cannot be directly extracted from experimental cross-section data, making it dependent on the model used and a significant computational challenge.

In this work, the researchers took a major step towards resolving the anomalous magnetic moment of the muon. Their method calculated how the neutral pion contributes to hadronic light-by-light scattering, used domain-wall fermions to preserve chiral symmetry, employed eight different lattice configurations with varying pion masses, and introduced a pion structure function to capture the key contributions in a model-independent way. The pion transition form factor was computed directly at arbitrary space-like photon momenta, and a Gegenbauer expansion was used to confirm that about 98% of the π⁰-pole contribution was determined model-independently. The analysis also included finite-volume corrections and chiral and continuum extrapolations, and yielded a value for the π⁰ decay width.

This more accurate, model-independent calculation of the muon’s anomalous magnetic moment reduces a major theoretical uncertainty and makes precision tests of the Standard Model more robust.

Do you want to learn more about this topic?

The Muon Smasher’s Guide Hind Al Ali et al (2022)

The post Improving precision in muon g-2 calculations appeared first on Physics World.

]]>
Research highlight A model-independent approach reduces uncertainty in hadronic light-by-light scattering, strengthening Standard Model tests https://physicsworld.com/wp-content/uploads/2025/11/particle-waves-637705212-istock-piranka.jpg
How does quantum entanglement move between different particles? https://physicsworld.com/a/how-does-quantum-entanglement-move-between-different-particles/ Wed, 17 Dec 2025 08:16:11 +0000 https://physicsworld.com/?p=125453 New study reveals how quantum entanglement is transferred in ultrafast photoionisation experiments, offering us insights into how quantum information develops from microscopic to macroscopic scales

The post How does quantum entanglement move between different particles? appeared first on Physics World.

]]>
Entanglement is a phenomenon in which two or more particles become linked in such a way that the outcome of a measurement on one particle is correlated with the state of the other, no matter how far apart they are. It is a defining property of quantum mechanics that is key to all quantum technologies, yet it remains a serious challenge to realize in large systems.

However, a team of researchers from Sweden and Spain has recently made a large step forward in the field of ultrafast entanglement. Here, pairs of extreme ultraviolet pulses are used to exert quantum control on the attosecond timescale (a few quintillionths of a second).

Specifically, they studied ultrafast photoionisation. In this process, a high-energy light pulse hits an atom, ejecting an electron and leaving behind an ion.

This process can create entanglement between the electron and the ion in a controlled way. However, the entanglement is fragile and can be disrupted or transferred as the system evolves.

For instance, as the newly created ion emits a photon to release energy, the entanglement shifts from the electron–ion pair to the electron–photon pair. This transfer takes a considerable amount of time – on the scale of tens of nanoseconds – by which point the electron and ion are macroscopically separated, on the centimetre scale.

The team found that during this transition, all three particles – electron, ion, and photon – are entangled together in a multipartite state.

They did this by using a mathematical tool called von Neumann entropy to track how much information is shared between all three particles.
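
As a rough illustration of how this tool works – a minimal numerical sketch, not the authors’ code – one can build a maximally entangled two-qubit state, trace out one particle and compute the von Neumann entropy of what remains. An entropy of one bit indicates maximal entanglement between the pair:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # discard numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# Two-qubit Bell state |Phi+> = (|00> + |11>)/sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Reduced state of the first qubit: partial trace over the second qubit
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_A = np.trace(rho, axis1=1, axis2=3)

print(von_neumann_entropy(rho_A))  # 1.0 bit: maximally entangled
```

In the tripartite electron–ion–photon case, the same entropy is computed for each particle (and each pair) to see where the quantum information resides as the system evolves.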

Although this work was purely theoretical, they also proposed an experimental method to study entanglement transfer. The setup would use two synchronised free-electron laser pulses, with attosecond precision, to measure the electron’s energy and to detect if a photon was emitted. By measuring both particles in coincidence, entanglement can be detected.

The results could be generalised to other scenarios and will help us understand how quantum information can move between different particles. This brings us one small step closer to future technologies like quantum communication and computing.

Read the full article

Entanglement transfer in a composite electron–ion–photon system

A. Stenquist et al 2025 Rep. Prog. Phys. 88 080502


The post How does quantum entanglement move between different particles? appeared first on Physics World.

]]>
Research highlight New study reveals how quantum entanglement is transferred in ultrafast photoionisation experiments, offering us insights into how quantum information develops from microscopic to macroscopic scales https://physicsworld.com/wp-content/uploads/2025/12/atoms-491519159-istock-agsandrew.jpg
Motion through quantum space–time is traced by ‘q-desics’ https://physicsworld.com/a/motion-through-quantum-space-time-is-traced-by-q-desics/ Tue, 16 Dec 2025 16:19:23 +0000 https://physicsworld.com/?p=125602 Subtle quantum effects could be observed in how particles traverse cosmological distances

The post Motion through quantum space–time is traced by ‘q-desics’ appeared first on Physics World.

]]>
Physicists searching for signs of quantum gravity have long faced a frustrating problem. Even if gravity does have a quantum nature, its effects are expected to show up only at extremely small distances, far beyond the reach of experiments. A new theoretical study by Benjamin Koch and colleagues at the Technical University of Vienna in Austria suggests a different strategy. Instead of looking for quantum gravity where space–time is tiny, the researchers argue that subtle quantum effects could influence how particles and light move across huge cosmological distances.

Their work introduces a new concept called q-desics, short for quantum-corrected paths through space–time. These paths generalize the familiar trajectories predicted by Einstein’s general theory of relativity and could, in principle, leave observable fingerprints in cosmology and astrophysics.

General relativity and quantum mechanics are two of the most successful theories in physics, yet they describe nature in radically different ways. General relativity treats gravity as the smooth curvature of space–time, while quantum mechanics governs the probabilistic behavior of particles and fields. Reconciling the two has been one of the central challenges of theoretical physics for decades.

“One side of the problem is that one has to come up with a mathematical framework that unifies quantum mechanics and general relativity in a single consistent theory,” Koch explains. “Over many decades, numerous attempts have been made by some of the most brilliant minds humanity has to offer.” Despite this effort, no approach has yet gained universal acceptance.

Deeper difficulty

There is another, perhaps deeper difficulty. “We have little to no guidance, neither from experiments nor from observations that could tell us whether we actually are heading in the right direction or not,” Koch says. Without experimental clues, many ideas about quantum gravity remain largely speculative.

That does not mean the quest lacks value. Fundamental research often pays off in unexpected ways. “We rarely know what to expect behind the next tree in the jungle of knowledge,” Koch says. “We only can look back and realize that some of the previously explored trees provided treasures of great use and others just helped us to understand things a little better.”

Almost every test of general relativity relies on a simple assumption. Light rays and freely falling particles follow specific paths, known as geodesics, determined entirely by the geometry of space–time. From gravitational lensing to planetary motion, this idea underpins how physicists interpret astronomical data.

Koch and his collaborators asked what happens to this assumption when space–time itself is treated as a quantum object. “Almost all interpretations of observational astrophysical and astronomical data rest on the assumption that in empty space light and particles travel on a path which is described by the geodesic equation,” Koch says. “We have shown that in the context of quantum gravity this equation has to be generalized.”
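
For reference, the equation in question is the classical geodesic equation of general relativity; the explicit form of the quantum corrections that turn it into the q-desic equation is given in the paper and not reproduced here.

```latex
% Classical geodesic equation: a freely falling path x^mu(tau) is fixed
% entirely by the space-time geometry via the Christoffel symbols Gamma.
\frac{\mathrm{d}^{2}x^{\mu}}{\mathrm{d}\tau^{2}}
  + \Gamma^{\mu}_{\;\alpha\beta}\,
    \frac{\mathrm{d}x^{\alpha}}{\mathrm{d}\tau}\,
    \frac{\mathrm{d}x^{\beta}}{\mathrm{d}\tau} = 0
```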

Generalized q-desic

The result is the q-desic equation. Instead of relying only on an averaged, classical picture of space–time, q-desics account for the underlying quantum structure more directly. In practical terms, this means that particles may follow paths that deviate slightly from those predicted by classical general relativity, even when space–time looks smooth on average.

Crucially, the team found that these deviations are not confined to tiny distances. “What makes our first results on the q-desics so interesting is that apart from these short distance effects, there are also long range effects possible, if one takes into account the existence of the cosmological constant,” Koch says.

This opens the door to possible tests using existing astronomical data. According to the study, q-desics could differ from ordinary geodesics over cosmological distances, affecting how matter and light propagate across the universe.

“The q-desics might be distinguished from geodesics at cosmological large distances,” Koch says, “which would be an observable manifestation of quantum gravity effects.”

Cosmological tensions

The researchers propose revisiting cosmological observations. “Currently, there are many tensions popping up between the Standard Model of cosmology and observed data,” Koch notes. “All these tensions are linked, one way or another, to the use of geodesics at vastly different distance scales.” The q-desic framework offers a new lens through which to examine such discrepancies.

So far, the team has explored simplified scenarios and idealized models of quantum space–time. Extending the framework to more realistic situations will require substantial effort.

“The initial work was done with one PhD student (Ali Riahina) and one colleague (Ángel Rincón),” Koch says. “There are many things to be revisited and explored that our to-do list is growing far too long for just a few people.” One immediate goal is to encourage other researchers to engage with the idea and test it in different theoretical settings.

Whether q-desics will provide an observational window into quantum gravity remains to be seen. But by shifting attention from the smallest scales to the largest structures in the cosmos, the work offers a fresh perspective on an enduring problem.

The research is described in Physical Review D.

The post Motion through quantum space–time is traced by ‘q-desics’ appeared first on Physics World.

]]>
Research update Subtle quantum effects could be observed in how particles traverse cosmological distances https://physicsworld.com/wp-content/uploads/2025/12/16-12-25-quantum-gravity.jpg newsletter1
From building a workforce to boosting research and education – future quantum leaders have their say https://physicsworld.com/a/from-building-a-workforce-to-boosting-research-and-education-future-quantum-leaders-have-their-say/ Tue, 16 Dec 2025 11:15:08 +0000 https://physicsworld.com/?p=125232 Matin Durrani talks to four leaders from quantum science and technology about where the field is going next

The post From building a workforce to boosting research and education – future quantum leaders have their say appeared first on Physics World.

]]>
The International Year of Quantum Science and Technology has celebrated all the great developments in the sector – but what challenges and opportunities lie in store? That was the question deliberated by four future leaders in the field at the Royal Institution in central London in November. The discussion took place during the two-day conference “Quantum science and technology: the first 100 years; our quantum future”, which was part of a week-long series of quantum-related events in the UK organized by the Institute of Physics.

As well as outlining the technical challenges in their fields, the speakers all stressed the importance of developing a “skills pipeline” so that the quantum sector has enough talented people to meet its needs. Also vital will be the need to communicate the mysteries and potential of quantum technology – not just to the public but to industrialists, government officials and venture capitalists.

Two of the speakers – Nicole Gillett (Riverlane) and Muhammad Hamza Waseem (Quantinuum) – are from the quantum tech industry, with Mehul Malik (Heriot-Watt University) and Sarah Alam Malik (University College London) based in academia. The following is an edited version of the discussion.

Quantum’s future leaders

Muhammad Hamza Waseem, Sarah Alam Malik, Mehul Malik, Nicole Gillett and Matin Durrani

Nicole Gillett is a senior software engineer at Riverlane, in Cambridge, UK. The company is a leader in quantum error correction, which is a critical part of a fully functioning, fault-tolerant quantum computer. Errors arise because quantum bits, or qubits, are so fragile and correcting them is far trickier than with classical devices. Riverlane is therefore trying to find ways to correct for errors without disturbing a device’s quantum states. Gillett is part of a team trying to understand how best to implement error-correcting algorithms on real quantum-computing chips.

Mehul Malik, who studied physics at a liberal arts college in New York, was attracted to quantum physics because of what he calls a “weird middle ground between artistic creative thought and the rigour of physics”. After doing a PhD at the University of Rochester, he spent five years as a postdoc with Anton Zeilinger at the University of Vienna in Austria before moving to Heriot-Watt University in the UK. As head of its Beyond Binary Quantum Information research group, Malik works on quantum information processing and communication and fundamental studies of entanglement.

Sarah Alam Malik is a particle physicist at University College London, using particle colliders to detect and study potential candidates for dark matter. She is also trying to use quantum computers to speed up the discovery of new physics given that what she calls “our most cherished and compelling theories” for physics beyond the Standard Model, such as supersymmetry, have not yet been seen. In particular, Malik is trying to find new physics in a way that’s “model agnostic” – in other words, using quantum computers to search particle-collision data for anomalous events that have not been seen before.

Muhammad Hamza Waseem studied electrical engineering in Pakistan, but got hooked on quantum physics after getting involved in recreating experiments to test Bell’s inequalities in what he claims was the first quantum optics lab in the country. Waseem then moved to the University of Oxford in the UK to do a PhD studying spin waves to make classical and quantum logic circuits. Unable to work when his lab shut during the COVID-19 pandemic, Waseem approached Quantinuum to see if he could help them in their quest to build quantum computers using ion traps. Now based at the company, he studies how quantum computers can do natural-language processing. “Think ChatGPT, but powered with quantum computers,” he says.

What will be the biggest or most important application of quantum technology in your field over the next 10 years?

Nicole Gillett: If you look at the roadmaps of quantum-computing companies, you’ll find that IBM, for example, intends to build the world’s first utility-scale, fault-tolerant quantum computer by the end of the decade. Beyond 2033, they’re committing to have a system that could support 2000 “logical qubits” – essentially error-corrected qubits, in which the data of one qubit has been encoded into many qubits.

What can be achieved with that number of qubits is a difficult question to answer but some theorists, such as Juan Maldacena, have proposed some very exotic ideas, such as using a system of 7000 qubits to simulate black-hole dynamics. Now that might not be a particularly useful industry application, but it tells you about the potential power of a machine like this.

Mehul Malik: In my field, quantum networks that can distribute individual quantum particles or entangled states over both long and short distances will have a significant impact within the next 10 years. Quantum networks will connect smaller, powerful quantum processors to make a larger quantum device, whether for computing or communication. The technology is quite mature – in fact, we’ve already got a quantum network connecting banks in London.

I will also add something slightly controversial. We often try to distinguish between quantum and non-quantum technologies, but what we’re heading towards is combining classical state-of-the-art devices with technology based on inherently quantum effects – what you might call “quantum adjacent technology”. Single-photon detectors, for example, are going to revolutionize healthcare, medical imaging and even long-distance communication.

Sarah Alam Malik: For me, the biggest impact of quantum technology will be applying quantum computing algorithms in physics. Can we quantum simulate the dynamics of, say, proton–proton collisions in a more efficient and accurate manner? Can we combine quantum computing with machine learning to sift through data and identify anomalous collisions that are beyond those expected from the Standard Model?

Quantum technology is letting us ask very fundamental questions about nature.

Sarah Alam Malik, University College London

Quantum technology, in other words, is letting us ask very fundamental questions about nature. Emerging in theoretical physics, for example, is the idea that the fundamental layer of reality may not be particles and fields, but units of quantum information. We’re looking at the world through this new quantum-theoretic lens and asking questions like, whether it’s possible to measure entanglement in top quarks and even explore Bell-type inequalities at particle colliders.

One interesting quantity is “magic”, which is a measure of how far a system is from being classically simulable (Phys. Rev. D 110 116016). The more magic there is in a system, the harder it is to simulate classically – and therefore the greater the computational resource it possesses for quantum computing. We’re asking how much “magic” there is in, for instance, top quarks produced at the Large Hadron Collider. So one of the most important developments for me may well be asking questions in a very different way to before.

Muhammad Hamza Waseem: Technologically speaking, the biggest impact will be simulating quantum systems using a quantum computer. In fact, researchers from Google already claim to have simulated a wormhole in a quantum computer, albeit a very simple version that could have been tackled with a classical device (Nature 612 55).

But the most significant impact has to do with education. I believe quantum theory teaches us that reality is not about particles and individuals – but relations. I’m not saying that particles don’t exist but they emerge from the relations. In fact, with colleagues at the University of Oxford, we’ve used this idea to develop a new way of teaching quantum theory, called Quantum in Pictures.

We’ve already tried our diagrammatic approach with a group of 16–18-year-olds, teaching them the entire quantum-information course that’s normally given to postgraduates at Oxford. At the end of our two-month course, which had one lecture and tutorial per week, students took an exam with questions from past Oxford papers. An amazing 80% of students passed and half got distinctions.

For quantum theory to have a big impact, we have to make quantum physics more accessible to everyone.

Muhammad Hamza Waseem, Quantinuum

I’ve also tried the same approach on pupils in Pakistan: the youngest, who was just 13, can now explain quantum teleportation and quantum entanglement. My point is that for quantum theory to have a big impact, we have to make quantum physics more accessible to everyone.

What will be the biggest challenges and difficulties over the next 10 years for people in quantum science and technology?

Nicole Gillett: The challenge will be building up a big enough quantum workforce. Sometimes people hear the words “quantum computer” and get scared, worrying they’re going to have to solve Hamiltonians all the time. But is it possible to teach students at high-school level about these concepts? Can we get the ideas across in a way that is easy to understand so people are interested and excited about quantum computing?

At Riverlane, we’ve run week-long summer workshops for the last two years, where we try to teach undergraduate students enough about quantum error correction so they can do “decoding”. That’s when you take the results of error correction and try to figure out what errors occurred on your qubits. By combining lectures and hands-on tutorials we found we could teach students about error corrections – and get them really excited too.
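
To make “decoding” concrete, here is a deliberately minimal toy example – a three-qubit bit-flip repetition code with majority-vote decoding. It is far simpler than anything running on real hardware, but it shows the basic logic of inferring an error from parity-check results:

```python
# Toy decoder for the 3-qubit bit-flip repetition code.
# Logical 0 is encoded as [0, 0, 0] and logical 1 as [1, 1, 1]; the
# syndrome is a pair of parity checks that localizes a single bit flip.

def measure_syndrome(bits):
    """Parity checks on qubit pairs (0, 1) and (1, 2)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(bits):
    """Correct at most one bit flip, then return the logical bit."""
    bits = list(bits)
    syndrome_to_qubit = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    flipped = syndrome_to_qubit.get(measure_syndrome(bits))
    if flipped is not None:
        bits[flipped] ^= 1      # apply the correction
    return int(sum(bits) >= 2)  # majority vote gives the logical value

print(decode([0, 1, 0]))  # error on qubit 1 -> logical 0
print(decode([1, 1, 0]))  # error on qubit 2 -> logical 1
```

Real decoders tackle the same inference problem at vastly larger scales and under microsecond time budgets, which is what makes the engineering hard.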

Our biggest challenge will be not having a workforce ready for quantum computing.

Nicole Gillett, Riverlane

We had students from physics, philosophy, maths and computer science take the course – the only pre-requisite, apart from being curious about quantum computers, is some kind of coding ability. My point is that these kinds of boot camps are going to be so important to inspire future generations. We need to make the information accessible to people because otherwise our biggest challenge will be not having a workforce ready for quantum computing.

Mehul Malik: One of the big challenges is international cooperation and collaboration. Imagine if, in the early days of the Internet, the US military had decided they’d keep it to themselves for national-security reasons or if CERN hadn’t made the World Wide Web open source. We face the same challenge today because we live in a world that’s becoming polarized and protectionist – and we don’t want that to hamper international collaboration.

Over the last few decades, quantum science has developed in a very international way and we have come so far because of that. I have lived in four different continents, but when I try to recruit internationally, I face significant hurdles from the UK government, from visa fees and so on. To really progress in quantum tech, we need to collaborate and develop science in a way that’s best for humanity not just for each nation.

Sarah Alam Malik: One of the most important challenges will be managing the hype that inevitably surrounds the field right now. We’ve already seen this with artificial intelligence (AI), which has gone through the whole hype cycle. Lots of people were initially interested, then the funding dried up when reality didn’t match expectations. But now AI has come back with such resounding force that we’re almost unprepared for all the implications of it.

Quantum can learn from the AI hype cycle, finding ways to manage expectations of what could be a very transformative technology. In the near- and mid-term we need to not overplay things and to remain cautious – yet be braced for the impact this potentially transformative technology could have. It’s a case of balancing hype with reality.

Muhammad Hamza Waseem: Another important challenge is how to distribute funding between research on applications and research on foundations. A lot of the good technology we use today emerged from foundational ideas in ways that were not foreseen by the people originally working on them. So we must ensure that foundational research gets the funding it deserves or we’ll hit a dead end at some point.

Will quantum tech alter how we do research, just as AI could do?

Mehul Malik: AI is already changing how I do research, speeding up the way I discover knowledge. Using Google Gemini, for example, I now ask my browser questions instead of searching for specific things. But you still have to verify all the information you gather, for example, by checking the links it cites. I recently asked AI a complex physics question to which I knew the answer and the solution it gave was terrible. As for how quantum is changing research, I’m less sure, but better detectors through quantum-enabled research will certainly be good.

Muhammad Hamza Waseem: AI is already being deployed in foundational research, for example, to discover materials for more efficient batteries. A lot of these applications could be integrated with quantum computing in some way to speed work up. In other words, a better understanding of quantum tech will let us develop AI that is safer, more reliable, more interpretable – and if something goes wrong, you know how to fix it. It’s an exciting time to be a researcher, especially in physics.

Sarah Alam Malik: I’ve often wondered if AI, with the breadth of knowledge that it has across all different fields, already has answers to questions that we couldn’t answer – or haven’t been able to answer – just because of the boundaries between disciplines. I’m a physicist and so can’t easily solve problems in biology. But could AI help us to do breakthrough research at the interface between disciplines?

What lessons can we learn from the boom in AI when it comes to the long-term future of quantum tech?

Nicole Gillett: As a software engineer, I once worked at an Internet security company called CloudFlare, which taught me that it’s never too early to be thinking about how any new technology – both AI and quantum – might be abused. What’s also really interesting is whether AI and machine learning can be used to build quantum computers by developing the coding algorithms they need. Companies like Google are active in this area, and so is Riverlane.

Mehul Malik: I recently discussed this question with a friend who works in AI, who said that the huge AI boom in industry, with all the money flowing into it, has effectively killed academic research in the field. A lot of AI research is now industry-led and goal-orientated – and there’s a risk that the economic advantages of AI will kill curiosity-driven research. The remedy, according to my friend, is to pay academics in AI more, as they are currently being offered much larger salaries to work in the private sector.

We need to diversify so that the power to control or chart the course of quantum technologies is not in the hands of a few privileged monopolies.

Mehul Malik, Heriot-Watt University

Another issue is that a lot of power is in the hands of just a few companies, such as Nvidia and ASML. The lesson for the quantum sector is that we need to diversify early on so that the power to control or chart the course of quantum technologies is not in the hands of a few privileged monopolies.

Sarah Alam Malik: Quantum technology has a lot to learn from AI, which has shown that we need to break down the barriers between disciplines. After all, some of the most interesting and impactful research in AI has happened because companies can hire whoever they need to work on a particular problem, whether it’s a computer scientist, a biologist, a chemist, a physicist or a mathematician.

Nature doesn’t differentiate between biology and physics. In academia we need not only people who are hyper-specialized but also a crop of generalists who are knee-deep in one field but have experience in other areas too.

The lesson from the AI boom is to blur the artificial boundaries between disciplines and make them more porous. In fact, quantum is a fantastic playground for that because it is inherently interdisciplinary. You have to bring together people from different disciplines to deliver this kind of technology.

Muhammad Hamza Waseem: AI research is in a weird situation where there are lots of excellent applications but so little is understood about how AI machines work. We have no good scientific theory of intelligence or of consciousness. We need to make sure that quantum computing research does not become like that and that academic research scientists are well-funded and not distracted by all the hype that industry always creates.

At the start of the previous century, the mathematician David Hilbert said something like “physics is becoming too difficult for the physicists”. I think quantum computing is also somewhat becoming too challenging for the quantum physicists. We need everyone to get involved for the field to reach its true potential.

Towards “green” quantum technology

Green leaf on the converging point of computer circuit board

Today’s AI systems use vast amounts of energy, but should we also be concerned about the environmental impact of quantum computers? Google, for example, has already carried out quantum error-correction experiments in which data from the company’s quantum computers had to be processed once every microsecond per round of error correction (Nature 638 920). “Finding ways to process it to keep up with the rate at which it’s being generated is a very interesting area of research,” says Nicole Gillett.

However, quantum computers could cut our energy consumption by allowing calculations to be performed far more quickly and efficiently than is possible with classical machines. For Mehul Malik, another important step towards “green” quantum technology will be to lower the energy that quantum devices require and to build detectors that work at room temperature and are robust against noise. Quantum computers themselves can also help, he thinks, by discovering energy-efficient technologies, materials and batteries.

A quantum laptop?

Futuristic abstract low poly wireframe vector illustration with glowing briefcase and speech bubbles

Will we ever see portable quantum computers or will they always be like today’s cloud-computing devices in distant data centres? Muhammad Hamza Waseem certainly does not envisage a word processor that uses a quantum computer. But he points to companies like SPINQ, which has built a two-qubit quantum computer for educational purposes. “In a sense, we already have a portable quantum computer,” he says. For Mehul Malik, though, it’s all about the market. “If there’s a need for it,” he joked, “then somebody will make it.”

If I were science minister…

Politician speaking to reporters illustration

When asked by Peter Knight – one of the driving forces behind the UK’s quantum-technology programme – what the panel would do if they were science minister, Nicole Gillett said she would seek to make the UK the leader in quantum computing by investing heavily in education. Mehul Malik would cut the costs of scientists moving across borders, pointing out that many big firms have been founded by immigrants. Sarah Alam Malik called for long-term funding – and not to give up if short-term gains don’t transpire. Muhammad Hamza Waseem, meanwhile, said we should invest more in education, research and the international mobility of scientists.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post From building a workforce to boosting research and education – future quantum leaders have their say appeared first on Physics World.

]]>
Feature Matin Durrani talks to four leaders from quantum science and technology about where the field is going next https://physicsworld.com/wp-content/uploads/2025/12/2025-12-Durrani-quantum-laptop-frontis.jpg newsletter
Will this volcano explode, or just ooze? A new mechanism could hold some answers https://physicsworld.com/a/will-this-volcano-explode-or-just-ooze-a-new-mechanism-could-hold-some-answers/ Mon, 15 Dec 2025 16:00:53 +0000 https://physicsworld.com/?p=125498 Discovery of shear-induced bubble formation sheds more light on the divide between eruption and effusion

The post Will this volcano explode, or just ooze? A new mechanism could hold some answers appeared first on Physics World.

]]>
A figure containing a diagram of a volcanic system and a photo of bubbles forming in a container

An international team of researchers has discovered a new mechanism that can trigger the formation of bubbles in magma – a major driver of volcanic eruptions. The finding could improve our understanding of volcanic hazards by improving models of magma flow through conduits beneath Earth’s surface.

Volcanic eruptions are thought to occur when magma deep within the Earth’s crust decompresses. This decompression allows volatile chemicals dissolved in the magma to escape in gaseous form, producing bubbles. The more bubbles there are in the viscous magma, the faster it will rise, until eventually it tears itself apart.

“This process can be likened to a bottle of sparkling water containing dissolved volatiles that exsolve when the bottle is opened and the pressure is released,” explains Olivier Roche, a member of the volcanology team at the Magmas and Volcanoes Laboratory (LMV) at the Université Clermont Auvergne (UCA) in France and lead author of the study.

Magma shearing forces could induce bubble nucleation

The new work, however, suggests that this explanation is incomplete. In their study, Roche and colleagues at UCA, the French National Research Institute for Sustainable Development (IRD), Brown University in the US and ETH Zurich in Switzerland began with the assumption that the mechanical energy in magma comes from the pressure gradient between the nucleus of a gas bubble and the ambient liquid. “However, mechanical energy may also be provided by shear stress in the magma when it is in motion,” Roche notes. “We therefore hypothesized that magma shearing forces could induce bubble nucleation too.”
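
A back-of-the-envelope argument shows why the hypothesis is plausible. In classical nucleation theory – used here purely as an illustration, not as the paper’s own analysis – the energy barrier to forming a critical bubble falls steeply as the available mechanical energy grows, and a flowing magma of viscosity η sheared at rate γ̇ carries a viscous stress that can in principle play a role analogous to the decompression Δp:

```latex
% Classical nucleation theory (illustrative): the barrier to forming a
% critical bubble of surface tension sigma falls as the square of the
% driving stress, so any extra mechanical stress - including the viscous
% shear stress tau - helps trigger nucleation.
\Delta G^{*} = \frac{16\pi\,\sigma^{3}}{3\,\Delta p^{2}},
\qquad
\tau = \eta\,\dot{\gamma}
```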

To test their theory, the researchers reproduced the internal movements of magma in liquid polyethylene oxide saturated with carbon dioxide at 80°C. They then set up a device to observe bubble nucleation in situ while the material was experiencing shear stress. They found that the energy provided by viscous shear is large enough to trigger bubble formation – even if decompression isn’t present.

The effect, which the team calls shear-induced bubble nucleation, depends on the magma’s viscosity and on the amount of gas it contains. According to Roche, the presence of this effect could help researchers determine whether an eruption is likely to be explosive or effusive. “Understanding which mechanism is at play is fundamental for hazard assessment,” he says. “If many gas bubbles grow deep in the volcano conduit in a volatile-rich magma, for example, they can combine with each other and form larger bubbles that then open up degassing conduits connected to the surface.

“This process will lead to effusive eruptions, which is counterintuitive (but supported by some earlier observations),” he tells Physics World. “It calls for the development of new conduit flow models to predict eruptive style for given initial conditions (essentially volatile content) in the magma chamber.”

Enhanced predictive power

By integrating this mechanism into future predictive models, the researchers aim to develop tools that better anticipate the intensity of eruptions, allowing scientists and local authorities to improve the way they manage volcanic hazards.

Looking ahead, they are planning new shear experiments on liquids that contain solid particles, mimicking crystals that form in magma and are believed to facilitate bubble nucleation. In the longer term, they plan to study combinations of shear and compression, though Roche acknowledges that this “will be challenging technically”.

They report their present work in Science.

The post Will this volcano explode, or just ooze? A new mechanism could hold some answers appeared first on Physics World.

]]>
Research update Discovery of shear-induced bubble formation sheds more light on the divide between eruption and effusion https://physicsworld.com/wp-content/uploads/2025/12/volcano-schematic-cropped.jpg newsletter1
Remote work expands collaboration networks but reduces research impact, study suggests https://physicsworld.com/a/remote-work-expands-collaboration-networks-but-reduces-research-impact-study-suggests/ Mon, 15 Dec 2025 12:41:01 +0000 https://physicsworld.com/?p=125503 Despite a 'concerning decline' in citation impact, there were, however, benefits to increasing remote interactions

The post Remote work expands collaboration networks but reduces research impact, study suggests appeared first on Physics World.

]]>
Academics who switch to hybrid working and remote collaboration do less impactful research. That’s according to an analysis of how scientists’ collaboration networks and academic outputs evolved before, during and after the COVID-19 pandemic (arXiv: 2511.18481). It involved studying author data from the arXiv preprint repository and the online bibliographic catalogue OpenAlex.

To explore the geographic spread of collaboration networks, Sara Venturini from the Massachusetts Institute of Technology and colleagues looked at the average distance between the institutions of co-authors. They found that while the average distance between team members on publications increased from 2000 to 2021, there was a particularly sharp rise after 2022.

This pattern, the researchers claim, suggests that the pandemic led to scientists collaborating more often with geographically distant colleagues. They found consistent patterns when they separated papers related to COVID-19 from those in unrelated areas, suggesting the trend was not solely driven by research on COVID-19.

The researchers also examined how the number of citations a paper received within a year of publication changed with distance between the co-authors’ institutions. In general, as the average distance between collaborators increases, citations fall, the authors found.
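
The underlying measurement is simple to sketch. Assuming hypothetical institution coordinates (the study’s actual pipeline, built on arXiv and OpenAlex metadata, is considerably more involved), the mean pairwise great-circle distance for one paper’s team might be computed like this:

```python
import itertools
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def mean_team_distance(institution_coords):
    """Average pairwise distance between a paper's co-author institutions."""
    pairs = list(itertools.combinations(institution_coords, 2))
    if not pairs:
        return 0.0  # single-institution paper
    return sum(haversine_km(p, q) for p, q in pairs) / len(pairs)

# Hypothetical paper with co-authors in Boston, Oxford and Zurich:
coords = [(42.36, -71.09), (51.75, -1.26), (47.38, 8.55)]
print(round(mean_team_distance(coords)), "km")
```

Averaging this quantity over all papers published in a given year yields the trend the study describes; pairing it with early citation counts gives the distance–impact relationship.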

They suggest that remote and hybrid working hampers research quality by reducing spontaneous, serendipitous in-person interactions that can lead to deep discussions and idea exchange.

Despite what the authors call a “concerning decline” in citation impact, there are benefits to increasing remote interactions. In particular, as the geographic spread of collaboration networks grows, so too do international partnerships and authorship diversity.

Remote tools

Lingfei Wu, a computational social scientist at the University of Pittsburgh, who was not involved in the study, told Physics World that he was surprised by the finding that remote teams produce less impactful work.

“In our earlier research, we found that historically, remote collaborations tended to produce more impactful but less innovative work,” notes Wu. “For example, the Human Genome Project published in 2001 shows how large, geographically distributed teams can also deliver highly impactful science. One would expect the pandemic-era shift toward remote collaboration to increase impact rather than diminish it.”

Wu says his work suggests that remote work is effective for implementing ideas but less effective for generating them, indicating that scientists need a balance between remote and in-person interactions. “Use remote tools for efficient execution, but reserve in-person time for discussion, brainstorming, and informal exchange,” he adds.

The post Remote work expands collaboration networks but reduces research impact, study suggests appeared first on Physics World.

]]>
News Despite a 'concerning decline' in citation impact, there were, however, benefits to increasing remote interactions https://physicsworld.com/wp-content/uploads/2025/12/work-from-home-woman-featured-1098154178-istock-apichon-tee.jpg
How well do you know AI? Try our interactive quiz to find out https://physicsworld.com/a/how-well-do-you-know-ai-try-our-interactive-quiz-to-find-out/ Mon, 15 Dec 2025 12:00:02 +0000 https://physicsworld.com/?p=125462 Test your knowledge of the deep connections between physics, big data and AI

The post How well do you know AI? Try our interactive quiz to find out appeared first on Physics World.

]]>
There are 12 questions in total: blue is your current question and white means unanswered, with green and red being right and wrong. Check your scores at the end – and why not test your colleagues too?

How did you do?

10–12 Top shot – congratulations, you’re the next John Hopfield

7–9 Strong skills – good, but not quite Nobel standard

4–6 Weak performance – should have asked ChatGPT

0–3 Worse than random – are you a bot?

Reports on Progress in Physics


Physics World‘s coverage of this interactive quiz is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research.

The post How well do you know AI? Try our interactive quiz to find out appeared first on Physics World.

]]>
Puzzle Test your knowledge of the deep connections between physics, big data and AI https://physicsworld.com/wp-content/uploads/2025/12/binary-and-light-181051741-istock-loops7.jpg
International Year of Quantum Science and Technology quiz https://physicsworld.com/a/international-year-of-quantum-science-and-technology-quiz/ Mon, 15 Dec 2025 10:00:08 +0000 https://physicsworld.com/?p=125406 What do you really know about quantum physics?

The post International Year of Quantum Science and Technology quiz appeared first on Physics World.

]]>
This quiz was first published in February 2025. Now you can enjoy it in our new interactive quiz format and check your final score. There are 18 questions in total: blue is your current question and white means unanswered, with green and red being right and wrong.


The post International Year of Quantum Science and Technology quiz appeared first on Physics World.

]]>
Puzzle What do you really know about quantum physics? https://physicsworld.com/wp-content/uploads/2025/12/2025-02-quiz-iyq-mascot-jorge-cham.jpg
Components of RNA among life’s building blocks found in NASA asteroid sample https://physicsworld.com/a/components-of-rna-among-lifes-building-blocks-found-in-nasa-asteroid-sample/ Fri, 12 Dec 2025 11:30:01 +0000 https://physicsworld.com/?p=125521 Samples from the near-Earth asteroid Bennu found to contain molecules and compounds vital to the origin of life

The post Components of RNA among life’s building blocks found in NASA asteroid sample appeared first on Physics World.

]]>
More molecules and compounds vital to the origin of life have been detected in asteroid samples delivered to Earth by NASA’s OSIRIS-REx mission. The discovery strengthens the case that not only did life’s building blocks originate in space, but that the ingredients of RNA, and perhaps RNA itself, were brought to our planet by asteroids.

Two new papers in Nature Geoscience and Nature Astronomy describe the discovery of the sugars ribose and glucose in the 120 g of samples returned from the near-Earth asteroid 101955 Bennu, as well as an unusual carbonaceous “gum” that holds important compounds for life. The findings complement the earlier discovery of amino acids and the nucleobases of RNA and DNA in the Bennu samples.

A third new paper, in Nature Astronomy, addresses the abundance of pre-solar grains – dust that originated before the birth of our Solar System, such as dust from supernovae. Scientists led by Ann Nguyen of NASA’s Johnson Space Center found six times more dust coming directly from supernova explosions than is found, on average, in meteorites and other sampled asteroids. This could point to differences in the concentration of different pre-solar dust grains across the disc of gas and dust that formed the Solar System.

Space gum

It’s the discovery of organic materials useful for life that steals the headlines, though. For example, the discovery of the space gum, which is essentially a hodgepodge chain of polymers, represents something never found in space before.

Scott Sandford of NASA’s Ames Research Center, co-lead author of the Nature Astronomy paper describing the gum discovery, tells Physics World: “The material we see in our samples is a bit of a molecular jumble. It’s carbonaceous, but much richer in nitrogen and, to a lesser extent, oxygen, than most of the organic compounds found in extraterrestrial materials.”

Sandford refers to the material as gum because of its pliability, bending and dimpling when pressure is applied, rather like chewing gum. And while much of its chemical functionality is replicated in similar materials on our planet, “I doubt it matches exactly with anything seen on Earth,” he says.

Initially, Sandford found the gum using an infrared microscope, nicknaming the dust grains containing the gum “Lasagna” and “Neapolitan” because the grains are layered. To extract them from the rock in the sample, Sandford went to Zack Gainsforth of the University of California, Berkeley, who specializes in analysing and extracting materials from samples like this.

Platinum scaffolding

Having welded a tungsten needle to the Neapolitan sample in order to lift it, the pair quickly realised that the grain was very delicate.

“When we tried to lift the sample it began to deform,” Gainsforth says. “Scott and I practically jumped out of our chairs and brainstormed what to do. After some discussion, we decided that we should add straps to give it enough mechanical rigidity to survive the lift.”

Microscopic particle of asteroid Bennu

By straps, Gainsforth is referring to micro-scale platinum scaffolding applied to the grain to reinforce its structure while they cut it away with an ion beam. Platinum is often used as a radiation shield to protect samples from an ion beam, “but how we used it was anything but standard,” says Gainsforth. “Scott and I made an on-the-fly decision to reinforce the samples based on how they were reacting to our machinations.”

With the sample extracted and reinforced, they used the ion beam cutter to shave it down until it was a thousand times thinner than a human hair, at which point it could be studied by electron microscopy and X-ray spectrometry. “It was a joy to watch Zack ‘micro-manipulate’ [the sample],” says Sandford.

The nitrogen in the gum was found to be in nitrogen heterocycles, which are the building blocks of nucleobases in DNA and RNA. This brings us to the other new discovery, reported in Nature Geoscience, of the sugars ribose and glucose in the Bennu samples, by a team led by Yoshihiro Furukawa of Tohoku University in Japan.

The ingredients of RNA

Glucose is the primary source of energy for life, while ribose is a key component of the sugar-phosphate backbone that connects the information-carrying nucleobases in RNA molecules. Furthermore, the discovery of ribose now means that everything required to assemble RNA molecules is present in the Bennu sample.

Notable by its absence, however, was deoxyribose, which is ribose minus one oxygen atom. Deoxyribose in DNA performs the same job as ribose in RNA, and Furukawa believes that its absence supports a popular hypothesis about the origin of life on Earth called RNA world. This describes how the first life could have used RNA instead of DNA to carry genetic information, catalyse biochemical reactions and self-replicate.

Intriguingly, the presence of all RNA’s ingredients on Bennu raises the possibility that RNA could have formed in space before being brought to Earth.

“Formation of RNA from its building blocks requires a dehydration reaction, which we can expect to have occurred both in ancient Bennu and on primordial Earth,” Furukawa tells Physics World.

However, RNA would be very hard to detect because of its expected low abundance in the samples, making identifying it very difficult. So until there’s information to the contrary, “the present finding means that the ingredients of RNA were delivered from space to the Earth,” says Furukawa.

Nevertheless, these discoveries are major milestones in the quest of astrobiologists and space chemists to understand the origin of life on Earth. Thanks to Bennu and the asteroid 162173 Ryugu, from which a sample was returned by the Japanese Aerospace Exploration Agency (JAXA) mission Hayabusa2, scientists are increasingly confident that the building blocks of life on Earth came from space.

The post Components of RNA among life’s building blocks found in NASA asteroid sample appeared first on Physics World.

]]>
Research update Samples from the near-Earth asteroid Bennu found to contain molecules and compounds vital to the origin of life https://physicsworld.com/wp-content/uploads/2025/12/12-12-25-bennu-molecules-of-life.jpg newsletter1
Institute of Physics celebrates 2025 Business Award winners at parliamentary event https://physicsworld.com/a/institute-of-physics-celebrates-2025-business-award-winners-at-parliamentary-event/ Fri, 12 Dec 2025 11:00:26 +0000 https://physicsworld.com/?p=125471 Some 14 firms have won IOP business awards in 2025, bringing total tally to 102

The post Institute of Physics celebrates 2025 Business Award winners at parliamentary event appeared first on Physics World.

]]>
A total of 14 physics-based firms in sectors from quantum and energy to healthcare and aerospace have won 2025 Business Awards from the Institute of Physics (IOP), which publishes Physics World. The awards were presented at a reception in the Palace of Westminster yesterday attended by senior parliamentarians and policymakers as well as investors, funders and industry leaders.

The IOP Business Awards, which have been running since 2012, recognise the role that physics and physicists play in the economy, creating jobs and growth “by powering innovation to meet the challenges facing us today, ranging from climate change to better healthcare and food production”. More than 100 firms have now won Business Awards, with around 90% of those companies still commercially active.

The parliamentary event honouring the 2025 winners was hosted by Dave Robertson, the Labour MP for Lichfield, who spent 10 years as a physics teacher in Birmingham before working for teaching unions. There was also a speech from Baron Sharma, who studied applied physics before moving into finance and later becoming a Conservative MP, Cabinet minister and president of the COP-26 climate summit.

Seven firms were awarded 2025 IOP Business Innovation Awards, which recognize companies that have “delivered significant economic and/or societal impact through the application of physics”. They include Oxford-based Tokamak Energy, which has developed “compact, powerful, robust, quench-resilient” high-temperature superconducting magnets for commercial fusion energy and for propulsion systems, accelerators and scientific instruments.

Oxford Instruments was honoured for developing a novel analytical technique for scanning electron microscopes, enabling new capabilities and accelerating time to results by at least an order of magnitude. Ionoptika, meanwhile, was recognized for developing Q-One, which is a new generation of focused ion-beam instrumentation, providing single atom through to high-dose nanoscale advanced materials engineering for photonic and quantum technologies.

The other four winners were: electronics firm FlexEnable for their organic transistor materials; Lynkeos Technology for the development of muonography in the nuclear industry; the renewable energy company Sunamp for their thermal storage system; and the defence and security giant Thales UK for the development of a solid-state laser for laser rangefinders.

Business potential

Six other companies have won an IOP Start-up Award, which celebrates young companies “with a great business idea founded on a physics invention, with the potential for business growth and significant societal impact”. They include Astron Systems for developing “long-lifetime turbomachinery to enable multi-reuse small rocket engines and bring about fully reusable small launch vehicles”, along with MirZyme Therapeutics for “pioneering diagnostics and therapeutics to eliminate preeclampsia and transform maternal health”.

The other four winners were: Celtic Terahertz Technology for a metamaterial filter technology; Nellie Technologies for an algae-based carbon removal technology; Quantum Science for their development of short-wave infrared quantum dot technology; and Wayland Additive for the development and commercialisation of charge-neutralised electron beam metal additive manufacturing.

James McKenzie, a former vice-president for business at the IOP, who was involved in judging the awards, says that all awardees are “worthy winners”. “It’s the passion, skill and enthusiasm that always impresses me,” McKenzie told Physics World.

iFAST Diagnostics was also awarded the IOP Lee Lucas Award, which recognises early-stage companies taking innovative products into the medical and healthcare sector. The firm, which was spun out of the University of Southampton, develops blood tests that can identify effective treatments for bacterial infections in a matter of hours rather than days, and it expects to gain approval for the tests next year.

“Especially inspiring was the team behind iFAST,” adds McKenzie, “who developed a method for very rapid testing, cutting the time from 48 hours to three hours, so patients can be given the right antibiotics.”

“The award-winning businesses are all outstanding examples of what can be achieved when we build upon the strengths we have, and drive innovation off the back of our world-leading discovery science,” noted Tom Grinyer, IOP chief executive officer. “In the coming years, physics will continue to shape our lives, and we have some great strengths to build upon here in the UK, not only in specific sectors such as quantum, semiconductors and the green economy, but in our strong academic research and innovation base, our growing pipeline of spin-out and early-stage companies, our international collaborations and our growing venture capital community.”

For the full list of winners, see here.

The post Institute of Physics celebrates 2025 Business Award winners at parliamentary event appeared first on Physics World.

]]>
News Some 14 firms have won IOP business awards in 2025, bringing total tally to 102 https://physicsworld.com/wp-content/uploads/2025/12/iop-awards3-12-12-2025.jpg newsletter
Leftover gamma rays produce medically important radioisotopes https://physicsworld.com/a/leftover-gamma-rays-produce-medically-important-radioisotopes/ Fri, 12 Dec 2025 09:00:46 +0000 https://physicsworld.com/?p=125481 GeV-scale bremsstrahlung from an electron accelerator can be used to produce copper-64 and copper-67

The post Leftover gamma rays produce medically important radioisotopes appeared first on Physics World.

]]>
The “leftover” gamma radiation produced when the beam of an electron accelerator strikes its target is usually discarded. Now, however, physicists have found a new use for it: generating radioactive isotopes for diagnosing and treating cancer. The technique, which piggybacks on an already-running experiment, uses bremsstrahlung from an accelerator facility to trigger nuclear reactions in a layer of zinc foil. The products of these reactions include copper isotopes that are hard to make using conventional techniques, meaning that the approach could reduce their costs and expand access to treatments.

Radioactive nuclides are commonly used to treat cancer, and so-called theranostic pairs are especially promising. These pairs occur when one isotope of an element provides diagnostic imaging while another delivers therapeutic radiation – a combination that enables precision tumour targeting to improve treatment outcomes.

One such pair is 64Cu and 67Cu: the former emits positrons that can identify tumours in PET scans while the latter produces beta particles that can destroy cancerous cells. They also have a further clinical advantage in that copper binds to antibodies and other biomolecules, allowing the isotopes to be delivered directly into cells. Indeed, these isotopes have already been used to treat cancer in mice, and early clinical studies in humans are underway.

“Wasted” photons might be harnessed

Researchers led by Mamad Eslami of the University of York, UK, have now put forward a new way to make both isotopes. Their method exploits the fact that gamma rays generated by the intense electron beams in particle accelerator experiments interact only weakly with matter (relative to electrons or neutrons, at least). This means that many of them pass right through their primary target and into a beam dump. These “wasted” photons still carry enough energy to drive further nuclear reactions, though, and Eslami and colleagues realized that they could be harnessed to produce 64Cu and 67Cu.

Eslami and colleagues tested their idea at the Mainz Microtron, an electron accelerator at Johannes Gutenberg University Mainz in Germany. “We wanted to see whether GeV-scale bremsstrahlung, already available at the electron accelerator, could be used in a truly parasitic configuration,” Eslami says. The real test, he adds, was whether they could produce 67Cu alongside the primary experiment, which was using the same electron beam and photon field to study hadron physics, without disturbing it or degrading the beam conditions.

The answer turned out to be “yes”. What’s more, the researchers found that their approach could produce enough 67Cu for medical applications in about five days – roughly equal to the time required for a nuclear reactor to produce the equivalent amount of another important medical radionuclide, lutetium-177.
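To get a feel for why around five days is a natural production timescale, consider the standard activation relation A(t) = R(1 − e^(−λt)), where R is a constant production rate and λ is the decay constant of 67Cu (half-life roughly 61.8 h). The short Python sketch below is purely illustrative – the production rate is left as an arbitrary constant and none of the numbers come from the study itself.

```python
import numpy as np

# Activity of 67Cu during irradiation at a constant production rate R:
# A(t) = R * (1 - exp(-lambda * t)), with lambda = ln(2) / half-life
HALF_LIFE_DAYS = 2.58  # 67Cu half-life (~61.8 hours)
DECAY_CONST = np.log(2) / HALF_LIFE_DAYS

def activity_fraction(t_days):
    """Fraction of the saturation activity reached after t_days of irradiation."""
    return 1.0 - np.exp(-DECAY_CONST * t_days)

for t in (1, 3, 5, 10):
    print(f"{t:2d} days: {activity_fraction(t):.0%} of saturation")
# After ~5 days the yield is already ~74% of its theoretical maximum,
# so longer irradiations bring rapidly diminishing returns.
```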

Improving nuclear medicine treatments and reducing costs

“Our results indicate that, under suitable conditions, high-energy electron and photon facilities that were originally built for nuclear or particle physics experiments could also be used to produce 67Cu and other useful radionuclides,” Eslami tells Physics World. In practice, however, Eslami adds that this will only be realistic at sites with strong, well-characterized bremsstrahlung fields. High-power multi-GeV electron facilities such as the planned Electron-Ion Collider at Brookhaven National Laboratory in the US, or a high-repetition laser-plasma electron source, are two possibilities.

Even with this restriction, team member Mikhail Bashkanov is excited about the advantages. “If we could do away with the necessity of using nuclear reactors to produce medical isotopes and solely generate them with high-energy photon beams from laser-plasma accelerators, we could significantly improve nuclear medicine treatments and reduce their costs,” Bashkanov says.

The researchers, who detail their work in Physical Review C, now plan to test their method at other electron accelerators, especially those with higher beam power and GeV-scale beams, to quantify the 67Cu yields they can expect to achieve in realistic target and beam-dump configurations. In parallel, Eslami adds, they want to explore parasitic operation at emerging laser-plasma-driven electron sources that are being developed for muon tomography. They would also like to link their irradiation studies to target design, radiochemistry and timing constraints to see whether the method can deliver clinically useful activities of 67Cu and other useful isotopes in a reliable and cost-effective way.

The post Leftover gamma rays produce medically important radioisotopes appeared first on Physics World.

]]>
Research update GeV-scale bremsstrahlung from an electron accelerator can be used to produce copper-64 and copper-67 https://physicsworld.com/wp-content/uploads/2025/12/12-12-2025-leftover-gamma-rays-web-rotated.jpg newsletter1
Top 10 Breakthroughs of the Year in physics for 2025 revealed https://physicsworld.com/a/top-10-breakthroughs-of-the-year-in-physics-for-2025-revealed/ Thu, 11 Dec 2025 14:27:27 +0000 https://physicsworld.com/?p=125485 A molecular superfluid, high-resolution microscope and a protein qubit are on our list

The post Top 10 Breakthroughs of the Year in physics for 2025 revealed appeared first on Physics World.

]]>
Physics World is delighted to announce its Top 10 Breakthroughs of the Year for 2025, which includes research in astronomy, antimatter, atomic and molecular physics and more. The Top Ten is the shortlist for the Physics World Breakthrough of the Year, which will be revealed on Thursday 18 December.

Our editorial team has looked back at all the scientific discoveries we have reported on since 1 January and has picked 10 that we think are the most important. In addition to being reported in Physics World in 2025, the breakthroughs must meet the following criteria:

  • Significant advance in knowledge or understanding
  • Importance of work for scientific progress and/or development of real-world applications
  • Of general interest to Physics World readers

Here, then, are the Physics World Top 10 Breakthroughs for 2025, listed in no particular order. You can listen to Physics World editors make the case for each of our nominees in the Physics World Weekly podcast. And, come back next week to discover who has bagged the 2025 Breakthrough of the Year.

Finding the stuff of life on an asteroid

Tim McCoy and Cari Corrigan

To Tim McCoy, Sara Russell, Danny Glavin, Jason Dworkin, Yoshihiro Furukawa, Ann Nguyen, Scott Sandford, Zack Gainsforth and an international team of collaborators for identifying salt, ammonia, sugar, nitrogen- and oxygen-rich organic materials, and traces of metal-rich supernova dust, in samples returned from the near-Earth asteroid 101955 Bennu. The incredible chemical richness of this asteroid, which NASA’s OSIRIS-REx spacecraft visited in 2020, lends support to the longstanding hypothesis that asteroid impacts could have “seeded” the early Earth with the raw ingredients needed for life to form. The discoveries also enhance our understanding of how Bennu and other objects in the solar system formed out of the disc of material that coalesced around the young Sun.

The first superfluid molecule

To Takamasa Momose of the University of British Columbia, Canada, and Susumu Kuma of the RIKEN Atomic, Molecular and Optical Physics Laboratory, Japan for observing superfluidity in a molecule for the first time. Molecular hydrogen is the simplest and lightest of all molecules, and theorists predicted that it would enter a superfluid state at temperatures between 1 and 2 K. But this is well below the molecule’s freezing point of 13.8 K, so Momose, Kuma and colleagues first had to develop a way to keep the hydrogen in a liquid state. Once they did that, they then had to work out how to detect the onset of superfluidity. It took them nearly 20 years, but by confining clusters of hydrogen molecules inside helium nanodroplets, embedding a methane molecule within the clusters, and monitoring the methane’s rotation, they were finally able to do it. They now plan to study larger clusters of hydrogen, with the aim of exploring the boundary between classical and quantum behaviour in this system.

Hollow-core fibres break 40-year limit on light transmission

To researchers at the University of Southampton and Microsoft Azure Fiber in the UK, for developing a new type of optical fibre that reduces signal loss, boosts bandwidth and promises faster, greener communications. The team, led by Francesco Poletti, achieved this feat by replacing the glass core of a conventional fibre with air and using glass membranes that reflect light at certain frequencies back into the core to trap the light and keep it moving through the fibre’s hollow centre. Their results show that the hollow-core fibres exhibit 35% less attenuation than standard glass fibres – implying that fewer amplifiers would be needed in long cables – and increase transmission speeds by 45%. Microsoft has begun testing the new fibres in real systems, installing segments in its network and sending live traffic through them. These trials open the door to gradual rollout and Poletti suggests that the hollow-core fibres could one day replace existing undersea cables.
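As a back-of-the-envelope illustration of why lower attenuation means fewer amplifiers, the sketch below assumes a round 0.2 dB/km loss for standard fibre and a hypothetical 20 dB loss budget per span – neither figure comes from the Southampton–Microsoft work, only the quoted 35% reduction.

```python
# Illustrative span-length arithmetic (assumed round numbers, not measured values)
LOSS_BUDGET_DB = 20.0     # hypothetical allowable loss between amplifiers
ALPHA_GLASS = 0.20        # dB/km, a typical textbook figure for glass fibre
ALPHA_HOLLOW = ALPHA_GLASS * (1 - 0.35)   # "35% less attenuation"

span_glass = LOSS_BUDGET_DB / ALPHA_GLASS     # 100 km
span_hollow = LOSS_BUDGET_DB / ALPHA_HOLLOW   # ~154 km

print(f"Glass-core span:  {span_glass:.0f} km")
print(f"Hollow-core span: {span_hollow:.0f} km "
      f"({span_hollow / span_glass:.2f}x longer between amplifiers)")
```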

First patient treatments delivered with proton arc therapy

Trento Proton Therapy Centre researchers

To Francesco Fracchiolla and colleagues at the Trento Proton Therapy Centre in Italy for delivering the first clinical treatments using proton arc therapy (PAT). Proton therapy – a precision cancer treatment – is usually performed using pencil-beam scanning to precisely paint the dose onto the tumour. But this approach can be limited by the small number of beam directions deliverable in an acceptable treatment time. PAT overcomes this by moving to an arc trajectory with protons delivered over a large number of beam angles and the potential to optimize the number of energies used for each beam direction. Working with researchers at RaySearch Laboratories in Sweden, the team performed successful dosimetric comparisons with clinical proton therapy plans. Following a feasibility test that confirmed the viability of clinical PAT delivery, the researchers used PAT to treat nine cancer patients. Importantly, all treatments were performed using the centre’s existing proton therapy system and clinical workflow.

A protein qubit for quantum biosensing

To Peter Maurer and David Awschalom at the University of Chicago Pritzker School of Molecular Engineering and colleagues for designing a protein quantum bit (qubit) that can be produced directly inside living cells and used as a magnetic field sensor. While many of today’s quantum sensors are based on nitrogen–vacancy (NV) centres in diamond, they are large and hard to position inside living cells. Instead, the team used fluorescent proteins, which are just 3 nm in diameter and can be produced by cells at a desired location with atomic precision. These proteins possess similar optical and spin properties to those of NV centre-based qubits – namely that they have a metastable triplet state. The researchers used a near-infrared laser pulse to optically address a yellow fluorescent protein and read out its triplet spin state with up to 20% spin contrast. They then genetically modified the protein to be expressed in bacterial cells and measured signals with a contrast of up to 8%. They note that although this performance does not match that of NV quantum sensors, it could enable magnetic resonance measurements directly inside living cells, which NV centres cannot do.

First two-dimensional sheets of metal

To Guangyu Zhang, Luojun Du and colleagues at the Institute of Physics of the Chinese Academy of Sciences for producing the first 2D sheets of metal. Since the discovery of graphene – a sheet of carbon just one atom thick – in 2004, hundreds of other 2D materials have been fabricated and studied. In most of these, layers of covalently bonded atoms are separated by gaps where neighbouring layers are held together only by weak van der Waals (vdW) interactions, making it relatively easy to “shave off” single layers to make 2D sheets. Many thought that making atomically thin metals, however, would be impossible given that each atom in a metal is strongly bonded to surrounding atoms in all directions. The technique developed by Zhang and Du and colleagues involves heating powders of pure metals between two monolayer-MoS2/sapphire vdW anvils. Once the metal powders were melted into a droplet, the researchers applied a pressure of 200 MPa and continued this “vdW squeezing” until the opposite sides of the anvils cooled to room temperature and 2D sheets of metal were formed. The team produced five atomically thin 2D metals – bismuth, tin, lead, indium and gallium – with the thinnest being around 6.3 Å. The researchers say their work is just the “tip of the iceberg” and now aim to study fundamental physics with the new materials.

Quantum control of individual antiprotons

Photo of a physicist working at the BASE experiment

To CERN’s BASE collaboration for being the first to perform coherent spin spectroscopy on a single antiproton – the antimatter counterpart of the proton. Their breakthrough is the most precise measurement yet of the antiproton’s magnetic properties, and could be used to test the Standard Model of particle physics. The experiment begins with the creation of high-energy antiprotons in an accelerator. These must be cooled (slowed down) to cryogenic temperatures without being lost to annihilation. Then, a single antiproton is held in an ultracold electromagnetic trap, where microwave pulses manipulate its spin state. The resulting resonance peak was 16 times narrower than previous measurements, enabling a significant leap in precision. This level of quantum control opens the door to highly sensitive comparisons of the properties of matter (protons) and antimatter (antiprotons). Unexpected differences could point to new physics beyond the Standard Model and may also reveal why there is much more matter than antimatter in the visible universe.

A smartphone-based early warning system for earthquakes

To Richard Allen, director of the Berkeley Seismological Laboratory at the University of California, Berkeley, and Google’s Marc Stogaitis and colleagues for creating a global network of Android smartphones that acts as an earthquake early warning system. Traditional early warning systems use networks of seismic sensors that rapidly detect earthquakes in areas close to the epicentre and issue warnings across the affected region. Building such seismic networks, however, is expensive, and many earthquake-prone regions do not have them. The researchers utilized the accelerometers in millions of phones in 98 countries to create the Android Earthquake Alert (AEA) system. Testing the app between 2021 and 2024 led to the detection of an average of 312 earthquakes a month, with magnitudes ranging from 1.9 to 7.8. For earthquakes of magnitude 4.5 or higher, the system sent “TakeAction” alerts to users, issuing them on average 60 times per month – around 18 million individual alerts monthly. The system also delivered lesser “BeAware” alerts to regions expected to experience a shaking intensity of 3 or 4. The team now aims to produce maps of ground shaking, which could assist the emergency response services following an earthquake.
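The principle of picking an earthquake out of accelerometer noise can be illustrated with a classic short-term/long-term average (STA/LTA) trigger from seismology. The sketch below is a generic textbook detector with made-up parameters – it is not the algorithm the AEA system actually runs.

```python
import numpy as np

def sta_lta_trigger(accel, fs, sta_s=1.0, lta_s=30.0, threshold=4.0):
    """Classic short-term/long-term average (STA/LTA) trigger on |acceleration|.

    A textbook seismological detector, shown only to illustrate the principle;
    NOT the detection algorithm Android phones actually use.
    """
    x = np.abs(accel)
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    triggers = []
    for i in range(lta_n, len(x) - sta_n):
        sta = x[i:i + sta_n].mean()          # recent shaking level
        lta = x[i - lta_n:i].mean() + 1e-12  # long-term background level
        if sta / lta > threshold:
            triggers.append(i / fs)          # trigger time in seconds
    return triggers

# Demo on synthetic data: 60 s of quiet noise followed by 5 s of shaking
rng = np.random.default_rng(0)
fs = 50  # Hz, a plausible phone accelerometer sampling rate
quiet = 0.01 * rng.standard_normal(fs * 60)
shaking = 0.2 * rng.standard_normal(fs * 5)
print(sta_lta_trigger(np.concatenate([quiet, shaking]), fs)[:1])  # fires near t = 60 s
```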

A “weather map” for a gas giant exoplanet

To Lisa Nortmann at Germany’s University of Göttingen and colleagues for creating the first detailed “weather map” of an exoplanet. The forecast for exoplanet WASP-127b is brutal, with winds reaching 33,000 km/hr – much faster than winds found anywhere in the Solar System. WASP-127b is a gas giant located about 520 light-years from Earth, and the team used the CRIRES+ instrument on the European Southern Observatory’s Very Large Telescope to observe the exoplanet as it transited across its star in less than 7 h. Spectral analysis of the starlight that filtered through WASP-127b’s atmosphere revealed Doppler shifts caused by supersonic equatorial winds. By analysing the range of Doppler shifts, the team created a rough weather map of WASP-127b, even though they could not resolve light coming from specific locations on the exoplanet. Nortmann and colleagues concluded that the exoplanet’s poles are cooler than the rest of WASP-127b, where temperatures can exceed 1000 °C. Water vapour was detected in the atmosphere, raising the possibility of exotic forms of rain.
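A quick order-of-magnitude check shows why such winds are spectroscopically detectable: a 33,000 km/hr wind corresponds to a fractional Doppler shift of about 3 × 10⁻⁵. The sketch below works through the arithmetic; the 1600 nm reference wavelength is an assumed round number for the infrared band CRIRES+ covers, not a value from the study.

```python
# Order-of-magnitude check on the Doppler shift from WASP-127b's winds
C_KM_S = 299_792.458        # speed of light in km/s
wind_kms = 33_000 / 3600    # 33,000 km/hr -> ~9.2 km/s

frac_shift = wind_kms / C_KM_S   # non-relativistic Doppler: d(lambda)/lambda = v/c
shift_nm = frac_shift * 1600     # assumed reference wavelength of 1600 nm
print(f"v = {wind_kms:.1f} km/s, d(lambda)/lambda = {frac_shift:.1e}, "
      f"shift at 1600 nm = {shift_nm:.3f} nm")
# ~0.05 nm: small, but well within reach of a high-resolution spectrograph
```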

Highest-resolution images ever taken of a single atom

To the team led by Yichao Zhang at the University of Maryland and Pinshane Huang of the University of Illinois at Urbana-Champaign for capturing the highest-resolution images ever taken of individual atoms in a material. The team used an electron-microscopy technique called electron ptychography to achieve a resolution of 15 pm, which is about 10 times smaller than the size of an atom. They studied a stack of two atomically-thin layers of tungsten diselenide, which were rotated relative to each other to create a moiré superlattice. These twisted 2D materials are of great interest to physicists because their electronic properties can change dramatically with small changes in rotation angle. The extraordinary resolution of their microscope allowed them to visualize collective vibrations in the material called moiré phasons. These are similar to phonons, but had never been observed directly until now. The team’s observations align with theoretical predictions for moiré phasons. Their microscopy technique should boost our understanding of the role that moiré phasons and other lattice vibrations play in the physics of solids. This could lead to the engineering of new and useful materials.


Physics World’s coverage of the Breakthrough of the Year is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research.

The post Top 10 Breakthroughs of the Year in physics for 2025 revealed appeared first on Physics World.

]]>
News A molecular superfluid, high-resolution microscope and a protein qubit are on our list https://physicsworld.com/wp-content/uploads/2025/12/top-10-breakthroughs.jpg 1
Exploring this year’s best physics research in our Top 10 Breakthroughs of 2025 https://physicsworld.com/a/exploring-this-years-best-physics-research-in-our-top-10-breakthroughs-of-2025/ Thu, 11 Dec 2025 14:27:25 +0000 https://physicsworld.com/?p=125501 Lively chat about exoplanet weather, proton arc therapy, 2D metals and more

The post Exploring this year’s best physics research in our Top 10 Breakthroughs of 2025 appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features a lively discussion about our Top 10 Breakthroughs of 2025, which include important research in quantum sensing, planetary science, medical physics, 2D materials and more. Physics World editors explain why we have made our selections and look at the broader implications of this impressive body of research.

The top 10 serves as the shortlist for the Physics World Breakthrough of the Year award, the winner of which will be announced on 18 December.

Links to all the nominees, more about their research and the selection criteria can be found here.


Physics World’s coverage of the Breakthrough of the Year is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research.

The post Exploring this year’s best physics research in our Top 10 Breakthroughs of 2025 appeared first on Physics World.

]]>
Podcasts Lively chat about exoplanet weather, proton arc therapy, 2D metals and more https://physicsworld.com/wp-content/uploads/2025/12/top-10-breakthroughs.jpg
Astronomers observe a coronal mass ejection from a distant star https://physicsworld.com/a/astronomers-observe-a-coronal-mass-ejection-from-a-distant-star/ Thu, 11 Dec 2025 09:00:30 +0000 https://physicsworld.com/?p=125467 Burst from M-dwarf star could be powerful enough to strip the atmosphere of any planets that orbit it, with implications for the search for extraterrestrial life

The post Astronomers observe a coronal mass ejection from a distant star appeared first on Physics World.

]]>
The Sun regularly produces energetic outbursts of electromagnetic radiation called solar flares. When these flares are accompanied by flows of plasma, they are known as coronal mass ejections (CMEs). Now, astronomers at the Netherlands Institute for Radio Astronomy (ASTRON) have spotted a similar event occurring on a star other than our Sun – the first unambiguous detection of a CME outside our solar system.

Astronomers have long predicted that the radio emissions associated with CMEs from other stars should be detectable. However, Joseph Callingham, who led the ASTRON study, says that he and his colleagues needed the highly sensitive low-frequency radio telescope LOFAR – plus ESA’s XMM-Newton space observatory and “some smart software” developed by Cyril Tasse and Philippe Zarka at the Observatoire de Paris-PSL, France – to find one.

A short, intense radio signal from StKM 1-1262

Using these tools, the team detected short, intense radio signals from a star located around 40 light-years away from Earth. This star, called StKM 1-1262, is very different from our Sun. At only around half of the Sun’s mass, it is classed as an M-dwarf star. It also rotates 20 times faster and boasts a magnetic field 300 times stronger. Nevertheless, the burst it produced had the same frequency, timing and polarization properties as a solar type II burst – the plasma emission that astronomers identify with a fast CME when it comes from the Sun.
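Type II bursts are plasma emission near the local electron plasma frequency, which scales with the square root of the electron density – so as a CME climbs through the thinning corona, its radio signature drifts downward in frequency. The sketch below evaluates the standard relation f_p ≈ 8.98 kHz × √(nₑ/cm⁻³) for some assumed, illustrative densities (not values from the StKM 1-1262 observation).

```python
import numpy as np

def plasma_frequency_mhz(n_e_cm3):
    """Electron plasma frequency: f_p ~ 8.98 kHz * sqrt(n_e [cm^-3])."""
    return 8.98e-3 * np.sqrt(n_e_cm3)   # result in MHz

# Assumed, illustrative densities: lower density -> lower emission frequency,
# which is why a type II burst drifts downward as the CME moves outward.
for n_e in (1e9, 1e8, 1e7):
    print(f"n_e = {n_e:.0e} cm^-3  ->  f_p ~ {plasma_frequency_mhz(n_e):.0f} MHz")
```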

“This work opens up a new observational frontier for studying and understanding eruptions and space weather around other stars,” says Henrik Eklund, an ESA research fellow working at the European Space Research and Technology Centre (ESTEC) in Noordwijk, Netherlands, who was not involved in the study. “We’re no longer limited to extrapolating our understanding of the Sun’s CMEs to other stars.”

Implications for life on exoplanets

The high speed of this burst – around 2400 km/s – would be atypical for our own Sun, with only around 1 in every 20 solar CMEs reaching that level. However, the ASTRON team says that M-dwarfs like StKM 1-1262 could emit CMEs of this type as often as once a day.

An artist's impression of the XMM-Newton telescope, showing the telescope against a black, starry background with the Earth nearby

According to Eklund, this has implications for extraterrestrial life, as most of the known planets in the Milky Way are thought to orbit stars of this type, and such bursts could be powerful enough to strip their atmospheres. “It seems that intense space weather may be even more extreme around smaller stars – the primary hosts of potentially habitable exoplanets,” he says. “This has important implications for how these planets keep hold of their atmospheres and possibly remain habitable over time.”

Erik Kuulkers, a project scientist at XMM-Newton who was also not directly involved in the study, suggests that this atmosphere-stripping ability could modify the way we hunt for life in stellar systems akin to our Solar System. “A planet’s habitability for life as we know it is defined by its distance from its parent star – whether or not it sits within the star’s ‘habitable zone’, a region where liquid water can exist on the surface of planets with suitable atmospheres,” Kuulkers says. “What if that star was especially active, regularly producing CMEs, however? A planet regularly bombarded by these ejections might lose its atmosphere entirely, leaving behind a barren uninhabitable world, despite its orbit being ‘just right’.”

Kuulkers adds that the study’s results also contain lessons for our own Solar System. “Why is there still life on Earth despite the violent material being thrown at us?” he asks. “It is because we are safeguarded by our atmosphere.”

Seeking more data

The ASTRON team’s next step will be to look for more stars like StKM 1-1262, which Kuulkers agrees is a good idea. “The more events we can find, the more we learn about CMEs and their impact on a star’s environment,” he says. Additional observations at other wavelengths “would help”, he adds, “but we have to admit that events like the strong one reported on in this work don’t happen too often, so we also need to be lucky enough to be looking at the right star at the right time.”

For now, the ASTRON researchers, who report their work in Nature, say they have reached the limit of what they can detect with LOFAR. “The next step is to use the next generation Square Kilometre Array, which will let us find many more such stars since it is so much more sensitive,” Callingham tells Physics World.

The post Astronomers observe a coronal mass ejection from a distant star appeared first on Physics World.

]]>
Research update Burst from M-dwarf star could be powerful enough to strip the atmosphere of any planets that orbit it, with implications for the search for extraterrestrial life https://physicsworld.com/wp-content/uploads/2025/12/11-12-2025-coronal-mass-ejection-web.jpg newsletter1
Sterile neutrinos: KATRIN and MicroBooNE come up empty handed https://physicsworld.com/a/sterile-neutrinos-katrin-and-microboone-come-up-empty-handed/ Wed, 10 Dec 2025 16:49:58 +0000 https://physicsworld.com/?p=125473 Fourth flavour not seen in beta-decay and oscillation

The post Sterile neutrinos: KATRIN and MicroBooNE come up empty handed appeared first on Physics World.

]]>
Two major experiments have found no evidence for sterile neutrinos – hypothetical particles that could help explain some puzzling observations in particle physics. The KATRIN experiment searched for sterile neutrinos that could be produced during the radioactive decay of tritium, whereas the MicroBooNE experiment looked for the effect of sterile neutrinos on the transformation of muon neutrinos into electron neutrinos.

Neutrinos are low-mass subatomic particles with zero electric charge that interact with matter only via the weak nuclear force and gravity. This makes neutrinos difficult to detect, despite the fact that the particles are produced in copious numbers by the Sun, nuclear reactors and collisions in particle accelerators.

Neutrinos were first proposed in 1930 to explain the apparent missing momentum, spin and energy in the radioactive beta decay of nuclei. They were first observed in 1956, and by 1975 physicists were confident that three types (flavours) of neutrino existed – electron, muon and tau – along with their respective antiparticles. At the same time, however, it was becoming apparent that something was amiss with the Standard Model description of neutrinos because the observed neutrino flux from sources like the Sun did not tally with theoretical predictions.

Gaping holes

Then in the late 1990s experiments in Canada and Japan revealed that neutrinos of one flavour transform into other flavours as they propagate through space. This quantum phenomenon is called neutrino oscillation and requires that neutrinos have both flavour and mass. Takaaki Kajita and Art McDonald shared the 2015 Nobel Prize for Physics for this discovery – but that is not the end of the story.
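In the simplest two-flavour picture, the probability that a neutrino changes flavour oscillates with the ratio of distance travelled to energy: P = sin²(2θ) sin²(1.27 Δm²[eV²] L[km]/E[GeV]). The sketch below evaluates this textbook formula with illustrative parameters, not values from either experiment.

```python
import numpy as np

def p_oscillation(L_km, E_GeV, sin2_2theta=0.8, dm2_eV2=2.5e-3):
    """Two-flavour oscillation probability (illustrative parameters):
    P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# The probability rises and falls with distance at fixed energy - the
# oscillatory signature the experiments in Canada and Japan uncovered.
for L in (250, 500, 1000):
    print(f"L = {L:4d} km, E = 1 GeV: P = {p_oscillation(L, 1.0):.2f}")
```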

One gaping hole in our knowledge is that physicists do not know the neutrino masses – having only measured upper limits for the three flavours. Furthermore, there is some experimental evidence that the current Standard-Model description of neutrino oscillation is not quite right. This includes lower-than-expected neutrino fluxes from some beta-decaying nuclei and some anomalous oscillations in neutrino beams.

One possible explanation for these oscillation anomalies is the existence of a fourth type of neutrino. Because we have yet to detect this particle, the assumption is that it does not interact via the weak interaction – which is why these hypothetical particles are called sterile neutrinos.

Electron energy curve

Now, two very different neutrino experiments have both reported no evidence of sterile neutrinos. One is KATRIN, which is located at the Karlsruhe Institute of Technology (KIT) in Germany. It has the prime mission of making a very precise measurement of the mass of the electron antineutrino. The idea is to measure the energy spectrum of electrons emitted in the beta decay of tritium and infer an upper limit on the mass of the electron antineutrino from the shape of the curve.

If sterile neutrinos exist, then they could sometimes be emitted in place of electron antineutrinos during beta decay. This would change the electron energy spectrum – but this was not observed at KATRIN.
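The signature KATRIN looked for is a small “kink” in the beta spectrum: a sterile admixture adds a second spectral component whose endpoint sits below the main one by the sterile mass. The sketch below builds a toy version of that spectrum from the near-endpoint phase-space factor alone; the 5 keV sterile mass and 2% mixing are assumed purely for illustration.

```python
import numpy as np

E0 = 18_575.0  # tritium beta-decay endpoint energy in eV

def spectrum_tail(E, m_nu):
    """Phase-space factor of the beta spectrum near the endpoint (arbitrary
    units; Fermi function and other slowly varying terms omitted)."""
    eps = E0 - E
    out = np.zeros_like(E)
    ok = eps > m_nu
    out[ok] = eps[ok] * np.sqrt(eps[ok] ** 2 - m_nu ** 2)
    return out

# Hypothetical sterile admixture: 2% of decays emit a 5 keV sterile state,
# producing a tiny kink in the spectrum 5 keV below the endpoint.
E = np.linspace(E0 - 10_000, E0, 5)
mixing = 0.02
total = (1 - mixing) * spectrum_tail(E, m_nu=0.0) \
        + mixing * spectrum_tail(E, m_nu=5_000.0)
print(total)  # KATRIN searched for exactly this kind of distortion - and saw none
```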

“In the measurement campaigns underlying this analysis, we recorded over 36 million electrons and compared the measured spectrum with theoretical models. We found no indication of sterile neutrinos,” says Kathrin Valerius of the Institute for Astroparticle Physics at KIT and co-spokesperson of the KATRIN collaboration.

Meanwhile, physicists on the MicroBooNE experiment at Fermilab in the US have looked for evidence for sterile neutrinos in how muon neutrinos oscillate into electron neutrinos. Beams of muon neutrinos are created by firing a proton beam at a solid target. The neutrinos at Fermilab then travel several hundred metres (in part through solid ground) to MicroBooNE’s liquid-argon time projection chamber. This detects electron neutrinos with high spatial and energy resolution, allowing detailed studies of neutrino oscillations.

If sterile neutrinos exist, they would be involved in the oscillation process and would therefore affect the number of electron neutrinos detected by MicroBooNE. Neutrino beams from two different sources were used in the experiments, but no evidence for sterile neutrinos was found.

Together, these two experiments rule out sterile neutrinos as an explanation for some – but not all – previously observed oscillation anomalies. So more work is needed to fully understand neutrino physics. Indeed, current and future neutrino experiments are well placed to discover physics beyond the Standard Model, which could lead to solutions to some of the greatest mysteries of physics.

“Any time you rule out one place where physics beyond the Standard Model could be, that makes you look in other places,” says Justin Evans at the UK’s University of Manchester, who is co-spokesperson for MicroBooNE. “This is a result that is going to really spur a creative push in the neutrino physics community to come up with yet more exciting ways of looking for new physics.”

Both groups report their results in papers in Nature: KATRIN paper; MicroBooNE paper.

The post Sterile neutrinos: KATRIN and MicroBooNE come up empty handed appeared first on Physics World.

]]>
Research update Fourth flavour not seen in beta-decay and oscillation https://physicsworld.com/wp-content/uploads/2025/12/10-12-25-no-sterile-neutrinos-katrin.jpg newsletter1
Bridging borders in medical physics: guidance, challenges and opportunities https://physicsworld.com/a/bridging-borders-in-medical-physics-guidance-challenges-and-opportunities/ Wed, 10 Dec 2025 14:00:03 +0000 https://physicsworld.com/?p=125455 New book provides expert advice for those looking to participate in global health initiatives

The post Bridging borders in medical physics: guidance, challenges and opportunities appeared first on Physics World.

]]>
Book cover: Global Medical Physics: A Guide for International Collaboration

As the world population ages and the incidence of cancer and cardiac disease grows alongside, there’s an ever-increasing need for reliable and effective diagnostics and treatments. Medical physics plays a central role in both of these areas – from the development of a suite of advanced diagnostic imaging modalities to the ongoing evolution of high-precision radiotherapy techniques.

But access to medical physics resources – whether equipment and infrastructure, education and training programmes, or the medical physicists themselves – is massively imbalanced around the world. In low- and middle-income countries (LMICs), fewer than 50% of patients have access to radiotherapy, with similar shortfalls in the availability of medical imaging equipment. Lower-income countries also have the fewest medical physicists per capita.

This disparity has led to an increasing interest in global health initiatives, with professional organizations looking to provide support to medical physicists in lower income regions. At the same time, medical physicists and other healthcare professionals are seeking to collaborate internationally in clinical, educational and research settings.

Successful multicultural collaborations, however, can be hindered by cultural, language and ethical barriers, as well as issues such as poor access to the internet and the latest technology advances. And medical physicists trained in high-income contexts may not always understand the circumstances and limitations of those working within lower income environments.

Aiming to overcome these obstacles, a new book entitled Global Medical Physics: A Guide for International Collaboration provides essential guidance for those looking to participate in such initiatives. The text addresses the various complexities of partnering with colleagues in different countries and working within diverse healthcare environments, encompassing clinical and educational medical physics circles, as well as research and academic environments.

“I have been involved in providing support to medical physicists in lower income contexts for a number of years, especially through the International Atomic Energy Agency (IAEA), but also through professional organizations like the American Association of Physicists in Medicine (AAPM),” explains the book’s editor Jacob Van Dyk, emeritus professor at Western University in Canada. “It is out of these experiences that I felt it might be appropriate and helpful to provide some educational materials that address these issues. The outcome was this book, with input from those with these collaborative experiences.”

Shared experience

The book brings together contributions from 34 authors across 21 countries, including both high- and low-resource settings. The authors – selected for their expertise and experience in global health and medical physics activities – provide guidelines for success, as well as noting potential barriers and concerns, on a wide range of themes targeted at multiple levels of expertise.

This guidance includes, for example: advice on how medical physicists can contribute to educational, clinical and research-based global collaborations and the associated challenges; recommendations on building global inter-institutional collaborations, covering administrative, clinical and technical challenges and ethical issues; and a case study on the Radiation Planning Assistant project, which aims to use automated contouring and treatment planning to assist radiation oncologists in LMICs.

In another chapter, the author describes the various career paths available to medical physicists, highlighting how they can help address the disparity in healthcare resources through their careers. There’s also a chapter focusing on CERN as an example of a successful collaboration engaging a worldwide community, including a discussion of CERN’s involvement in collaborative medical physics projects.

With the rapid emergence of artificial intelligence (AI) in healthcare, the book takes a look at the role of information and communication technologies and AI within global collaborations. Elsewhere, authors highlight the need for data sharing in medical physics, describing example data sharing applications and technologies.

Other chapters consider the benefits of cross-sector collaborations with industry, sustainability within global collaborations, the development of effective mentoring programmes – including a look at challenges faced by LMICs in providing effective medical physics education and training – and equity, diversity and inclusion and ethical considerations in the context of global medical physics.

The book rounds off by summarizing the key topics discussed in the earlier chapters. This information is divided into six categories: personal factors, collaboration details, project preparation, planning and execution, and post-project considerations.

“Hopefully, the book will provide an awareness of factors to consider when involved in global international collaborations, not only from a high-income perspective but also from a resource-constrained perspective,” says Van Dyk. “It was for this reason that when I invited authors to develop chapters on specific topics, they were encouraged to invite a co-author from another part of the world, so that it would broaden the depth of experience.”

The post Bridging borders in medical physics: guidance, challenges and opportunities appeared first on Physics World.

]]>
Blog New book provides expert advice for those looking to participate in global health initiatives https://physicsworld.com/wp-content/uploads/2025/12/10-12-25-global-medical-physics-book-cover-featured.jpg newsletter
Can we compare Donald Trump’s health chief to Soviet science boss Trofim Lysenko? https://physicsworld.com/a/can-we-compare-donald-trumps-health-chief-to-soviet-science-boss-trofim-lysenko/ Wed, 10 Dec 2025 11:00:15 +0000 https://physicsworld.com/?p=125204 Robert P Crease notes parallels between US and Soviet science

The post Can we compare Donald Trump’s health chief to Soviet science boss Trofim Lysenko? appeared first on Physics World.

]]>
The US has turned Trofim Lysenko into a hero.

Born in 1898, Lysenko was a Ukrainian plant breeder, who in 1927 found he could make pea and grain plants develop at different rates by applying the right temperatures to their seeds. The Soviet news organ Pravda was enthusiastic, saying his discovery could make crops grow in winter, turn barren fields green, feed starving cattle and end famine.

Despite having trained as a horticulturist, Lysenko rejected the then-emerging science of genetics in favour of Lamarckism, according to which organisms can pass on acquired traits to offspring. This meshed well with the Soviet philosophy of “dialectical materialism”, which sees both the natural and human worlds as evolving not through mechanisms but environment.

Stalin took note of Lysenko’s activities and had him installed as head of key Soviet science agencies. Once in power, Lysenko dismissed scientists who opposed his views, cancelled their meetings, funded studies of discredited theories, and stocked committees with loyalists. Although Lysenko had lost his influence by the time Stalin died in 1953 – with even Pravda having turned against him – Soviet agricultural science had been destroyed.

A modern parallel

Lysenko’s views and actions have a resonance today when considering the activities of Robert F Kennedy Jr, who was appointed by Donald Trump as secretary of the US Department of Health and Human Services in February 2025. Of course, Trump has repeatedly sought to impose his own agenda on US science, with his destructive impact outlined in a detailed report published by the Union of Concerned Scientists in July 2025.

Last May Trump signed executive order 14303, “Restoring Gold Standard Science”, which blasts scientists for not acting “in the best interests of the public”. He has withdrawn the US from the World Health Organization (WHO), ordered that federally sponsored research fund his own priorities, redefined the hazards of global warming, and cancelled the US National Climate Assessment (NCA), which had been running since 2000.

But after Trump appointed Kennedy, the assault on science continued into US medicine, health and human services. In what might be called a philosophy of “political materialism”, Kennedy fired all 17 members of the Advisory Committee on Immunization Practices of the US Centers for Disease Control and Prevention (CDC), cancelled nearly $500m in mRNA vaccine contracts, hired a vaccine sceptic to study a supposed link between vaccines and autism despite numerous studies showing no connection, and ordered the CDC to revise its website to reflect his own views on the cause of autism.

In his 2021 book The Real Anthony Fauci: Bill Gates, Big Pharma, and the Global War on Democracy and Public Health, Kennedy promotes not germ theory but what he calls “miasma theory”, according to which diseases are prevented by nutrition and lifestyle.

Divergent stories

Of course, there are fundamental differences between the 1930s Soviet Union and the 2020s United States. Stalin murdered and imprisoned his opponents, while the US administration only defunds and fires them. Stalin and Lysenko were not voted in, while Trump came democratically to power, with elected representatives confirming Kennedy. Kennedy has also apologized for his most inflammatory remarks, though Stalin and Lysenko never did (nor does Trump for that matter).

What’s more, Stalin’s and Lysenko’s actions were more grounded in apparent scientific realities and social vision than Trump’s or Kennedy’s. Stalin substantially built up much of the Soviet science and technology infrastructure, whose dramatic successes include launching the first Earth satellite Sputnik in 1957. Though it strains credulity to praise Stalin, his vision to expand Soviet agricultural production during a famine was at least plausible and its intention could be portrayed as humanitarian. Lysenko was a scientist, Kennedy is not.

As for Lysenko, his findings seemed to carry on those of his scientific predecessors. Experimentally, he expanded the work of Russian botanist Ivan Michurin, who bred new kinds of plants able to grow in different regions. Theoretically, his work connected not only with dialectical materialism but also with that of the French naturalist Jean-Baptiste Lamarck, who claimed that acquired traits can be inherited.

Trump and Kennedy are off-the-wall by comparison. Trump has called climate change a con job and hoax and seeks to stop research that says otherwise. In 2019 he falsely stated that Hurricane Dorian was predicted to hit Alabama, then ordered the National Oceanic and Atmospheric Administration to issue a statement supporting him. Trump has said he wants the US birth rate to rise and that he will be the “fertilization president”, but later fired fertility and IVF researchers at the CDC.

As for Kennedy, he has said that COVID-19 “is targeted to attack Caucasians and Black people” and that Ashkenazi Jews and Chinese are the most immune (he disputed the remark, but it’s on video). He has also sought to retract a 2025 vaccine study from the Annals of Internal Medicine (178 1369) that directly refuted his views on autism.

The critical point

US Presidents often have pet scientific projects. Harry Truman created the National Science Foundation, Dwight D Eisenhower set up NASA, John F Kennedy started the Apollo programme, while Richard Nixon launched the Environmental Protection Agency (EPA) and the War on Cancer. But it’s one thing to support science that might promote a political agenda and another to quash science that will not.

One ought to be able to take comfort in the fact that if you fight nature, you lose – except that the rest of us lose as well. Thanks to Lysenko’s actions, the Soviet Union lost millions of tons of grain and hundreds of herds of cattle. The promise of his work evaporated and Stalin’s dreams vanished.

Lysenko, at least, was motivated by seeming scientific promise and social vision; the current US administration has neither. Trump has damaged the most important US scientific agencies, destroyed databases and eliminated the EPA’s research arm, while Kennedy has replaced health advisory committees with party loyalists.

While Kennedy may not last his term – most Trump Cabinet officials don’t – the paths he has sent science policy on surely will. For Trump and Kennedy, the policy seems to consist only of supporting pet projects. Meanwhile, cases of measles in the US have reached their highest level in three decades, the seas continue to rise and the climate is changing. It is hard to imagine how enemy agents could damage US science more effectively.

The post Can we compare Donald Trump’s health chief to Soviet science boss Trofim Lysenko? appeared first on Physics World.

]]>
Opinion and reviews Robert P Crease notes parallels between US and Soviet science https://physicsworld.com/wp-content/uploads/2025/12/2025-11-cp-lysenko-kennedy-lysenko.jpg newsletter
Diagnosing brain cancer without a biopsy https://physicsworld.com/a/diagnosing-brain-cancer-without-a-biopsy/ Wed, 10 Dec 2025 09:19:15 +0000 https://physicsworld.com/?p=124965 A black phosphorus-based system detects micro-RNA in aqueous humor, enabling safe diagnosis of Primary Central Nervous System Lymphoma

The post Diagnosing brain cancer without a biopsy appeared first on Physics World.

]]>
Early diagnosis of primary central nervous system lymphoma (PCNSL) remains challenging because brain biopsies are invasive and imaging often lacks molecular specificity. A team led by researchers at Shenzhen University has now developed a minimally invasive fibre-optic plasmonic sensor capable of detecting PCNSL-associated microRNAs in the eye’s aqueous humor with attomolar sensitivity.

At the heart of the approach is a black phosphorus (BP)–engineered surface plasmon resonance (SPR) interface. An ultrathin BP layer is deposited on a gold-coated fibre tip. Because of the work-function difference between BP and gold, electrons transfer from BP into the Au film, creating a strongly enhanced local electric field at the metal–semiconductor interface. This BP–Au charge-transfer nano-interface amplifies refractive-index changes at the surface far more efficiently than conventional metal-only SPR chips, enabling the detection of molecular interactions that would otherwise be too subtle to resolve and pushing the limit of detection down to 21 attomolar without nucleic-acid amplification. The BP layer also provides a high-area, biocompatible surface for immobilizing RNA reporters.

To achieve sequence specificity, the researchers integrated CRISPR-Cas13a, an RNA-guided nuclease that becomes catalytically active only when its target sequence is perfectly matched to a designed CRISPR RNA (crRNA). When the target microRNA (miR-21) is present, activated Cas13a cleaves RNA reporters attached to the BP-modified fibre surface, releasing gold nanoparticles and reducing the local refractive index. The resulting optical shift is read out in real time through the SPR response of the BP-enhanced fibre probe, providing single-nucleotide-resolved detection directly on the plasmonic interface.

With this combined strategy, the sensor achieved a limit of detection of 21 attomolar in buffer and successfully distinguished single-base-mismatched microRNAs. In tests on aqueous-humor samples from patients with PCNSL, the CRISPR-BP-FOSPR assay produced results that closely matched clinical qPCR data, despite operating without any amplification steps.
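A 21-attomolar limit of detection is easier to appreciate when converted into molecule counts, as in this short worked example:

```python
# What does a 21-attomolar detection limit mean in molecules?
AVOGADRO = 6.022e23
lod_molar = 21e-18          # 21 aM = 21e-18 mol/L

molecules_per_litre = lod_molar * AVOGADRO
molecules_per_uL = molecules_per_litre * 1e-6
print(f"{molecules_per_litre:.2e} molecules/L "
      f"(~{molecules_per_uL:.0f} molecules per microlitre)")
# ~1.3e7 per litre, i.e. only ~13 copies of the microRNA in each microlitre
# of aqueous humor - all detected without any amplification step
```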

Because aqueous-humor aspiration is a minimally invasive ophthalmic procedure, this BP-driven plasmonic platform may offer a practical route for early PCNSL screening, longitudinal monitoring, and potentially the diagnosis of other neurological diseases reflected in eye-fluid biomarkers. More broadly, the work showcases how black-phosphorus-based charge-transfer interfaces can be used to engineer next-generation, fibre-integrated biosensors that combine extreme sensitivity with molecular precision.

Do you want to learn more about this topic?

Theoretical and computational tools to model multistable gene regulatory networks by Federico Bocci, Dongya Jia, Qing Nie, Mohit Kumar Jolly and José Onuchic (2023)

The post Diagnosing brain cancer without a biopsy appeared first on Physics World.

]]>
Research highlight A black phosphorus-based system detects micro-RNA in aqueous humor, enabling safe diagnosis of Primary Central Nervous System Lymphoma https://physicsworld.com/wp-content/uploads/2025/11/rna-chain-1145414300-istock-christoph-burgstedt.jpg
5f electrons and the mystery of δ-plutonium https://physicsworld.com/a/5f-electrons-and-the-mystery-of-%ce%b4-plutonium/ Wed, 10 Dec 2025 09:18:58 +0000 https://physicsworld.com/?p=124967 Scientists uncover the role of magnetic fluctuations in the counterintuitive behaviour of this rare plutonium phase

The post 5f electrons and the mystery of δ-plutonium appeared first on Physics World.

]]>
Plutonium is considered a fascinating element. It was first chemically isolated in 1941 at the University of California, but its discovery was hidden until after the Second World War. There are six distinct allotropic phases of plutonium with very different properties. At ambient pressure, continuously increasing the temperature converts the room-temperature, simple monoclinic alpha (α) phase through five phase transitions, the final one occurring at approximately 450 °C.

The delta (δ) phase is perhaps the most interesting allotrope of plutonium. δ-plutonium is technologically important and has a very simple crystal structure, but its electronic structure has been debated for decades. Researchers have attempted to understand its anomalous behaviour and how the properties of δ-plutonium are connected to the 5f electrons.

The 5f electrons are found in the actinide group of elements, which includes plutonium. Their behaviour is counterintuitive. They are sensitive to temperature, pressure and composition, and behave both in a localised manner, staying close to the nucleus, and in a delocalised (itinerant) manner, more spread out and contributing to bonding. Both these states can support magnetism depending on the actinide element. The 5f electrons contribute to δ-phase stability, anomalies in the material’s volume and bulk modulus, and to a negative thermal expansion where the δ-phase reduces in size when heated.

Research group from Lawrence Livermore National Laboratory

In this work, the researchers present a comprehensive model to predict the thermodynamic behaviour of δ-plutonium, which has a face-centred cubic structure. They use density functional theory, a computational technique that explores the overall electron density of the system, and incorporate relativistic effects to capture the behaviour of fast-moving electrons and complex magnetic interactions. The model includes a parameter-free orbital polarization mechanism to account for orbital-orbital interactions, and incorporates anharmonic lattice vibrations and magnetic fluctuations, both transverse and longitudinal modes, driven by temperature-induced excitations. Importantly, it is shown that negative thermal expansion results from magnetic fluctuations.

This is the first model to integrate electronic effects, magnetic fluctuations, and lattice vibrations into a cohesive framework that aligns with experimental observations and semi-empirical models such as CALPHAD. It also accounts for fluctuating states beyond the ground state and explains how gallium composition influences thermal expansion. Additionally, the model captures the positive thermal expansion behaviour of the high-temperature epsilon phase, offering new insight into plutonium’s complex thermodynamics.

Read the full article

First principles free energy model with dynamic magnetism for δ-plutonium

Per Söderlind et al 2025 Rep. Prog. Phys. 88 078001

Do you want to learn more about this topic?

Pu 5f population: the case for n = 5.0 J G Tobin and M F Beaux II (2025)

The post 5f electrons and the mystery of δ-plutonium appeared first on Physics World.

]]>
Research highlight Scientists uncover the role of magnetic fluctuations in the counterintuitive behaviour of this rare plutonium phase https://physicsworld.com/wp-content/uploads/2025/11/atom-129757751-shutterstock-roman-sigaev.jpg
Scientists explain why ‘seeding’ clouds with silver iodide is so efficient https://physicsworld.com/a/scientists-explain-why-seeding-clouds-with-silver-iodide-is-so-efficient/ Wed, 10 Dec 2025 08:58:07 +0000 https://physicsworld.com/?p=125449 New characterization of the material's surface reveals how an atom-level rearrangement aids the formation of ice crystals and promotes precipitation

The post Scientists explain why ‘seeding’ clouds with silver iodide is so efficient appeared first on Physics World.

]]>
Silver iodide crystals have long been used to “seed” clouds and trigger precipitation, but scientists have never been entirely sure why the material works so well for that purpose. Researchers at TU Wien in Austria are now a step closer to solving the mystery thanks to a new study that characterized the material’s surfaces in atomic-scale detail.

“Silver iodide has been used in atmospheric weather modification programs around the world for several decades,” explains Jan Balajka from TU Wien’s Institute of Applied Physics, who led this research. “In fact, it was chosen for this purpose as far back as the 1940s because of its atomic crystal structure, which is nearly identical to that of ice – it has the same hexagonal symmetry and very similar distances between atoms in its lattice structure.”
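
How close is that match? A quick back-of-envelope comparison, using approximate textbook lattice constants rather than values from the study, puts the mismatch at only a per cent or two:

```python
# Rough lattice comparison of hexagonal (beta) AgI and ice Ih.
# Lattice constants are approximate textbook values in angstroms.
a_agi, c_agi = 4.59, 7.51   # beta-AgI (wurtzite structure)
a_ice, c_ice = 4.52, 7.36   # ice Ih

print(f"a-axis mismatch: {(a_agi - a_ice) / a_ice * 100:.1f}%")  # ~1.5%
print(f"c-axis mismatch: {(c_agi - c_ice) / c_ice * 100:.1f}%")  # ~2.0%
```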

The basic idea, Balajka continues, originated with the 20th-century American atmospheric scientist Bernard Vonnegut, who suggested in 1947 that introducing small silver iodide (AgI) crystals into a cloud could provide nuclei for ice to grow on. But while Vonnegut’s proposal worked (and helped to inspire his brother Kurt’s novel Cat’s Cradle), this simple picture is not entirely accurate. The stumbling block is that nucleation occurs at the surface of a crystal, not inside it, and the atomic structure of an AgI surface differs significantly from its interior.

A task that surface science has solved

To investigate further, Balajka and colleagues used high-resolution atomic force microscopy (AFM) and advanced computer simulations to study the atomic structure of 2‒3 nm diameter AgI crystals broken into two pieces. The team’s measurements revealed that the surfaces of both freshly cleaved pieces differed structurally from the interior of the crystal.

More specifically, team member Johanna Hütner, who performed the experiments, explains that when an AgI crystal is cleaved, the silver atoms end up on one side while the iodine atoms appear on the other. This has implications for ice growth, because while the silver side maintains a hexagonal arrangement that provides an ideal template for the growth of ice layers, the iodine side reconstructs into a rectangular pattern that no longer lattice-matches the hexagonal symmetry of ice crystals. The iodine side is therefore incompatible with the epitaxial growth of hexagonal ice.

“Our work solves this decades-long controversy of the surface vs bulk structure of AgI, and shows that structural compatibility does matter,” Balajka says.

Difficult experiments

According to Balajka, the team’s experiments were far from easy. Many experimental methods for studying the structure and properties of material surfaces are based on interactions with charged particles such as electrons or ions, but AgI is an electrical insulator, which “excludes most of the tools available,” he explains. Using AFM enabled them to overcome this problem, he adds, because this technique detects interatomic forces between a sharp tip and the surface and does not require a conductive sample.

Another problem is that AgI is photosensitive and decomposes when exposed to visible light. While this property is useful in other contexts – AgI was a common ingredient in early photographic plates – it created complications for the TU Wien team. “Conventional AFM setups make use of optical laser detection to map the topography of a sample,” Balajka notes.

To avoid destroying their sample while studying it, the researchers therefore had to use a non-contact AFM based on a piezoelectric sensor that detects electrical signals and does not require optical readout. They also adapted their setup to operate in near-darkness, using only red light while manipulating the AgI to ensure that stray light did not degrade the samples.

The computational modelling part of the work introduced yet another hurdle to overcome. “Both Ag and I are atoms with a high number of electrons in their electron shells and are thus highly polarizable,” Balajka explains. “The interaction between such atoms cannot be accurately described by standard computational modelling methods such as density functional theory (DFT), so we had to employ highly accurate random-phase approximation (RPA) calculations to obtain reliable results.”

Highly controlled conditions

The researchers acknowledge that their study, which is detailed in Science Advances, was conducted under highly controlled conditions – ultrahigh vacuum, low temperature and a dark environment – that are very different from those that prevail inside real clouds. “The next logical step for us is therefore to confirm whether our findings hold under more representative conditions,” Balajka says. “We would like to find out whether the structure of AgI surfaces is the same in air and water, and if not, why.”

The researchers would also like to better understand the atomic arrangement of the rectangular reconstruction of the iodine surface. “This would complete the picture for the use of AgI in ice nucleation, as well as our understanding of AgI as a material overall,” Balajka says.

The post Scientists explain why ‘seeding’ clouds with silver iodide is so efficient appeared first on Physics World.

]]>
Research update New characterization of the material's surface reveals how an atom-level rearrangement aids the formation of ice crystals and promotes precipitation https://physicsworld.com/wp-content/uploads/2025/12/silver-iodide.jpg newsletter1
Slow spectroscopy sheds light on photodegradation https://physicsworld.com/a/slow-spectroscopy-sheds-light-on-photodegradation/ Tue, 09 Dec 2025 17:22:00 +0000 https://physicsworld.com/?p=125428 Technique reveals how organic materials accumulate charge

The post Slow spectroscopy sheds light on photodegradation appeared first on Physics World.

]]>
Using a novel spectroscopy technique, physicists in Japan have revealed how organic materials accumulate electrical charge through long-term illumination by sunlight – leading to material degradation. Ryota Kabe and colleagues at the Okinawa Institute of Science and Technology have shown how charge separation occurs gradually via a rare multi-photon ionization process, offering new insights into how plastics and organic semiconductors degrade in sunlight.

In a typical organic solar cell, an electron-donating material is interfaced with an electron acceptor. When the donor absorbs a photon, one of its electrons may jump across the interface, creating a bound electron-hole pair that may eventually dissociate into two free charges from which useful electrical work can be extracted.

Although such an interface vastly boosts the efficiency of this process, it is not necessary for charge separation to occur when an electron donor is illuminated. “Even single-component materials can generate tiny amounts of charge via multiphoton ionization,” Kabe explains. “However, experimental evidence has been scarce due to the extremely low probability of this process.”

To trigger charge separation in this way, an electron needs to absorb one or more additional photons while in its excited state. Since the vast majority of electrons fall back into their ground states before this can happen, the spectroscopic signature of this charge separation is very weak. This makes it incredibly difficult to detect using conventional spectroscopy techniques, which can generally only make observations over timescales of up to a few milliseconds.

The opposite approach

“While weak multiphoton pathways are easily buried under much stronger excited-state signals, we took the opposite approach in our work,” Kabe describes. “We excited samples for long durations and searched for traces of accumulated charges in the slow emission decay.”

Key to this approach was an electron donor called NPD. This organic material has a relatively long-lived triplet state, in which an excited electron is prevented from quickly returning to its ground state. As a result, these molecules emit phosphorescence over relatively long timescales.
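
A rough estimate shows why that long lifetime matters. The chance of an excited molecule absorbing a second photon scales with the product of its photon-absorption rate and its excited-state lifetime, so stretching the lifetime from nanoseconds to milliseconds boosts the odds by around a factor of a million. A sketch with invented but plausible numbers:

```python
# Illustrative estimate only - the rate below is an assumed value,
# not a measured one from the study.
absorption_rate = 1e2   # photons absorbed per second per excited molecule

tau_singlet = 1e-9      # typical singlet lifetime, ~nanoseconds
tau_triplet = 1e-3      # long triplet lifetime, ~milliseconds

# Probability of catching a second photon before decay (valid while
# absorption_rate * tau << 1)
p_singlet = absorption_rate * tau_singlet
p_triplet = absorption_rate * tau_triplet

print(f"P(second photon | singlet): {p_singlet:.0e}")   # ~1e-07
print(f"P(second photon | triplet): {p_triplet:.0e}")   # ~1e-01
print(f"enhancement: {p_triplet / p_singlet:.0e}x")     # ~1e+06
```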

In addition, Kabe’s team dispersed their NPD samples into different host materials with carefully selected energy levels. In one medium, the energies of both the highest-occupied and lowest-unoccupied molecular orbitals lay below NPD’s corresponding levels, so that the host material acted as an electron acceptor. As a result, charge transfer occurred in the same way as it would across a typical donor-acceptor interface.

Yet in another medium, the host’s lowest-unoccupied orbital lay above NPD’s – blocking charge transfer, and allowing triplet states to accumulate instead. In this case, the only way for charge separation to occur was through multi-photon ionization.

Slow emission decay analysis

Since NPD’s long triplet lifetime allowed its electrons to be excited gradually over an extended period of illumination, its weak charge accumulation became detectable through slow emission decay analysis. In contrast, more conventional methods involve multiple, ultra-fast laser pulses, severely restricting the timescale over which measurements can be made. Altogether, this approach enabled the team to clearly distinguish between the two charge generation pathways.

“Using this method, we confirmed that charge generation occurred via resonance-enhanced multiphoton ionization mediated by long-lived triplet states, even in single-component organic materials,” Kabe describes.

This result offers insights into how plastics and organic semiconductors are degraded by sunlight over years or decades. The conventional explanation is that sunlight generates free radicals. These are molecules that lose an electron through ionization, leaving behind an unpaired electron which readily reacts with other molecules in the surrounding environment. Since photodegradation unfolds over such a long timescale, researchers could not observe this charge generation in single-component organic materials – until now.

“The method will be useful for analysing charge behaviour in organic semiconductor devices and for understanding long-term processes such as photodegradation that occur gradually under continuous light exposure,” Kabe says.

The research is described in Science Advances.

The post Slow spectroscopy sheds light on photodegradation appeared first on Physics World.

]]>
Research update Technique reveals how organic materials accumulate charge https://physicsworld.com/wp-content/uploads/2025/12/9-12-25-slow-spectroscopy-cropped.jpg
Fermilab opens new building dedicated to Tevatron pioneer Helen Edwards https://physicsworld.com/a/fermilab-opens-new-building-dedicated-to-tevatron-pioneer-helen-edwards/ Tue, 09 Dec 2025 14:59:37 +0000 https://physicsworld.com/?p=125431 The Helen Edwards Engineering Research Center is designed to act as a collaborative space for scientists and engineers

The post Fermilab opens new building dedicated to Tevatron pioneer Helen Edwards appeared first on Physics World.

]]>
Fermilab has officially opened a new building named after the particle physicist Helen Edwards. Officials from the lab and the US Department of Energy (DOE) opened the Helen Edwards Engineering Research Center at a ceremony held on 5 December. It is the lab’s largest purpose-built laboratory and office space since the iconic Wilson Hall, which was completed in 1974.

Construction of the Helen Edwards Engineering Research Center began in 2019 and was completed three years later. The centre is a 7500 m² multi-storey lab and office building that is adjacent and connected to Wilson Hall.

The new centre is designed as a collaborative lab where engineers, scientists and technicians design, build and test technologies across several areas of research such as neutrino science, particle detectors, quantum science and electronics.

The centre also features cleanrooms, vibration-sensitive labs and cryogenic facilities in which the components of the near detector for the Deep Underground Neutrino Experiment will be assembled and tested.

A pioneering spirit

With a PhD in experimental particle physics from Cornell University, Edwards was heavily involved with commissioning the university’s 10 GeV electron synchrotron. In 1970 Fermilab’s director Robert Wilson appointed Edwards as associate head of the lab’s booster section and she later became head of the accelerator division.

At Fermilab, Edwards’ primary responsibility was the design, construction, commissioning and operation of the Tevatron, which led to the discoveries of the top quark in 1995 and the tau neutrino in 2000.

Edwards retired in the early 1990s but continued to work as a guest scientist at Fermilab, and she officially switched the Tevatron off during a ceremony held on 30 September 2011. Edwards died in 2016.

Darío Gil, the undersecretary for science at the DOE, says that Edwards’ scientific work “is a symbol of the pioneering spirit of US research”.

“Her contributions to the Tevatron and the lab helped the US become a world leader in the study of elementary particles,” notes Gil. “We honour her legacy by naming this research centre after her as Fermilab continues shaping the next generation of research using [artificial intelligence], [machine learning] and quantum physics.”

The post Fermilab opens new building dedicated to Tevatron pioneer Helen Edwards appeared first on Physics World.

]]>
News The Helen Edwards Engineering Research Center is designed to act as a collaborative space for scientists and engineers https://physicsworld.com/wp-content/uploads/2025/12/helen-edwards-engineering-research-center-09-12-2025.jpg newsletter
Memristors could measure a single quantum of resistance https://physicsworld.com/a/memristors-could-measure-a-single-quantum-of-resistance/ Tue, 09 Dec 2025 09:52:54 +0000 https://physicsworld.com/?p=125415 Devices could eliminate the strong magnetic fields currently required to define the standard unit of resistance

The post Memristors could measure a single quantum of resistance appeared first on Physics World.

]]>
A proposed new way of defining the standard unit of electrical resistance would do away with the need for strong magnetic fields when measuring it. The new technique is based on memristors, which are programmable resistors originally developed as building blocks for novel computing architectures, and its developers say it would considerably simplify the experimental apparatus required to measure a single quantum of resistance for some applications.

Electrical resistance is a physical quantity that represents how much a material opposes the flow of electrical current. It is measured in ohms (Ω), and since 2019, when the base units of the International System of Units (SI) were most recently revised, the ohm has been defined in terms of the von Klitzing constant h/e², where h and e are the Planck constant and the charge on an electron, respectively.

To measure this resistance with high precision, scientists use the fact that the von Klitzing constant is related to the quantized change in the Hall resistance of a two-dimensional electron system (such as the one that forms in a semiconductor heterostructure) in the presence of a strong magnetic field. This quantized change in resistance is known as the quantum Hall effect (QHE), and in a material like GaAs or AlGaAs it shows up at fields of around 10 tesla. Generating such high fields typically requires a superconducting electromagnet, however.
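
The numbers involved are easy to check against standard physical constants. A quick sketch using SciPy’s built-in CODATA values (our illustration, not part of the study):

```python
from scipy.constants import h, e

R_K = h / e**2   # von Klitzing constant, in ohms
print(f"R_K = {R_K:.3f} ohm")   # ~25812.807 ohm

# Quantum Hall plateaus sit at R = R_K / nu for integer filling factor nu
for nu in (1, 2, 4):
    print(f"nu = {nu}: R = {R_K / nu:.1f} ohm")
```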

A completely different approach

Researchers connected to a European project called MEMQuD are now advocating a completely different approach. Their idea is based on memristors, which are programmable resistors that “remember” their previous resistance state even after they have been switched off. This previous resistance state can be changed by applying a voltage or current.

In the new work, a team led by Gianluca Milano of Italy’s Istituto Nazionale di Ricerca Metrologica (INRiM); Vitor Cabral of the Instituto Português da Qualidade; and Ilia Valov of the Institute of Electrochemistry and Energy Systems at the Bulgarian Academy of Sciences studied a device based on memristive nanoionics cells made from conducting filaments of silver. When an electrical field is applied to these filaments, their conductance changes in distinct, quantized steps.

The MEMQuD team reports that the quantum conductance levels achieved in this set-up are precise enough to be exploited as intrinsic standard values. Indeed, a large inter-laboratory comparison confirmed that the values deviated by just -3.8% and 0.6% from the agreed SI values for the fundamental quantum of conductance, G0, and 2G0, respectively. The researchers attribute this precision to tight, atomic-level control over the morphology of the nanochannels responsible for quantum conductance effects, which they achieved by electrochemically polishing the silver filaments into the desired configuration.
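
In the same spirit, the conductance quantum and the reported deviations translate into absolute values as follows (the percentages come from the study; the arithmetic is ours):

```python
from scipy.constants import h, e

G0 = 2 * e**2 / h            # conductance quantum, in siemens
print(f"G0 = {G0:.6e} S")    # ~7.748e-05 S

# Reported deviations: -3.8% at G0 and 0.6% at 2*G0
for target, dev_pct in ((G0, -3.8), (2 * G0, 0.6)):
    measured = target * (1 + dev_pct / 100)
    print(f"target {target:.4e} S -> measured ~{measured:.4e} S")
```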

A national metrology institute condensed into a microchip

The researchers say their results are building towards a concept known as an “NMI-in-a-chip” – that is, condensing the services of a national metrology institute into a microchip. “This could lead to measuring devices that have their resistance references built-in directly into the chip,” says Milano, “so doing away with complex measurements in laboratories and allowing for devices with zero-chain traceability – that is, those that do not require calibration since they have embedded intrinsic standards.”

Yuma Okazaki of Japan’s National Institute of Advanced Industrial Science and Technology (AIST), who was not involved in this work, says that the new technique could indeed allow end users to directly access a quantum resistance standard.

“Notably, this method can be demonstrated at room temperature and under ambient conditions, in contrast to conventional methods that require cryogenic and vacuum equipment, which is expensive and requires a lot of electrical power,” Okazaki says. “If such a user-friendly quantum standard becomes more stable and its uncertainty is improved, it could lead to a new calibration scheme for ensuring the accuracy of electronics used in extreme environments, such as space or the deep ocean, where traditional quantum standards that rely on cryogenic and vacuum conditions cannot be readily used.”

The MEMQuD researchers, who report their work in Nature Nanotechnology, now plan to explore ways to further decrease deviations from the agreed SI values for G0 and 2G0. These include better material engineering, an improved measurement protocol, and strategies for topologically protecting the memristor’s resistance.

The post Memristors could measure a single quantum of resistance appeared first on Physics World.

]]>
Research update Devices could eliminate the strong magnetic fields currently required to define the standard unit of resistance https://physicsworld.com/wp-content/uploads/2025/12/nmi-on-chip.jpg newsletter1
Oak Ridge Quantum Science Center prioritizes joined-up thinking, multidisciplinary impacts https://physicsworld.com/a/oak-ridge-quantum-science-center-prioritizes-joined-up-thinking-multidisciplinary-impacts/ Mon, 08 Dec 2025 14:00:07 +0000 https://physicsworld.com/?p=125371 QSC to accelerate convergence of quantum computing and exascale high-performance computing

The post Oak Ridge Quantum Science Center prioritizes joined-up thinking, multidisciplinary impacts appeared first on Physics World.

]]>
Travis Humble is a research leader who’s thinking big, dreaming bold, yet laser-focused on operational delivery. The long-game? To translate advances in fundamental quantum science into a portfolio of enabling technologies that will fast-track the practical deployment of quantum computers for at-scale scientific, industrial and commercial applications.

As director of the Quantum Science Center (QSC) at Oak Ridge National Laboratory (ORNL) in East Tennessee, Humble and his management team are well placed to transform that research vision into scientific, economic and societal upside. Funded to the tune of $115 million through its initial five-year programme (2020–25), QSC is one of five dedicated National Quantum Information Science Research Centers (NQISRC) within the US Department of Energy (DOE) National Laboratory system.

Validation came in spades last month when, despite the current turbulence around US science funding, QSC was given follow-on DOE backing of $125 million over five years (2025–30) to create “a new scientific ecosystem” for fault-tolerant, quantum-accelerated high-performance computing (QHPC). In short, QSC will target the critical research needed to amplify the impact of quantum computing through its convergence with leadership-class exascale HPC systems.

“Our priority in Phase II QSC is the creation of a common software ecosystem to host the compilers, programming libraries, simulators and debuggers needed to develop hybrid-aware algorithms and applications for QHPC,” explains Humble. Equally important, QSC researchers will develop and integrate new techniques in quantum error correction, fault-tolerant computing protocols and hybrid algorithms that combine leading-edge computing capabilities for pre- and post-processing of quantum programs. “These advances will optimize quantum circuit constructions and accelerate the most challenging computational tasks within scientific simulations,” Humble adds.

Classical computing, quantum opportunity

At the heart of the QSC programme sits ORNL’s leading-edge research infrastructure for classical HPC, a capability that includes Frontier, the first supercomputer to break the exascale barrier and still one of the world’s most powerful. On that foundation, QSC is committed to building QHPC architectures that take advantage of both quantum computers and exascale supercomputing to tackle all manner of scientific and industrial problems beyond the reach of today’s HPC systems alone.

“Hybrid classical-quantum computing systems are the future,” says Humble. “With quantum computers connecting both physically and logically to existing HPC systems, we can forge a scalable path to integrate quantum technologies into our scientific infrastructure.”

Frontier, a high-performance supercomputer

Industry partnerships are especially important in this regard. Working in collaboration with the likes of IonQ, Infleqtion and QuEra, QSC scientists are translating a range of computationally intensive scientific problems – quantum simulations of exotic matter, for example – onto the vendors’ quantum computing platforms, generating excellent results out the other side.

“With our broad representation of industry partners,” notes Humble, “we will establish a common framework by which scientific end-users, software developers and hardware architects can collaboratively advance these tightly coupled, scalable hybrid computing systems.”

It’s a co-development model that industry values greatly. “Reciprocity is key,” Humble adds. “At QSC, we get to validate that QHPC can address real-world research problems, while our industry partners gather user feedback to inform the ongoing design and optimization of their quantum hardware and software.”

Quantum impact

Innovation being what it is, quantum computing systems will continue to trend on an accelerating trajectory, with more qubits, enhanced fidelity, error correction and fault-tolerance key reference points on the development roadmap. Phase II QSC, for its part, will integrate five parallel research thrusts to advance the viability and uptake of QHPC technologies.

The collaborative software effort, led by ORNL’s Vicente Leyton, will develop openQSE, an adaptive, end-to-end software ecosystem for QHPC systems and applications. Yigit Subasi from Los Alamos National Laboratory (LANL) will lead the hybrid algorithms thrust, which will design algorithms that combine conventional and quantum methods to solve challenging problems in the simulation of model materials.

Meanwhile, the QHPC architectures thrust, under the guidance of ORNL’s Chris Zimmer, will co-design hybrid computing systems that integrate quantum computers with leading-edge HPC systems. The scientific applications thrust, led by LANL’s Andrew Sornberger, will develop and validate applications of quantum simulation to be implemented on prototype QHPC systems. Finally, ORNL’s Michael McGuire will lead the thrust to establish experimental baselines for quantum materials that ultimately validate QHPC simulations against real-world measurements.
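
For a concrete, heavily simplified picture of what a “hybrid algorithm” looks like, the sketch below runs the classic variational loop: a classical optimizer repeatedly adjusts the parameter of a quantum circuit and asks for the measured energy. Here a NumPy state vector stands in for the quantum processor; this is an illustration, not QSC’s openQSE software:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = 0.5 * Z + 0.3 * X   # toy one-qubit "materials" Hamiltonian

def energy(theta):
    """Quantum step: prepare |psi(theta)> = Ry(theta)|0>, measure <H>."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

# Classical step: crude gradient descent over the circuit parameter
theta, lr = 0.0, 0.4
for _ in range(100):
    grad = (energy(theta + 1e-4) - energy(theta - 1e-4)) / 2e-4
    theta -= lr * grad

print(f"variational energy: {energy(theta):.4f}")
print(f"exact ground state: {np.linalg.eigvalsh(H)[0]:.4f}")
```

In a real QHPC workflow the energy() call would be dispatched to quantum hardware, while the surrounding optimization, pre- and post-processing run on the HPC side – exactly the coupling the architecture and software thrusts aim to make routine.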

Longer term, ORNL is well placed to scale up the QHPC model. After all, the laboratory is credited with pioneering the hybrid supercomputing model that uses graphics processing units in addition to conventional central processing units (including the launch in 2012 of Titan, the first supercomputer of this type operating at over 10 petaFLOPS).

“The priority for all the QSC partners,” notes Humble, “is to transition from this still-speculative research phase in quantum computing, while orchestrating the inevitable convergence between quantum technology, existing HPC capabilities and evolving scientific workflows.”

Collaborate, coordinate, communicate

Much like its NQISRC counterparts (which have also been allocated further DOE funding through 2030), QSC provides the “operational umbrella” for a broad-scope collaboration of more than 300 scientists and engineers from 20 partner institutions. With its own distinct set of research priorities, that collective activity cuts across other National Laboratories (Los Alamos and Pacific Northwest), universities (among them Berkeley, Cornell and Purdue) and businesses (including IBM and IQM) to chart an ambitious R&D pathway addressing quantum-state (qubit) resilience, controllability and, ultimately, the scalability of quantum technologies.

“QSC is a multidisciplinary melting pot,” explains Humble, “and I would say, alongside all our scientific and engineering talent, it’s the pooled user facilities that we are able to exploit here at Oak Ridge and across our network of partners that gives us our ‘grand capability’ in quantum science [see box, “Unique user facilities unlock QSC opportunities”]. Certainly, when you have a common research infrastructure, orchestrated as part of a unified initiative like QSC, then you can deliver powerful science that translates into real-world impacts.”

Unique user facilities unlock QSC opportunities

Stephen Streiffer tours the LINAC Tunnel at the Spallation Neutron Source

Deconstructed, QSC’s Phase I remit (2020–25) spanned three dovetailing and cross-disciplinary research pathways: discovery and development of advanced materials for topological quantum computing (in which quantum information is stored in a stable topological state – or phase – of a physical system rather than the properties of individual particles or atoms); development of next-generation quantum sensors (to characterize topological states and support the search for dark matter); as well as quantum algorithms and simulations (for studies in fundamental physics and quantum chemistry).

Underpinning that collective effort: ORNL’s unique array of scientific user facilities. A case in point is the Spallation Neutron Source (SNS), an accelerator-based neutron-scattering facility that enables a diverse programme of pure and applied research in the physical sciences, life sciences and engineering. QSC scientists, for example, are using SNS to investigate entirely new classes of strongly correlated materials that demonstrate topological order and quantum entanglement – properties that show great promise for quantum computing and quantum metrology applications.

“The high-brightness neutrons at SNS give us access to this remarkable capability for materials characterization,” says Humble. “Using the SNS neutron beams, we can probe exotic materials, recover the neutrons that scatter off of them and, from the resultant signals, infer whether or not the materials exhibit quantum properties such as entanglement.”

While SNS may be ORNL’s “big-ticket” user facility, the laboratory is also home to another high-end resource for quantum studies: the Center for Nanophase Materials Sciences (CNMS), one of the DOE’s five national Nanoscience Research Centers, which offers QSC scientists access to specialist expertise and equipment for nanomaterials synthesis; materials and device characterization; as well as theory, modelling and simulation in nanoscale science and technology.

Thanks to these co-located capabilities, QSC scientists pioneered another intriguing line of enquiry – one that will now be taken forward elsewhere within ORNL – by harnessing so-called quantum spin liquids, in which electron spins can become entangled with each other to demonstrate correlations over very large distances (relative to the size of individual atoms).

In this way, it is possible to take materials that have been certified as quantum-entangled and use them to design new types of quantum devices with unique geometries – as well as connections to electrodes and other types of control systems – to unlock novel physics and exotic quantum behaviours. The long-term goal? Translation of quantum spin liquids into a novel qubit technology to store and process quantum information.

SNS, CNMS and Oak Ridge Leadership Computing Facility (OLCF) are DOE Office of Science user facilities.

When he’s not overseeing the technical direction of QSC, Humble is acutely attuned to the need for sustained and accessible messaging. The priority? To connect researchers across the collaboration – physicists, chemists, material scientists, quantum information scientists and engineers – as well as key external stakeholders within the DOE, government and industry.

“In my experience,” he concludes, “the ability of the QSC teams to communicate efficiently – to understand each other’s concepts and reasoning and to translate back and forth across disciplinary boundaries – remains fundamental to the success of our scientific endeavours.”

Further information

Listen to the Physics World podcast: Oak Ridge’s Quantum Science Center takes a multidisciplinary approach to developing quantum materials and technologies

Scaling the talent pipeline in quantum science

Quantum science graduate students and postdoctoral researchers present and discuss their work during a poster session

With an acknowledged shortage of skilled workers across the quantum supply chain, QSC is doing its bit to bolster the scientific and industrial workforce. Front-and-centre: the fifth annual QSC Summer School, which was held at Purdue University in April this year, hosting 130 graduate students (the largest cohort to date) through an intensive four-day training programme.

The Summer School sits as part of a long-term QSC initiative to equip ambitious individuals with the specialist domain knowledge and skills needed to thrive in a quantum sector brimming with opportunity – whether that’s in scientific research or out in industry with hardware companies, software companies or, ultimately, the end-users of quantum technologies in key verticals like pharmaceuticals, finance and healthcare.

“While PhD students and postdocs are integral to the QSC research effort, the Summer School exposes them to the fundamental ideas of quantum science elaborated by leading experts in the field,” notes Vivien Zapf, a condensed-matter physicist at Los Alamos National Laboratory who heads up QSC’s advanced characterization efforts.

“It’s all about encouraging the collective conversation,” she adds, “with lots of opportunities for questions and knowledge exchange. Overall, our emphasis is very much on training up scientists and engineers to work across the diversity of disciplines needed to translate quantum technologies out of the lab into practical applications.”

The programme isn’t for the faint-hearted, though. Student delegates kicked off this year’s proceedings with a half-day of introductory presentations on quantum materials, devices and algorithms. Next up: three and a half days of intensive lectures, panel discussions and poster sessions covering everything from entangled quantum networks to quantum simulations of superconducting qubits.

Many of the Summer School’s sessions were also made available virtually on Purdue’s Quantum Coffeehouse Live Stream on YouTube – the streamed content reaching quantum learners across the US and further afield. Lecturers were drawn from the US National Laboratories, leading universities (such as Harvard and Northwestern) and the quantum technology sector (including experts from IBM, PsiQuantum, NVIDIA and JPMorganChase).

The post Oak Ridge Quantum Science Center prioritizes joined-up thinking, multidisciplinary impacts appeared first on Physics World.

]]>
Analysis QSC to accelerate convergence of quantum computing and exascale high-performance computing https://physicsworld.com/wp-content/uploads/2025/12/ornl-qsc-travis-humble-2-web.jpg newsletter
So you want to install a wind turbine? Here’s what you need to know https://physicsworld.com/a/so-you-want-to-install-a-wind-turbine-heres-what-you-need-to-know/ Mon, 08 Dec 2025 11:00:42 +0000 https://physicsworld.com/?p=125173 Janina Moereke discovers the practicalities of installing wind turbines in a forest

The post So you want to install a wind turbine? Here’s what you need to know appeared first on Physics World.

]]>
As a physicist in industry, I spend my days developing new types of photovoltaic (PV) panels. But I’m also keen to do something for the transition to green energy outside work, which is why I recently installed two PV panels on the balcony of my flat in Munich. Fitting them was great fun – and I can now enjoy sunny days even more knowing that each panel is generating electricity.

However, the panels, which each have a peak power of 440 W, don’t cover all my electricity needs, which prompted me to take an interest in a plan to build six wind turbines in a forest near me on the outskirts of Munich. Curious about the project, I particularly wanted to find out when the turbines will start generating electricity for the grid. So when I heard that a weekend cycle tour of the site was being organized to showcase it to local residents, I grabbed my bike and joined in.

As we cycle, I discover that the project – located in Forstenrieder Park – is the joint effort of four local councils and two “citizen-energy” groups, who’ve worked together for the last five years to plan and start building the six turbines. Each tower will be 166 m high and the rotor blades will be 80 m long, with the plan being for them to start operating in 2027.

I’ve never thought of Munich as a particularly windy city, but at the height at which the blades operate, there’s always a steady, reliable flow of wind

I’ve never thought of Munich as a particularly windy city. But tour leader Dieter Maier, who’s a climate adviser to Neuried council, explains that at the height at which the blades operate, there’s always a steady, reliable flow of wind. In fact, each turbine has a rated power output of 6.5 MW and will deliver around 10 GWh of energy over the course of a year.
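
Those two figures imply a capacity factor of roughly 18% – a quick sanity check (the arithmetic is mine, not the project’s):

```python
# Rated power vs annual energy yield for one turbine
rated_mw = 6.5
annual_gwh = 10.0

max_gwh = rated_mw * 8760 / 1000   # output if it ran flat out all year
capacity_factor = annual_gwh / max_gwh
print(f"theoretical maximum: {max_gwh:.1f} GWh/year")  # ~56.9 GWh
print(f"capacity factor: {capacity_factor:.0%}")       # ~18%
```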

Practical questions

Cycling around, I’m excited to think that a single turbine could end up providing the entire electricity demand for Neuried. But installing wind turbines involves much more than just the technicalities of generating electricity. How do you connect the turbines to the grid? How do you ensure planes don’t fly into the turbines? What about wildlife conservation and biodiversity?

At one point of our tour, we cycle round a 90-degree bend in the forest, and I wonder how a huge, 80 m-long blade can possibly be transported round such a tight angle. Trees will almost certainly have to be felled to get the blade in place, which sounds questionable for a supposedly green project. Fortunately, project leaders have been working with the local forest manager and conservationists, finding ways to help improve the local biodiversity despite the loss of trees.

As a representative of BUND (one of Germany’s biggest conservation charities) explains on the tour, a natural, or “unmanaged”, forest consists of a mix of areas with a higher or lower density of trees. But Forstenrieder Park has been a managed forest for well over a century and is mostly thick with trees. Clearing trees for the turbines will therefore allow conservationists to grow more of the bushes and plants that currently struggle to find space to flourish.

Small group of bikes at the edge of a large clearing in a forest

To avoid endangering birds and bats native to this forest, meanwhile, the turbines will be turned off when the animals are most active, which coincidentally corresponds to low wind periods in Munich. Insurance costs have to be factored in too. Thankfully, it’s quite unlikely that a turbine will burn down or get ice all over its blades, which means liability insurance costs are low. But vandalism is an ever-present worry.

In fact, at the end of our bike tour, we’re taken to a local wind turbine that is already up and running about 13 km further south of Forstenrieder Park. This turbine, I’m disappointed to discover, was vandalized back in 2024, which led to it being fenced off and video surveillance cameras being installed.

But for all the difficulties, I’m excited by the prospect of the wind turbines supporting the local energy needs. I can’t wait for the day when I’m on my balcony, solar panels at my side, sipping a cup of tea made with water boiled by electricity generated by the rotor blades I can see turning round and round on the horizon.

The post So you want to install a wind turbine? Here’s what you need to know appeared first on Physics World.

]]>
Blog Janina Moereke discovers the practicalities of installing wind turbines in a forest https://physicsworld.com/wp-content/uploads/2025/12/2025-12-moereke-cycle-tour.jpg newsletter
Galactic gamma rays could point to dark matter https://physicsworld.com/a/galactic-gamma-rays-could-point-to-dark-matter/ Fri, 05 Dec 2025 14:21:05 +0000 https://physicsworld.com/?p=125391 Spectrum from the Milky Way’s halo matches WIMP annihilation

The post Galactic gamma rays could point to dark matter appeared first on Physics World.

]]>
Fermi telescope data

Gamma rays emitted from the halo of the Milky Way could be produced by hypothetical dark-matter particles. That is the conclusion of an astronomer in Japan who has analysed data from NASA’s Fermi Gamma-ray Space Telescope. The energy spectrum of the emission is what would be expected from the annihilation of particles called WIMPs. If this can be verified, it would mark the first observation of dark matter via electromagnetic radiation.

Since the 1930s astronomers have known that there is something odd about galaxies, galaxy clusters and larger structures in the universe. The problem is that there is not nearly enough visible matter in these objects to explain their dynamics and structure. A rotating galaxy, for example, should be flinging out its stars because it does not have enough self-gravitation to hold itself together.

Today, the most popular solution to this conundrum is the existence of a hypothetical substance called dark matter. Dark-matter particles would have mass and interact with each other and normal matter via the gravitational force, gluing rotating galaxies together. However, the fact that we have never observed dark matter directly means that the particles must rarely, if ever, interact via the other three forces.

Annihilating WIMPs

The weakly interacting massive particle (WIMP) is a dark-matter candidate that interacts via the weak nuclear force (or a similarly weak force). As a result of this interaction, pairs of WIMPs are expected to occasionally annihilate to create high-energy gamma rays and other particles. If this is true, dense areas of the universe such as galaxies should be sources of these gamma rays.

Now, Tomonori Totani of the University of Tokyo has analysed data from the Fermi telescope and identified an excess of gamma rays emanating from the halo of the Milky Way. What is more, Totani’s analysis suggests that the energy spectrum of the excess radiation (from about 10 to 100 GeV) is consistent with hypothetical WIMP annihilation processes.

“If this is correct, to the extent of my knowledge, it would mark the first time humanity has ‘seen’ dark matter,” says Totani. “This signifies a major development in astronomy and physics,” he adds.

While Totani is confident of his analysis, his conclusion must be verified independently. Furthermore, work will be needed to rule out conventional astrophysical sources of the excess radiation.

Catherine Heymans, the Astronomer Royal for Scotland, told Physics World: “I think it’s a really nice piece of work, and exactly what should be happening with the Fermi data”. She describes Totani’s paper as “well written and thorough”. The research is described in the Journal of Cosmology and Astroparticle Physics.

The post Galactic gamma rays could point to dark matter appeared first on Physics World.

]]>
Research update Spectrum from the Milky Way’s halo matches WIMP annihilation https://physicsworld.com/wp-content/uploads/2025/12/5-12-25-excess-gamma-rays-list.jpg
Simple feedback mechanism keeps flapping flyers stable when hovering https://physicsworld.com/a/simple-feedback-mechanism-keeps-flapping-flyers-stable-when-hovering/ Fri, 05 Dec 2025 09:00:59 +0000 https://physicsworld.com/?p=125365 Discovery could improve the performance of hovering robots and even artificial pollinators

The post Simple feedback mechanism keeps flapping flyers stable when hovering appeared first on Physics World.

]]>
Researchers in the US have shed new light on the puzzling and complex flight physics of creatures such as hummingbirds, bumblebees and dragonflies that flap their wings to hover in place. According to an interdisciplinary team at the University of Cincinnati, the stability these animals achieve can be explained by a very simple, computationally cheap and natural feedback mechanism that operates in real time. The work could aid the development of hovering robots, including those that could act as artificial pollinators for crops.

If you’ve ever watched a flapping insect or hummingbird hover in place – often while engaged in other activities such as feeding or even mating – you’ll appreciate how remarkable they are. To stay aloft and stable, these animals must constantly sense their position and motion and make corresponding adjustments to their wing flaps.

Feedback mechanism relies on two main components

Biophysicists have previously put forward many highly complex explanations for how they do this, but according to the Cincinnati team of Sameh Eisa and Ahmed Elgohary, some of this complexity is not necessary. Earlier this year, the pair developed their own mathematical and control theory based on a mechanism they call “extremum seeking for vibrational stabilization”.

Eisa describes this mechanism as “very natural” because it relies on just two main components. The first is the wing flapping motion itself, which he says is “naturally built in” for flapping creatures that use it to propel themselves. The second is a simple feedback mechanism involving sensations and measurements related to the altitude at which the creatures aim to stabilize their hovering.

The general principle, he continues, is that a system (in this case an insect or hummingbird) can steer itself towards a stable position by continuously adjusting a high-amplitude, high-frequency input control or signal (in this case, a flapping wing action). “This adjustment is simply based on the feedback of measurement (the insects’ perceptions) and stabilization (hovering) occurs when the system optimizes what it is measuring,” he says.
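
Extremum seeking is simple enough to capture in a few lines. The sketch below is a generic textbook-style loop, not the team’s model: a fast sinusoidal dither (the “flap”) rides on a slowly adapting estimate, and demodulating the measured objective against that dither steers the estimate toward the optimum:

```python
import numpy as np

def J(z):
    return -(z - 3.0)**2   # unknown objective; its peak sits at z = 3

dt, omega, a, k = 1e-3, 200.0, 0.1, 4.0
z_hat, lp = 0.0, 0.0       # parameter estimate, low-pass filter state
for n in range(200_000):
    dither = a * np.sin(omega * n * dt)
    y = J(z_hat + dither)             # "measurement" at the dithered input
    lp += dt * 5.0 * (y - lp)         # low-pass removes the slow DC part
    z_hat += dt * k * (y - lp) * dither   # demodulate and integrate

print(f"converged estimate: {z_hat:.2f} (true optimum: 3.00)")
```

The loop never computes a gradient explicitly – correlating the wiggle in the measurement with the wiggle in the input is enough, which is what makes the mechanism so computationally basic.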

As well as being relatively easy to describe, Eisa tells Physics World that this mechanism is biologically plausible and computationally basic, dramatically simplifying the physics of hovering. “It is also categorically different from all available results and explanations in the literature for how stable hovering by insects and hummingbirds can be achieved,” he adds.

Researchers at dinner

Interdisciplinary work

In the latest study, which is detailed in Physical Review E, the researchers compared their simulation results to reported biological data on a hummingbird and five flapping insects (a bumblebee, a cranefly, a dragonfly, a hawkmoth and a hoverfly). They found that their simulation fit the data very closely. They also ran an experiment on a flapping, light-sensing robot and observed that it behaved like a moth: it elevated itself to the level of the light source and then stabilized its hovering motion.

Eisa says he has always been fascinated by such optimized biological behaviours. “This is especially true for flyers, where mistakes in execution could potentially mean death,” he says. “The physics behind the way they do it is intriguing and it probably needs elegant and sophisticated mathematics to be described. However, the hovering creatures appear to be doing this very simply and I found discovering the secret of this puzzle very interesting and exciting.”

Eisa adds that this element of the work ended up being very interdisciplinary, with his own PhD in applied mathematics and Elgohary’s aerospace engineering background both coming in very useful. “We also benefited from lengthy discussions with a biologist colleague who was a reviewer of our paper,” Eisa says. “Luckily, they recognized the value of our proposed technique and ended up providing us with very valuable inputs.”

Eisa thinks the work could open up new lines of research in several areas of science and engineering. “For example, it opens up new ideas in neuroscience and animal sensory mechanisms and could almost certainly be applied to the development of airborne robotics and perhaps even artificial pollinators,” he says. “The latter might come in useful in the future given the high rate of death many species of pollinating insects are encountering today.”

The post Simple feedback mechanism keeps flapping flyers stable when hovering appeared first on Physics World.

]]>
Research update Discovery could improve the performance of hovering robots and even artificial pollinators https://physicsworld.com/wp-content/uploads/2025/12/hummer.jpg newsletter1
Building a quantum future using topological phases of matter and error correction https://physicsworld.com/a/building-a-quantum-future-using-topological-phases-of-matter-and-error-correction/ Thu, 04 Dec 2025 14:55:09 +0000 https://physicsworld.com/?p=125383 Tim Hsieh of the Perimeter Institute is our podcast guest

The post Building a quantum future using topological phases of matter and error correction appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features Tim Hsieh of Canada’s Perimeter Institute for Theoretical Physics. We explore some of today’s hottest topics in quantum science and technology – including topological phases of matter, quantum error correction and quantum simulation.

Our conversation begins with an exploration of the quirky properties of quantum matter and how these can be exploited to create quantum technologies. We look at the challenges that must be overcome to create large-scale quantum computers; and Hsieh reveals which problem he would solve first if he had access to a powerful quantum processor.

This interview was recorded earlier this autumn when I had the pleasure of visiting the Perimeter Institute and speaking to four physicists about their research. This is the third of those conversations to appear on the podcast.

The first interview in this series from the Perimeter Institute was with Javier Toledo-Marín, “Quantum computing and AI join forces for particle physics”; and the second was with Bianca Dittrich, “Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge“.

APS logo

 

This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March, 2026, in Denver, Colorado, and online.

The post Building a quantum future using topological phases of matter and error correction appeared first on Physics World.

]]>
Podcasts Tim Hsieh of the Perimeter Institute is our podcast guest https://physicsworld.com/wp-content/uploads/2025/12/4-12-25-tim-hsieh-list.jpg newsletter
Generative AI model detects blood cell abnormalities https://physicsworld.com/a/generative-ai-model-detects-blood-cell-abnormalities/ Thu, 04 Dec 2025 13:00:09 +0000 https://physicsworld.com/?p=125348 The CytoDiffusion classifier analyses the shape and structure of blood cells to detect abnormalities that may indicate blood disorders

The post Generative AI model detects blood cell abnormalities appeared first on Physics World.

]]>
Blood cell images

The shape and structure of blood cells provide vital indicators for diagnosis and management of blood disease and disorders. Recognizing subtle differences in the appearance of cells under a microscope, however, requires the skills of experts with years of training, motivating researchers to investigate whether artificial intelligence (AI) could help automate this onerous task. A UK-led research team has now developed a generative AI-based model, known as CytoDiffusion, that characterizes blood cell morphology with greater accuracy and reliability than human experts.

Conventional discriminative machine learning models can match human performance at classifying cells in blood samples into predefined classes. But discriminative models, which learn to recognize cell images based on expert labels, struggle with never-before-seen cell types and with images from differing microscopes and staining techniques.

To address these shortfalls, the team – headed up at the University of Cambridge, University College London and Queen Mary University of London – created CytoDiffusion around a diffusion-based generative AI classifier. Rather than just learning to separate cell categories, CytoDiffusion models the full range of blood cell morphologies to provide accurate classification with robust anomaly detection.

“Our approach is motivated by the desire to achieve a model with superhuman fidelity, flexibility and metacognitive awareness that can capture the distribution of all possible morphological appearances,” the researchers write.
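
The core trick of a diffusion-based classifier can be sketched compactly: noise the image, ask a conditional diffusion model to predict that noise under each candidate class label, and pick the label that denoises best – while uniformly poor denoising across all labels flags an unfamiliar cell. The outline below is schematic (the denoiser is a placeholder, and the noise schedule and threshold are invented), not CytoDiffusion’s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x_t, t, class_label):
    """Placeholder for a trained conditional noise-prediction network."""
    return rng.normal(size=x_t.shape)

def classify(image, classes, n_trials=64):
    losses = {}
    for c in classes:
        errs = []
        for _ in range(n_trials):
            t = rng.integers(1, 1000)            # random diffusion step
            eps = rng.normal(size=image.shape)   # noise injected into image
            x_t = np.sqrt(0.5) * image + np.sqrt(0.5) * eps  # toy schedule
            errs.append(np.mean((eps - denoiser(x_t, t, c)) ** 2))
        losses[c] = np.mean(errs)
    # Lower denoising loss => higher likelihood under that class label
    scores = {c: np.exp(-l) for c, l in losses.items()}
    probs = {c: s / sum(scores.values()) for c, s in scores.items()}
    is_anomaly = min(losses.values()) > 1.5      # arbitrary threshold
    return probs, is_anomaly
```

Because the per-class losses are averages over random noise draws, their spread also gives a natural handle on uncertainty – the property the team exploits to flag difficult cases for human review.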

Authenticity and accuracy

For AI-based analysis to be adopted in the clinic, it’s essential that users trust a model’s learned representations. To assess whether CytoDiffusion could effectively capture the distribution of blood cell images, the team used it to generate synthetic blood cell images. Analysis by experienced haematologists revealed that these synthetic images were near-indistinguishable from genuine images, showing that CytoDiffusion genuinely learns the morphological distribution of blood cells rather than using artefactual shortcuts.

The researchers used multiple datasets to develop and evaluate their diffusion classifier, including CytoData, a custom dataset containing more than half a million anonymized cell images from almost 3000 blood smear slides. In standard classification tasks across these datasets, CytoDiffusion achieved state-of-the-art performance, matching or exceeding the capabilities of traditional discriminative models.

Effective diagnosis from blood smear samples also requires the ability to detect rare or previously unseen cell types. The researchers evaluated CytoDiffusion’s ability to detect blast cells (immature blood cells) in the test datasets. Blast cells are associated with blood malignancies such as leukaemia, and high detection sensitivity is essential to minimize false negatives.

In one dataset, CytoDiffusion detected blast cells with sensitivity and specificity of 0.905 and 0.962, respectively. In contrast, a discriminative model exhibited a poor sensitivity of 0.281. In datasets with erythroblasts as the abnormal cells, CytoDiffusion again outperformed the discriminative model, demonstrating that it can detect abnormal cell types not present in its training data, with the high sensitivity required for clinical applications.

Robust model

It’s important that a classification model is robust to different imaging conditions and can function with sparse training data, as commonly found in clinical applications. When trained and tested on diverse image datasets (different hospitals, microscopes and staining procedures), CytoDiffusion achieved state-of-the-art accuracy in all cases. Likewise, after training on limited subsets of 10, 20 and 50 images per class, CytoDiffusion consistently outperformed discriminative models, particularly in the most data-scarce conditions.

Another essential feature of clinical classification tasks, whether performed by a human or an algorithm, is knowing the uncertainty in the final decision. The researchers developed a framework for evaluating uncertainty and showed that CytoDiffusion produced superior uncertainty estimates to human experts. With uncertainty quantified, cases with high certainty could be processed automatically, with uncertain cases flagged for human review.

“When we tested its accuracy, the system was slightly better than humans,” says first author Simon Deltadahl from the University of Cambridge in a press statement. “But where it really stood out was in knowing when it was uncertain. Our model would never say it was certain and then be wrong, but that is something that humans sometimes do.”

Finally, the team demonstrated CytoDiffusion’s ability to create heat maps highlighting regions that would need to change for an image to be reclassified. This feature provides insight into the model’s decision-making process and shows that it understands subtle differences between similar cell types. Such transparency is essential for clinical deployment of AI, making models more trustworthy as practitioners can verify that classifications are based on legitimate morphological features.

“The true value of healthcare AI lies not in approximating human expertise at lower cost, but in enabling greater diagnostic, prognostic and prescriptive power than either experts or simple statistical models can achieve,” adds co-senior author Parashkev Nachev from University College London.

CytoDiffusion is described in Nature Machine Intelligence.

The post Generative AI model detects blood cell abnormalities appeared first on Physics World.

]]>
Research update The CytoDiffusion classifier analyses the shape and structure of blood cells to detect abnormalities that may indicate blood disorders https://physicsworld.com/wp-content/uploads/2025/12/4-12-25-blood-cell-grid-featured.jpg newsletter1
Light pollution from satellite mega-constellations threatens space-based observations https://physicsworld.com/a/light-pollution-from-satellite-mega-constellations-threaten-space-based-observations/ Thu, 04 Dec 2025 11:50:11 +0000 https://physicsworld.com/?p=125319 Study finds 96% of images from planned telescopes could be compromised

The post Light pollution from satellite mega-constellations threaten space-based observations appeared first on Physics World.

]]>
Almost every image that will be taken by future space observatories in low-Earth orbit could be tainted due to light contamination from satellites. That is according to a new analysis from researchers at NASA, which stresses that light pollution from satellites orbiting Earth must be reduced to guarantee astronomical research is not affected.

The number of satellites orbiting Earth has increased from about 2000 in 2019 to 15 000 today. Many of these are part of so-called mega-constellations that provide services such as Internet coverage around the world, including in areas that were previously unable to access it. Examples of such constellations include SpaceX’s Starlink as well as Amazon’s Kuiper and Eutelsat’s OneWeb.

Many of these mega-constellations share the same space as space-based observatories such as NASA’s Hubble Space Telescope. This means that the telescopes can capture streaks of reflected light from the satellites that render the images or data completely unusable for research purposes. That is despite the anti-reflective coatings applied to some newer satellites in SpaceX’s Starlink constellation, for example.

Previous work has explored the impact of such satellite constellations on ground-based astronomy, both optical and radio. Yet their impact on telescopes in space has been overlooked.

To find out more, Alejandro Borlaff from NASA’s Ames Research Center and colleagues simulated the view of four space-based telescopes: Hubble and the near-infrared observatory SPHEREx, which launched in 2025, as well as the European Space Agency’s proposed near-infrared ARRAKIHS mission and China’s planned Xuntian telescope.

These observatories are, or will be, placed between 400 and 800 km above the Earth’s surface.

The authors found that if the population of mega-constellation satellites grows to the 56 000 projected by the end of the decade, about 39.6% of Hubble’s images and 96% of images from the other three telescopes would be contaminated.

Borlaff and colleagues predict that the average number of satellites observed per exposure would be 2.14 for Hubble, 5.64 for SPHEREx, 69 for ARRAKIHS, and 92 for Xuntian.
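
As a rough back-of-the-envelope check (a simplifying assumption of ours, not the authors’ simulation), treating satellite crossings as independent random events turns those mean counts into the chance that an exposure is crossed at least once. This crude Poisson picture overestimates the contaminated fraction for Hubble, where a crossing evidently does not always spoil the image, but it makes clear why the planned telescopes fare so much worse.

```python
# Toy Poisson estimate: probability that an exposure contains at least one
# satellite crossing, given the mean counts quoted above. The independence
# assumption is ours and ignores streak brightness and position.
import math

mean_per_exposure = {"Hubble": 2.14, "SPHEREx": 5.64, "ARRAKIHS": 69, "Xuntian": 92}

for telescope, lam in mean_per_exposure.items():
    p_crossed = 1 - math.exp(-lam)  # Poisson: P(N >= 1) = 1 - exp(-lambda)
    print(f"{telescope}: {100 * p_crossed:.1f}% of exposures crossed")
```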

The authors note that one solution could be to deploy satellites in orbits lower than those of the telescopes, which would make the satellites appear about four magnitudes dimmer. The downside is that emissions from these lower-orbiting satellites could have implications for Earth’s ozone layer.

An ‘urgent need for dialogue’

Katherine Courtney, chair of the steering board for the Global Network on Sustainability in Space, says that without astronomy, the modern space economy “simply wouldn’t exist”.

“The space industry owes its understanding of orbital mechanics, and much of the technology development that has unlocked commercial opportunities for satellite operators, to astronomy,” she says. “The burgeoning growth of the satellite population brings many benefits to life on Earth, but the consequences for the future of astronomy must be taken into consideration.”

Courtney adds that there is now “an urgent need for greater dialogue and collaboration between astronomers and satellite operators to mitigate those impacts and find innovative ways for commercial and scientific operations to co-exist in space.”

  • Katherine Courtney, chair of the steering board for the Global Network on Sustainability in Space, and Alice Gorman from Flinders University in Adelaide, Australia, appeared on a Physics World Live panel discussion about the impact of space debris, held on 10 November. A recording of the event is available here.

Physicists use a radioactive molecule’s own electrons to probe its internal structure https://physicsworld.com/a/physicists-use-a-radioactive-molecules-own-electrons-to-probe-its-internal-structure/ Thu, 04 Dec 2025 09:00:45 +0000 https://physicsworld.com/?p=125325 Work on radium monofluoride could shed light on the asymmetry of matter and antimatter in the universe

Physicists have obtained the first detailed picture of the internal structure of radium monofluoride (RaF) thanks to the molecule’s own electrons, which penetrated its radium nucleus and interacted with the protons and neutrons inside. This behaviour is known as the Bohr-Weisskopf effect, and study co-leader Shane Wilkins says that this marks the first time it has been observed in a molecule. The measurements themselves, he adds, are an important step towards testing for nuclear symmetry violation, which might explain why our universe contains much more matter than antimatter.

RaF contains the radioactive isotope 225Ra, which is not easy to make, let alone measure. Producing it requires a large accelerator facility, where the molecules emerge at high temperature and velocity, and the isotope is only available in tiny quantities (less than a nanogram in total) for short periods (it has a nuclear half-life of around 15 days).

“This imposes significant challenges compared to the study of stable molecules, as we need extremely selective and sensitive techniques in order to elucidate the structure of molecules containing 225Ra,” says Wilkins, who performed the measurements as a member of Ronald Fernando Garcia Ruiz’s research group at the Massachusetts Institute of Technology (MIT), US.

The team chose RaF despite these difficulties because theory predicts that it is particularly sensitive to small nuclear effects that break the symmetries of nature. “This is because, unlike most atomic nuclei, the radium atom’s nucleus is octupole deformed, which basically means it has a pear shape,” explains the study’s other co-leader, Silviu-Marian Udrescu.

Electrons inside the nucleus

In their study, which is detailed in Science, the MIT team and colleagues at CERN, the University of Manchester, UK and KU Leuven in Belgium focused on RaF’s hyperfine structure. This structure arises from interactions between nuclear and electron spins, and studying it can reveal valuable clues about the nucleus. For example, the nuclear magnetic dipole moment can provide information on how protons and neutrons are distributed inside the nucleus.
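
For orientation, the textbook form of the magnetic hyperfine interaction (a standard result, not an equation taken from the paper) couples the nuclear spin I to the electronic angular momentum J:

```latex
% Standard hyperfine energy shift, with total angular momentum F = I + J
% and hyperfine constant A:
E_{\mathrm{hf}} = \frac{A}{2}\left[F(F+1) - I(I+1) - J(J+1)\right]
% The Bohr-Weisskopf effect enters as a small correction to A, because the
% nuclear magnetization occupies a finite volume that the electrons probe
% from the inside:
A = A_{\mathrm{point}}\left(1 + \epsilon_{\mathrm{BW}}\right)
```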

In most experiments, physicists treat electron-nucleus interactions as taking place at (relatively) long ranges. With RaF, that’s not the case. Udrescu describes the radium atom’s electrons as being “squeezed” within the molecule, which increases the probability that they will interact with, and penetrate, the radium nucleus. This behaviour manifests itself as a slight shift in the energy levels of the radium atom’s electrons, and the team’s precision measurements – combined with state-of-the-art molecular structure calculations – confirm that this is indeed what happens.

“We see a clear breakdown of this [long-range interactions] picture because the electrons spend a significant amount of time within the nucleus itself due to the special properties of this radium molecule,” Wilkins explains. “The electrons thus act as highly sensitive probes to study phenomena inside the nucleus.”

Searching for violations of fundamental symmetries

According to Udrescu, the team’s work “lays the foundations for future experiments that use this molecule to investigate nuclear symmetry violation and test the validity of theories that go beyond the Standard Model of particle physics.” In this model, each of the matter particles we see around us – from baryons like protons to leptons such as electrons – should have a corresponding antiparticle that is identical in every way apart from its charge and magnetic properties (which are reversed).

The problem is that the Standard Model predicts that the Big Bang that formed our universe nearly 14 billion years ago should have generated equal amounts of antimatter and matter – yet measurements and observations made today reveal an almost entirely matter-based universe. Subtler differences between matter particles and their antimatter counterparts might explain why the former prevailed, so by searching for these differences, physicists hope to explain antimatter-matter asymmetry.

Wilkins says the team’s work will be important for future such searches in species like RaF. Indeed, Wilkins, who is now at Michigan State University’s Facility for Rare Isotope Beams (FRIB), is building a new setup to cool and slow beams of radioactive molecules to enable higher-precision spectroscopy of species relevant to nuclear structure, fundamental symmetries and astrophysics. His long-term goal, together with other members of the RaX collaboration (which includes FRIB and the MIT team as well as researchers at Harvard University and the California Institute of Technology), is to implement advanced laser-based techniques using radium-containing molecules.

Quantum-scale thermodynamics offers a tighter definition of entropy https://physicsworld.com/a/quantum-scale-thermodynamics-offers-a-tighter-definition-of-entropy/ Wed, 03 Dec 2025 16:18:34 +0000 https://physicsworld.com/?p=125357 New formulation sheds light on the three-level maser

A new, microscopic formulation of the second law of thermodynamics for coherently driven quantum systems has been proposed by researchers in Switzerland and Germany. The researchers applied their formulation to several canonical quantum systems, such as a three-level maser. They believe the result provides a tighter definition of entropy in such systems, and could form a basis for further exploration.

In any physical process, the first law of thermodynamics says that the total energy must always be conserved, with some converted to useful work and the remainder dissipated as heat. The second law of thermodynamics says that, in any allowed process, the total entropy must never decrease.
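
In symbols, for a process that takes a system between two states, the two laws read (standard statements, included here only for orientation):

```latex
\Delta U = Q + W                % first law: heat absorbed plus work done on the system
\Delta S_{\mathrm{tot}} \geq 0  % second law: total entropy never decreases
```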

“I like to think of work being mediated by degrees of freedom that we control and heat being mediated by degrees of freedom that we cannot control,” explains theoretical physicist Patrick Potts of the University of Basel in Switzerland. “In the macroscopic scenario, for example, work would be performed by some piston – we can move it.” The heat, meanwhile, goes into modes such as phonons generated by friction.

Murky at small scales

This distinction, however, becomes murky at small scales: “Once you go microscopic everything’s microscopic, so it becomes much more difficult to say ‘what is it that you control – where is the work mediated – and what is it that you cannot control?’,” says Potts.

Potts and colleagues in Basel and at RWTH Aachen University in Germany examined the case of optical cavities driven by laser light, systems that can do work: “If you think of a laser as being able to promote a system from a ground state to an excited state, that’s very important to what’s being done in quantum computers, for example,” says Potts. “If you rotate a qubit, you’re doing exactly that.”

The light interacts with the cavity and makes an arbitrary number of bounces before leaking out. The light that eventually leaks out is traditionally treated as heat in quantum simulations. However, it can still be partially coherent – if the cavity is empty, it can be just as coherent as the incoming light and can do just as much work.

In 2020, quantum optician Alexia Auffèves of Université Grenoble Alpes in France and colleagues noted that the coherent component of the light exiting a cavity could potentially do work. In the new study, the researchers embedded this in a consistent thermodynamic framework. They studied several examples and formulated physically consistent laws of thermodynamics.

In particular, they looked at the three-level maser, which is a canonical example of a quantum heat engine. However, it has generally been modelled semi-classically by assuming that the cavity contains a macroscopic electromagnetic field.

Work vanishes

“The old description will tell you that you put energy into this macroscopic field and that is work,” says Potts, “But once you describe the cavity quantum mechanically using the old framework then – poof! – the work is gone…Putting energy into the light field is no longer considered work, and whatever leaves the cavity is considered heat.”

The researchers’ new thermodynamic treatment allows them to treat the cavity quantum mechanically and to parametrize the minimum entropy of the radiation that emerges – how much of it must be handed over to uncontrolled degrees of freedom that can do no useful work, and how much can remain coherent.

The researchers are now applying their formalism to study thermodynamic uncertainty relations as an extension of the traditional second law of thermodynamics. “It’s actually a trade-off between three things – not just efficiency and power, but fluctuations also play a role,” says Potts. “So the more fluctuations you allow for, the higher you can get the efficiency and the power at the same time. These three things are very interesting to look at with this new formalism because these thermodynamic uncertainty relations hold for classical systems, but not for quantum systems.”

“This [work] fits very well into a question that has been heavily discussed for a long time in the quantum thermodynamics community, which is how to properly define work and how to properly define useful resources,” says quantum theorist Federico Cerisola of the UK’s University of Exeter. “In particular, they very convincingly argue that, in the particular family of experiments they’re describing, there are resources that have been ignored in the past when using more standard approaches that can still be used for something useful.”

Cerisola says that, in his view, the logical next step is to propose a system – ideally one that can be implemented experimentally – in which radiation that would traditionally have been considered waste actually does useful work.

The research is described in Physical Review Letters.  

Bring gravity back down to Earth: from giraffes and tree snakes to ‘squishy’ space–time https://physicsworld.com/a/bring-gravity-back-down-to-earth-from-giraffes-and-tree-snakes-to-squishy-space-time/ Wed, 03 Dec 2025 13:00:22 +0000 https://physicsworld.com/?p=125126 Emma Chapman reviews Crush: Close Encounters with Gravity by James Riordon

When I was five years old, my family moved into a 1930s semi-detached house with a long strip of garden. At the end of the garden was a miniature orchard of eight apple trees the previous owners had planted – and it was there that I, much like another significantly more famous physicist, learned an important lesson about gravity.

As I read in the shade of the trees, an apple would sometimes fall with a satisfying thunk into the soft grass beside me. Less satisfyingly, they sometimes landed on my legs, or even my head – and the big cooking apples really hurt. I soon took to sitting on old wooden pallets crudely wedged among the higher branches. It was not comfortable, but at least I could return indoors without bruises.

The effects of gravity become common sense so early in life that we rarely stop to think about them past childhood. In his new book Crush: Close Encounters with Gravity, James Riordon has decided to take us back to the basics of this most fundamental of forces. Indeed, he explores an impressively wide range of topics – from why we dream of falling and why giraffes should not exist (but do), to how black holes form and the existence of “Planet 9”.

Riordon, a physicist turned science writer, makes for a deeply engaging author. He is not afraid to put himself into the story, introducing difficult concepts through personal experience and explaining them with the help of everything including the kitchen sink, which in his hands becomes an analogue for a black hole.

Gravity as a subject can easily be both too familiar and too challenging. In Riordon’s words, “Things with mass attract each other. That’s really all there is to Newtonian gravity.” Albert Einstein’s theory of general relativity, by contrast, is so intricate that it takes years of university-level study to truly master. Riordon avoids both pitfalls: he manages to make the simple fascinating again, and the complex understandable.

He provides captivating insights into how gravity has shaped the animal kingdom, a perspective I had never much considered. Did you know that tree snakes have their hearts positioned closer to their heads than their land-based cousins? I certainly didn’t. The higher placement ensures a steady blood flow to the brain, even when the snake is climbing vertically. It is one of many examples that make you look again at the natural world with fresh eyes.

Riordon’s treatment of gravity in Einstein’s abstract space–time is equally impressive, perhaps unsurprisingly, as his previous books include Very Easy Relativity and Relatively Easy Relativity. Riordon takes a careful, patient approach – though I have never before heard general relativity reduced to “space–time is squishy”. But why not? The phrase sticks and gives us a handhold as we scale the complications of the theory. For those who want to extend the challenge, a mathematical background to the theory is provided in an appendix, and every chapter is well referenced and accompanied by suggestions for further reading.

If anything, I found myself wanting more examples of gravity as experienced by humans and animals on Earth, as opposed to in the context of the astronomical realm. I found these down-to-earth chapters the most fascinating: they formed a bridge between the vast and the local, reminding us that the same force that governs the orbits of galaxies also brings an apple to the ground. This may be a reaction only felt by astronomers like me, who already spend their days looking upward. I can easily see how the balance Riordon chose is necessary for someone without that background, and Einstein’s gravity does require galactic scales to appreciate, after all.

Crush is a generally uncomplicated and pleasurable read. The anecdotes can sometimes be a little long-winded and there are parts of the book that are not without challenge. But it is pitched perfectly for the curious general reader and even for those dipping their toes into popular science for the first time. I can imagine an enthusiastic A-level student devouring it; it is exactly the kind of book I would have loved at that age. Even if some of it would have gone over my head, Riordon’s enthusiasm and gift for storytelling would have kept me more than interested, as I sat up on that pallet in my favourite apple tree.

I left that house, and that tree, a long time ago, but just a few miles down the road from where I live now stands another, far more famous apple tree. In the garden of Woolsthorpe Manor near Grantham, Newton is said to have watched an apple fall. From that small event, he began to ask the questions that reshaped his and our understanding of the universe. Whether or not the story is true hardly matters – Newton was constantly inspired by the natural world, so it isn’t improbable, and that apple tree remains a potent symbol of curiosity and insight.

“[Newton] could tell us that an apple falls, and how quickly it will do it. As for the question of why it falls, that took Einstein to answer,” writes Riordon. Crush is a crisp and fresh tour through a continuum from orchards to observatories, showing that every planetary orbit, pulse of starlight and even every apple fall is part of the same wondrous story.

  • 2025 MIT Press 288pp £27hb

Ice XXI appears in a diamond anvil cell https://physicsworld.com/a/ice-xxi-appears-in-a-diamond-anvil-cell/ Wed, 03 Dec 2025 12:00:45 +0000 https://physicsworld.com/?p=125323 Previously unknown ice phase exists at room temperature and pressures of 2 GPa

A new phase of water ice, dubbed ice XXI, has been discovered by researchers working at the European XFEL and PETRA III facilities. The ice, which exists at room temperature and is structurally distinct from all previously observed phases of ice, was produced by rapidly compressing water to high pressures of 2 GPa. The finding could shed light on how different ice phases form at high pressures, including on icy moons and planets.

On Earth, ice can take many forms, and its properties depend strongly on its structure. The main type of naturally-occurring ice is hexagonal ice (Ih), so-called because the water molecules arrange themselves in a hexagonal lattice (this is the reason why snowflakes have six-fold symmetry). However, under certain conditions – usually involving very high pressures and low temperatures – ice can take on other structures. Indeed, 20 different forms of ice have been identified so far, denoted by roman numerals (ice I, II, III and so on up to ice XX).

Pressures of up to 2 GPa allow ice to form even at room temperature

Researchers from the Korea Research Institute of Standards and Science (KRISS) have now produced a 21st form of ice by applying pressures of up to two gigapascals. Such high pressures are roughly 20 000 times higher than normal air pressure at sea level, and they allow ice to form even at room temperature – albeit only within a device known as a dynamic diamond anvil cell (dDAC) that is capable of producing such extremely high pressures.

“In this special pressure cell, samples are squeezed between the tips of two opposing diamond anvils and can be compressed along a predefined pressure pathway,” explains Cornelius Strohm, a member of the DESY HIBEF team that set up the experiment using the High Energy Density (HED) instrument at the European XFEL.

Much more tightly packed molecules

The structure of ice XXI is different from all previously observed phases of ice because its molecules are much more tightly packed. This gives it the largest unit cell volume of all currently known types of ice, says KRISS scientist Geun Woo Lee. It is also metastable, meaning that it can exist even though another form of ice (in this case ice VI) would be more stable under the conditions in the experiment.

“This rapid compression of water allows it to remain liquid up to higher pressures, where it should have already crystallized to ice VI,” explains Lee. “Ice VI is an especially intriguing phase, thought to be present in the interior of icy moons such as Titan and Ganymede. Its highly distorted structure may allow complex transition pathways that lead to metastable ice phases.”

Ice XXI has a body-centred tetragonal crystal structure

To study how the new ice sample formed, the researchers rapidly compressed and decompressed it over 1000 times in the diamond anvil cell while imaging it every microsecond using the European XFEL, which produces X-ray pulses at megahertz rates. They found that the liquid water crystallizes into different structures depending on how supercompressed it is.

The KRISS team then used the P02.2 beamline at PETRA III to determine that ice XXI has a body-centred tetragonal crystal structure with a large unit cell (a = b = 20.197 Å and c = 7.891 Å) at approximately 1.6 GPa. This unit cell contains 152 water molecules, resulting in a density of 1.413 g cm−3.
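
Those numbers are self-consistent: dividing the mass of 152 water molecules by the unit-cell volume reproduces the quoted density, as this short check (our arithmetic, using standard constants) confirms.

```python
# Cross-check the quoted density of ice XXI from the unit-cell data above.
N_A = 6.022e23               # Avogadro's number, 1/mol
M_H2O = 18.015               # molar mass of water, g/mol
a, c = 20.197e-8, 7.891e-8   # lattice parameters in cm (1 angstrom = 1e-8 cm)

volume = a * a * c           # tetragonal cell volume: a^2 * c, in cm^3
mass = 152 * M_H2O / N_A     # mass of the 152 molecules in one cell, in g
print(f"density = {mass / volume:.3f} g/cm^3")  # prints: density = 1.413 g/cm^3
```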

The experiments were far from easy, recalls Lee. Upon crystallization, ice XXI grows upwards (that is, in the vertical direction), which makes it difficult to precisely analyse its crystal structure. “The difficulty for us is to keep it stable for a long enough period to make precise structural measurements in single crystal diffraction study,” he says.

The multiple pathways of ice crystallization unearthed in this work, which is detailed in Nature Materials, imply that many more ice phases may exist. Lee says it is therefore important to analyse the mechanism behind the formation of these phases. “This could, for example, help us better understand the formation and evolution of these phases on icy moons or planets,” he tells Physics World.

Studying the role of the quantum environment in attosecond science https://physicsworld.com/a/studying-the-role-of-the-quantum-environment-in-attosecond-science/ Wed, 03 Dec 2025 10:00:03 +0000 https://physicsworld.com/?p=125341 Researchers have developed a new way to model dephasing in attosecond experiments

Attosecond science is undoubtedly one of the fastest growing branches of physics today.

Its popularity was demonstrated by the award of the 2023 Nobel Prize in Physics to Anne L’Huillier, Paul Corkum and Ferenc Krausz for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter.

One of the most important processes in this field is dephasing. This happens when an electron loses its phase coherence because of interactions with its surroundings.

This loss of coherence can obscure the fine details of electron dynamics, making it harder to capture precise snapshots of these rapid processes.

The most common way to model this process in light-matter interactions is by using the relaxation time approximation. This approach greatly simplifies the picture as it avoids the need to model every single particle in the system.
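
The idea is easy to see for a single driven two-level system: in the relaxation-time approximation, every deviation from equilibrium decays back at one fixed rate 1/τ. The sketch below illustrates this with made-up parameter values; it is not the Ottawa group’s model.

```python
# Relaxation-time approximation for a driven two-level system, written in
# Bloch-vector form: coherent rotation plus uniform decay to equilibrium.
# All parameter values are illustrative, not taken from the paper.
import numpy as np

omega = 1.0        # drive (Rabi) frequency, arbitrary units
tau = 5.0          # the single relaxation time of the approximation
dt, steps = 0.01, 2000

r = np.array([0.0, 0.0, -1.0])     # Bloch vector, starting in the ground state
r_eq = np.array([0.0, 0.0, -1.0])  # equilibrium state

for _ in range(steps):
    drive = omega * np.array([0.0, -r[2], r[1]])  # rotation about the x-axis
    r = r + dt * (drive - (r - r_eq) / tau)       # uniform decay at rate 1/tau

print("Bloch vector after driving:", np.round(r, 3))
```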

Its use is fine for dilute gases, but it doesn’t work as well with intense lasers and denser materials, such as solids, because it greatly overestimates ionisation.

This is a significant problem as ionisation is the first step in many processes such as electron acceleration and high-harmonic generation.

To address this, a team led by researchers from the University of Ottawa has developed a new method that corrects for the overestimated ionisation.

By introducing a heat bath into the model they were able to represent the many-body environment that interacts with electrons, without significantly increasing the complexity.

This new approach should enable the identification of new effects in attosecond science or wherever strong electromagnetic fields interact with matter.

Read the full article

Strong field physics in open quantum systems – IOPscience

N. Boroumand et al, 2025 Rep. Prog. Phys. 88 070501

Characterising quantum many-body states https://physicsworld.com/a/characterising-quantum-many-body-states/ Wed, 03 Dec 2025 09:59:45 +0000 https://physicsworld.com/?p=125344 A team of researchers have developed a new method for characterising quantum properties of large systems using graph theory

Describing the non-classical properties of a complex many-body system (such as entanglement or coherence) is an important part of quantum technologies.

An ideal tool for this task would work well with large systems and be both easily computable and easily measurable. Unfortunately, no single tool that fits every situation yet exists.

With this goal in mind, a team of researchers – Marcin Płodzień and Maciej Lewenstein (ICFO, Barcelona, Spain) and Jan Chwedeńczuk (University of Warsaw, Poland) – began work on a special type of quantum state used in quantum computing – graph states.

These states can be visualised as graphs or networks where each vertex represents a qubit, and each edge represents an interaction between pairs of qubits.

The team studied four different shapes of graph states using new mathematical tools they developed. They found that one of these in particular, the Turán graph, could be very useful in quantum metrology.
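
Graph states are easy to write down concretely. The sketch below builds the underlying graph of a Turán-graph state using NetworkX: each vertex stands for a qubit prepared in the |+⟩ state, and each edge for a controlled-Z gate. The sizes n = 8 and r = 3 are our illustrative choices, not parameters from the paper.

```python
# Build the Turán graph T(n, r): the complete multipartite graph whose r
# parts are as equal in size as possible. In the graph-state picture, each
# vertex is a qubit in |+> and each edge a controlled-Z interaction.
import networkx as nx

def turan_graph(n: int, r: int) -> nx.Graph:
    sizes = [n // r + (1 if i < n % r else 0) for i in range(r)]
    return nx.complete_multipartite_graph(*sizes)

G = turan_graph(n=8, r=3)  # illustrative: parts of 3, 3 and 2 qubits
print(G.number_of_nodes(), "qubits,", G.number_of_edges(), "CZ edges")  # 8, 21
```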

Their method is (relatively) straightforward and does not require many assumptions. This means that it could be applied to any shape of graph beyond the four studied here.

The results will be useful in various quantum technologies wherever precise knowledge of many-body quantum correlations is necessary.

Read the full article

Many-body quantum resources of graph states – IOPscience

M. Płodzień et al, 2025 Rep. Prog. Phys. 88 077601

Extra carbon in the atmosphere may disrupt radio communications https://physicsworld.com/a/extra-carbon-in-the-atmosphere-may-disrupt-radio-communications/ Tue, 02 Dec 2025 14:13:19 +0000 https://physicsworld.com/?p=125321 Increasing CO2 levels are triggering changes in the ionosphere that will adversely affect signals, say scientists

Higher levels of carbon dioxide (CO2) in the Earth’s atmosphere could harm radio communications by enhancing a disruptive effect in the ionosphere. According to researchers at Kyushu University, Japan, who modelled the effect numerically for the first time, this little-known consequence of climate change could have significant impacts on shortwave radio systems such as those employed in broadcasting, air traffic control and navigation.

“While increasing CO2 levels in the atmosphere warm the Earth’s surface, they actually cool the ionosphere,” explains study leader Huixin Liu of Kyushu’s Faculty of Science. “This cooling doesn’t mean it is all good: it decreases the air density in the ionosphere and accelerates wind circulation. These changes affect the orbits and lifespan of satellites and space debris and also disrupt radio communications through localized small-scale plasma irregularities.”

The sporadic E-layer

One such irregularity is a dense but transient layer of metal ions that forms between 90 and 120 km above the Earth’s surface. This sporadic E-layer (Es), as it is known, is roughly 1‒5 km thick and can stretch from tens to hundreds of kilometres in the horizontal direction. Its density is highest during the day, and it peaks around the time of the summer solstice.

The formation of the Es is hard to predict, and the mechanisms behind it are not fully understood. However, the prevailing “wind shear” theory suggests that vertical shears in horizontal winds, combined with the Earth’s magnetic field, cause metallic ions such as Fe+, Na+ and Ca+ to converge in the ionospheric dynamo region and form thin layers of enhanced ionization. The ions themselves largely come from metals in meteoroids that enter the Earth’s atmosphere and disintegrate at altitudes of around 80‒100 km.

Effects of increasing CO2 concentrations

While previous research has shown that increases in CO2 trigger atmospheric changes on a global scale, relatively little is known about how these increases affect smaller-scale ionospheric phenomena like the Es. In the new work, which is published in Geophysical Research Letters, Liu and colleagues used a whole-atmosphere model to simulate the upper atmosphere at two different CO2 concentrations: 315 ppm and 667 ppm.

“The 315 ppm represents the CO2 concentration in 1958, the year in which recordings started at the Mauna Loa observatory, Hawaii,” Liu explains. “The 667 ppm represents the projected CO2 concentration for the year 2100, based on a conservative assumption that the increase in CO2 is constant at a rate of around 2.5 ppm/year since 1958.”
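
The arithmetic behind that projection is straightforward, as this one-line check shows:

```python
# Implied constant growth rate connecting the two CO2 scenarios above.
rate = (667 - 315) / (2100 - 1958)
print(f"{rate:.2f} ppm/year")  # prints 2.48, i.e. "around 2.5 ppm/year"
```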

The researchers then evaluated how these different CO2 levels influence a phenomenon known as vertical ion convergence (VIC), which, according to the wind shear theory, drives the Es. The simulations revealed that the higher the atmospheric CO2 levels, the greater the VIC at altitudes of 100–120 km. “What is more, this increase is accompanied by the VIC hotspots shifting downwards by approximately 5 km,” says Liu. “The VIC patterns also change dramatically during the day and these diurnal variability patterns continue into the night.”

According to the researchers, the physical mechanism underlying these changes depends on two factors. The first is reduced collisions between metallic ions and the neutral atmosphere as a direct result of cooling in the ionosphere. The second is changes in the zonal wind shear, which are likely caused by long-term trends in atmospheric tides.

“These results are exciting because they show that the impacts of CO2 increase can extend all the way from Earth’s surface to altitudes at which HF and VHF radio waves propagate and communications satellites orbit,” Liu tells Physics World. “This may be good news for ham radio amateurs, as you will likely receive more signals from faraway countries more often. For radio communications, however, especially at HF and VHF frequencies employed for aviation, ships and rescue operations, it means more noise and frequent disruption in communication and hence safety. The telecommunications industry might therefore need to adjust their frequencies or facility design in the future.”

Phase-changing material generates vivid tunable colours https://physicsworld.com/a/phase-changing-material-generates-vivid-tunable-colours/ Tue, 02 Dec 2025 12:00:32 +0000 https://physicsworld.com/?p=125330 A multilayer stack containing a thin film of temperature-sensitive vanadium dioxide creates tunable structural colours on rigid and flexible surfaces

A toy gecko featuring a flexible layer of the thermally tunable colour coating

Structural colours – created using nanostructures that scatter and reflect specific wavelengths of light – offer a non-toxic, fade-resistant and environmentally friendly alternative to chemical dyes. Large-scale production of structural colour-based materials, however, has been hindered by fabrication challenges and a lack of effective tuning mechanisms.

In a step towards commercial viability, a team at the University of Central Florida has used vanadium dioxide (VO2) – a material with temperature-sensitive optical and structural properties – to generate tunable structural colour on both rigid and flexible surfaces, without requiring complex nanofabrication.

Senior author Debashis Chanda and colleagues created their structural colour platform by stacking a thin layer of VO2 on top of a thick, reflective layer of aluminium to form a tunable thin-film cavity. At specific combinations of VO2 grain size and layer thickness this structure strongly absorbs certain frequency bands of visible light, producing the appearance of vivid colours.

The key enabler of this approach is the fact that at a critical transition temperature, VO2 reversibly switches from insulator to metal, accompanied by a change in its crystalline structure. This phase change alters the interference conditions in the thin-film cavity, varying the reflectance spectra and changing the perceived colour. Controlling the thickness of the VO2 layer enables the generation of a wide range of structural colours.
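
The simplest way to see why switching the VO2 phase retunes the colour is the round-trip interference condition for a film of refractive index n and thickness d (a generic thin-film relation, not a formula quoted from the paper):

```latex
% Strong absorption (and hence a vivid reflected colour) occurs near
\frac{4\pi n d}{\lambda} + \phi_r = 2\pi m, \qquad m = 1, 2, \ldots
% where phi_r collects the reflection phase shifts at the interfaces.
% The insulator-metal transition changes n (and phi_r), shifting the
% resonant wavelengths and so the perceived colour.
```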

The bilayer structures are grown via a combination of magnetron sputtering and electron-beam deposition, techniques compatible with large-scale production. By adjusting the growth parameters during fabrication, the researchers could broaden the colour palette and control the temperature at which the phase transition occurs. To expand the available colour range further, they added a third ultrathin layer of high-refractive index titanium dioxide on top of the bilayer.

The researchers describe a range of applications for their flexible coloration platform, including a colour-tunable maple leaf pattern, a thermal sensing label on a coffee cup and tunable structural coloration on flexible fabrics. They also demonstrated its use on complex shapes, such as a toy gecko with a flexible tunable colour coating and an embedded heater.

“These preliminary demonstrations validate the feasibility of developing thermally responsive sensors, reconfigurable displays and dynamic colouration devices, paving the way for innovative solutions across fields such as wearable electronics, cosmetics, smart textiles and defence technologies,” the team concludes.

The research is described in Proceedings of the National Academy of Sciences.

Semiconductor laser pioneer Susumu Noda wins 2026 Rank Prize for Optoelectronics https://physicsworld.com/a/semiconductor-laser-pioneer-susumu-noda-wins-2026-rank-prize-for-optoelectronics/ Tue, 02 Dec 2025 09:00:41 +0000 https://physicsworld.com/?p=125289 Noda made breakthroughs in the development of the Photonic Crystal Surface Emitting Laser

Susumu Noda of Kyoto University has won the 2026 Rank Prize for Optoelectronics for the development of the Photonic Crystal Surface Emitting Laser (PCSEL). For more than 25 years, Noda developed this new form of laser, which has potential applications in high-precision manufacturing as well as in LIDAR technologies.

Since the invention of the laser in 1960, optical fibre lasers and semiconductor lasers have developed into competing technologies.

A semiconductor laser works by pumping an electrical current into a region where an n-doped (excess of electrons) and a p-doped (excess of “holes”) semiconductor material meet, causing electrons and holes to combine and release photons.

Semiconductor lasers have several advantages in terms of their compactness, high “wallplug” efficiency and ruggedness, but they fall short in other areas, such as brightness and functionality.

This means that conventional semiconductor lasers require external optical and mechanical elements to improve their performance, which results in large and impractical systems.

‘A great honour’

In the late 1990s, Noda began working on a new type of semiconductor laser that could challenge the performance of optical fibre lasers. These so-called PCSELs employ a photonic crystal layer in between the semiconductor layers. Photonic crystals are nanostructured materials in which a periodic variation of the dielectric constant — formed, for example, by a lattice of holes — creates a photonic band-gap.

Noda and his research group made a series of breakthroughs in the technology, such as demonstrating control of polarization and beam shape by tailoring the photonic crystal structure, and expanding operation into blue–violet wavelengths.

The resulting PCSELs emit a high-quality, symmetric beam with narrow divergence and boast high brightness and high functionality while maintaining the benefits of conventional semiconductor lasers. In 2013, 0.2 W PCSELs became available and a few years later watt-class devices became operational.

Noda says that it is “a great honour and a surprise” to receive the prize. “I am extremely happy to know that more than 25 years of research on photonic-crystal surface-emitting lasers has been recognized in this way,” he adds. “I do hope to continue to further develop the research and its social implementation.”

Susumu Noda received his BSc and then PhD in electronics from Kyoto University in 1982 and 1991, respectively. From 1984 he also worked at Mitsubishi Electric Corporation, before joining Kyoto University in 1988 where he is currently based.

Founded in 1972 by the British industrialist and philanthropist Lord J Arthur Rank, the Rank Prize is awarded biennially in nutrition and optoelectronics. The 2026 Rank Prize for Optoelectronics, which has a cash award of £100 000, will be awarded formally at an event held in June.

Staying the course with lockdowns could end future pandemics in months https://physicsworld.com/a/staying-the-course-with-lockdowns-could-end-future-pandemics-in-months/ Mon, 01 Dec 2025 14:00:32 +0000 https://physicsworld.com/?p=125241 New calculation of viral spread suggests that rapid elimination of SARS-CoV-2-like viruses is scientifically feasible, though social challenges remain

As a theoretical and mathematical physicist at Imperial College London, UK, Bhavin Khatri spent years using statistical physics to understand how organisms evolve. Then the COVID-19 pandemic struck, and like many other scientists, he began searching for ways to apply his skills to the crisis. This led him to realize that the equations he was using to study evolution could be repurposed to model the spread of the virus – and, crucially, to understand how it could be curtailed.

In a paper published in EPL, Khatri models the spread of a SARS-CoV-2-like virus using branching process theory, which he’d previously used to study how advantageous alleles (variations in a genetic sequence) become more prevalent in a population. He then uses this model to assess the duration that interventions such as lockdowns would need to be applied in order to completely eliminate infections, with the strength of the intervention measured in terms of the number of people each infected person goes on to infect (the virus’ effective reproduction number, R).

Tantalizingly, the paper concludes that applying such interventions worldwide in June 2020 could have eliminated the COVID virus by January 2021, several months before the widespread availability of vaccines reduced its impact on healthcare systems and led governments to lift restrictions on social contact. Physics World spoke to Khatri to learn more about his research and its implications for future pandemics.

What are the most important findings in your work?

One important finding is that we can accurately calculate the distribution of times required for a virus to become extinct by making a relatively simple approximation. This approximation amounts to assuming that people have relatively little population-level “herd” immunity to the virus – exactly the situation that many countries, including the UK, faced in March 2020.

Making this approximation meant I could reduce the three coupled differential equations of the well-known SIR model (which models pandemics via the interplay between Susceptible, Infected and Recovered individuals) to a single differential equation for the number of infected individuals in the population. This single equation turned out to be the same one that physics students learn when studying radioactive decay. I then used the discrete stochastic version of exponential decay and standard approaches in branching process theory to calculate the distribution of extinction times.

Alongside the formal theory, I also used my experience in population genetic theory to develop an intuitive approach for calculating the mean of this extinction time distribution. In population genetics, when a mutation is sufficiently rare, changes in its number of copies in the population are dominated by randomness. This is true even if the mutation has a large selective advantage: it has to grow by chance to sufficient critical size – on the order of 1/(selection strength) – for selection to take hold.

The same logic works in reverse when applied to a declining number of infections. Initially, they will decline deterministically, but once they go below a threshold number of individuals, changes in infection numbers become random. Using the properties of such random walks, I calculated an expression for the threshold number and the mean duration of the stochastic phase. These agree well with the formal branching process calculation.

In practical terms, the main result of this theoretical work is to show that for sufficiently strong lockdowns (where, on average, only one of every two infected individuals goes on to infect another person, R=0.5), this distribution of extinction times was narrow enough to ensure that the COVID pandemic virus would have gone extinct in a matter of months, or at most a year.
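
A toy version of this calculation fits in a few lines. The sketch below simulates a Galton-Watson branching process with R = 0.5, in the spirit of the model described above; the initial case count and the five-day generation interval are our illustrative assumptions, not numbers from the paper.

```python
# Toy Galton-Watson simulation of infection die-out under a strong lockdown.
# Only R = 0.5 comes from the discussion above; the initial prevalence and
# the 5-day generation interval are assumptions made for illustration.
import numpy as np

R = 0.5                 # mean onward infections per case under lockdown
generation_days = 5.0   # assumed interval between infection generations
n0 = 100_000            # assumed initial number of infected people

rng = np.random.default_rng(1)

def extinction_time() -> float:
    n, generations = n0, 0
    while n > 0:
        n = rng.poisson(R * n)  # total offspring of n cases: Poisson(R * n)
        generations += 1
    return generations * generation_days

times = [extinction_time() for _ in range(200)]
print(f"mean extinction time: {np.mean(times):.0f} days")  # of order 100 days
```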

How realistic is this counterfactual scenario of eliminating SARS-CoV-2 within a year?

Leaving politics and the likelihood of social acceptance aside for the moment, if a sufficiently strong lockdown could have been maintained for a period of roughly six months across the globe, then I am confident that the virus could have been reduced to very low levels, or even made extinct.

The question then is: is this a stable situation? From the perspective of a single nation, if the rest of the world still has infections, then that nation either needs to maintain its lockdown or be prepared to re-impose it if there are new imported cases. From a global perspective, a COVID-free world should be a stable state, unless an animal reservoir of infections causes re-infections in humans.

Photo of Bhavin Khatri

As for the practical success of such a strategy, that depends on politics and the willingness of individuals to remain in lockdown. Clearly, this is not in the model. One thing I do discuss, though, is that this strategy becomes far more difficult once more infectious variants of SARS-CoV-2 evolve. However, the problem I was working on before this one (which I eventually published in PNAS) concerned the probability of evolutionary rescue or resistance, and that work suggests that evolution of new COVID variants reduces significantly when there are fewer infections. So an elimination strategy should also be more robust against the evolution of new variants.

What lessons would you like experts (and the public) to take from this work when considering future pandemic scenarios?

I’d like them to conclude that pandemics with similar properties are, in principle, controllable to small levels of infection – or complete extinction – on timescales of months, not years, and that controlling them minimizes the chance of new variants evolving. So, although the question of the political and social will to enact such an elimination strategy is not in the scope of the paper, I think if epidemiologists, policy experts, politicians and the public understood that lockdowns have a finite time horizon, then it is more likely that this strategy could be adopted in the future.

I should also say that my work makes no comment on the social harms of lockdowns, which shouldn’t be minimized and would need to be weighed against the potential benefits.

What do you plan to do next?

I think the most interesting next avenue will be to develop theory that lets us better understand the stability of the extinct state at the national and global level, under various assumptions about declining infections in other countries that adopted different strategies and the role of an animal reservoir.

It would also be interesting to explore the role of “superspreaders”, or infected individuals who infect many other people. There’s evidence that many infections spread primarily through relatively few superspreaders, and heuristic arguments suggest that taking this into account would decrease the time to extinction compared to the estimates in this paper.

I’ve also had a long-term interest in understanding the evolution of viruses through the lens of what are known as genotype–phenotype maps, in which we consider the non-trivial and often redundant mapping from genetic sequences to function, and in which the role of stochasticity in evolution can be described by analogies with statistical physics. For the evolution of the antibodies that help us avoid virus antigens, this would be a driven system, and theories of non-equilibrium statistical physics could play a role in answering questions about the evolution of new variants.

When is good enough ‘good enough’? https://physicsworld.com/a/when-is-good-enough-good-enough/ Mon, 01 Dec 2025 11:00:21 +0000 https://physicsworld.com/?p=125063 Honor Powrie extols the virtues of being just “good enough” in life

Whether you’re running a business project, carrying out scientific research, or doing a spot of DIY around the house, knowing when something is “good enough” can be a tough question to answer. To me, “good enough” means something that is fit for purpose. It’s about striking a balance between the effort required to achieve perfection and the cost of not moving forward. It’s an essential mindset when perfection is either not needed or – as is often the case – not attainable.

When striving for good enough, the important thing to focus on is that your outcome should meet expectations, but not massively exceed them. Sounds simple, but how often have we heard people say they’re “polishing coal”, striving for “gold plated” or “trying to make a silk purse out of a sow’s ear”? It basically means they haven’t understood, defined or even accepted the requirements of the end goal.

Trouble is, as we go through school, college and university, we’re brought up to believe that we should strive for the best in whatever we study. Those with the highest grades, we’re told, will probably get the best opportunities and career openings. Unfortunately, this approach means we think we need to aim for perfection in everything in life, which is not always a good thing.

How to be good enough

So why is aiming for “good enough” a good thing to do? First, there’s the notion of “diminishing returns”. It takes a disproportionate amount of effort to achieve the final, small improvements that most people won’t even notice. Put simply, time can be wasted on unnecessary refinements, as embodied by the 80/20 rule (see box).

The 80/20 rule: the guiding principle of “good enough”

Also known as the Pareto principle – in honour of the Italian economist Vilfredo Pareto who first came up with the idea – the 80/20 rule states that for many outcomes, 80% of consequences or results come from 20% of the causes or effort. The principle helps to identify where to prioritize activities to boost productivity and get better results. It is a guideline, and the ratios can vary, but it can be applied to many things in both our professional and personal lives.

Examples from the world of business include the following:

Business sales: 80% of a company’s revenue might come from 20% of its customers.

Company productivity: 80% of your results may come from 20% of your daily tasks.

Software development: 80% of bugs could be caused by 20% of the code.

Quality control: 20% of defects may cause 80% of customer complaints.

Good enough also helps us to focus efforts. When a consumer or customer doesn’t know exactly what they want, or a product development route is uncertain, it can be better to deliver things in small chunks. Providing something basic but usable solicits feedback that helps clarify requirements, or suggests improvements and additions that can be incorporated into the next chunk. This is broadly along the lines of a “minimum viable product”.

Not seeking perfection reminds us too that solutions to problems are often uncertain. If it’s not clear how, or even if, something might work, a proof of concept (PoC) can instead be a good way to try something out. Progress can be made by solving a specific technical challenge, whether via a basic experiment, demonstration or short piece of research. A PoC should help avoid committing significant time and resource to something that will never work.

Aiming for “good enough” naturally leads us to the notion of “continuous improvement”. It’s a personal favourite of mine because it allows for things to be improved incrementally as we learn or get feedback, rather than producing something in one go and then forgetting about it. It helps keep things current and relevant and encourages a culture of constantly looking for a better way to do things.

Finally, when searching for good enough, don’t forget the idea of ballpark estimates. Making approximations sounds too simple to be effective, but sometimes a rough estimate is really all you need. If an approximate guess can inform and guide your next steps or determine whether further action will be necessary then go for it. 

The benefits of good enough

Being good enough doesn’t just lead to practical outcomes; it can benefit our personal well-being too. Our time, after all, is a precious commodity and we can’t magically increase this resource. The pursuit of perfection can lead to stagnation, and ultimately burnout, whereas achieving good enough allows us to move on in a timely fashion.

A good-enough approach will even make you less stressed. By getting things done sooner and achieving more, you’ll feel freer and happier about your work even if it means accepting imperfection. Mistakes and errors are inevitable in life, so don’t be afraid to make them; use them as learning opportunities, rather than seeing them as something bad. Remember – the person who never made a mistake never got out of bed.

Recognizing that you’ve done the best you can for now is also crucial for starting new projects and making progress. By accepting good enough you can build momentum, get more things done, and consistently take actions toward achieving your goals.

Finally, good enough is also about shared ownership. By inviting someone else to look at what you’ve done, you can significantly speed up the process. In my own career I’ve often found myself agonising over some obscure detail or feeling something is missing, only to have my quandary solved almost instantly simply by getting someone else involved – making me wish I’d asked them sooner.

Caveats and conclusions

Good enough comes with some caveats. Regulatory or legislative requirements mean there will always be projects that have to reach a minimum standard, which will be your top priority. The precise nature of good enough will also depend on whether you’re making stuff (be it cars or computers) or dealing with intangible commodities such as software or services.

So what’s the conclusion? Well, in the interests of my own time, I’ve decided to apply the 80/20 rule and leave it to you to draw your own conclusion. As far as I’m concerned, I think this article has been good enough, but I’m sure you’ll let me know if it hasn’t. Consider it as a minimally viable product that I can update in a future column.

Looking for inconsistencies in the fine structure constant https://physicsworld.com/a/looking-for-inconsistencies-in-the-fine-structure-constant/ Mon, 01 Dec 2025 09:06:19 +0000 https://physicsworld.com/?p=125309 High-precision laser spectroscopy measurements on the thorium-229 nucleus could reveal new physics, say TU Wien physicists

The post Looking for inconsistencies in the fine structure constant appeared first on Physics World.

]]>
a crystal containing thorium atoms

New high-precision laser spectroscopy measurements on thorium-229 nuclei could shed more light on the fine structure constant, which determines the strength of the electromagnetic interaction, say physicists at TU Wien in Austria.

The electromagnetic interaction is one of the four known fundamental forces in nature, with the others being gravity and the strong and weak nuclear forces. Each of these fundamental forces has an interaction constant that describes its strength in comparison with the others. The fine structure constant, α, has a value of approximately 1/137. If it had any other value, charged particles would behave differently, chemical bonding would manifest in another way and light-matter interactions as we know them would not be the same.
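
In SI units the constant is built from the elementary charge e, the permittivity of free space ε₀, the reduced Planck constant ħ and the speed of light c:

```latex
\alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137.036}
```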

“As the name ‘constant’ implies, we assume that these forces are universal and have the same values at all times and everywhere in the universe,” explains study leader Thorsten Schumm from the Institute of Atomic and Subatomic Physics at TU Wien. “However, many modern theories, especially those concerning the nature of dark matter, predict small and slow fluctuations in these constants. Demonstrating a non-constant fine-structure constant would shatter our current understanding of nature, but to do this, we need to be able to measure changes in this constant with extreme precision.”

With thorium spectroscopy, he says, we now have a very sensitive tool to search for such variations.

Nucleus becomes slightly more elliptic

The new work builds on a project that led, last year, to the world’s first nuclear clock, and is based on precisely determining how the thorium-229 (²²⁹Th) nucleus changes shape when one of its neutrons transitions from a ground state to a higher-energy state. “When excited, the ²²⁹Th nucleus becomes slightly more elliptic,” Schumm explains. “Although this shape change is small (at the 2% level), it dramatically shifts the contributions of the Coulomb interactions (the repulsion between protons in the nucleus) to the nuclear quantum states.”

The result is a change in the geometry of the ²²⁹Th nucleus’ electric field, to a degree that depends very sensitively on the value of the fine structure constant. By precisely observing this thorium transition, it is therefore possible to measure whether the fine structure constant is actually a constant or whether it varies slightly.

After making crystals of ²²⁹Th doped in a CaF₂ matrix at TU Wien, the researchers performed the next phase of the experiment in a JILA laboratory at the University of Colorado, Boulder, US, firing ultrashort laser pulses at the crystals. While they did not measure any changes in the fine structure constant, they did succeed in determining how such changes, if they exist, would translate into modifications to the energy of the first nuclear excited state of ²²⁹Th.

“It turns out that this change is huge, a factor 6000 larger than in any atomic or molecular system, thanks to the high energy governing the processes inside nuclei,” Schumm says. “This means that we are by a factor of 6000 more sensitive to fine structure variations than previous measurements.”
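
The sensitivity being described is conventionally captured by an enhancement factor K (the same K that appears in Caputo’s comment below), which relates a fractional drift in α to the fractional shift of the transition frequency ν:

```latex
\frac{\delta\nu}{\nu} = K\,\frac{\delta\alpha}{\alpha}, \qquad K \sim 6000\ \text{for the } {}^{229}\mathrm{Th}\ \text{transition}
```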

Increasing the spectroscopic accuracy of the ²²⁹Th transition

Researchers in the field have debated the likelihood of such an “enhancement factor” for decades, and theoretical predictions of its value have varied between zero and 10 000. “Having confirmed such a high enhancement factor will now allow us to trigger a ‘hunt’ for the observation of fine structure variations using our approach,” Schumm says.

Andrea Caputo of CERN’s theoretical physics department, who was not involved in this work, calls the experimental result “truly remarkable”, as it probes nuclear structure with a precision that has never been achieved before. However, he adds that the theoretical framework is still lacking. “In a recent work published shortly before this work, my collaborators and I showed that the nuclear-clock enhancement factor K is still subject to substantial theoretical uncertainties,” Caputo says. “Much progress is therefore still required on the theory side to model the nuclear structure reliably.”

Schumm and colleagues are now working on increasing the spectroscopic accuracy of their ²²⁹Th transition measurement by another one to two orders of magnitude. “We will then start hunting for fluctuations in the transition energy,” he reveals, “tracing it over time and – through the Earth’s movement around the Sun – space.”

The present work is detailed in Nature Communications.

The post Looking for inconsistencies in the fine structure constant appeared first on Physics World.

]]>
Research update High-precision laser spectroscopy measurements on the thorium-229 nucleus could reveal new physics, say TU Wien physicists https://physicsworld.com/wp-content/uploads/2025/12/thorium-crystal.jpg
Heat engine captures energy as Earth cools at night https://physicsworld.com/a/heat-engine-captures-energy-as-earth-cools-at-night/ Fri, 28 Nov 2025 14:45:22 +0000 https://physicsworld.com/?p=125292 System can generate electricity when solar cells cannot

The post Heat engine captures energy as Earth cools at night appeared first on Physics World.

]]>
A new heat engine driven by the temperature difference between Earth’s surface and outer space has been developed by Tristan Deppe and Jeremy Munday at the University of California Davis. In an outdoor trial, the duo showed how their engine could offer a reliable source of renewable energy at night.

While solar cells do a great job of converting the Sun’s energy into electricity, they have one major drawback, as Munday explains: “Lack of power generation at night means that we either need storage, which is expensive, or other forms of energy, which often come from fossil fuel sources.”

One solution is to exploit the fact that the Earth’s surface absorbs heat from the Sun during the day and then radiates some of that energy into space at night. While space has a temperature of around −270 °C, the average temperature of Earth’s surface is a balmy 15 °C. Together, these two heat reservoirs provide the essential ingredients of a heat engine, which is a device that extracts mechanical work as thermal energy flows from a heat source to a heat sink.
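
For any engine running between two heat reservoirs, thermodynamics caps the efficiency at the Carnot limit, which depends only on the two temperatures:

```latex
\eta_{\max} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}
```

Between ground at roughly 288 K and deep space at roughly 3 K this bound approaches 99%, although, as described below, the temperature difference the device can actually sustain across its plates is far smaller.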

Coupling to space

“At first glance, these two entities appear too far apart to be connected through an engine. However, by radiatively coupling one side of the engine to space, we can achieve the needed temperature difference to drive the engine,” Munday explains.

For the concept to work, the engine must radiate the energy it extracts from the Earth within the atmospheric transparency window. This is a narrow band of infrared wavelengths that pass directly into outer space without being absorbed by the atmosphere.

To demonstrate this concept, Deppe and Munday created a Stirling engine, which operates through the cyclical expansion and contraction of an enclosed gas as it moves between hot and cold ends. In their setup, the ends were aligned vertically, with a pair of plates connecting each end to the corresponding heat reservoir.

For the hot end, an aluminium mount was pressed into soil, transferring the Earth’s ambient heat to the engine’s bottom plate. At the cold end, the researchers attached a black-coated plate that emitted an upward stream of infrared radiation within the transparency window.

Outdoor experiments

In a series of outdoor experiments performed throughout the year, this setup maintained a temperature difference greater than 10 °C between the two plates during most months. This was enough to extract more than 400 mW per square metre of mechanical power throughout the night.
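
As a rough consistency check (a back-of-the-envelope sketch, not the authors’ analysis), the Carnot bound for a 10 °C plate difference is only a few per cent, which sets a minimum on how much heat the engine must move to deliver the reported mechanical power:

```python
# Carnot bound for the plate temperatures reported in the article
T_hot = 288.0          # K: ambient ground, about 15 degC
T_cold = T_hot - 10.0  # K: black emitter plate, roughly 10 degC colder

eta_max = 1.0 - T_cold / T_hot
print(f"Carnot limit: {eta_max:.1%}")   # about 3.5%

# The 400 mW/m^2 of mechanical output therefore requires at least
# ~0.4 W / eta_max of heat flow per square metre; a real Stirling
# engine runs well below the Carnot limit, so the true flux is higher.
p_mech = 0.4           # W per square metre
print(f"Minimum heat flux: {p_mech / eta_max:.0f} W/m^2")   # about 12 W/m^2
```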

“We were able to generate enough power to run a mechanical fan, which could be used for air circulation in greenhouses or residential buildings,” Munday describes. “We also configured the device to produce both mechanical and electrical power simultaneously, which adds to the flexibility of its operation.”

With this promising early demonstration, the researchers now predict that future improvements could enable the system to extract as much as 6 W per square metre under the same conditions. If rolled out commercially, the heat engine could help reduce the reliance of solar power on night-time energy storage – potentially opening a new route to cutting carbon emissions.

The research is described in Science Advances.

The post Heat engine captures energy as Earth cools at night appeared first on Physics World.

]]>
Research update System can generate electricity when solar cells cannot https://physicsworld.com/wp-content/uploads/2025/11/28-11-25-heat-engine.jpg newsletter1
Microscale ‘wave-on-a-chip’ device sheds light on nonlinear hydrodynamics https://physicsworld.com/a/microscale-wave-on-a-chip-device-sheds-light-on-nonlinear-hydrodynamics/ Fri, 28 Nov 2025 09:40:36 +0000 https://physicsworld.com/?p=125273 New device could help us better understand phenomena from ocean waves and hurricanes to weather and climate

The post Microscale ‘wave-on-a-chip’ device sheds light on nonlinear hydrodynamics appeared first on Physics World.

]]>
A new microscale version of the flumes that are commonly used to reproduce wave behaviour in the laboratory will make it far easier to study nonlinear hydrodynamics. The device consists of a layer of superfluid helium just a few atoms thick on a silicon chip, and its developers at the University of Queensland, Australia, say it could help us better understand phenomena ranging from oceans and hurricanes to weather and climate.

“The physics of nonlinear hydrodynamics is extremely hard to model because of instabilities that ultimately grow into turbulence,” explains study leader Warwick Bowen of Queensland’s Quantum Optics Laboratory. “It is also very hard to study in experiments since these often require hundreds-of-metre-long wave flumes.”

While such flumes are good for studying shallow-water dynamics like tsunamis and rogue waves, Bowen notes that they struggle to access many of the complex wave behaviours, such as turbulence, found in nature.

Amplifying the nonlinearities in complex behaviours

The team say that the geometrical structure of the new wave-on-a-chip device can be designed at will using lithographic techniques and built in a matter of days. Superfluid helium placed on its surface can then be controlled optomechanically. Thanks to these innovations, the researchers were able to experimentally measure nonlinear hydrodynamics millions of times faster than would be possible using traditional flumes. They could also “amplify” the nonlinearities of complex behaviours, making them orders of magnitude stronger than is possible in even the largest wave flumes.

“This promises to change the way we do nonlinear hydrodynamics, with the potential to discover new equations that better explain the complex physics behind it,” Bowen says. “Such a technique could be used widely to improve our ability to predict both natural and engineered hydrodynamic behaviours.”

So far, the team has used the chip to measure several effects, including wave steepening, shock fronts and solitary wave fission. While these nonlinear behaviours had been predicted in superfluids, they had never been directly observed there until now.

Waves can be generated in a very shallow depth

The Quantum Optics Laboratory researchers have been studying superfluid helium for over a decade. A key feature of this quantum liquid is that it flows without resistance, similar to the way electrons move without resistance in a superconductor. “We realized that this behaviour could be exploited in experimental studies of nonlinear hydrodynamics because it allows waves to be generated in a very shallow depth – even down to just a few atoms deep,” Bowen explains.

In conventional fluids, Bowen continues, resistance to motion becomes hugely important at small scales, and ultimately limits the nonlinear strengths accessible in traditional flume-based testing rigs. “Moving from the tens-of-centimetre depths of these flumes to tens-of-nanometres, we realized that superfluid helium could allow us to achieve many orders of magnitude stronger nonlinearities – comparable to the largest flows in the ocean – while also greatly increasing measurement speeds. It was this potential that attracted us to the project.”
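
In a conventional, gravity-driven flume the water depth h sets both the wave speed and the strength of nonlinearity; a nanometre-thick superfluid film is held by van der Waals forces rather than gravity, so the textbook shallow-water relations below are only an analogy for the scaling being exploited:

```latex
c = \sqrt{gh}, \qquad \epsilon \sim \frac{a}{h}
```

Here a is the wave amplitude and ε the dimensionless nonlinearity parameter: shrinking the depth from tens of centimetres to tens of nanometres makes even minute amplitudes strongly nonlinear.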

The experiments were far from simple, however. To do them, the researchers needed to cryogenically cool the system to near absolute zero temperatures. They also needed to fabricate exceptionally thin superfluid helium films that interact very weakly with light, as well as optical devices with structures smaller than a micron. Combining all these components required what Bowen describes as “something of a hero experiment”, with important contributions coming from the team’s co-leader, Christopher Baker, and Walter Wasserman, who was then a PhD student in the group. The wave dynamics themselves, Bowen adds, were “exceptionally complex” and were analysed by Matthew Reeves, the first author of a Science paper describing the device.

As well as the application areas mentioned earlier, the team say the new work, which is supported by the US Defense Advanced Research Projects Agency’s APAQuS Program, could also advance our understanding of strongly interacting quantum structures that are difficult to model theoretically. “Superfluid helium is a classic example of such a system,” explains Bowen, “and our measurements represent the most precise measurements of wave physics in these systems. Other applications may be found in quantum technologies, where the flow of superfluid helium could – somewhat speculatively – replace superconducting electron flow in future quantum computing architectures.”

The researchers now plan to use the device and machine learning techniques to search for new hydrodynamics equations.

The post Microscale ‘wave-on-a-chip’ device sheds light on nonlinear hydrodynamics appeared first on Physics World.

]]>
Research update New device could help us better understand phenomena from ocean waves and hurricanes to weather and climate https://physicsworld.com/wp-content/uploads/2025/11/tiny-wave.jpg
Electrical charge on objects in optical tweezers can be controlled precisely https://physicsworld.com/a/electrical-charge-on-objects-in-optical-tweezers-can-be-controlled-precisely/ Thu, 27 Nov 2025 16:21:23 +0000 https://physicsworld.com/?p=125230 New technique could shed light on electrification of aerosols

The post Electrical charge on objects in optical tweezers can be controlled precisely appeared first on Physics World.

]]>
An effect first observed decades ago by Nobel laureate Arthur Ashkin has been used to fine-tune the electrical charge on objects held in optical tweezers. Developed by an international team led by Scott Waitukaitis of the Institute of Science and Technology Austria, the new technique could improve our understanding of aerosols and clouds.

Optical tweezers use focused laser beams to trap and manipulate small objects about 100 nm to 1 micron in size. Their precision and versatility have made them a staple across fields from quantum optics to biochemistry.

Ashkin shared the 2018 Nobel prize for inventing optical tweezers, and in the 1970s he noticed that trapped objects can be electrically charged by the laser light. “However, his paper didn’t get much attention, and the observation has essentially gone ignored,” explains Waitukaitis.

Waitukaitis’ team rediscovered the effect while using optical tweezers to study how charges build up in the ice crystals accumulating inside clouds. In their experiment, micron-sized silica spheres stood in for the ice, but Ashkin’s charging effect got in their way.

Bummed out

“Our goal has always been to study charged particles in air in the context of atmospheric physics – in lightning initiation or aerosols, for example,” Waitukaitis recalls. “We never intended for the laser to charge the particle, and at first we were a bit bummed out that it did so.”

Their next thought was that they had discovered a new and potentially useful phenomenon. “Out of due diligence we of course did a deep dive into the literature to be sure that no one had seen it, and that’s when we found the old paper from Ashkin,” says Waitukaitis.

In 1976, Ashkin described how optically trapped objects become charged through a nonlinear process whereby electrons absorb two photons simultaneously. These electrons can acquire enough energy to escape the object, leaving it with a positive charge.
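
Two conditions summarize the mechanism: emission becomes energetically possible when two photon energies together exceed the material’s work function φ, and because two photons must be absorbed at once, the emission rate grows as the square of the intensity. Stated schematically (this is the standard description of two-photon photoemission, not a formula quoted from the paper):

```latex
2\hbar\omega \ge \phi, \qquad R_{2\mathrm{ph}} \propto I^{2}
```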

Yet beyond this insight, Ashkin “wasn’t able to make much sense of the effect,” Waitukaitis explains. “I have the feeling he found it an interesting curiosity and then moved on.”

Shaking and scattering

To study the effect in more detail, the team modified their optical tweezers setup so its two copper lens holders doubled as electrodes, allowing them to apply an electric field along the axis of the confining, opposite-facing laser beams. If the silica sphere became charged, this field would cause it to shake, scattering a portion of the laser light back towards each lens.

The researchers picked off this portion of the scattered light using a beam splitter, then diverted it to a photodiode, allowing them to track the sphere’s position. Finally, they converted the measured amplitude of the shaking particle into a real-time charge measurement. This allowed them to track the relationship between the sphere’s charge and the laser’s tuneable intensity.
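
As a sketch of how a shaking amplitude becomes a charge reading: a sphere of mass m and charge q in a harmonic trap, driven by a field E₀cos(ωt), responds with the textbook driven-oscillator amplitude, which is directly proportional to q once the trap frequency ω₀ and damping rate γ have been calibrated (assumed here as an illustration rather than taken from the paper):

```latex
x_{0}(\omega) = \frac{qE_{0}/m}{\sqrt{\left(\omega_{0}^{2}-\omega^{2}\right)^{2} + \gamma^{2}\omega^{2}}}
```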

Their measurements confirmed Ashkin’s 1976 hypothesis that electrons on optically-trapped objects undergo two-photon absorption, allowing them to escape. Waitukaitis and colleagues improved on this model and showed how the charge on a trapped object can be controlled precisely by simply adjusting the laser’s intensity.

As for the team’s original research goal, the effect has actually been very useful for studying the behaviour of charged aerosols.

“We can get [an object] so charged that it shoots off little ‘microdischarges’ from its surface due to breakdown of the air around it, involving just a few or tens of electron charges at a time,” Waitukaitis says. “This is going to be really cool for studying electrostatic phenomena in the context of particles in the atmosphere.”

The study is described in Physical Review Letters.

The post Electrical charge on objects in optical tweezers can be controlled precisely appeared first on Physics World.

]]>
Research update New technique could shed light on electrification of aerosols https://physicsworld.com/wp-content/uploads/2025/11/27-11-25-optical-tweezers-charging.jpg
Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge https://physicsworld.com/a/quantum-gravity-we-explore-spin-foams-and-other-potential-solutions/ Thu, 27 Nov 2025 15:00:43 +0000 https://physicsworld.com/?p=125246 Bianca Dittrich of the Perimeter Institute is our podcast guest

The post Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge appeared first on Physics World.

]]>
Earlier this autumn I had the pleasure of visiting the Perimeter Institute for Theoretical Physics in Waterloo, Canada, where I interviewed four physicists about their research. This is the second of those conversations to appear on the podcast – and it is with Bianca Dittrich, whose research focuses on quantum gravity.

Albert Einstein’s general theory of relativity does a great job at explaining gravity but it is thought to be incomplete because it is incompatible with quantum mechanics. This is an important shortcoming because quantum mechanics is widely considered to be one of science’s most successful theories.

Developing a theory of quantum gravity is a crucial goal in physics, but it is proving to be extremely difficult. In this episode, Dittrich explains some of the challenges and talks about ways forward – including her current research on spin foams. We also chat about the intersection of quantum gravity and condensed matter physics; and the difficulties of testing theories against observational data.

IOP Publishing’s new Progress In Series: Research Highlights website offers quick, accessible summaries of top papers from leading journals like Reports on Progress in Physics and Progress in Energy. Whether you’re short on time or just want the essentials, these highlights help you expand your knowledge of leading topics.

The post Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge appeared first on Physics World.

]]>
Podcasts Bianca Dittrich of the Perimeter Institute is our podcast guest https://physicsworld.com/wp-content/uploads/2025/11/27-11-25-bianca-dittrich-list.jpg newsletter
Can fast qubits also be robust? https://physicsworld.com/a/can-fast-qubits-also-be-robust/ Thu, 27 Nov 2025 10:32:21 +0000 https://physicsworld.com/?p=125186 Spin-orbit interaction adjustment produces "best of both worlds" scenario

The post Can fast qubits also be robust? appeared first on Physics World.

]]>
National Center of Competence in Research SPIN

Qubits – the building blocks of quantum computers – are plagued with a seemingly insurmountable dilemma. If they’re fast, they aren’t robust. And if they’re robust, they aren’t fast. Both qualities are important, because all potentially useful quantum algorithms rely on being able to perform many manipulations on a qubit before its state decays. But whereas faster qubits are typically realized by strongly coupling them to the external environment, enabling them to interact more strongly with the driving field, robust qubits with long coherence times are typically achieved by isolating them from their environment.

These seemingly contradictory requirements made simultaneously fast and robust qubits an unsolved challenge – until now. In an article published in Nature Communications, a team of physicists led by Dominik Zumbühl from the University of Basel, Switzerland, show that it is, in fact, possible to increase both the coherence time and operational speed of a qubit, demonstrating a pathway out of this long-standing impasse.

The magic ingredient

The key ingredient driving this discovery is something called the direct Rashba spin-orbit interaction. The best-known example of spin-orbit interaction comes from atomic physics. Consider a hydrogen atom, in which a single electron revolves around a single proton in the nucleus. During this orbital motion, the electron interacts with the static electric field generated by the positively charged nucleus. The electron in turn experiences an effective magnetic field that couples to the electron’s intrinsic magnetic moment, or spin. This coupling of the electron’s orbital motion to its spin is called spin-orbit (SO) interaction.
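
For the hydrogen-like case just described, the coupling takes the standard textbook form, with V(r) the electrostatic potential of the nucleus and L and S the electron’s orbital and spin angular momenta:

```latex
H_{\mathrm{SO}} = \frac{1}{2m^{2}c^{2}}\,\frac{1}{r}\frac{\mathrm{d}V}{\mathrm{d}r}\,\mathbf{L}\cdot\mathbf{S}
```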

Aided by collaborators at the University of Oxford, UK and TU Eindhoven in the Netherlands, Zumbühl and colleagues chose to replace this simple SO interaction with a far more complex landscape of electrostatic potential generated by a 10-nanometer-thick germanium wire coated with a thin silicon shell. By removing a single electron from this wire, they create states known as holes that can be used as qubits, with quantum information being encoded in the hole’s spin.

Importantly, the underlying crystal structure of the silicon-coated germanium wire constrains these holes to discrete energy levels called bands. “If you were to mathematically model a low-level hole residing in one of these bands using perturbation theory – a commonly applied method in which more remote bands are treated as corrections to the ground state – you would find a term that looks structurally similar to the spin–orbit interaction known from atomic physics,” explains Miguel Carballido, who conducted the work during his PhD at Basel, and is now a senior research associate at the University of New South Wales’ School of Electrical Engineering and Telecommunications in Sydney, Australia.

By encoding the quantum states in these energy levels, the spin-orbit interaction can be used to drive the hole-qubit between its two spin states. What makes this interaction special is that it can be tuned using an external electric field. Thus, by applying a stronger electric field, the interaction can be strengthened – resulting in faster qubit manipulation.

Comparison of graphs of qubit speed and qubit coherence times, showing qubit speed plateauing (top panel) and qubit coherence times peaking (bottom) at an applied electric field around 1330 mV

Reaching a plateau

This ability to make a qubit faster by tuning an external parameter isn’t new. The difference is that whereas in other approaches, a stronger interaction also means higher sensitivity to fluctuations in the driving field, the Basel researchers found a way around this problem. As they increase the electric field, the spin-orbit interaction increases up to a certain point. Beyond this point, any further increase in the electric field will cause the hole to remain stuck within a low energy band. This restricts the hole’s ability to interact with other bands to change its spin, causing the SO interaction strength to drop.

By tuning the electric field to this peak, they can therefore operate in a “plateau” region where the SO interaction is the strongest, but the sensitivity to noise is the lowest. This leads to high coherence times (see figure), meaning that the qubit remains in the desired quantum state for longer. By reaching this plateau, where the qubit is both fast and robust, the researchers demonstrate the ability to operate their device in the “compromise-free” regime.
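
The logic of the plateau can be phrased as a generic “sweet spot” condition: if the qubit’s drive rate Ω depends on the applied field E, then operating where the first derivative vanishes leaves only a much weaker second-order sensitivity to field noise δE (a schematic expansion for illustration, not a formula from the paper):

```latex
\left.\frac{\partial\Omega}{\partial E}\right|_{E^{*}} = 0
\quad\Longrightarrow\quad
\delta\Omega \approx \frac{1}{2}\left.\frac{\partial^{2}\Omega}{\partial E^{2}}\right|_{E^{*}}(\delta E)^{2}
```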

So, is quantum computing now a solved problem? The researchers’ answer is “not yet”, as there are still many challenges to overcome. “A lot of the heavy lifting is being done by the quasi 1D system provided by the nanowire,” remarks Carballido, “but this also limits scalability.” He also notes that the success of the experiment depends on being able to fabricate each qubit device very precisely, and doing this reproducibly remains a challenge.

The post Can fast qubits also be robust? appeared first on Physics World.

]]>
Research update Spin-orbit interaction adjustment produces "best of both worlds" scenario https://physicsworld.com/wp-content/uploads/2025/11/26-11-2025-national-center-of-competence-in-research-spin.jpg newsletter1
Did cannibal stars and boson stars populate the early universe? https://physicsworld.com/a/did-cannibal-stars-and-boson-stars-populate-the-early-universe/ Wed, 26 Nov 2025 13:52:29 +0000 https://physicsworld.com/?p=125211 Objects formed by exotic particles could have created primordial black holes

The post Did cannibal stars and boson stars populate the early universe? appeared first on Physics World.

]]>
In the early universe, moments after the Big Bang and cosmic inflation, clusters of exotic, massive particles could have collapsed to form bizarre objects called cannibal stars and boson stars. In turn, these could have then collapsed to form primordial black holes – all before the first elements were able to form.

This curious chain of events is predicted by a new model proposed by a trio of scientists at SISSA, the International School for Advanced Studies in Trieste, Italy.

Their proposal involves a hypothetical moment in the early universe called the early matter-dominated (EMD) epoch. This would have lasted only a few seconds after the Big Bang, but could have been dominated by exotic particles, such as the massive, supersymmetric particles predicted by string theory.

“There are no observations that hint at the existence of an EMD epoch – yet!” says SISSA’s Pranjal Ralegankar. “But many cosmologists are hoping that an EMD phase occurred because it is quite natural in many models.”

Some models of the early universe predict the formation of primordial black holes from quantum fluctuations in the inflationary field. Now, Ralegankar and his colleagues Daniele Perri and Takeshi Kobayashi propose a new and more natural pathway for forming primordial black holes via an EMD epoch.

They postulate that in the first second of existence, when the universe was small and incredibly hot, exotic massive particles emerged and clustered in dense haloes. The SISSA physicists propose that the haloes then collapsed into hypothetical objects called cannibal stars and boson stars.

Cannibal stars are powered by particles annihilating each other, which would have allowed the objects to resist further gravitational collapse for a few seconds. However, they would not have produced light like normal stars.

“The particles in a cannibal star can only talk to each other, which is why they are forced to annihilate each other to counter the immense pressure from gravity,” Ralegankar tells Physics World. “They are immensely hot, simply because the particles that we consider are so massive. The temperature of our cannibal stars can range from a few GeV to on the order of 10¹⁰ GeV. For comparison, the Sun is on the order of keV.”
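
Converting those energies into temperatures with the Boltzmann constant, k_B ≈ 8.6 × 10⁻⁵ eV/K, puts the comparison in familiar units:

```latex
1\ \mathrm{GeV} \;\leftrightarrow\; \frac{10^{9}\ \mathrm{eV}}{8.6\times10^{-5}\ \mathrm{eV\,K^{-1}}} \approx 1.2\times10^{13}\ \mathrm{K},
\qquad
1\ \mathrm{keV} \;\leftrightarrow\; 1.2\times10^{7}\ \mathrm{K}
```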

Boson stars, meanwhile, would be made from a pure Bose–Einstein condensate – a state of matter in which the individual particles quantum mechanically act as one.

Both the cannibal stars and boson stars would exist within larger haloes that would quickly collapse to form primordial black holes with masses about the same as asteroids (about 10¹⁴–10¹⁹ kg). All of this could have taken place just 10 s after the Big Bang.
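
For a sense of scale (a quick calculation from standard constants, not from the paper), the Schwarzschild radius r_s = 2GM/c² of black holes in this mass window is tiny:

```python
# Schwarzschild radii for asteroid-mass primordial black holes
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Return r_s = 2*G*M/c^2 in metres."""
    return 2.0 * G * mass_kg / C**2

for mass in (1e14, 1e19):   # the mass window quoted in the article
    print(f"M = {mass:.0e} kg  ->  r_s = {schwarzschild_radius(mass):.1e} m")
# ~1.5e-13 m and ~1.5e-8 m: far smaller than an atom, up to roughly virus-sized
```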

Dark matter possibility

Ralegankar, Perri and Kobayashi point out that the total mass of primordial black holes that their model produces matches the amount of dark matter in the universe.

“Current observations rule out black holes to be dark matter, except in the asteroid-mass range,” says Ralegankar. “We showed that our models can produce black holes in that mass range.”

Richard Massey, who is a dark-matter researcher at Durham University in the UK, agrees that microlensing observations by projects such as the Optical Gravitational Lensing Experiment (OGLE) have ruled out a population of black holes with planetary masses, but not asteroid masses. However, Massey is doubtful that these black holes could make up dark matter.

“It would be pretty contrived for them to make up a large fraction of what we call dark matter,” he says. “It’s possible that dark matter could be these primordial black holes, but they’d need to have been created with the same mass no matter where they were and whatever environment they were in, and that mass would have to be tuned to evade current experimental evidence.”

In the coming years, upgrades to OGLE and the launch of NASA’s Roman Space Telescope should finally provide sensitivity to microlensing events produced by objects in the asteroid mass range, allowing researchers to settle the matter.

It is also possible that cannibal and boson stars exist today, produced by collapsing haloes of dark matter. But unlike those proposed for the early universe, modern cannibal and boson stars would be stable and long-lasting.

“Much work has already been done for boson stars from dark matter, and we are simply suggesting that future studies should also think about the possibility of cannibal stars from dark matter,” explains Ralegankar. “Gravitational lensing would be one way to search for them, and depending on models, maybe also gamma rays from dark-matter annihilation.”

The research is described in Physical Review D.

The post Did cannibal stars and boson stars populate the early universe? appeared first on Physics World.

]]>
Research update Objects formed by exotic particles could have created primordial black holes https://physicsworld.com/wp-content/uploads/2025/11/26-11-25-exotic-stars.jpg newsletter1
Academic assassinations are a threat to global science https://physicsworld.com/a/academic-assassinations-are-a-threat-to-global-science/ Wed, 26 Nov 2025 11:00:31 +0000 https://physicsworld.com/?p=124829 Alireza Qaiumzadeh says that science can only exist if scientists are protected as civilians

The post Academic assassinations are a threat to global science appeared first on Physics World.

]]>
The deliberate targeting of scientists in recent years has become one of the most disturbing, and overlooked, developments in modern conflict. In particular, Iranian physicists and engineers have been singled out for almost two decades, with sometimes fatal consequences. In 2007 Ardeshir Hosseinpour, a nuclear physicist at Shiraz University, died in mysterious circumstances that were widely attributed to poisoning or radioactive exposure.

Over the following years, at least five more Iranian researchers have been killed. They include particle physicist Masoud Ali-Mohammadi, who was Iran’s representative at the Synchrotron-light for Experimental Science and Applications in the Middle East project. Known as SESAME, it is the only scientific project in the Middle East where Iran and Israel collaborate.

Others to have died include nuclear engineer Majid Shahriari, another Iranian representative at SESAME, and nuclear physicist Mohsen Fakhrizadeh, who were both killed by bombing or gunfire in Tehran. These attacks were never formally acknowledged, nor were they condemned by international scientific institutions. The message, however, was implicit: scientists in politically sensitive fields could be treated as strategic targets, even far from battlefields.

What began as covert killings of individual researchers has now escalated, dangerously, into open military strikes on academic communities. Israeli airstrikes on residential areas in Tehran and Isfahan during the 12-day conflict between the two countries in June led to at least 14 Iranian scientists and engineers and members of their families being killed. The scientists worked in areas such as materials science, aerospace engineering and laser physics. I believe this shift, from covert assassinations to mass casualties, crossed a line. It treats scientists as enemy combatants simply because of their expertise.

The assassinations of scientists are not just isolated tragedies; they are a direct assault on the global commons of knowledge, corroding both international law and international science. Unless the world responds, I believe the precedent being set will endanger scientists everywhere and undermine the principle that knowledge belongs to humanity, not the battlefield.

Drawing a red line

International humanitarian law is clear: civilians, including academics, must be protected. Targeting scientists based solely on their professional expertise undermines the Geneva Convention and erodes the civilian–military distinction at the heart of international law.

Iran, whatever its politics, remains a member of the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Its scientists are entitled under international law to conduct peaceful research in medicine, energy and industry. Their work is no more inherently criminal than research that other countries carry out in artificial intelligence (AI), quantum technology or genetics.

If we normalize the preemptive assassination of scientists, what stops global rivals from targeting, say, AI researchers in Silicon Valley, quantum physicists in Beijing or geneticists in Berlin? Once knowledge itself becomes a liability, no researcher is safe. Equally troubling is the silence of the international scientific community with organizations such as the UN, UNESCO and the European Research Council as well as leading national academies having not condemned these killings, past or present.

Silence is not neutral. It legitimizes the treatment of scientists as military assets. It discourages international collaboration in sensitive but essential research and it creates fear among younger researchers, who may abandon high-impact fields to avoid risk. Science is built on openness and exchange, and when researchers are murdered for their expertise, the very idea of science as a shared human enterprise is undermined.

The assassinations are not solely Iran’s loss. The scientists killed were part of a global community; collaborators and colleagues in the pursuit of knowledge. Their deaths should alarm every nation and every institution that depends on research to confront global challenges, from climate change to pandemics.

I believe that international scientific organizations should act. At a minimum, they should publicly condemn the assassination of scientists and their families; support independent investigations under international law; as well as advocate for explicit protections for scientists and academic facilities in conflict zones.

Importantly, voices within Israel’s own scientific community can play a critical role too. Israeli academics, deeply committed to collaboration and academic freedom, understand the costs of blurring the boundary between science and war. Solidarity cannot be selective.

Recent events are a test case for the future of global science. If the international community tolerates the targeting of scientists, it sets a dangerous precedent that others will follow. What appears today as a regional assault on scientists from the Global South could tomorrow endanger researchers in China, Europe, Russia or the US.

Science without borders can only exist if scientists are recognized and protected as civilians without borders. That principle is now under direct threat and the world must draw a red line – killing scientists for their expertise is unacceptable. To ignore these attacks is to invite a future in which knowledge itself becomes a weapon, and the people who create it expendable. That is a world no-one should accept.

The post Academic assassinations are a threat to global science appeared first on Physics World.

]]>
Opinion and reviews Alireza Qaiumzadeh says that science can only exist if scientists are protected as civilians https://physicsworld.com/wp-content/uploads/2025/11/2025-11-forum-quaimzadeh-dialogue-heads-1393436208-istock-kubkoo.jpg newsletter
DNA as a molecular architect https://physicsworld.com/a/dna-as-a-molecular-architect/ Wed, 26 Nov 2025 08:39:10 +0000 https://physicsworld.com/?p=124963 A new model shows how programmable DNA strands control interactions between diverse colloidal particles

The post DNA as a molecular architect appeared first on Physics World.

]]>
DNA is a fascinating macromolecule that guides protein production and enables cell replication. It has also found applications in nanoscience and materials design.

Colloidal crystals are ordered structures made from tiny particles suspended in fluid; these particles can bond to other particles and add functionality to materials. By controlling colloidal particles, we can build advanced nanomaterials using a bottom-up approach. There are several ways to control colloidal particle design, ranging from experimental conditions such as pH and temperature to external controls such as light and magnetic fields.

An exciting approach is to use DNA-mediated processes. DNA binds to colloidal surfaces and regulates how the colloids organize, providing molecular-level control. These connections are reversible and can be broken using standard experimental conditions (e.g., temperature), allowing for dynamic and adaptable systems. One important motivation is their good biocompatibility, which has enabled applications in biomedicine such as drug delivery, biosensing, and immunotherapy.

Programmable Atom Equivalents (PAEs) are large colloidal particles whose surfaces are functionalized with single-stranded DNA, while separate, much smaller DNA-coated linkers, called Electron Equivalents (EEs), roam in solution and mediate bonds between PAEs. In typical PAE-EE systems, the EEs carry multiple identical DNA ends that can all bind the same type of PAE, which limits the complexity of the assemblies and makes it harder to program highly specific connections between different PAE types.

In this study, the researchers investigate how EEs with arbitrary valency, carrying many DNA arms, regulate interactions in a binary mixture of two types of PAEs. Each EE has multiple single-stranded DNA ends of two different types, each complementary to the DNA on one of the PAE species. The team develops a statistical mechanical model to predict how EEs distribute between the PAEs and to calculate the effective interaction, a measure of how strongly the PAEs attract each other, which in turn controls the structures that can form.

Using this model, they inform Monte Carlo simulations to predict system behaviour under different conditions. The model shows quantitative agreement with simulation results and reveals an anomalous dependence of PAE-PAE interactions on EE valency, with interactions converging at high valency. Importantly, the researchers identify an optimal valency that maximizes selectivity between targeted and non-targeted binding pairs. This groundbreaking research provides design principles for programmable self-assembly and offers a framework that can be integrated into DNA nanoscience.

Read the full article

Designed self-assembly of programmable colloidal atom-electron equivalents

Xiuyang Xia et al 2025 Rep. Prog. Phys. 88 078101

Do you want to learn more about this topic?

Assembly of colloidal particles in solution by Kun Zhao and Thomas G Mason (2018)

The post DNA as a molecular architect appeared first on Physics World.

]]>
Research highlight A new model shows how programmable DNA strands control interactions between diverse colloidal particles https://physicsworld.com/wp-content/uploads/2025/11/dna-illustration-1160281740-istock-ktsimage.jpg
The link between protein evolution and statistical physics https://physicsworld.com/a/the-link-between-protein-evolution-and-statistical-physics/ Wed, 26 Nov 2025 08:37:20 +0000 https://physicsworld.com/?p=125199 Protein evolution plays a key role in many biological processes that are essential for life – but what does it have to do with physics?

The post The link between protein evolution and statistical physics appeared first on Physics World.

]]>
Proteins are made up of a sequence of building blocks called amino acids. Understanding these sequences is crucial for studying how proteins work, how they interact with other molecules, and how changes (mutations) can lead to diseases.

These mutations happen over vastly different time periods and are not completely random but strongly correlated, both in space (distinct sites along the sequences) and in time (subsequent mutations of the same site).

It turns out that these correlations are very reminiscent of disordered physical systems, notably glasses, emulsions, and foams.

A team of researchers from Italy and France have now used this similarity to build a new statistical model to simulate protein evolution. They went on to study the role of different factors causing these mutations.

They found that the initial (ancestral) protein sequence has a significant influence on the evolution process, especially in the short term. This means that information from the ancestral sequence can be traced back over a certain period and is not completely lost.

The strength of interactions between different amino acids within the protein affects how long this information persists.

Although ultimately the team did find differences between the evolution of physical systems and that of protein sequences, this kind of insight would not have been possible without using the language of statistical physics, i.e. space-time correlations.

The researchers expect that their results will soon be tested in the lab thanks to upcoming advances in experimental techniques.

Read the full article

Fluctuations and the limit of predictability in protein evolution

S Rossi et al 2025 Rep. Prog. Phys. 88 078102

The post The link between protein evolution and statistical physics appeared first on Physics World.

]]>
Research highlight Protein evolution plays a key role in many biological processes that are essential for life – but what does it have to do with physics? https://physicsworld.com/wp-content/uploads/2025/11/20251125_rossi_protein-1.jpg
‘Caustic’ light patterns inspire new glass artwork https://physicsworld.com/a/caustic-light-patterns-inspire-new-glass-artwork/ Tue, 25 Nov 2025 17:00:07 +0000 https://physicsworld.com/?p=125195 The piece is based on the research of theoretical physicist Michael Berry

The post ‘Caustic’ light patterns inspire new glass artwork appeared first on Physics World.

]]>
UK artist Alison Stott has created a new glass and light artwork – entitled Naturally Focused – that is inspired by the work of theoretical physicist Michael Berry from the University of Bristol.

Stott, who recently completed an MA in glass at Arts University Plymouth, previously spent over two decades working in visual effects for film and television, where she focussed on creating photorealistic imagery.

Her studies touched on how complex phenomena can arise from seemingly simple set-ups, for example in a rotating glass sculpture lit by LEDs.

“My practice inhabits the spaces between art and science, glass and light, craft and experience,” notes Stott. “Working with molten glass lets me embrace chaos, indeterminacy, and materiality, and my work with caustics explores the co-creation of light, matter, and perception.”

The new artwork is based on “caustics” – the curved patterns that form when light is reflected or refracted by curved surfaces or objects.

The focal point of the artwork is a hand-blown glass lens that was waterjet-cut into a circle and polished so that its internal structure and optical behaviour are clearly visible. The lens is suspended within stainless steel gyroscopic rings and held by a brass support and stainless steel backplate.

The rings can be tilted or rotated to activate a shifting field of caustic projections that ripples across the artwork. Mathematical equations that describe the “singularities of light” visible on the glass surface are also engraved onto the brass.

The work is inspired by Berry’s research into the relationship between classical and quantum behaviour and how subtle geometric structures govern how waves and particles behave.

Berry recently won the 2025 Isaac Newton Medal and Prize, which is presented by the Institute of Physics, for his “profound contributions across mathematical and theoretical physics in a career spanning over 60 years”.

Stott says that working with Berry has pushed her understanding of caustics. “The more I learn about how these structures emerge and why they matter across physics, the more compelling they become,” notes Stott. “My aim is to let the phenomena speak for themselves, creating conditions where people can directly encounter physical behaviour and perhaps feel the same awe and wonder I do.”

The artwork will go on display at the University of Bristol following a ceremony to be held on 27 November.

The post ‘Caustic’ light patterns inspire new glass artwork appeared first on Physics World.

]]>
Blog The piece is based on the research of theoretical physicist Michael Berry https://physicsworld.com/wp-content/uploads/2025/11/naturally-focused-25-11-2025.jpeg newsletter1
Is your WiFi spying on you? https://physicsworld.com/a/is-your-wifi-spying-on-you/ Tue, 25 Nov 2025 16:00:12 +0000 https://physicsworld.com/?p=125150 "Beamforming feedback information" in latest version of the technology can identify individuals passing through radio networks with almost 100% accuracy, say researchers

The post Is your WiFi spying on you? appeared first on Physics World.

]]>
WiFi networks could pose significant privacy risks even to people who aren’t carrying or using WiFi-enabled devices, say researchers at the Karlsruhe Institute of Technology (KIT) in Germany. According to their analysis, the current version of the technology passively records information that is detailed enough to identify individuals moving through networks, prompting them to call for protective measures in the next iteration of WiFi standards.

Although wireless networks are ubiquitous and highly useful, they come with certain privacy and security risks. One such risk stems from a phenomenon known as WiFi sensing, which the researchers at KIT’s Institute of Information Security and Dependability (KASTEL) define as “the inference of information about the networks’ environment from its signal propagation characteristics”.

“As signals propagate through matter, they interfere with it – they are either transmitted, reflected, absorbed, polarized, diffracted, scattered, or refracted,” they write in their study, which is published in the Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security (CCS ’25). “By comparing an expected signal with a received signal, the interference can be estimated and used for error correction of the received data.”

An under-appreciated consequence, they continue, is that this estimation contains information about any humans who may have unwittingly been in the signal’s path. By carefully analysing the signal’s interference with the environment, they say, “certain aspects of the environment can be inferred” – including whether humans are present, what they are doing and even who they are.
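
In the simplest linear picture of that estimation, each received symbol y is the transmitted symbol x filtered by a channel response H plus noise n; the channel estimate, recovered per subcarrier from known pilot symbols, is what carries the environmental information (a schematic model, not the paper’s specific formulation):

```latex
y = Hx + n, \qquad \hat{H} = \frac{y}{x}
```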

“Identity inference attack” is a threat

The KASTEL team terms this an “identity inference attack” and describes it as a threat that is as widespread as it is serious. “This technology turns every router into a potential means for surveillance,” says Julian Todt, who co-led the study with his KIT colleague Thorsten Strufe. “For example, if you regularly pass by a café that operates a WiFi network, you could be identified there without noticing it and be recognized later – for example by public authorities or companies.”

While Todt acknowledges that security services, cybercriminals and others do have much simpler ways of tracking individuals – for example by accessing data from CCTV cameras or video doorbells – he argues that the ubiquity of wireless networks lends itself to being co-opted as a near-permanent surveillance infrastructure. There is, he adds, “one concerning property” about wireless networks: “They are invisible and raise no suspicion.”

Identity of individuals could be extracted using a machine-learning model

Although the possibility of using WiFi networks in this way is not new, most previous WiFi-based security attacks worked by analysing so-called channel state information (CSI). These data indicate how a radio signal changes when it reflects off walls, furniture, people or animals. However, the KASTEL researchers note that the latest WiFi standard, known as WiFi 5 (802.11ac), changes the picture by enabling a new and potentially easier form of attack based on beamforming feedback information (BFI).

While beamforming uses similar information as CSI, Todt explains that it does so on the sender’s side instead of the receiver’s. This means that a BFI-based surveillance method would require nothing more than standard devices connected to the WiFi network. “The BFI could be used to create images from different perspectives that might then serve to identify persons that find themselves in the WiFi signal range,” Todt says. “The identity of individuals passing through these radio waves could then be extracted using a machine-learning model. Once trained, this model would make an identification in just a few seconds.”

In their experiments, Todt and colleagues studied 197 participants as they walked through a WiFi field while being simultaneously recorded with CSI and BFI from four different angles. The participants had five different “walking styles” (such as walking normally and while carrying a backpack) as well as different gaits. The researchers found that they could identify individuals with nearly 100% accuracy, regardless of the recording angle or the individual’s walking style or gait.
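
The paper’s pipeline is not reproduced here, but the shape of such an identity classifier is conventional supervised learning: feature vectors derived from BFI or CSI frames, labelled by person, fed to an off-the-shelf model. A minimal sketch, with synthetic random data standing in for real WiFi features and scikit-learn assumed available:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 197 "participants", 20 recordings each,
# 64-dimensional feature vectors derived from BFI/CSI frames.
n_people, n_recs, n_feat = 197, 20, 64
signatures = rng.normal(scale=2.0, size=(n_people, n_feat))  # per-person "gait signature"
X = np.repeat(signatures, n_recs, axis=0) + rng.normal(size=(n_people * n_recs, n_feat))
y = np.repeat(np.arange(n_people), n_recs)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"identification accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```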

“Risks to our fundamental rights”

“The technology is powerful, but at the same time entails risks to our fundamental rights, especially to privacy,” says Strufe. He warns that authoritarian states could use the technology to track demonstrators and members of opposition groups, prompting him and his colleagues to “urgently call” for protective measures and privacy safeguards to be included in the forthcoming IEEE 802.11bf WiFi standard.

“The literature on all novel sensing solutions highlights their utility for various novel applications,” says Todt, “but the privacy risks that are inherent to such sensing are often overlooked, or worse — these sensors are claimed to be privacy-friendly without any rationale for these claims. As such, we feel it necessary to point out the privacy risks that novel solutions such as WiFi sensing bring with them.”

The researchers say they would like to see approaches developed that can mitigate the risk of identity inference attack. However, they are aware that this will be difficult, since this type of attack exploits the physical properties of the actual WiFi signal. “Ideally, we would influence the WiFi standard to contain privacy-protections in future versions,” says Todt, “but even the impact of this would be limited as there are already millions of WiFi devices out there that are vulnerable to such an attack.”

The post Is your WiFi spying on you? appeared first on Physics World.

]]>
Research update "Beamforming feedback information" in latest version of the technology can identify individuals passing through radio networks with almost 100% accuracy, say researchers https://physicsworld.com/wp-content/uploads/2025/11/wifi.jpg newsletter1
Ramy Shelbaya: the physicist and CEO capitalizing on quantum randomness https://physicsworld.com/a/ramy-shelbaya-the-physicist-and-ceo-capitalizing-on-quantum-randomness/ Tue, 25 Nov 2025 13:59:34 +0000 https://physicsworld.com/?p=124976 Ramy Shelbaya from Quantum Dice talks about using quantum mechanics to generate random numbers

The post Ramy Shelbaya: the physicist and CEO capitalizing on quantum randomness appeared first on Physics World.

]]>
Ramy Shelbaya has been hooked on physics ever since he was a 12-year-old living in Egypt and read about the Joint European Torus (JET) fusion experiment in the UK. Biology and chemistry were interesting to him but never quite as “satisfying”, especially as they often seemed to boil down to physics in the end. “So I thought, maybe that’s where I need to go,” Shelbaya recalls.

His instincts seem to have led him in the right direction. Shelbaya is now chief executive of Quantum Dice, an Oxford-based start-up he co-founded in 2020 to develop quantum hardware for exploiting the inherent randomness in quantum mechanics. It closed its first funding round in 2021 with a seven-figure investment from a consortium of European investors, while also securing grant funding on the same scale.

Now providing cybersecurity hardware systems for clients such as BT, Quantum Dice is launching a piece of hardware for probabilistic computing, based on the same core innovation. Full of joy and zeal for his work, Shelbaya admits that his original decision to pursue physics was “scary”. Back then, he didn’t know anyone who had studied the subject and was not sure where it might lead.

The journey to a start-up

Fortunately, Shelbaya’s parents were onboard from the start and their encouragement proved “incredibly helpful”. His teachers also supported him to explore physics in his extracurricular reading, instilling a confidence in the subject that eventually led Shelbaya to do undergraduate and master’s degrees in physics at École normale supérieure PSL in France.

He then moved to the UK to do a PhD in atomic and laser physics at the University of Oxford. Just as he was wrapping up his PhD, Oxford University Innovation (OUI) – which manages its technology transfer and consulting activities – launched a new initiative that proved pivotal to Shelbaya’s career.

Ramy Shelbaya

OUI had noted that the university generated a lot of IP and research results that could be commercialized but that the academics producing it often favoured academic work over progressing the technology transfer themselves. On the other hand, lots of students were interested in entering the world of business.

To encourage those who might be business-minded to found their own firms, while also fostering more spin-outs from the university’s patents and research, OUI launched the Student Entrepreneurs’ Programme (StEP). A kind of talent show to match budding entrepreneurs with technology ready for development, StEP invited participants to team up, choose commercially promising research from the university, and pitch for support and mentoring to set up a company.

As part of Oxford’s atomic and laser physics department, Shelbaya was aware that it had been developing a quantum random number generator. So when the competition was launched, he collaborated with other competition participants to pitch the device. “My team won, and this is how Quantum Dice was born.”

Random value

The initial technology was geared towards quantum random number generation, in particular for use in cybersecurity. Random numbers are at the heart of all encryption algorithms, but generating truly random numbers has been a stumbling block: the “pseudorandom” numbers that systems typically make do with are prone to prediction and hence to security breaches.

Quantum mechanics provides a potential solution because there is inherent randomness in the values of certain quantum properties. Although for a long time this randomness was “a bane to quantum physicists”, as Shelbaya puts it, Quantum Dice and other companies producing quantum random number generators are now harnessing it for useful technologies.
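
As a minimal illustration of the pseudorandomness problem (a sketch in standard-library Python, not Quantum Dice’s technology): a seeded pseudorandom generator is fully deterministic, so anyone who learns or guesses the seed can reproduce every “random” value it will ever emit.

    import random

    # A pseudorandom generator is a deterministic algorithm: the same seed
    # always yields the same stream, so leaking the seed leaks everything.
    victim = random.Random(12345)
    attacker = random.Random(12345)  # the attacker has guessed the seed

    victim_keys = [victim.getrandbits(128) for _ in range(3)]
    predicted = [attacker.getrandbits(128) for _ in range(3)]

    assert victim_keys == predicted  # perfect prediction, zero true randomness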

Where Quantum Dice sees itself as having an edge over its competitors is in its real-time quality assurance of the true quantum randomness of the device’s output. This means it should be able to spot any corruption to the output due to environmental noise or someone tampering with the device, which is an issue with current technologies.
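
Quantum Dice has not published the details of its assurance scheme here, but as a flavour of what continuous self-testing can look like, the hypothetical health test below flags a bitstream whose bias drifts outside statistical expectations (a simple illustration, not the company’s method):

    import math

    def bias_alarm(bits, z_threshold=4.0):
        """Flag a bitstream whose ones-fraction strays from 1/2 by more
        than z_threshold standard deviations (illustrative health test)."""
        n = len(bits)
        sigma = math.sqrt(n) / 2          # std dev of the ones-count for fair bits
        z = abs(sum(bits) - n / 2) / sigma
        return z > z_threshold

    print(bias_alarm([1, 0] * 5000))             # balanced stream: False
    print(bias_alarm([1] * 6000 + [0] * 4000))   # heavily biased: True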

Quantum Dice already offers Quantum Random Number Generator (QRNG) products in a range of form factors that integrate directly within servers, PCs and hardware security systems. Clients can also access the company’s cloud-based solution – Quantum Entropy-as-a-Service – which, powered by its QRNG hardware, integrates into the Internet of Things and cloud infrastructure.

Recently Quantum Dice has also launched a probabilistic computing processor, based on its QRNG, for use in algorithms centred on probabilities. These are often geared towards optimization problems in sectors such as supply chains and logistics, finance, telecommunications and energy. They are also suited to simulating quantum systems and to Boltzmann machines – a type of energy-based machine-learning model for which Shelbaya says researchers “have long sought efficient hardware”.

Stress testing

Quantum Dice won the start-up competition in 2019, but things got trickier when the company was ready to be incorporated, which happened just as the first COVID-19 lockdown began. Shelbaya moved the prototype device into his living room – the only place where the team could guarantee access to it – but it turned out the real challenges lay elsewhere.

“One of the first things we needed was investments, and really, at that stage of the company, what investors are investing in is you,” explains Shelbaya, highlighting how difficult this is when you cannot meet in person. On the plus side, since all his meetings were remote, he could speak to investors in Asia in the morning, Europe in the afternoon and the US in the evening, all within the same day.

Another challenge was how to present the technology simply enough so that people would understand and trust it, while not making it seem so simple that anyone could be doing it. “There’s that sweet spot in the middle,” says Shelbaya. “That is something that took time, because it’s a muscle that I had never worked.”

Due rewards

The company performed well for its size and sector in securing investment when its first round of funding closed in 2021. Shelbaya is shy of attributing the success to his or even the team’s abilities alone, suggesting this would “underplay a lot of other factors”. These include the rising interest in quantum technologies, and the advantage of securing government grant funding at the same time, which he feels serves as “an additional layer of certification”.

For Shelbaya every day is different and so are the challenges. Quantum Dice is a small new company where many of the 17 staff are still fresh from university, so nurturing trust among clients – particularly in the high-stakes world of cybersecurity – is no small feat. Managing a group of ambitious, energetic and driven young people can be complicated too.

But there are many rewards, ranging from seeing a piece of hardware finally work as intended and closing a deal with a client, to simply seeing a team “you have been working to develop, working together without you”.

For others hoping to follow a similar career path, Shelbaya’s advice is to do what you enjoy – not just because you will have fun but because you will be good at it too. “Do what you enjoy,” he says, “because you will likely be great at it.”

The post Ramy Shelbaya: the physicist and CEO capitalizing on quantum randomness appeared first on Physics World.

]]>
Feature Ramy Shelbaya from Quantum Dice talks about using quantum mechanics to generate random numbers https://physicsworld.com/wp-content/uploads/2025/11/2025-11-careers-demming-numbers-falling.jpg newsletter1
‘Patchy’ nanoparticles emerge from new atomic stencilling technique https://physicsworld.com/a/patchy-nanoparticles-emerge-from-new-atomic-stencilling-technique/ Tue, 25 Nov 2025 09:00:39 +0000 https://physicsworld.com/?p=125152 Multipurpose structures could find use in targeted drug delivery, catalysis, microelectronics and tissue engineering

The post ‘Patchy’ nanoparticles emerge from new atomic stencilling technique appeared first on Physics World.

]]>
Researchers in the US and Korea have created nanoparticles with carefully designed “patches” on their surfaces using a new atomic stencilling technique. These patches can be controlled with incredible precision, and could find use in targeted drug delivery, catalysis, microelectronics and tissue engineering.

The first step in the stencilling process is to create a mask on the surface of gold nanoparticles. This mask prevents a “paint” made from grafted-on polymers from attaching to certain areas of the nanoparticles.

“We then use iodide ions as a stencil,” explains Qian Chen, a materials scientist and engineer at the University of Illinois at Urbana-Champaign, US, who led the new research effort. “These adsorb (stick) to the surface of the nanoparticles in specific patterns that depend on the shape and atomic arrangement of the nanoparticles’ facets. That’s how we create the patches – the areas where the polymers selectively bind.” Chen adds that she and her collaborators can then tailor the surface chemistry of these tiny patchy nanoparticles in a very controlled way.

A gap in the field of microfabrication stencilling

The team decided to develop the technique after realizing there was a gap in the field of microfabrication stencilling. While techniques in this area have advanced considerably in recent years, allowing ever-smaller microdevices to be incorporated into ever-faster computer chips, most of them rely on top-down approaches for precisely controlling nanoparticles. By comparison, Chen says, bottom-up methods have been largely unexplored even though they are low-cost, solution-processable, scalable and compatible with complex, curved and three-dimensional surfaces.

Reporting their work in Nature, the researchers say they were inspired by the way proteins naturally self-assemble. “One of the holy grails in the field of nanomaterials is making complex, functional structures from nanoscale building blocks,” explains Chen. “It’s extremely difficult to control the direction and organization of each nanoparticle. Proteins have different surface domains, and thanks to their interactions with each other, they can make all the intricate machines we see in biology. We therefore adopted that strategy by creating patches or distinct domains on the surface of the nanoparticles.”

“Elegant and impressive”

Philip Moriarty, a physicist at the University of Nottingham, UK, who was not involved in the project, describes it as “elegant and impressive” work. “Chen and colleagues have essentially introduced an entirely new mode of self-assembly that allows for much greater control of nanoparticle interactions,” he says, “and the ‘atomic stencil’ concept is clever and versatile.”

The team, which includes researchers at the University of Michigan, Pennsylvania State University, Cornell, Brookhaven National Laboratory and Korea’s Chonnam National University as well as Urbana-Champaign, agrees that the potential applications are vast. “Since we can now precisely control the surface properties of these nanoparticles, we can design them to interact with their environment in specific ways,” explains Chen. “That opens the door for more effective drug delivery, where nanoparticles can target specific cells. It could also lead to new types of catalysts, more efficient microelectronic components and even advanced materials with unique optical and mechanical properties.”

She and her colleagues say they now want to extend their approach to different types of nanoparticles and different substrates to find out how versatile it truly is. They will also be developing computational models that can predict the outcome of the stencilling process – something that would allow them to design and synthesize patchy nanoparticles for specific applications on demand.

The post ‘Patchy’ nanoparticles emerge from new atomic stencilling technique appeared first on Physics World.

]]>
Research update Multipurpose structures could find use in targeted drug delivery, catalysis, microelectronics and tissue engineering https://physicsworld.com/wp-content/uploads/2025/11/nanoparticles.jpg newsletter1
Scientists in China celebrate the completion of the underground JUNO neutrino observatory https://physicsworld.com/a/scientists-in-china-celebrate-the-completion-of-the-underground-juno-neutrino-observatory/ Mon, 24 Nov 2025 17:00:14 +0000 https://physicsworld.com/?p=125155 The observatory has also released its first results on the so-called solar neutrino tension

The post Scientists in China celebrate the completion of the underground JUNO neutrino observatory appeared first on Physics World.

]]>
The $330m Jiangmen Underground Neutrino Observatory (JUNO) has released its first results following the completion of the huge underground facility in August.

JUNO is located in Kaiping City, Guangdong Province, in the south of China, around 150 km west of Hong Kong.

Construction of the facility began in 2015 and was expected to take some five years, but the project suffered serious flooding, which delayed completion.

JUNO, which is expected to run for more than 30 years, aims to study the relationship between the three types of neutrino: electron, muon and tau. Although JUNO will be able to detect neutrinos produced by supernovae as well as those from Earth, the observatory will mainly measure the energy spectrum of electron antineutrinos released by the Yangjiang and Taishan nuclear power plants, which both lie 52.5 km away.

To do this, the facility has an 80 m high, 50 m diameter experimental hall located 700 m underground. Its main feature is a 35 m diameter spherical neutrino detector containing 20,000 tonnes of liquid scintillator. When an electron antineutrino occasionally bumps into a proton in the liquid, it triggers a reaction that produces two flashes of light, which are detected by the 43,000 photomultiplier tubes that observe the scintillator.
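
The reaction in question is inverse beta decay, the standard detection channel for reactor antineutrinos:

    \bar{\nu}_e + p \rightarrow e^+ + n

The positron promptly deposits its energy and annihilates, giving the first flash; the neutron wanders for a couple of hundred microseconds before being captured by a nucleus, releasing a gamma ray that produces the second, delayed flash.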

On 18 November, a paper was submitted to the arXiv preprint server concluding that the detector’s key performance indicators fully meet or surpass design expectations.

New measurement 

Neutrinos oscillate from one flavour to another as they travel near the speed of light, rarely interacting with matter. This oscillation is a result of each flavour being a combination of three neutrino mass states.

Yet scientists do not know the absolute masses of the three neutrino mass states. What they can measure are the neutrino oscillation parameters, known as θ₁₂, θ₂₃ and θ₁₃, as well as the differences between the squared masses (Δm²) of pairs of mass states.

A second JUNO paper submitted on 18 November used data collected between 26 August and 2 November to measure the solar neutrino oscillation parameters θ₁₂ and Δm²₂₁ with a factor of 1.6 better precision than previous experiments.
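
For reference, in the two-flavour approximation that dominates at these baselines (a standard textbook expression, not taken from the JUNO papers), the probability that a reactor antineutrino of energy E survives as an electron antineutrino after travelling a distance L is

    P(\bar{\nu}_e \to \bar{\nu}_e) \approx 1 - \sin^2(2\theta_{12}) \, \sin^2\!\left( \frac{\Delta m^2_{21} L}{4E} \right)

so fitting the oscillatory pattern in the measured energy spectrum pins down both θ₁₂ and Δm²₂₁.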

Those earlier results, which used solar neutrinos instead of reactor antineutrinos, showed a 1.5 “sigma” discrepancy with the Standard Model of particle physics. The new JUNO measurements confirmed this difference, dubbed the solar neutrino tension, but further data will be needed to prove or disprove the finding.

“Achieving such precision within only two months of operation shows that JUNO is performing exactly as designed,” says Yifang Wang from the Institute of High Energy Physics of the Chinese Academy of Sciences, who is JUNO project manager and spokesperson. “With this level of accuracy, JUNO will soon determine the neutrino mass ordering, test the three-flavour oscillation framework, and search for new physics beyond it.”

JUNO, which is an international collaboration of more than 700 scientists from 75 institutions across 17 countries including China, France, Germany, Italy, Russia, Thailand, and the US, is the second neutrino experiment in China, after the Daya Bay Reactor Neutrino Experiment, which successfully measured a key neutrino oscillation parameter called θ₁₃ in 2012 before being closed down in 2020.

JUNO is also one of three next-generation neutrino experiments, the other two being the Hyper-Kamiokande in Japan and the Deep Underground Neutrino Experiment in the US. Both are expected to become operational later this decade.

The post Scientists in China celebrate the completion of the underground JUNO neutrino observatory appeared first on Physics World.

]]>
News The observatory has also released its first results on the so-called solar neutrino tension https://physicsworld.com/wp-content/uploads/2025/11/juno-24-11-2025.jpg newsletter
Accelerator experiment sheds light on missing blazar radiation https://physicsworld.com/a/accelerator-experiment-sheds-light-on-missing-blazar-radiation/ Mon, 24 Nov 2025 15:08:30 +0000 https://physicsworld.com/?p=125159 Measurement discounts loss from plasma instabilities

The post Accelerator experiment sheds light on missing blazar radiation appeared first on Physics World.

]]>
New experiments at CERN by an international team have ruled out a potential source of intergalactic magnetic fields. The existence of such fields is invoked to explain why we do not observe secondary gamma rays originating from blazars.

Led by Charles Arrowsmith at the UK’s University of Oxford, the team suggests the absence of gamma rays could be the result of an unexplained phenomenon that took place in the early universe.

A blazar is an extraordinarily bright object with a supermassive black hole at its core. Some of the matter falling into the black hole is accelerated outwards in a pair of opposing jets, creating intense beams of radiation. If a blazar jet points towards Earth, we observe a bright source of light including high-energy teraelectronvolt gamma rays.

During their journey across intergalactic space, these gamma-ray photons will occasionally collide with the background starlight that permeates the universe. These collisions can create cascades of electrons and positrons, which then scatter off photons to create gamma rays in the gigaelectronvolt energy range. These gamma rays should travel in the direction of the original jet, but this secondary radiation has never been detected.
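
A rough, back-of-the-envelope estimate shows why the cascade should glow at gigaelectronvolt energies. An electron (or positron) of Lorentz factor γ that inverse-Compton scatters a low-energy background photon – typically from the cosmic microwave background, with mean energy ⟨E⟩ ≈ 6 × 10⁻⁴ eV – boosts it to an average energy

    E_{\rm IC} \approx \frac{4}{3} \gamma^2 \langle E \rangle

so a cascade electron of around 0.5 TeV (γ ≈ 10⁶) yields photons of order 1 GeV.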

Deflecting field

Magnetic fields could be the reason for this dearth, as Arrowsmith explains: “The electrons and positrons in the pair cascade would be deflected by an intergalactic magnetic field, so if this is strong enough, we could expect these pairs to be steered away from the line of sight to the blazar, along with the reprocessed gigaelectronvolt gamma rays.” It is not clear, however, that such fields exist – and if they do, what could have created them.

Another explanation for the missing gamma rays involves the extremely sparse plasma that permeates intergalactic space. The beam of electron–positron pairs could interact with this plasma, generating magnetic fields that separate the pairs. Over millions of years of travel, this process could lead to beam–plasma instabilities that reduce the beam’s ability to create gigaelectronvolt gamma rays that are focused on Earth.

Oxford’s Gianluca Gregori explains: “We created an experimental platform at the HiRadMat facility at CERN to create electron–positron pairs and transport them through a metre-long ambient argon plasma, mimicking the interaction of pair cascades from blazars with the intergalactic medium.” Once the pairs had passed through the plasma, the team measured the degree to which they had been separated.

Tightly focused

Called Fireball, the experiment found that the beams remained far more tightly focused than expected. “When these laboratory results are scaled up to the astrophysical system, they confirm that beam–plasma instabilities are not strong enough to explain the absence of the gigaelectronvolt gamma rays from blazars,” Arrowsmith explains. Unless the pair beam is perfectly collimated, or composed of pairs with exactly equal energies, instabilities are actively suppressed in the plasma.

While the experiment suggests that an intergalactic magnetic field remains the best explanation for the lack of gamma rays, the mystery is far from solved. Gregori explains, “The early universe is believed to be extremely uniform – but magnetic fields require electric currents, which in turn need gradients and inhomogeneities in the primordial plasma.” As a result, confirming the existence of such a field could point to new physics beyond the Standard Model, which may have dominated in the early universe.

More information could come with the opening of the Cherenkov Telescope Array Observatory, which will comprise ground-based gamma-ray detectors at facilities in Spain and Chile and will vastly improve on the resolution of current-generation detectors.

The research is described in PNAS.

The post Accelerator experiment sheds light on missing blazar radiation appeared first on Physics World.

]]>
Research update Measurement discounts loss from plasma instabilities https://physicsworld.com/wp-content/uploads/2025/11/24-11-25-cosmic-fireball.jpg
Why quantum metrology is the driving force for best practice in quantum standardization https://physicsworld.com/a/why-quantum-metrology-is-the-driving-force-for-best-practice-in-quantum-standardization/ Mon, 24 Nov 2025 11:10:28 +0000 https://physicsworld.com/?p=125133 International efforts on standards development will fast-track the adoption and commercialization of quantum technologies

The post Why quantum metrology is the driving force for best practice in quantum standardization appeared first on Physics World.

]]>

How do standards support the translation of quantum science into at-scale commercial opportunities?

The standardization process helps to promote the legitimacy of emerging quantum technologies by distilling technical inputs and requirements from all relevant stakeholders across industry, research and government. Put simply: if you understand a technology well enough to standardize elements of it, that’s when you know it’s moved beyond hype and theory into something of practical use for the economy and society.

What are the upsides of standardization for developers of quantum technologies and, ultimately, for end-users in industry and the public sector?

Standards will, over time, help the quantum technology industry achieve critical mass on the supply side, with those economies of scale driving down prices and increasing demand. As the nascent quantum supply chain evolves – linking component manufacturers, subsystem developers and full-stack quantum computing companies – standards will also ensure interoperability between products from different vendors and different regions.

Those benefits flow downstream as well because standards, when implemented properly, increase trust among end-users by defining a minimum quality of products, processes and services. Equally important, as new innovations are rolled out into the marketplace by manufacturers, standards will ensure compatibility across current and next-generation quantum systems, reducing the likelihood of lock-ins to legacy technologies.

What’s your role in coordinating NPL’s standards effort in quantum science and technology?

I have strategic oversight of our core technical programmes in quantum computing, quantum networking, quantum metrology and quantum-enabled PNT (position, navigation and timing). It’s a broad-scope remit that spans research and training, as well as responsibility for standardization and international collaboration – two activities that often go hand-in-hand.

Right now, we have over 150 people working within the NPL quantum metrology programme. Their collective focus is on developing the measurement science necessary to build, test and evaluate a wide range of quantum devices and systems. Our research helps innovators, whether in an industry or university setting, to push the limits of quantum technology by providing leading-edge capabilities and benchmarking to measure the performance of new quantum products and services.

Tim Prior

It sounds like there are multiple layers of activity.

That’s right. For starters, we have a team focusing on the inter-country strategic relationships, collaborating closely with colleagues at other National Metrology Institutes (like NIST in the US and PTB in Germany). A key role here is played by our standards specialist who, given his background working in the standards development organizations (SDOs), acts as a “connector” between NPL’s quantum metrology teams and, more widely, the UK’s National Quantum Technology Programme and the international SDOs.

We also have a team of technical experts who sit on specialist working groups within the SDOs. Their inputs to standards development are not about promoting NPL’s interests; rather, they provide expertise and experience gained from cutting-edge metrology, and build a consolidated set of requirements gathered from stakeholders across the quantum community to further the UK’s strategic and technical priorities in quantum.

So NPL’s quantum metrology programme provides a focal point for quantum standardization?

Absolutely. We believe that quantum metrology and standardization are key enablers of quantum innovation, fast-tracking the adoption and commercialization of quantum technologies while building confidence among investors and across the quantum supply chain and early-stage user base. For NPL and its peers, the task right now is to agree on the terminology and best practice as we figure out the performance metrics, benchmarks and standards that will enable quantum to go mainstream.

How does NPL engage the UK quantum community on standards development?

Front-and-centre is the UK Quantum Standards Network Pilot. This initiative – which is being led by NPL – brings together representatives from industry, academia and government to work on all aspects of standards development: commenting on proposals and draft standards; discussing UK standards policy and strategy; and representing the UK in the European and international SDOs. The end-game? To establish the UK as a leading voice in quantum standardization, both strategically and technically, and to ensure that UK quantum technology companies have access to global supply chains and markets.

What about NPL outreach to prospective end-users of quantum technologies?

The Quantum Standards Network Pilot also provides a direct line to prospective end-users of quantum technologies in business sectors like finance, healthcare, pharmaceuticals and energy. What’s notable is that the end-users are often preoccupied with questions that link in one way or another to standardization. For example: how well do quantum technologies stack up against current solutions? Are quantum systems reliable enough yet? What does quantum cost to implement and maintain, including long-term operational costs? Are there other emerging technologies that could do the same job? Is there a solid, trustworthy supply chain?

It’s clear that international collaboration is mandatory for successful standards development. What are the drivers behind the recently announced NMI-Q collaboration?

The quantum landscape is changing fast, with huge scope for disruptive innovation in quantum computing, quantum communications and quantum sensing. Faced with this level of complexity, NMI-Q leverages the combined expertise of the world’s leading National Metrology Institutes – from the G7 countries and Australia – to accelerate the development and adoption of quantum technologies.

No one country can do it all when it comes to performance metrics, benchmarks and standards in quantum science and technology. As such, NMI-Q’s priorities are to conduct collaborative pre-standardization research; develop a set of “best measurement practices” needed by industry to fast-track quantum innovation; and, ultimately, shape the global standardization effort in quantum. NPL’s prominent role within NMI-Q (I am the co-chair along with Barbara Goldstein of NIST) underscores our commitment to evidence-based decision-making in standards development and, ultimately, to the creation of a thriving quantum ecosystem.

What are the attractions of NPL’s quantum programme for early-career physicists?

Every day, our measurement scientists address cutting-edge problems in quantum – as challenging as anything they’ll have encountered previously in an academic setting. What’s especially motivating, however, is that the NPL is a mission-driven endeavour with measurement outcomes linking directly to wider societal and economic benefits – not just in the UK, but internationally as well.

Quantum metrology: at your service

Measurement for Quantum (M4Q) is a flagship NPL programme that provides industry partners with up to 20 days of quantum metrology expertise to address measurement challenges in applied R&D and product development. The service – which is free of charge for projects approved after peer review – helps companies to bridge the gap from technology prototype to full commercialization.

To date, more than two-thirds of the companies that have participated in M4Q report that their commercial opportunity has increased as a direct result of NPL support. In terms of specifics, the M4Q offering includes the following services:

  • Small-current and quantum-noise measurements
  • Measurement of material-induced noise in superconducting quantum circuits
  • Nanoscale imaging of physical properties for applications in quantum devices
  • Characterization of single-photon sources and detectors
  • Characterization of compact lasers and other photonic components
  • Semiconductor device characterization at cryogenic temperatures

Apply for M4Q support here.

Further reading

Performance metrics and benchmarks point the way to practical quantum advantage

End note: NPL retains copyright on this article.

The post Why quantum metrology is the driving force for best practice in quantum standardization appeared first on Physics World.

]]>
Analysis International efforts on standards development will fast-track the adoption and commercialization of quantum technologies https://physicsworld.com/wp-content/uploads/2025/11/2025-11-npl-ef-feature-image.jpg newsletter
Ask me anything: Jason Palmer – ‘Putting yourself in someone else’s shoes is a skill I employ every day’ https://physicsworld.com/a/ask-me-anything-jason-palmer-putting-yourself-in-someone-elses-shoes-is-a-skill-i-employ-every-day/ Mon, 24 Nov 2025 11:00:09 +0000 https://physicsworld.com/?p=124935 Jason Palmer talks about how a career in journalism offers a variety of opportunities, but you have to be okay with not being the expert in the room

The post Ask me anything: Jason Palmer – ‘Putting yourself in someone else’s shoes is a skill I employ every day’ appeared first on Physics World.

]]>
What skills do you use every day in your job?

One thing I can say for sure that I got from working in academia is the ability to quickly read, summarize and internalize information from a bunch of sources. Journalism requires a lot of that. Being able to skim through papers – reading the abstract, reading the conclusion, picking the right bits from the middle and so on – that is a life skill.

In terms of other skills, I’m always considering who’s consuming what I’m doing rather than just thinking about how I’d like to say something. You have to think about how it’s going to be received – what’s the person on the street going to hear? Is this clear enough? If I were hearing this for the first time, would I understand it? Putting yourself in someone else’s shoes – be it the listener, reader or viewer – is a skill I employ every day.

What do you like best and least about your job?

The best thing is the variety. I ended up in this business and not in scientific research because of a desire for a greater breadth of experience. And boy, does this job have it. I get to talk to people around the world about what they’re up to, what they see, what it’s like, and how to understand it. And I think that makes me a much more informed person than I would be had I chosen to remain a scientist.

When I did research – and even when I was a science journalist – I thought “I don’t need to think about what’s going on in that part of the world so much because that’s not my area of expertise.” Now I have to, because I’m in this chair every day. I need to know about lots of stuff, and I like that feeling of being more informed.

I suppose what I like least about my job is the relentlessness of it. It is a newsy time. That’s the flip side of being well informed: you’re forced to confront lots of bad things – the horrors that are going on in the world, the fact that in a lot of places the bad guys are winning.

What do you know today that you wish you knew when you were starting out in your career?

When I started in science journalism, I wasn’t a journalist – I was a scientist pretending to be one. So I was always trying to show off what I already knew as a sort of badge of legitimacy. I would call some professor on a topic that I wasn’t an expert in yet just to have a chat to get up to speed, and I would spend a bunch of time showing off, rabbiting on about what papers I’d read and what I knew, just to feel like I belonged in the room or on that call. And it’s a waste of time. You have to swallow your ego and embrace the idea that you may sound like you don’t know stuff even if you do. You might sound dumber, but that’s okay – you’ll learn more and faster, and you’ll probably annoy people less.

In journalism in particular, you don’t want to preload the question with all of the things that you already know because then the person you’re speaking to can fill in those blanks – and they’re probably going to talk about things you didn’t know you didn’t know, and take your conversation in a different direction.

It’s one of the interesting things about science in general. If you go into a situation with experts, and are open and comfortable about not knowing it all, you’re showing that you understand that nobody can know everything and that science is a learning process.

The post Ask me anything: Jason Palmer – ‘Putting yourself in someone else’s shoes is a skill I employ every day’ appeared first on Physics World.

]]>
Interview Jason Palmer talks about how a career in journalism offers a variety of opportunities, but you have to be okay with not being the expert in the room https://physicsworld.com/wp-content/uploads/2025/11/2025-11-ama-jason-palmer.jpg
Sympathetic cooling gives antihydrogen experiment a boost https://physicsworld.com/a/sympathetic-cooling-gives-antihydrogen-experiment-a-boost/ Fri, 21 Nov 2025 14:20:32 +0000 https://physicsworld.com/?p=125129 Having more antimatter could help solve profound mysteries of physics

The post Sympathetic cooling gives antihydrogen experiment a boost appeared first on Physics World.

]]>
Physicists working on the Antihydrogen Laser Physics Apparatus (ALPHA) experiment at CERN have trapped and accumulated 15,000 antihydrogen atoms in less than 7 h. This accumulation rate is more than 20 times the previous record. Large ensembles of antihydrogen could be used to search for tiny, unexpected differences between matter and antimatter – which if discovered could point to physics beyond the Standard Model.

According to the Standard Model, every particle has an antimatter counterpart – or antiparticle. It also says that roughly equal amounts of matter and antimatter were created in the Big Bang. But today there is much more matter than antimatter in the visible universe, and the reason for this “baryon asymmetry” is one of the most important mysteries of physics.

The Standard Model predicts the properties of antiparticles. An antiproton, for example, has the same mass as a proton and the opposite charge. The Standard Model also predicts how antiparticles interact with matter and antimatter. If physicists could find discrepancies between the measured and predicted properties of antimatter, it could help explain the baryon asymmetry and point to other new physics beyond the Standard Model.

Powerful probe

Just as a hydrogen atom comprises a proton bound to an electron, an antihydrogen antiatom comprises an antiproton bound to an antielectron (positron). Antihydrogen offers physicists several powerful ways to probe antimatter at a fundamental level. Trapped antiatoms can be released in freefall to determine if they respond to gravity in the same way as atoms. Spectroscopy can be used to make precise measurements of how the electromagnetic force binds the antiproton and positron in antihydrogen with the aim of finding differences compared to hydrogen.

So far, antihydrogen’s gravitational and electromagnetic properties appear to be identical to hydrogen’s. However, these experiments were done using small numbers of antiatoms, and access to much larger ensembles would improve the precision of such measurements and could reveal tiny discrepancies. The catch is that creating and storing antihydrogen is very difficult.

Today, antihydrogen can only be made in significant quantities at CERN in Switzerland. There, a beam of protons is fired at a solid target, creating antiprotons that are then cooled and stored using electromagnetic fields. Meanwhile, positrons gathered from the decay of radioactive nuclei are cooled and stored in the same way. The antiprotons and positrons are then combined in a special electromagnetic trap to create antihydrogen.

This process works best when the antiprotons and positrons have very low kinetic energies (temperatures) when combined. If the energy is too high, many antiatoms will escape the trap. So it is crucial for the positrons and antiprotons to be as cold as possible.

Sympathetic cooling

Recently, ALPHA physicists have used a technique called sympathetic cooling on positrons, and in a new paper they describe their success. Sympathetic cooling has been used for several decades to cool atoms and ions. It originally involved mixing a hard-to-cool atomic species with atoms that are relatively easy to cool using lasers. Energy is transferred between the two species via the electromagnetic interaction, which chills the hard-to-cool atoms.

The ALPHA team used beryllium ions to sympathetically cool positrons to 10 K, which is five degrees colder than previously achieved using other techniques. These cold positrons boosted the efficiency of the creation and trapping of antihydrogen, allowing the team to accumulate 15,000 antihydrogen atoms in less than 7 h. This is more than a 20-fold improvement over their previous record of accumulating 2000 antiatoms in 24 h.
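
As a quick check of that comparison (simple arithmetic, not a calculation from the paper):

    # Accumulation rates quoted in the text, in antiatoms per hour
    new_rate = 15000 / 7      # ≈ 2143 per hour (an underestimate: "less than 7 h")
    old_rate = 2000 / 24      # ≈ 83 per hour

    print(f"improvement: {new_rate / old_rate:.1f}x")   # ≈ 25.7, i.e. more than 20-fold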

Science fiction

“These numbers would have been considered science fiction 10 years ago,” says ALPHA spokesperson Jeffrey Hangst, who is at Denmark’s Aarhus University.

Team member Maria Gonçalves, a PhD student at the UK’s Swansea University, says, “This result was the culmination of many years of hard work. The first successful attempt instantly improved the previous method by a factor of two, giving us 36 antihydrogen atoms”.

The effort was led by Niels Madsen of the UK’s Swansea University. He enthuses, “It’s more than a decade since I first realized that this was the way forward, so it’s incredibly gratifying to see the spectacular outcome that will lead to many new exciting measurements on antihydrogen”.

The cooling technique is described in Nature Communications.

The post Sympathetic cooling gives antihydrogen experiment a boost appeared first on Physics World.

]]>
Research update Having more antimatter could help solve profound mysteries of physics https://physicsworld.com/wp-content/uploads/2025/11/21-11-25-alpha-cern.jpg newsletter1
Plasma bursts from young stars could shed light on the early life of the Sun https://physicsworld.com/a/plasma-bursts-from-young-stars-could-shed-light-on-the-early-life-of-the-sun/ Fri, 21 Nov 2025 09:00:02 +0000 https://physicsworld.com/?p=125099  New multi-temperature coronal mass ejection observations might help us better understand how life emerged and evolved on Earth

The post Plasma bursts from young stars could shed light on the early life of the Sun appeared first on Physics World.

]]>
The Sun frequently ejects high-energy bursts of plasma that then travel through interplanetary space. These so-called coronal mass ejections (CMEs) are accompanied by strong magnetic fields, which, when they interact with the Earth’s atmosphere, can trigger solar storms that can severely damage satellite systems and power grids.

In the early days of the solar system, the Sun was far more active than it is today and ejected much bigger CMEs. These might have been energetic enough to affect our planet’s atmosphere and therefore influence how life emerged and evolved on Earth, according to some researchers.

Since it is impossible to study the early Sun directly, astronomers use proxies – that is, stars that resemble it. These “exo-suns” are young G-, K- and M-type stars that are far more active than our Sun is today. They frequently produce CMEs with energies far larger than the most energetic solar flares recorded in recent times, which might affect not only the atmospheres of their planets but also the chemistry on those planets.

Until now, direct observational evidence for eruptive CME-like phenomena on young solar analogues has been limited. This is because clear signatures of stellar eruptions are often masked by the brightness of the host star and its flares. Measurements of Doppler shifts in optical lines have allowed astronomers to detect a few possible stellar eruptions associated with giant superflares on a young solar analogue, but these detections have been limited to single-wavelength data at “low temperatures” of around 10⁴ K. Studies at higher temperatures have been few and far between. And although scientists have tried out promising techniques, such as X-ray and UV dimming, to advance their understanding of these “cool” stars, few simultaneous multi-wavelength observations have been made.

A large Carrington-class flare from EK Draconis

On 29 March 2024, astronomers at Kyoto University in Japan detected a large Carrington-class flare – or superflare – in the far-ultraviolet from EK Draconis, a G-type star located approximately 112 light-years from the Sun. Thanks to simultaneous observations in the ultraviolet and optical ranges of the electromagnetic spectrum, they say they have now been able to obtain the first direct evidence for a multi-temperature CME from this young solar analogue (which is around 50 to 125 million years old and has a radius similar to the Sun’s).

The researchers’ campaign spanned four consecutive nights from 29 March to 1 April 2024. They made their ultraviolet observations with the Hubble Space Telescope and the Transiting Exoplanet Survey Satellite (TESS) and performed optical monitoring using three ground-based telescopes in Japan, Korea and the US.

They found that the far-ultraviolet and optical lines were Doppler shifted during and just before the superflare, with the ultraviolet observations showing blueshifted emission indicative of hot plasma. About 10 minutes later, the optical telescopes observed blueshifted absorption in the hydrogen Hα line, which indicates cooler gases. According to the team’s calculations, the hot plasma had a temperature of 100 000 K and was ejected at speeds of 300–550 km/s, while the “cooler” gas (with a temperature of 10 000 K) was ejected at 70 km/s.
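
Those speeds translate into small but clearly measurable blueshifts via the non-relativistic Doppler relation Δλ = λv/c. A minimal sketch in Python (illustrative wavelengths and values, not the team’s analysis code):

    C = 299_792.458  # speed of light in km/s

    def blueshift_nm(rest_wavelength_nm, speed_km_s):
        """Wavelength shift for plasma moving towards the observer."""
        return rest_wavelength_nm * speed_km_s / C

    # H-alpha (656.28 nm) gas moving at 70 km/s
    print(f"{blueshift_nm(656.28, 70):.3f} nm")   # ≈ 0.153 nm
    # A far-UV line, e.g. C III at 117.5 nm (an illustrative choice), at 550 km/s
    print(f"{blueshift_nm(117.5, 550):.3f} nm")   # ≈ 0.216 nm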

“These findings imply that it is the hot plasma rather than the cool plasma that carries kinetic energy into planetary space,” explains study leader Kosuke Namekata. “The existence of this plasma suggests that such CMEs from our Sun in the past, if frequent and strong, could have driven shocks and energetic particles capable of eroding or chemically altering the atmosphere of the early Earth and the other planets in our solar system.”

“The discovery,” he tells Physics World, “provides the first observational link between solar and stellar eruptions, bridging stellar astrophysics, solar physics and planetary science.”

Looking forward, the researchers, who report their work in Nature Astronomy, now plan to conduct similar, multiwavelength campaigns on other young solar analogues to determine how frequently such eruptions occur and how they vary from star to star.

“In the near future, next-generation ultraviolet space telescopes such as JAXA’s LAPYUTA and NASA’s ESCAPADE, coordinated with ground-based facilities, will allow us to trace these events more systematically and understand their cumulative impact on planetary atmospheres,” says Namekata.

The post Plasma bursts from young stars could shed light on the early life of the Sun appeared first on Physics World.

]]>
Research update  New multi-temperature coronal mass ejection observations might help us better understand how life emerged and evolved on Earth https://physicsworld.com/wp-content/uploads/2025/11/draconis.jpg
Flattened halo of dark matter could explain high-energy ‘glow’ at Milky Way’s heart https://physicsworld.com/a/flattened-halo-of-dark-matter-could-explain-high-energy-glow-at-milky-ways-heart/ Thu, 20 Nov 2025 17:00:54 +0000 https://physicsworld.com/?p=125094 Finding brings us a step closer to solving the mystery of dark matter, say astronomers

The post Flattened halo of dark matter could explain high-energy ‘glow’ at Milky Way’s heart appeared first on Physics World.

]]>
Astronomers have long puzzled over the cause of a mysterious “glow” of very high energy gamma radiation emanating from the centre of our galaxy. One possibility is that dark matter – the unknown substance thought to make up more than 25% of the universe’s mass – might be involved. Now, a team led by researchers at Germany’s Leibniz Institute for Astrophysics Potsdam (AIP) says that a flattened rather than spherical distribution of dark matter could account for the glow’s properties, bringing us a step closer to solving the mystery.

Dark matter is believed to be responsible for holding galaxies together. However, since it does not interact with light or other electromagnetic radiation, it can only be detected through its gravitational effects. Hence, while astrophysical and cosmological evidence has confirmed its presence, its true nature remains one of the greatest mysteries in modern physics.

“It’s extremely consequential and we’re desperately thinking all the time of ideas as to how we could detect it,” says Joseph Silk, an astronomer at Johns Hopkins University in the US and the Institut d’Astrophysique de Paris and Sorbonne University in France who co-led this research together with the AIP’s Moorits Mihkel Muru. “Gamma rays, and specifically the excess light we’re observing at the centre of our galaxy, could be our first clue.”

Models might be too simple

The problem, Muru explains, is that the way scientists have usually modelled dark matter to account for the excess gamma-ray radiation in astronomical observations was highly simplified. “This, of course, made the calculations easier, but simplifications always fuzzy the details,” he says. “We showed that in this case, the details are important: we can’t model dark matter as a perfectly symmetrical cloud and instead have to take into account the asymmetry of the cloud.”

Muru adds that the team’s findings, which are detailed in Phys. Rev. Lett., provide a boost to the “dark matter annihilation” explanation of the excess radiation. According to the standard model of cosmology, all galaxies – including our own Milky Way – are nested inside huge haloes of dark matter. The density of this dark matter is highest at the centre, and while it primarily interacts through gravity, some models suggest that it could be made of massive, neutral elementary particles that are their own antimatter counterparts. In these dense regions, therefore, such dark matter species could be mutually annihilating, producing substantial amounts of radiation.
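
This is also why the shape of the halo matters so much: the predicted annihilation signal along a line of sight scales with the square of the dark-matter density (a standard textbook relation, not a formula from the new paper),

    \Phi(\psi) \propto \frac{\langle \sigma v \rangle}{m_\chi^2} \int_{\rm l.o.s.} \rho^2(r) \, \mathrm{d}l

so flattening the density distribution redistributes the gamma-ray “glow” across the sky even if the total mass is unchanged.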

Pierre Salati, an emeritus professor at the Université Savoie Mont Blanc, France, who was not involved in this work, says that in these models, annihilation plays a crucial role in generating a dark matter component with an abundance that agrees with cosmological observations. “Big Bang nucleosynthesis sets stringent bounds on these models as a result of the overall concordance between the predicted elemental abundances and measurements, although most models do survive,” Salati says. “One of the most exciting aspects of such explanations is that dark matter species might be detected through the rare antimatter particles – antiprotons, positrons and anti-deuterons – that they produce as they currently annihilate inside galactic halos.”

Silvia Manconi of the Laboratoire de Physique Théorique et Hautes Energies (LPTHE), France, who was also not involved in the study, describes it as “interesting and stimulating”. However, she cautions that – as is often the case in science – reality is probably more complex than even advanced simulations can capture. “This is not the first time that galaxy simulations have been used to study the implications of the excess and found non-spherical shapes,” she says, though she adds that the simulations in the new work offer “significant improvements” in terms of their spatial resolution.

Manconi also notes that the study does not demonstrate how the proposed distribution of dark matter would appear in data from the Fermi Gamma-ray Space Telescope’s Large Area Telescope (LAT), or how it would differ quantitatively from observations of a distribution of old stars. Forthcoming observations with radio telescopes such as MeerKat and FAST, she adds, may soon identify pulsars in this region of the galaxy, shedding further light on other possible contributions to the excess of gamma rays.

New telescopes could help settle the question

Muru acknowledges that better modelling and observations are still needed to rule out other possible hypotheses. “Studying dark matter is very difficult, because it doesn’t emit or block light, and despite decades of searching, no experiment has yet detected dark matter particles directly,” he tells Physics World. “A confirmation that this observed excess radiation is caused by dark matter annihilation through gamma rays would be a big leap forward.”

New gamma-ray telescopes with higher resolution, such as the Cherenkov Telescope Array, could help settle this question, he says. If these telescopes, which are currently under construction, fail to find star-like sources for the glow and only detect diffuse radiation, that would strengthen the alternative dark matter annihilation explanation.

Muru adds that a “smoking gun” for dark matter would be a signal that matches current theoretical predictions precisely. In the meantime, he and his colleagues plan to work on predicting where dark matter should be found in several of the dwarf galaxies that circle the Milky Way.

“It’s possible we will see the new data and confirm one theory over the other,” Silk says. “Or maybe we’ll find nothing, in which case it’ll be an even greater mystery to resolve.”

The post Flattened halo of dark matter could explain high-energy ‘glow’ at Milky Way’s heart appeared first on Physics World.

]]>
Research update Finding brings us a step closer to solving the mystery of dark matter, say astronomers https://physicsworld.com/wp-content/uploads/2025/11/hestia.jpg
Talking physics with an alien civilization: what could we learn? https://physicsworld.com/a/talking-physics-with-an-alien-civilization-what-could-we-learn/ Thu, 20 Nov 2025 13:55:11 +0000 https://physicsworld.com/?p=125106 Do Aliens Speak Physics? author Daniel Whiteson is our podcast guest

The post Talking physics with an alien civilization: what could we learn? appeared first on Physics World.

]]>
It is book week here at Physics World and over the course of three days we are presenting conversations with the authors of three fascinating and fun books about physics. Today, my guest is the physicist Daniel Whiteson, who along with the artist Andy Warner has created the delightful book Do Aliens Speak Physics?.

Is physics universal, or is it shaped by human perspective? This will be a very important question if and when we are visited by an advanced alien civilization. Would we recognize our visitors’ alien science – or indeed, could a technologically advanced civilization have no science at all? And would we even be able to communicate about science with our alien guests?

Whiteson, who is a particle physicist at the University of California Irvine, tackles these profound questions and much more in this episode of the Physics World Weekly podcast.


This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March, 2026, in Denver, Colorado, and online.

The post Talking physics with an alien civilization: what could we learn? appeared first on Physics World.

]]>
Podcasts Do Aliens Speak Physics? author Daniel Whiteson is our podcast guest https://physicsworld.com/wp-content/uploads/2025/11/20-11-25-alien-visit-graphics-list.jpg