diff --git "a/raw_rss_feeds/https___physicsworld_com_feed_.xml" "b/raw_rss_feeds/https___physicsworld_com_feed_.xml"
new file mode 100644
--- /dev/null
+++ "b/raw_rss_feeds/https___physicsworld_com_feed_.xml"
@@ -0,0 +1,5780 @@
+
The post Quantum computing on the verge: correcting errors, developing algorithms and building up the user base appeared first on Physics World.
+So the problem of error correction is a key issue for the future of the market. It arises because errors in qubits can’t be corrected simply by keeping multiple copies, as they are in classical computers: quantum rules forbid copying a qubit’s state while it is still entangled with others and thus unknown. To run quantum circuits with millions of gates, we therefore need new tricks to enable quantum error correction (QEC).
+The general principle of QEC is to spread the information over many qubits so that an error in any one of them doesn’t matter too much. “The essential idea of quantum error correction is that if we want to protect a quantum system from damage then we should encode it in a very highly entangled state,” says John Preskill, director of the Institute for Quantum Information and Matter at the California Institute of Technology in Pasadena.
+There is no unique way of achieving that spreading, however. Different error-correcting codes can depend on the connectivity between qubits – whether, say, they are coupled only to their nearest neighbours or to all the others in the device – which tends to be determined by the physical platform being used. However error correction is done, it must be done fast. “The mechanisms for error correction need to be running at a speed that is commensurate with that of the gate operations,” says Michael Cuthbert, founding director of the UK’s National Quantum Computing Centre (NQCC). “There’s no point in doing a gate operation in a nanosecond if it then takes 100 microseconds to do the error correction for the next gate operation.”
+At the moment, dealing with errors is largely about compensation rather than correction: patching up the problems of errors in retrospect, for example by using algorithms that can throw out some results that are likely to be unreliable (an approach called “post-selection”). It’s also a matter of making better qubits that are less error-prone in the first place.
+To protect the information stored in qubits, a multitude of unreliable physical qubits have to be combined in such a way that if one qubit fails and causes an error, the others can help protect the system. Essentially, by combining many physical qubits (shown above on the left), one can build a few “logical” qubits that are strongly resistant to noise.
++
According to Maria Maragkou, commercial vice-president of quantum software company Riverlane, the goal of full QEC has ramifications for the design of the machines all the way from hardware to workflow planning. “The shift to support error correction has a profound effect on the way quantum processors themselves are built, the way we control and operate them, through a robust software stack on top of which the applications can be run,” she explains. The “stack” includes everything from programming languages to user interfaces and servers.
+With genuinely fault-tolerant qubits, errors can be kept under control and prevented from proliferating during a computation. Such qubits might be made in principle by combining many physical qubits into a single “logical qubit” in which errors can be corrected (see figure 1). In practice, though, this creates a large overhead: huge numbers of physical qubits might be needed to make just a few fault-tolerant logical qubits. The question is then whether errors in all those physical qubits can be checked faster than they accumulate (see figure 2).
+That overhead has been steadily reduced over the past several years, and at the end of last year researchers at Google announced that their 105-qubit Willow quantum chip passed the break-even threshold at which the error rate gets smaller, rather than larger, as more physical qubits are used to make a logical qubit. This means that in principle such arrays could be scaled up without errors accumulating.
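To get a feel for what "passing break-even" means, the sketch below applies the standard surface-code rule of thumb, in which the logical error rate falls exponentially with code distance once the physical error rate is below a threshold of around 1%. The formula, the prefactor and the qubit counts are textbook approximations used here purely for illustration, not figures from Google or IBM.

```python
# Rough illustration of below-threshold quantum error correction (QEC) scaling.
# Uses the textbook surface-code rule of thumb
#   p_logical ≈ A * (p_physical / p_threshold) ** ((d + 1) / 2)
# with illustrative constants (A ≈ 0.1, p_threshold ≈ 1%); real devices differ.

def logical_error_rate(p_physical, distance, p_threshold=1e-2, prefactor=0.1):
    """Estimated logical error rate per QEC round for a distance-d surface code."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) / 2)

def physical_qubits(distance):
    """Approximate qubit count for one surface-code logical qubit (data + ancilla)."""
    return 2 * distance**2 - 1

for d in (3, 5, 7, 9):
    p_log = logical_error_rate(1e-3, d)  # physical error rate a factor of 10 below threshold
    print(f"d = {d}: ~{physical_qubits(d)} physical qubits, p_logical ≈ {p_log:.1e}")

# Below threshold, every step up in code distance suppresses the logical error rate further;
# above threshold, the same formula shows errors growing as the code is made larger.
```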
+The illustration gives an overview of quantum error correction (QEC) in action within a quantum processing unit. UK-based company Riverlane is building its Deltaflow QEC stack that will correct millions of data errors in real time, allowing a quantum computer to go beyond the reach of any classical supercomputer.
++
Fault-tolerant quantum computing is the ultimate goal, says Jay Gambetta, director of IBM research at the company’s centre in Yorktown Heights, New York. He believes that to perform truly transformative quantum calculations, the system must go beyond demonstrating a few logical qubits – instead, you need arrays of at least 100 of them that can perform more than 100 million quantum operations (10⁸ QuOps). “The number of operations is the most important thing,” he says.
+It sounds like a tall order, but Gambetta is confident that IBM will achieve these figures by 2029. By building on what has been achieved so far with error correction and mitigation, he feels “more confident than I ever did before that we can achieve a fault-tolerant computer.” Jerry Chow, previous manager of the Experimental Quantum Computing group at IBM, shares that optimism. “We have a real blueprint for how we can build [such a machine] by 2029,” he says (see figure 3).
+Others suspect the breakthrough threshold may be a little lower: Steve Brierly, chief executive of Riverlane, believes that the first error-corrected quantum computer, with around 10 000 physical qubits supporting 100 logical qubits and capable of a million QuOps (a megaQuOp), could come as soon as 2027. Following on, gigaQuOp machines (10⁹ QuOps) should be available by 2030–32, and teraQuOp machines (10¹² QuOps) by 2035–37.
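As a rough way to read those QuOp milestones, note that if each logical operation fails independently with probability p, a circuit of N operations only has a good chance of finishing cleanly when p sits well below 1/N. The back-of-envelope sketch below makes that explicit; the numbers are illustrative assumptions, not targets quoted by IBM or Riverlane.

```python
# Back-of-envelope reading of the megaQuOp/gigaQuOp/teraQuOp milestones (illustrative
# assumptions only): with an independent failure probability p per logical operation,
# an N-operation circuit succeeds with probability (1 - p)**N, so p must sit well
# below 1/N for the computation to be useful.

targets = {"megaQuOp": 1e6, "gigaQuOp": 1e9, "teraQuOp": 1e12}

for name, n_ops in targets.items():
    p_breakeven = 1.0 / n_ops                # crude requirement: p well below 1/N
    success = (1.0 - p_breakeven) ** n_ops   # ≈ 1/e ≈ 0.37 right at that rate
    print(f"{name}: ~{n_ops:.0e} operations needs a logical error rate below {p_breakeven:.0e} "
          f"(success probability ≈ {success:.2f} at exactly that rate)")
```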
+Error mitigation and error correction are just two of the challenges for developers of quantum software. Fundamentally, to develop a truly quantum algorithm involves taking full advantage of the key quantum-mechanical properties such as superposition and entanglement. Often, the best way to do that depends on the hardware used to run the algorithm. But ultimately the goal will be to make software that is not platform-dependent and so doesn’t require the user to think about the physics involved.
+ +“At the moment, a lot of the platforms require you to come right down into the quantum physics, which is a necessity to maximize performance,” says Richard Murray of photonic quantum-computing company Orca. Try to generalize an algorithm by abstracting away from the physics and you’ll usually lower the efficiency with which it runs. “But no user wants to talk about quantum physics when they’re trying to do machine learning or something,” Murray adds. He believes that ultimately it will be possible for quantum software developers to hide those details from users – but Brierly thinks this will require fault-tolerant machines.
+“In due time everything below the logical circuit will be a black box to the app developers”, adds Maragkou over at Riverlane. “They will not need to know what kind of error correction is used, what type of qubits are used, and so on.” She stresses that creating truly efficient and useful machines depends on developing the requisite skills. “We need to scale up the workforce to develop better qubits, better error-correction codes and decoders, write the software that can elevate those machines and solve meaningful problems in a way that they can be adopted.” Such skills won’t come only from quantum physicists, she adds: “I would dare say it’s mostly not!”
+Yet even now, working on quantum software doesn’t demand a deep expertise in quantum theory. “You can be someone working in quantum computing and solving problems without having a traditional physics training and knowing about the energy levels of the hydrogen atom and so on,” says Ashley Montanaro, who co-founded the quantum software company Phasecraft.
+On the other hand, insights can flow in the other direction too: working on quantum algorithms can lead to new physics. “Quantum computing and quantum information are really pushing the boundaries of what we think of as quantum mechanics today,” says Montanaro, adding that QEC “has produced amazing physics breakthroughs.”
+Once we have true error correction, Cuthbert at the UK’s NQCC expects to see “a flow of high-value commercial uses” for quantum computers. What might those be?
+In the arena of quantum chemistry and materials science, genuine quantum advantage – calculating something that is impossible using classical methods alone – is more or less here already, says Chow. Crucially, however, quantum methods needn’t be used for the entire simulation but can be added to classical ones to give them a boost for particular parts of the problem.
+
For example, last year researchers at IBM teamed up with scientists at several RIKEN institutes in Japan to calculate the minimum energy state for the iron sulphide cluster (4Fe-4S) at the heart of the bacterial nitrogenase enzyme that fixes nitrogen. This cluster is too big and complex to be accurately simulated using the classical approximations of quantum chemistry. The researchers used a combination of quantum computing (with IBM’s 72-qubit Heron chip) and RIKEN’s Fugaku high-performance computer (HPC). This idea of “improving classical methods by injecting quantum as a subroutine” is likely to be a more general strategy, says Gambetta. “The future of computing is going to be heterogeneous accelerators [of discovery] that include quantum.”
+Likewise, Montanaro says that Phasecraft is developing “quantum-enhanced algorithms”, where a quantum computer is used, not to solve the whole problem, but just to help a classical computer in some way. “There are only certain problems where we know quantum computing is going to be useful,” he says. “I think we are going to see quantum computers working in tandem with classical computers in a hybrid approach. I don’t think we’ll ever see workloads that are entirely run using a quantum computer.” Among the first important problems that quantum machines will solve, according to Montanaro, are the simulation of new materials – to develop, for example, clean-energy technologies (see figure 4).
+“For a physicist like me,” says Preskill, “what is really exciting about quantum computing is that we have good reason to believe that a quantum computer would be able to efficiently simulate any process that occurs in nature.”
+A promising application of quantum computers is simulating novel materials. Researchers from the quantum algorithms firm Phasecraft, for example, have already shown how a quantum computer could help simulate complex materials such as the polycrystalline compound LK-99, which was purported by some researchers in 2023 to be a room-temperature superconductor.
+Using a classical/quantum hybrid workflow, together with the firm’s proprietary material simulation approach to encode and compile materials on quantum hardware, Phasecraft researchers were able to establish a classical model of the LK-99 structure that allowed them to extract an approximate representation of the electrons within the material. The illustration above shows the green and blue electronic structure around red and grey atoms in LK-99.
++
Montanaro believes another likely near-term goal for useful quantum computing is solving optimization problems – both here and in quantum simulation, “we think genuine value can be delivered already in this NISQ era with hundreds of qubits.” (NISQ, a term coined by Preskill, refers to noisy intermediate-scale quantum computing, with relatively small numbers of rather noisy, error-prone qubits.)
+One further potential benefit of quantum computing is that it tends to require less energy than classical high-performance computing, whose energy consumption is notoriously high. If the energy cost could be cut by even a few percent, it would be worth using quantum resources for that reason alone. “Quantum has real potential for an energy advantage,” says Chow. One study in 2020 showed that a particular quantum-mechanical calculation carried out on an HPC used many orders of magnitude more energy than when it was simulated on a quantum circuit. Such comparisons are not easy, however, in the absence of an agreed and well-defined metric for energy consumption.
+Right now, the quantum computing market is in a curious superposition of states itself – it has ample proof of principle, but today’s devices are still some way from being able to perform a computation relevant to a practical problem that could not be done with classical computers. Yet to get to that point, the field needs plenty of investment.
+The fact that quantum computers, especially if used with HPC, are already unique scientific tools should establish their value in the immediate term, says Gambetta. “I think this is going to accelerate, and will keep the funding going.” It is why IBM is focusing on utility-scale systems of around 100 qubits or so and more than a thousand gate operations, he says, rather than simply trying to build ever bigger devices.
+Montanaro sees a role for governments to boost the growth of the industry “where it’s not the right fit for the private sector”. One role of government is simply as a customer. For example, Phasecraft is working with the UK national grid to develop a quantum algorithm for optimizing the energy network. “Longer-term support for academic research is absolutely critical,” Montanaro adds. “It would be a mistake to think that everything is done in terms of the underpinning science, and governments should continue to support blue-skies research.”
+
It’s not clear, though, whether there will be a big demand for quantum machines that every user will own and run. Before 2010, “there was an expectation that banks and government departments would all want their own machine – the market would look a bit like HPC,” Cuthbert says. But that demand depends in part on what commercial machines end up being like. “If it’s going to need a premises the size of a football field, with a power station next to it, that becomes the kind of infrastructure that you only want to build nationally.” Even for smaller machines, users are likely to try them first on the cloud before committing to installing one in-house.
+According to Cuthbert, the real challenge in supply-chain development is that many of today’s technologies were developed for the science community – where, say, achieving millikelvin cooling or using high-power lasers is routine. “How do you go from a specialist scientific clientele to something that starts to look like a washing machine factory, where you can make them to a certain level of performance,” while also being much cheaper and easier to use?
+ +But Cuthbert is optimistic about bridging this gap to get to commercially useful machines, encouraged in part by looking back at the classical computing industry of the 1970s. “The architects of those systems could not imagine what we would use our computation resources for today. So I don’t think we should be too discouraged that you can grow an industry when we don’t know what it’ll do in five years’ time.”
+Montanaro too sees analogies with those early days of classical computing. “If you think what the computer industry looked like in the 1940s, it’s very different from even 20 years later. But there are some parallels. There are companies that are filling each of the different niches we saw previously, there are some that are specializing in quantum hardware development, there are some that are just doing software.” Cuthbert thinks that the quantum industry is likely to follow a similar pathway, “but more quickly and leading to greater market consolidation more rapidly.”
+However, while the classical computing industry was revolutionized by the advent of personal computing in the 1970s and 80s, it seems very unlikely that we will have any need for quantum laptops. Rather, we might increasingly see apps and services appear that use cloud-based quantum resources for particular operations, merging so seamlessly with classical computing that we don’t even notice.
+That, perhaps, would be the ultimate sign of success: that quantum computing becomes invisible, no big deal but just a part of how our answers are delivered.
+This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
+Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.
+Find out more on our quantum channel.
++
+The post Young rogue planet grows like a star appeared first on Physics World.
+In their study, which is detailed in The Astrophysical Journal Letters, astronomers led by Víctor Almendros-Abad at Italy’s Palermo Astronomical Observatory; Ray Jayawardhana of Johns Hopkins University in the US; and Belinda Damian and Aleks Scholz of the University of St Andrews, UK, focused on a planet known as Cha1107-7626. Located around 620 light-years from Earth, this planet has a mass approximately five to 10 times that of Jupiter. Unlike Jupiter, though, it does not orbit around a central star. Instead, it floats freely in space as a “rogue” planet, one of many identified in recent years.
+Like other rogue planets, Cha1107-7626 was known to be surrounded by a disk of dust and gas. When material from this disk spirals, or accretes, onto the planet, the planet grows.
+What Almendros-Abad and colleagues discovered is that this process is not uniform. Using the VLT’s XSHOOTER and the NIRSpec and MIRI instruments on JWST, they found that Cha1107-7626 experienced a burst of accretion beginning in June 2025. This is the first time anyone has seen an accretion burst in an object with such a low mass, and the peak accretion rate of six billion tonnes per second makes it the strongest accretion episode ever recorded in a planetary-mass object. It may not be over, either. At the end of August, when the observing campaign ended, the burst was still ongoing.
+The team identified several parallels between Cha1107-7626’s accretion burst and those that young stars experience. Among them were clear signs that gas is being funnelled onto the planet. “This indicates that magnetic fields structure the flow of gas, which is again something well known from stars,” explains Scholz. “Overall, our discovery is establishing interesting, perhaps surprising parallels between stars and planets, which I’m not sure we fully understand yet.”
+The astronomers also found that the chemistry of the disc around the planet changed during accretion, with water being present in this phase even though it hadn’t been before. This effect has previously been spotted in stars, but never in a planet until now.
+“We’re struck by quite how much the infancy of free-floating planetary-mass objects resembles that of stars like the Sun,” Jayawardhana says. “Our new findings underscore that similarity and imply that some objects comparable to giant planets form the way stars do, from contracting clouds of gas and dust accompanied by disks of their own, and they go through growth episodes just like newborn stars.”
+ +The researchers have been studying similar objects for many years and earlier this year published results based on JWST observations that featured a small sample of planetary-mass objects. “This particular study is part of that sample,” Scholz tells Physics World, “and we obtained the present results because Victor wanted to look in detail at the accretion flow onto Cha1107-7626, and in the process discovered the burst.”
+The researchers say they are “keeping an eye” on Cha1107-7626 and other such objects that are still growing because their environment is dynamic and unstable. “More to the point, we really don’t understand what drives these accretion events, and we need detailed follow-up to figure out the underlying reasons for these processes,” Scholz says.
+The post Spooky physics: from glowing green bats to vibrating spider webs appeared first on Physics World.
+First up are researchers at the University of Georgia in the US who have confirmed that six different species of bats found in North America emit a ghoulish green light when exposed to ultraviolet light.
+The researchers examined 60 specimens from the Georgia Museum of Natural History and exposed the bats to UV light.
+ +They found that the wings and hind limbs of six species – big brown bats, eastern red bats, Seminole bats, southeastern myotis, grey bats and the Brazilian free-tailed bat – gave off photoluminescence with the resulting glow being a shade of green.
+While previous research found that some mammals, like pocket gophers, also emit a glow under ultraviolet light, this was the first discovery of such a phenomenon for bats located in North America.
+The colour and location of the glow on the winged mammals suggest it is not down to genetics or camouflage and as it is the same between sexes it is probably not used to attract mates.
+“It may not seem like this has a whole lot of consequence, but we’re trying to understand why these animals glow,” notes wildlife biologist Steven Castleberry from the University of Georgia.
+Given that many bats can see the wavelengths emitted, one option is that the glow may be an inherited trait used for communication.
+“The data suggests that all these species of bats got it from a common ancestor. They didn’t come about this independently,” adds Castleberry. “It may be an artifact now, since maybe glowing served a function somewhere in the evolutionary past, and it doesn’t anymore.”
+In other frightful news, spider webs are a classic Halloween decoration and while the real things are marvels of bioengineering, there is still more to understand about these sticky structures.
+Many spider species build spiral wheel-shaped webs – orb webs – to capture prey, and some incorporate so-called “stabilimenta” into their web structure. These “extra touches” look like zig-zagging threads that span the gap between two adjacent “spokes,” or threads arranged in a circular “platform” around the web center.
+The purpose of stabilimenta is unknown; proposed functions include acting as a deterrent to predatory wasps or birds.
+Yet Gabriele Greco of the Swedish University of Agricultural Sciences and colleagues suggest such structures might instead influence the propagation of web vibrations triggered by the impact of captured prey.
+Greco and colleagues observed different stabilimentum geometries that were constructed by wasp spiders, Argiope bruennichi. The researchers then performed numerical simulations to explore how stabilimenta affect prey impact vibrations.
+For waves generated at angles perpendicular to the threads spiralling out from the web centre, stabilimenta caused negligible delays in wave propagation.
+However, for waves generated in the same direction as the spiral threads, vibrations in webs with stabilimenta propagated to a greater number of potential detection points across the web – where a spider might sense them – than in webs without stabilimenta.
+This suggests that stabilimenta may boost a spider’s ability to pinpoint the location of unsuspecting prey caught in its web.
+Spooky.
+The post Lowering exam stakes could cut the gender grade gap in physics, finds study appeared first on Physics World.
+The study has been carried out by David Webb from the University of California, Davis, and Cassandra Paul from San Jose State University. It builds on previous work they did in 2023, which showed that the gender gap disappears in introductory physics classes that offer the chance for all students to retake the exams. That study did not, however, explore why the offer of a retake has such an impact.
+ +In the new study, the duo analysed exam results from 1997 to 2015 for a series of introductory physics classes at a public university in the US. The dataset included 26 783 students, mostly in biosciences, of whom about 60% were female. Some of the classes let students retake exams while others did not, thereby letting the researchers explore why retakes close the gender gap.
+When Webb and Paul examined the data for classes that offered retakes, they found that in first-attempt exams female students slightly outperformed their male counterparts. But male students performed better than female students in retakes.
+This, the researchers argue, discounts the notion that retakes close the gender gap by allowing female students to improve their grades. Instead, they suggest that the benefit of retakes is that they lower the stakes of the first exam.
+The team then compared the classes that offered retakes with those that did not, which they called high-stakes courses. They found that the gender gap in exam results was much larger in the high-stakes classes than the lower-stakes classes that allowed retakes.
+“This suggests that high-stakes exams give a benefit to men, on average, [and] lowering the stakes of each exam can remove that bias,” Webb told Physics World. He thinks that as well as allowing students to retake exams, physics might benefit from not having comprehensive high-stakes final exams but instead “use final exam time to let students retake earlier exams”.
+The post Quantum steampunk: we explore the art and science appeared first on Physics World.
+I was so taken by the art and science of quantum steampunk that I promised Rosenbaum that I would chat with him and Yunger Halpern on the podcast – and here is that conversation. We begin by exploring the art of steampunk and how it is influenced by the technology of the 19th century. Then, we look at the physics of quantum steampunk, a field that weds modern concepts of quantum information with thermodynamics – which itself is a scientific triumph of the 19th century.
++
This podcast is supported by Atlas Technologies, specialists in custom aluminium and titanium vacuum chambers as well as bonded bimetal flanges and fittings used everywhere from physics labs to semiconductor fabs.
++
+The post Quantum fluids mix like oil and water appeared first on Physics World.
Researchers in the US have replicated a well-known fluid-dynamics process called the Rayleigh-Taylor instability on a quantum scale for the first time. The work opens the hydrodynamics of quantum gases to further exploration and could even create a new platform for understanding gravitational dynamics in the early universe.
+If you’ve ever tried mixing oil with water, you’ll understand how the Rayleigh-Taylor instability (RTI) can develop. Due to their different molecular structures and the nature of the forces between their molecules, the two fluids do not mix well. After some time, they separate, forming a clear interface between oil and water.
+ +Scientists have studied the dynamics of this interface upon perturbations – disturbances of the system – for nearly 150 years, with major work being done by the British physicists Lord Rayleigh in 1883 and Geoffrey Taylor in 1950. Under specific conditions related to the buoyant force of the fluid and the perturbative force causing the disturbance, they showed that this interface becomes unstable. Rather than simply oscillating, the system deviates from its initial state, leading to the formation of interesting geometric patterns such as mushroom clouds and filaments of gas in the Crab Nebula.
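In the classical linear theory that Rayleigh and Taylor worked out, a small ripple of wavenumber k on an inviscid interface (ignoring surface tension) grows exponentially at a rate set by the density contrast (the Atwood number) and the acceleration. The short sketch below evaluates that textbook formula for a water-on-oil interface; the numbers are purely illustrative and are not taken from the BEC experiment described next.

```python
# Classical linear growth rate of the Rayleigh-Taylor instability for an inviscid
# interface with no surface tension (textbook result, not from the BEC study below):
#   sigma = sqrt(A * g * k), with Atwood number A = (rho_heavy - rho_light) / (rho_heavy + rho_light)
import math

def rt_growth_rate(rho_heavy, rho_light, wavelength, g=9.81):
    """Exponential growth rate (1/s) of a small interface ripple of the given wavelength (m)."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    k = 2 * math.pi / wavelength  # wavenumber of the perturbation
    return math.sqrt(atwood * g * k)

# Water (1000 kg/m^3) resting on top of oil (900 kg/m^3), with a 1 cm ripple:
sigma = rt_growth_rate(1000.0, 900.0, 0.01)
print(f"growth rate ≈ {sigma:.1f} per second, e-folding time ≈ {1e3 / sigma:.0f} ms")
```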
+To show that such dynamics occur not only in macroscopic structures, but also at a quantum scale, scientists at the University of Maryland and the Joint Quantum Institute (JQI) created a two-state quantum system using a Bose-Einstein condensate (BEC) of sodium (²³Na) atoms. In this state of matter, the temperature is so low that the sodium atoms behave as a single coherent system, giving researchers precise control of their parameters.
+The JQI team confine this BEC in a two-dimensional optical potential that essentially produces a 100 µm x 100 µm sheet of atoms in the horizontal plane. The scientists then apply a microwave pulse that excites half of the atoms from the spin-down to the spin-up state. By adding a small magnetic field gradient along one of the horizontal axes, they induce a force (the Stern-Gerlach force) that acts on the two spin components in opposite directions due to the differing signs of their magnetic moments. This creates a clear interface between the spin-up and the spin-down atoms.
+To initiate the RTI, the scientists need to perturb this two-component BEC by reversing the magnetic field gradient, which consequently reverses the direction of the induced force. According to Ian Spielman, who led the work alongside co-principal investigator Gretchen Campbell, this wasn’t as easy as it sounds. “The most difficult part was preparing the initial state (horizontal interface) with high quality, and then reliably inverting the gradient rapidly and accurately,” Spielman says.
+The researchers then investigated how the magnitude of this force difference, acting on the two sides of the interface, affected the dynamics of the two-component BEC. For a small differential force, they initially observed a sinusoidal modulation of the interface. After some time, the interface enters a nonlinear dynamics regime where the RTI manifests through the formation of mushroom clouds. Finally, it becomes a turbulent mixture. The larger the differential force, the more rapidly the system evolves.
+
While RTI dynamics like these were expected to occur in quantum fluids, Spielman points out that proving it required a BEC with the right internal interactions. The BEC of sodium atoms in their experimental setup is one such system.
+In general, Spielman says that cold atoms are a great tool for studying RTI because the numerical techniques used to describe them do not suffer from the same flaws as the Navier-Stokes equation used to model classical fluid dynamics. However, he notes that the transition to turbulence is “a tough problem that resides at the boundary between two conceptually different ways of thinking”, pushing the capabilities of both analytical and numerical techniques.
+The scientists were also able to excite waves known as ripplon modes that travel along the interface of the two-component BEC. These are equivalent to the classical capillary waves –“ripples” when a droplet impacts a water surface. Yanda Geng, a JQI PhD student working on this project, explains that every unstable RTI mode has a stable ripplon as a sibling. The difference is that ripplon modes only appear when a small sinusoidal modulation is added to the differential force. “Studying ripplon modes builds understanding of the underlying [RTI] mechanism,” Geng says.
+The flow of the spins
+In a further experiment, the team studied a phenomenon that occurs as the RTI progresses and the spin components of the BEC flow in opposite directions along part of their shared interface. This is known as an interfacial counterflow. By transferring half the atoms into the other spin state after initializing the RTI process, the scientists were able to generate a chain of quantum mechanical whirlpools – a vortex chain – along the interface in regions where interfacial counterflow occurred.
+ +Spielman, Campbell and their team are now working to create a cleaner interface in their two-component BEC, which would allow a wider range of experiments. “We are considering the thermal properties of this interface as a 1D quantum ‘string’,” says Spielman, adding that the height of such an interface is, in effect, an ultra-sensitive thermometer. Spielman also notes that interfacial waves in higher dimensions (such as a 2D surface) could be used for simulations of gravitational physics.
+The research is described in Science Advances.
+The post Large-area triple-junction perovskite solar cell achieves record efficiency appeared first on Physics World.
+When there are no constraints on the choice of materials, triple-junction solar cells can outperform double-junction and single-junction solar cells, with a power conversion efficiency (PCE) of up to 51% theoretically possible. But material constraints – due to fabrication complexity, cost or other technical challenges – mean that many such devices still perform far from the theoretical limits.
+Perovskites are one of the most promising materials in the solar cell world today, but fabricating practical triple-junction solar cells beyond 1 cm² in area has remained a challenge. A research team from Australia, China, Germany and Slovenia set out to change this, recently publishing a paper in Nature Nanotechnology describing the largest and most efficient triple-junction perovskite–perovskite–silicon tandem solar cell to date.
+When asked why this device architecture was chosen, Anita Ho-Baillie, one of the lead authors from The University of Sydney, states: “I am interested in triple-junction cells because of the larger headroom for efficiency gains”.
+Solar cells formed from metal halide perovskites have potential to be commercially viable, due to their cost-effectiveness, efficiency, ease of fabrication and their ability to be paired with silicon in multi-junction devices. The ease of fabrication means that the junctions can be directly fabricated on top of each other through monolithic integration – which leads to only two terminal connections, instead of four or six. However, these junctions can still contain surface defects.
+ +To enhance the performance and resilience of their triple-junction cell (top and middle perovskite junctions on a bottom silicon cell), the researchers optimized the chemistry of the perovskite material and the cell design. They addressed surface defects in the top perovskite junction by replacing traditional lithium fluoride materials with piperazine-1,4-diium chloride (PDCl). They also replaced methylammonium – which is commonly used in perovskite cells – with rubidium. “The rubidium incorporation in the bulk and the PDCl surface treatment improved the light stability of the cell,” explains Ho-Baillie.
+To connect the two perovskite junctions, the team used gold nanoparticles on tin oxide. Because the gold was in a nanoparticle form, the junctions could be engineered to maximize the flow of electric charge and light absorption by the solar cell.
+“Another interesting aspect of the study is the visualization of the gold nanoparticles [using transmission electron microscopy] and the critical point when they become a semi-continuous film, which is detrimental to the multi-junction cell performance due to its parasitic absorption,” says Ho-Baillie. “The optimization for achieving minimal particle coverage while achieving sufficient ohmic contact for vertical carrier flow are useful insights”.
+Using these design strategies, Ho-Baillie and colleagues developed a 16 cm² triple-junction cell that achieved an independently certified steady-state PCE of 23.3% – the highest reported for a large-area device. While triple-junction perovskite solar cells have exhibited higher PCEs – with all-perovskite triple-junction cells reaching 28.7% and perovskite–perovskite–silicon devices reaching 27.1% – these were all achieved on a 1 cm² cell, not a large-area cell.
+In this study, the researchers also developed a 1 cm² cell that was close to the best, with a PCE of 27.06%, but it is the large-area cell that’s the record breaker. The 1 cm² cell also passed the International Electrotechnical Commission’s (IEC) 61215 thermal cycling test, which exposes the cell to 200 cycles under extreme temperature swings, ranging from –40 to 85°C. During this test, the 1 cm² cell retained 95% of its initial efficiency after 407 h of continuous operation.
+The combination of the successful thermal cycling test combined with the high efficiencies on a larger cell shows that there could be potential for this triple-junction architecture in real-world settings in the near future, even though they are still far away from their theoretical limits.
+The post Tim Berners-Lee: why the inventor of the Web is ‘optimistic, idealistic and perhaps a little naïve’ appeared first on Physics World.
+Berners-Lee was born in London in 1955 to parents, originally from Birmingham, who met while working on the Ferranti Mark 1 computer and knew Alan Turing. Theirs was a creative, intellectual and slightly chaotic household. His mother could maintain a motorbike with fence wire and pliers, and was a crusader for equal rights in the workplace. His father – brilliant and absent minded – taught Berners-Lee about computers and queuing theory. A childhood of camping and model trains, it was, in Berners-Lee’s view, idyllic.
+Berners-Lee had the good fortune to be supported by a series of teachers and managers who recognized his potential and unique way of working. He studied physics at the University of Oxford (his tutor “going with the flow” of Berners-Lee’s unconventional notation and ability to approach problems from oblique angles) and built his own computer. After graduating, he married and, following a couple of jobs, took a six-month placement at the CERN particle-physics lab in Geneva in 1985.
+ +This placement set “a seed that sprouted into a tool that shook up the world”. Berners-Lee saw how difficult it was to share information stored in different languages in incompatible computer systems and how, in contrast, information flowed easily when researchers met over coffee, connected semi-randomly and talked. While at CERN, he therefore wrote a rough prototype for a program to link information in a type of web rather than a structured hierarchy.
++Back at CERN, Tim Berners-Lee developed his vision of a “universal portal” to information.
+
The placement ended and the program was ignored, but four years later Berners-Lee was back at CERN. Now divorced and soon to remarry, he developed his vision of a “universal portal” to information. It proved to be the perfect time. All the tools necessary to achieve the Web – the Internet, address labelling of computers, network cables, data protocols, the hypertext language that allowed cross-referencing of text and links on the same computer – had already been developed by others.
+Berners-Lee saw the need for a user-friendly interface, using hypertext that could link to information on other computers across the world. His excitement was “uncontainable”, and according to his line manager “few of us if any could understand what he was talking about”. But Berners-Lee’s managers supported him and freed his time away from his actual job to become the world’s first web developer.
+Having a vision was one thing, but getting others to share it was another. People at CERN only really started to use the Web properly once the lab’s internal phone book was made available on it. As a student at the time, I can confirm that it was much, much easier to use the Web than log on to CERN’s clunky IBM mainframe, where phone numbers had previously been stored.
+Wider adoption relied on a set of volunteer developers, working with open-source software, to make browsers and platforms that were attractive and easy to use. CERN agreed to donate the intellectual property for web software to the public domain, which helped. But the path to today’s Web was not smooth: standards risked diverging and companies wanted to build applications that hindered information sharing.
+Feeling that “the Web was outgrowing my institution” and “would be a distraction” to a lab whose core mission was physics, Berners-Lee moved to the Massachusetts Institute of Technology in 1994. There he founded the World Wide Web Consortium (W3C) to ensure consistent, accessible standards were followed by everyone as the Web developed into a global enterprise. The progression sounds straightforward although earlier accounts, such as James Gillies and Robert Cailliau’s 2000 book How the Web Was Born, imply some rivalry between institutions that is glossed over here.
++Initially inclined to advise people to share good things and not search for bad things, Berners-Lee had reckoned without the insidious power of “manipulative and coercive” algorithms on social networks
+
The rest is history, but not quite the history that Berners-Lee had in mind. By 1995 big business had discovered the possibilities of the Web to maximize influence and profit. Initially inclined to advise people to share good things and not search for bad things, Berners-Lee had reckoned without the insidious power of “manipulative and coercive” algorithms on social networks. Collaborative sites like Wikipedia are closer to his vision of an ideal Web; an emergent good arising from individual empowerment. The flip side of human nature seems to come as a surprise.
+The rest of the book brings us up to date with Berners-Lee’s concerns (data, privacy, misuse of AI, toxic online culture), his hopes (the good use of AI), a third marriage and his move into a data-handling business. There are some big awards and an impressive amount of name dropping; he is excited by Order of Merit lunches with the Queen and by sitting next to Paul McCartney’s family at the opening ceremony to the London Olympics in 2012. A flick through the index reveals names ranging from Al Gore and Bono to Lucien Freud. These are not your average computing technology circles.
+There are brief character studies to illustrate some of the main players, but don’t expect much insight into their lives. This goes for Berners-Lee too, who doesn’t step back to particularly reflect on those around him, or indeed his own motives beyond that vision of a Web for all enabling the best of humankind. He is firmly future focused.
+ +Still, there is no-one more qualified to describe what the Web was intended for, its core philosophy, and what caused it to develop to where it is today. You’ll enjoy the book whether you want an insight into the inner workings that make your web browsing possible, relive old and forgotten browser names, or see how big tech wants to monetize and monopolize your online time. It is an easy read from an important voice.
+The book ends with a passionate statement for what the future could be, with businesses and individuals working together to switch the Web from “the attention economy to the intention economy”. It’s a future where users are no longer distracted by social media and manipulated by attention-grabbing algorithms; instead, computers and services do what users want them to do, with the information that users want them to have.
+Berners-Lee is still optimistic, still an incurable idealist, still driven by vision. And perhaps still a little naïve too in believing that everyone’s values will align this time.
+The post New protocol makes an elusive superconducting signature measurable appeared first on Physics World.
Understanding the mechanism of high-temperature superconductivity could unlock powerful technologies, from efficient energy transmission to medical imaging, supercomputing and more. Researchers at Harvard University and the Massachusetts Institute of Technology have designed a new protocol to study a candidate model for high-temperature superconductivity (HTS), described in Physical Review Letters.
+The model, known as the Fermi-Hubbard model, is believed to capture the essential physics of cuprate high-temperature superconductors, materials composed of copper and oxygen. The model describes fermions, such as electrons, moving on a lattice. The fermions experience two competing effects: tunnelling and on-site interaction. Imagine students in a classroom: they may expend energy to switch seats (tunnelling), avoid a crowded desk (repulsive on-site interaction) or share desks with friends (attractive on-site interaction). Such behaviour mirrors that of electrons moving between lattice sites.
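For readers who want the model in symbols, the standard single-band form of the Fermi-Hubbard Hamiltonian is written below: t plays the role of the seat-switching (tunnelling) energy and U the desk-sharing (on-site interaction) energy, with U > 0 repulsive and U < 0 attractive. This is the textbook expression, not a reproduction of the paper’s own notation.

```latex
% Textbook single-band Fermi-Hubbard Hamiltonian (not the paper's exact notation):
% t is the tunnelling amplitude between neighbouring lattice sites i and j,
% U the on-site interaction energy (U > 0 repulsive, U < 0 attractive).
H = -t \sum_{\langle i,j \rangle, \sigma}
        \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```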
+Daniel Mark, first author of the study, notes that: “After nearly four decades of research, there are many detailed numerical studies and theoretical models on how superconductivity can emerge from the Fermi-Hubbard model, but there is no clear consensus [on exactly how it emerges].”
+ +A precursor to understanding the underlying mechanism is testing whether the Fermi-Hubbard model gives rise to an important signature of cuprate HTS: d-wave pairing. This is a special type of electron pairing where the strength and sign of the pairing depend on the direction of electron motion. It contrasts with conventional low-temperature superconductors that exhibit s-wave pairing, in which the pairing strength is uniform in all directions.
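The direction dependence described above is conventionally captured by a momentum-dependent gap function. For comparison, the textbook forms of an isotropic s-wave gap and a d-wave gap on a square lattice are shown below; these are generic illustrative expressions, not notation taken from the study.

```latex
% Illustrative gap functions on a square lattice with spacing a (textbook forms):
% s-wave: the pairing amplitude is the same in every direction
\Delta_{s}(\mathbf{k}) = \Delta_{0}
% d_{x^2-y^2}-wave: the amplitude and sign depend on direction,
% vanishing along the diagonals k_x = \pm k_y
\Delta_{d}(\mathbf{k}) = \Delta_{0}\left( \cos k_x a - \cos k_y a \right)
```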
+Although physicists have developed robust methods for simulating the Fermi-Hubbard model with ultracold atoms, measuring d-wave pairing has been notoriously difficult. The new protocol aims to change that.
+A key ingredient in the protocol is the team’s use of “repulsive-to-attractive mapping”. The physics of HTS is often described by the repulsive Fermi-Hubbard model, in which electrons pay an energetic penalty for occupying the same lattice site, like disagreeing students sharing a desk. In this model, detecting d-wave pairing requires fermions to maintain a fragile quantum state as they move over large distances, which necessitates carefully fine-tuned experimental parameters.
+To make the measurement more robust to experimental imperfection, the authors use a clever mathematical trick: they map from the repulsive model to the attractive one. In the attractive model, electrons receive an energetic benefit from being close together, like two friends in a classroom. The mapping is achieved by a particle–hole transformation, wherein spin-down electrons are reinterpreted as holes and vice versa. After mapping, the d-wave pairing signal becomes an observable that conserves local fermion number, thereby circumventing the challenge of long-range motion.
+
In its initial form, the d-wave pairing signal is difficult to measure. Drawing inspiration from digital quantum gates, the researchers divide their complex system into subsystems composed of pairs of lattice sites or dimers. Then, they apply a pulse sequence to make the observable measurable by simply counting fermions – a standard technique in the lab.
+The pulse sequence begins with a global microwave pulse to manipulate the spin of the fermions, followed by a series of “hopping” and “idling” steps. The hopping step involves lowering the barrier between lattice sites, thereby increasing tunnelling. The idling step involves raising the barrier, allowing the system to evolve without tunnelling. Every step is carefully timed to reveal the d-wave pairing information at the end of the sequence.
+ +The researchers report that their protocol is sample-efficient, experimentally viable, and generalizable to other observables that conserve local fermion number and act on dimers.
+This work adds to a growing field that combines components of analogue quantum systems with digital gates to deeply study complex quantum phenomena. “All the experimental ingredients in our protocol have been demonstrated in existing experiments, and we are in discussion with several groups on possible use cases,” Mark tells Physics World.
+The post Interface engineered ferromagnetism appeared first on Physics World.
+Cr₂Te₃’s crystal structure naturally forms layers that behave like two-dimensional sheets of magnetic material. Each layer has magnetic ordering (ferromagnetism), but the layers are not tightly bonded in the third dimension and are considered “quasi-2D.” These layers are useful for interface engineering. Using a vacuum-based technique for atomically precise thin-film growth, known as molecular beam epitaxy, the researchers demonstrate wafer-scale synthesis of Cr₂Te₃ down to monolayer thickness on insulating substrates. Remarkably, robust ferromagnetism persists even at the monolayer limit, a critical milestone for 2D magnetism.
+When Cr₂Te₃ is proximitized (an effect that occurs when one material is placed in close physical contact with another so that its properties are influenced by the neighbouring material) to a topological insulator, specifically (Bi,Sb)₂Te₃, the Curie temperature, the threshold between ferromagnetic and paramagnetic phases, increases from ~100 K to ~120 K. This enhancement is experimentally confirmed via polarized neutron reflectometry, which reveals a substantial boost in magnetization at the interface.
+Theoretical modelling attributes this magnetic enhancement to the Bloembergen–Rowland interaction which is a long-range exchange mechanism mediated by virtual intraband transitions. Crucially, this interaction is facilitated by the topological insulator’s topologically protected surface states, which are spin-polarized and robust against disorder. These states enable long-distance magnetic coupling across the interface, suggesting a universal mechanism for Curie temperature enhancement in topological insulator-coupled magnetic heterostructures.
+This work not only demonstrates a method for stabilizing 2D ferromagnetism but also opens the door to topological electronics, where magnetism and topology are co-engineered at the interface. Such systems could enable novel quantum hybrid devices, including spintronic components, topological transistors, and platforms for realizing exotic quasiparticles like Majorana fermions.
+Enhanced ferromagnetism in monolayer Cr₂Te₃ via topological insulator coupling
+Yunbo Ou et al 2025 Rep. Prog. Phys. 88 060501
++
Interacting topological insulators: a review by Stephan Rachel (2018)
+The post Probing the fundamental nature of the Higgs Boson appeared first on Physics World.
+This discovery made headline news at the time, and researchers certainly haven’t stopped working on the Higgs since. In subsequent years, the two collaborations have performed a series of measurements to establish the fundamental nature of the new particle, of the Higgs boson field and of the quantum vacuum.
+One key measurement comes from studying a process known as off-shell Higgs boson production. This is the creation of Higgs bosons with a mass significantly higher than their typical on-shell mass of 125 GeV. This phenomenon occurs due to quantum mechanics, which allows particles to temporarily fluctuate in mass.
+This kind of production is harder to detect but can reveal deeper insights into the Higgs boson’s properties, especially its total width, which relates to how long it exists before decaying. This, in turn, allows us to test key predictions made by the Standard Model of particle physics.
+Previous observations of this process had been severely limited in their sensitivity. In order to improve on this, the ATLAS collaboration had to introduce a completely new way of interpreting their data (read here for more details).
+They were able to provide evidence for off-shell Higgs boson production with a significance of 2.5𝜎 (corresponding to a 99.38% likelihood), using events with four electrons or muons, compared to a significance of 0.8𝜎 using traditional methods in the same channel.
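For readers unfamiliar with the σ notation, a significance quoted in standard deviations maps onto the quoted likelihood through the one-sided tail of a Gaussian, which is the usual particle-physics convention. The small check below reproduces the 99.38% figure; it is a generic statistical conversion, not a detail of the ATLAS analysis.

```python
# Converting a one-sided Gaussian significance (in sigma) to the quoted likelihood.
# Standard particle-physics convention; scipy is used here only for the normal CDF.
from scipy.stats import norm

for significance in (0.8, 2.5, 5.0):
    likelihood = norm.cdf(significance)   # the "likelihood" figure quoted in such reports
    p_value = norm.sf(significance)       # one-sided tail probability
    print(f"{significance} sigma -> {likelihood:.2%} (p-value = {p_value:.1e})")

# 2.5 sigma gives ~99.38%, matching the number quoted above; 5 sigma is the usual
# threshold for claiming a discovery in particle physics.
```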
+The results mark an important step forward in understanding the Higgs boson as well as other high-energy particle physics phenomena.
+The ATLAS Collaboration, 2025 Rep. Prog. Phys. 88 057803
++
+Discover how NiO/Ga₂O₃ heterojunction rectifiers unlock high-performance power electronics with breakthrough thermal, radiation, and structural resilience—driving innovation in EVs, AI data centers, and aerospace systems
+The post Fabrication and device performance of NiO/Ga₂O₃ heterojunction power rectifiers appeared first on Physics World.
+This talk shows how integrating p-type NiO to form NiO/Ga₂O₃ heterojunction rectifiers overcomes that barrier, enabling record-class breakdown and Ampere-class operation. It will cover device structure/process optimization, thermal stability to high temperatures, and radiation response – with direct ties to today’s priorities: EV fast charging, AI data‑center power systems, and aerospace/space‑qualified power electronics.
+An interactive Q&A session follows the presentation.
+ ++

Jian-Sian Li received his PhD in chemical engineering from the University of Florida in 2024, where his research focused on NiO/β-Ga₂O₃ heterojunction power rectifiers, including device design, process optimization, fast switching, high-temperature stability, and radiation tolerance (γ, neutron, proton). His work includes extensive electrical characterization and microscopy/TCAD analysis supporting device physics and reliability in harsh environments. Previously, he completed his BS and MS at National Taiwan University (2015, 2018), with research spanning phoretic/electrokinetic colloids, polymers for OFETs/PSCs, and solid-state polymer electrolytes for Li-ion batteries. He has since transitioned to industry at Micron Technology.
+The post Randomly textured lithium niobate gives snapshot spectrometer a boost appeared first on Physics World.
+Spectroscopy is crucial to analysis of all kinds of objects in science and engineering, from studying the radiation emitted by stars to identifying potential food contaminants. Conventional spectrometers – such as those used on telescopes – rely on diffractive optics to separate incoming light into its constituent wavelengths. This makes them inherently large, expensive and inefficient at rapid image acquisition as the light from each point source has to be spatially separated to resolve the wavelength components.
+In recent years researchers have combined computational methods with advanced optical sensors to create computational spectrometers with the potential to rival conventional instruments. One such approach is hyperspectral snapshot imaging, which captures both spectral and spatial information in the same image. There are currently two main snapshot-imaging techniques available. Narrowband-filtered snapshot spectral imagers comprise a mosaic pattern of narrowband filters and acquire an image by taking repeated snapshots at different wavelengths. However, these trade spectral resolution with spatial resolution, as each extra band requires its own tile within the mosaic. A more complex alternative design – the broadband-modulated snapshot spectral imager – uses a single, broadband detector covered with a spatially varying element such as a metasurface that interacts with the light and imprints spectral encoding information onto each pixel. However, these are complex to manufacture and their spectral resolution is limited to the nanometre scale.
+In the new work, researchers led by Lu Fang at Tsinghua University in Beijing unveil a spectroscopy technique that utilizes the nonlinear optical properties of lithium niobate to achieve sub-Ångström spectral resolution in a simply fabricated, integrated snapshot detector they call RAFAEL. A lithium niobate layer with random, sub-wavelength thickness variations is surrounded by distributed Bragg reflectors, forming optical cavities. These are integrated into a stack with a set of electrodes. Each cavity corresponds to a single pixel. Incident light enters from one side of a cavity, interacting with the lithium niobate repeatedly before exiting and being detected. Because lithium niobate is nonlinear, its response varies with the wavelength of the light.
+The researchers then applied a bias voltage using the electrodes. The nonlinear optical response of lithium niobate means that this bias alters its response to light differently at different wavelengths. Moreover, the random variation of the lithium niobate’s thickness around the surface means that the wavelength variation is spatially specific.
+The researchers designed and trained a machine-learning algorithm that uses the way each pixel’s response varies with applied bias voltage to reconstruct the incident spectrum at every point on the detector.
+“The randomness is useful for making the equations independent,” explains Fang. “We want to have uncorrelated equations so we can solve them.”
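The reconstruction step can be pictured with a deliberately simplified linear model. The sketch below is illustrative only – the published work uses a trained machine-learning algorithm, and every number here (grid sizes, peak positions, noise level) is invented – but it shows why a random, pixel-specific response makes the inversion well posed: the calibrated response matrix maps an incident spectrum onto readings taken at many bias voltages, and regularized least squares recovers the spectrum.

```python
import numpy as np

# Toy model of one pixel's spectral reconstruction (not the authors' code).
n_wavelengths = 300      # spectral bins spanning 400-1000 nm
n_biases = 400           # bias-voltage settings applied via the electrodes
rng = np.random.default_rng(0)

# Calibrated response: reading at each bias for unit input at each wavelength.
# In the device this would come from calibration; random entries stand in for
# the uncorrelated response produced by the random lithium niobate thickness.
T = rng.normal(size=(n_biases, n_wavelengths))

# A made-up incident spectrum: two narrow lines on a smooth background.
wl = np.linspace(400.0, 1000.0, n_wavelengths)
spectrum = (np.exp(-0.5 * ((wl - 550.0) / 1.5) ** 2)
            + 0.6 * np.exp(-0.5 * ((wl - 800.0) / 1.5) ** 2)
            + 0.1)

readings = T @ spectrum + rng.normal(scale=0.01, size=n_biases)  # noisy data

# Regularized least-squares (ridge) inversion recovers the spectrum.
lam = 1e-2
recovered = np.linalg.solve(T.T @ T + lam * np.eye(n_wavelengths), T.T @ readings)
print("relative error:", np.linalg.norm(recovered - spectrum) / np.linalg.norm(spectrum))
```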
+The researchers showed that they could achieve 88 Hz snapshot spectroscopy on a grid of 2048×2048 pixels with a spectral resolution of 0.5 Å (0.05 nm) between wavelengths of 400–1000 nm. They demonstrated this by capturing the full atomic absorption spectra of up to 5600 stars in a single snapshot. This is a two to four orders of magnitude improvement in observational efficiency over world-class astronomical spectrometers. They also demonstrated other applications, including a materials analysis challenge involving the distinction of a real leaf from a fake one. The two looked identical at optical wavelengths, but, using its broader range of wavelengths, RAFAEL was able to distinguish between the two.
+ +The researchers are now attempting to improve the device further: “I still think that sub-Ångström is not the ending – it’s just the starting point,” says Fang. “We want to push the limit of our resolution to the picometre.” In addition, she says, they are working on further integration of the device – which requires no specialized lithography – for easier use in the field. “We’ve already put this technology on a drone platform,” she reveals. The team is also working with astronomical observatories such as Gran Telescopio Canarias in La Palma, Spain.
+The research is described in Nature.
+Computational imaging expert David Brady of Duke University in North Carolina is impressed by the instrument. “It’s a compact package with extremely high spectral resolution,” he says. “Typically an optical instrument, like a CMOS sensor that’s used here, is going to have between 10,000 and 100,000 photo-electrons per pixel. That’s way too many photons for getting one measurement…I think you’ll see that with spectral imaging as is done here, but also with temporal imaging. People are saying you don’t need to go at 30 frames per second, you can go at a million frames per second and push closer to the single photon limit, and then that would require you to do computation to figure out what it all means.”
+The post Randomly textured lithium niobate gives snapshot spectrometer a boost appeared first on Physics World.
+]]>The post Tumour-specific radiofrequency fields suppress brain cancer growth appeared first on Physics World.
+]]>The study, led by Hugo Jimenez and reported in Oncotarget, uses a device developed by TheraBionic that delivers amplitude-modulated 27.12 MHz RF EMF throughout the entire body, via a spoon-shaped antenna placed on the tongue. Using tumour-specific modulation frequencies, the device has already received US FDA approval for treating patients with advanced hepatocellular carcinoma (HCC, a liver cancer), while its safety and effectiveness are currently being assessed in clinical trials in patients with pancreatic, colorectal and breast cancer.
+In this latest work, the team investigated its use in glioblastoma, an aggressive and difficult to treat brain tumour.
+To identify the particular frequencies needed to treat glioblastoma, the team used a non-invasive biofeedback method developed previously to study patients with various types of cancer. The process involves measuring variations in skin electrical resistance, pulse amplitude and blood pressure while individuals are exposed to low levels of amplitude-modulated frequencies. The approach can identify the frequencies, usually between 1 Hz and 100 kHz, specific to a single tumour type.
+ +Jimenez and colleagues first examined the impact of glioblastoma-specific amplitude-modulated RF EMF (GBMF) on glioblastoma cells, exposing various cell lines to GBMF for 3 h per day at the exposure level used for patient treatments. After one week, GBMF decreased the proliferation of three glioblastoma cell lines (U251, BTCOE-4765 and BTCOE-4795) by 34.19%, 15.03% and 14.52%, respectively.
+The team note that the level of this inhibitive effect (15–34%) is similar to that observed in HCC cell lines (19–47%) and breast cancer cell lines (10–20%) treated with tumour-specific frequencies. A fourth glioblastoma cell line (BTCOE-4536) was not inhibited by GBMF, for reasons currently unknown.
+Next, the researchers examined the effect of GBMF on cancer stem cells, which are responsible for treatment resistance and cancer recurrence. The treatment decreased the tumour sphere-forming ability of U251 and BTCOE-4795 cells by 36.16% and 30.16%, respectively – also a comparable range to that seen in HCC and breast cancer cells.
+Notably, these effects were only induced by frequencies associated with glioblastoma. Exposing glioblastoma cells to HCC-specific modulation frequencies had no measurable impact and was indistinguishable from sham exposure.
+Looking into the underlying treatment mechanisms, the researchers hypothesized that – as seen in breast cancer and HCC – glioblastoma cell proliferation is mediated by T-type voltage-gated calcium channels (VGCC). In the presence of a VGCC blocker, GBMF did not inhibit cell proliferation, confirming that GBMF inhibition of cell proliferation depends on T-type VGCCs, in particular, a calcium channel known as CACNA1H.
+The team also found that GBMF blocks the growth of glioblastoma cells by modulating the “Mitotic Roles of Polo-Like Kinase” signalling pathway, leading to disruption of the cells’ mitotic spindles, critical structures in cell replication.
+Finally, the researchers used the TheraBionic device to treat two patients: a 38-year-old patient with recurrent glioblastoma and a 47-year-old patient with the rare brain tumour oligodendroglioma. The first patient showed signs of clinical and radiological benefit following treatment; the second exhibited stable disease and tolerated the treatment well.
+“This is the first report showing feasibility and clinical activity in patients with brain tumour,” the authors write. “Similarly to what has been observed in patients with breast cancer and hepatocellular carcinoma, this report shows feasibility of this treatment approach in patients with malignant glioma and provides evidence of anticancer activity in one of them.”
+ +The researchers add that a previous dosimetric analysis of this technique measured a whole-body specific absorption rate (SAR, the rate of energy absorbed by the body when exposed to RF EMF) of 1.35 mW/kg and a peak spatial SAR (over 1 g of tissue) of 146–352 mW/kg. These values are well within the safety limits set by the ICNIRP (whole-body SAR of 80 mW/kg; peak spatial SAR of 2000 mW/kg). Organ-specific values for grey matter, white matter and the midbrain also had mean SAR ranges well within the safety limits.
+The team concludes that the results justify future preclinical and clinical studies of the TheraBionic device in this patient population. “We are currently in the process of designing clinical studies in patients with brain tumors,” Jimenez tells Physics World.
+The post Tumour-specific radiofrequency fields suppress brain cancer growth appeared first on Physics World.
+]]>The post Entangled light leads to quantum advantage appeared first on Physics World.
+]]>
Physicists at the Technical University of Denmark have demonstrated what they describe as a “strong and unconditional” quantum advantage in a photonic platform for the first time. Using entangled light, they were able to reduce the number of measurements required to characterize their system by a factor of 10¹¹, with a correspondingly huge saving in time.
+“We reduced the time it would take from 20 million years with a conventional scheme to 15 minutes using entanglement,” says Romain Brunel, who co-led the research together with colleagues Zheng-Hao Liu and Ulrik Lund Andersen.
+Although the research, which is described in Science, is still at a preliminary stage, Brunel says it shows that major improvements are achievable with current photonic technologies. In his view, this makes it an important step towards practical quantum-based protocols for metrology and machine learning.
+Quantum devices are hard to isolate from their environment and extremely sensitive to external perturbations. That makes it a challenge to learn about their behaviour.
+ +To get around this problem, researchers have tried various “quantum learning” strategies that replace individual measurements with collective, algorithmic ones. These strategies have already been shown to reduce the number of measurements required to characterize certain quantum systems, such as superconducting electronic platforms containing tens of quantum bits (qubits), by as much as a factor of 10⁵.
+In the new study, Brunel, Liu, Andersen and colleagues obtained a quantum advantage in an alternative “continuous-variable” photonic platform. The researchers note that such platforms are far easier to scale up than superconducting qubits, which they say makes them a more natural architecture for quantum information processing. Indeed, photonic platforms have already been crucial to advances in boson sampling, quantum communication, computation and sensing.
+The team’s experiment works with conventional, “imperfect” optical components and consists of a channel containing multiple light pulses that share the same pattern, or signature, of noise. The researchers began by performing a procedure known as quantum squeezing on two beams of light in their system. This caused the beams to become entangled – a quantum phenomenon that creates such a strong linkage that measuring the properties of one instantly affects the properties of the other.
+The team then measured the properties of one of the beams (the “probe” beam) in an experiment known as a 100-mode bosonic displacement process. According to Brunel, one can imagine this experiment as being like tweaking the properties of 100 independent light modes, which are packets or beams of light. “A ‘bosonic displacement process’ means you slightly shift the amplitude and phase of each mode, like nudging each one’s brightness and timing,” he explains. “So, you then have 100 separate light modes, and each one is shifted in phase space according to a specific rule or pattern.”
+ +By comparing the probe beam to the second (“reference”) beam in a single joint measurement, Brunel explains, he and his colleagues were able to cancel out much of the uncertainty in these measurements. This meant they could extract more information per trial than they could have by characterizing the probe beam alone. This information boost, in turn, allowed them to significantly reduce the number of measurements – in this case, by a factor of 10¹¹.
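A purely classical analogy – not the quantum protocol itself, and with invented numbers – captures why comparing against a correlated reference helps: noise that is shared between the probe and reference channels cancels in a joint difference measurement, leaving a much smaller uncertainty on the quantity of interest.

```python
import numpy as np

# Classical toy analogy of a joint probe-reference measurement (illustrative only).
rng = np.random.default_rng(3)
n = 100_000
shared_noise = rng.normal(size=n)          # noise pattern common to both channels
signal = 0.01                              # small displacement we want to estimate

probe = signal + shared_noise + 0.001 * rng.normal(size=n)
reference = shared_noise + 0.001 * rng.normal(size=n)

# Probe alone: the uncertainty is dominated by the shared noise.
print("probe alone :", probe.mean(), "+/-", probe.std() / np.sqrt(n))

# Joint measurement: the shared noise cancels in the difference.
diff = probe - reference
print("joint (diff):", diff.mean(), "+/-", diff.std() / np.sqrt(n))
```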
+While the DTU researchers acknowledge that they have not yet studied a practical, real-world system, they emphasize that their platform is capable of “doing something that no classical system will ever be able to do”, which is the definition of a quantum advantage. “Our next step will therefore be to study a more practical system in which we can demonstrate a quantum advantage,” Brunel tells Physics World.
+The post Entangled light leads to quantum advantage appeared first on Physics World.
+]]>The post Queer Quest: a quantum-inspired journey of self-discovery appeared first on Physics World.
+]]>Mental health professionals also joined Queer Quest, which was officially recognized by UNESCO as part of the International Year of Quantum Science and Technology (IYQ). Over two days in Chicago this October, the event brought science, identity and wellbeing into powerful conversation.
+Jessica Esquivel, a particle physicist and associate scientist at Fermilab, is part of the Muon g-2 experiment, pushing the limits of the Standard Model. Emily Esquivel is a licensed clinical professional counsellor. Together, they run Oyanova, an organization empowering Black and Brown communities through science and wellness.
+
Queer Quest blended keynote talks with collective conversations, meditation and other wellbeing activities. Panellists drew on quantum metaphors – such as entanglement – to explore identity, community and mental health.
+In a wide-ranging conversation with podcast host Andrew Glester, Jessica and Emily speak about the inspiration for the event, and the personal challenges they have faced within academia. They speak about the importance of building resilience through community connections, especially given the social tensions in the US right now.
+Hear more from Jessica Esquivel in her 2021 Physics World Stories appearance on the latest developments in muon science.
+This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
+Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.
+Find out more on our quantum channel.
++
+
The post Queer Quest: a quantum-inspired journey of self-discovery appeared first on Physics World.
+]]>The post Fingerprint method can detect objects hidden in complex scattering media appeared first on Physics World.
+]]>
Physicists have developed a novel imaging technique for detecting and characterizing objects hidden within opaque, highly scattering material. The researchers, from France and Austria, showed that their new mathematical approach, which utilizes the fact that hidden objects generate their own complex scattering pattern, or “fingerprint”, can work on biological tissue.
+Viewing the inside of the human body is challenging due to the scattering nature of tissue. With ultrasound, when waves propagate through tissue they are reflected, bounce around and scatter chaotically, creating noise that obscures the signal from the object that the medical practitioner is trying to see. The further you delve into the body the more incoherent the image becomes.
+There are techniques for overcoming these issues, but as scattering increases – in more complex media or as you push deeper through tissue – they struggle and unpicking the required signal becomes too complex.
+The scientists behind the latest research, from the Institut Langevin in Paris, France and TU Wien in Vienna, Austria, say that rather than compensating for scattering, their technique instead relies on detecting signals from the hidden object in the disorder.
+Objects buried in a material create their own complex scattering pattern, and the researchers found that if you know an object’s specific acoustic signal it’s possible to find it in the noise created by the surrounding environment.
+“We cannot see the object, but the backscattered ultrasonic wave that hits the microphones of the measuring device still carries information about the fact that it has come into contact with the object we are looking for,” explains Stefan Rotter, a theoretical physicist at TU Wien.
+ +Rotter and his colleagues examined how a series of objects scattered ultrasound waves in an interference-free environment. This created what they refer to as fingerprint matrices: measurements of the specific, characteristic way in which each object scattered the waves.
+The team then developed a mathematical method that allowed them to calculate the position of each object when hidden in a scattering medium, based on its fingerprint matrix.
+“From the correlations between the measured reflected wave and the unaltered fingerprint matrix, it is possible to deduce where the object is most likely to be located, even if the object is buried,” explains Rotter.
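A one-dimensional toy version of this idea – not the authors’ reflection-matrix algorithm, and with entirely made-up numbers – shows how correlating a measured signal against a known fingerprint picks out a buried echo that is invisible to the eye:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 2000
template = np.sin(np.linspace(0, 6 * np.pi, 150)) * np.hanning(150)  # known "fingerprint"

true_position = 1200
signal = 1.5 * rng.normal(size=n)                              # strong scattering clutter
signal[true_position:true_position + 150] += 2.0 * template    # hidden object's echo

# Correlate the measurement with the fingerprint; the peak marks the object.
corr = np.correlate(signal, template, mode="valid")
estimate = int(np.argmax(np.abs(corr)))
print("true position:", true_position, " estimated:", estimate)
```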
+The team tested the technique in three different scenarios. The first experiment trialled the ultrasound imaging of metal spheres in a dense suspension of glass beads in water. Conventional ultrasound failed in this setup and the spheres were completely invisible, but with their novel fingerprint method the researchers were able to accurately detect them.
+Next, to examine a medical application for the technique, the researchers embedded lesion markers often used to monitor breast tumours in a foam designed to mimic the ultrasound scattering of soft tissue. These markers can be challenging to detect due to scatterers randomly distributed in human tissue. With the fingerprint matrix, however, the researchers say that the markers were easy to locate.
+Finally, the team successfully mapped muscle fibres in a human calf using the technique. They claim this could be useful for diagnosing and monitoring neuromuscular diseases.
+ +According to Rotter and his colleagues, their fingerprint matrix method is a versatile and universal technique that could be applied beyond ultrasound to all fields of wave physics. They highlight radar and sonar as examples of sensing techniques where target identification and detection in noisy environments are long-standing challenges.
+“The concept of the fingerprint matrix is very generally applicable – not only for ultrasound, but also for detection with light,” Rotter says. “It opens up important new possibilities in all areas of science where a reflection matrix can be measured.”
+The researchers report their findings in Nature Physics.
+The post Fingerprint method can detect objects hidden in complex scattering media appeared first on Physics World.
+]]>The post Ask me anything: Kirsty McGhee – ‘Follow what you love: you might end up doing something you never thought was an option’ appeared first on Physics World.
+Obviously, I write: I wouldn’t be a very good science writer if I couldn’t. So communication skills are vital. Recently, for example, Qruise launched a new magnetic-resonance product for which I had to write a press release, create a new webpage and do social-media posts. That meant co-ordinating with lots of different people, finding out the key features to advertise, identifying the claims we wanted to make – and whether we have the data to back those claims up. I’m not an expert in quantum computing or magnetic-resonance imaging or even marketing so I have to pick things up fast and then translate technically complex ideas from physics and software into simple messages for a broader audience. Thankfully, my colleagues are always happy to help. Science writing is a difficult task but I think I’m getting better at it.
+ +I love the variety and the fact that I’m doing so many different things all the time. If there’s a day I feel I want something a little bit lighter, I can do some social media or the website, which is more creative. On the other hand, if I feel I could really focus in detail on something then I can write some documentation that is a little bit more technical. I also love the flexibility of remote working, but I do miss going to the office and socialising with my colleagues on a regular basis. You can’t get to know someone as well online, it’s nicer to have time with them in person.
+ +That’s a hard one. It would be easy to say I wish I’d known earlier that I could combine science and writing and make a career out of that. On the other hand, if I’d known that, I might not have done my PhD – and if I’d gone into writing straight after my undergraduate degree, I perhaps wouldn’t be where I am now. My point is, it’s okay not to have a clear plan in life. As children, we’re always asked what we want to be – in my case, my dream from about the age of four was to be a vet. But then I did some work experience in a veterinary practice and I realized I’m really squeamish. It was only when I was 15 or 16 that I discovered I wanted to do physics because I liked it and was good at it. So just follow the things you love. You might end up doing something you never even thought was an option.
+The post Ask me anything: Kirsty McGhee – ‘Follow what you love: you might end up doing something you never thought was an option’ appeared first on Physics World.
+]]>The post New adaptive optics technology boosts the power of gravitational wave detectors appeared first on Physics World.
+Gravitational waves are distortions in spacetime that occur when massive astronomical objects accelerate and collide. When these distortions pass through the four-kilometre-long arms of the two LIGO detectors, they create a tiny difference in the (otherwise identical) distance that light travels between the centre of the observatory and the mirrors located at the end of each arm. The problem is that detecting and studying gravitational waves requires these differences in distance to be measured with an accuracy of 10⁻¹⁹ m, which is 1/10 000th the size of a proton.
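As a rough order-of-magnitude check of that comparison (taking the proton charge radius to be about 0.84 fm, an assumed round value):

```python
ligo_precision = 1e-19     # metres, the quoted differential arm-length accuracy
proton_radius = 0.84e-15   # metres, assumed proton charge radius
print(ligo_precision / proton_radius)   # ~1.2e-4, i.e. roughly 1/10,000
```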
+LIGO overcame this barrier 10 years ago when it detected the gravitational waves produced when two black holes located roughly 1.3 billion light-years from Earth merged. Since then, it and two smaller facilities, KAGRA and VIRGO, have observed many other gravitational waves at frequencies ranging from 30 to 2000 Hz.
+ +Observing waves at lower and higher frequencies in the gravitational wave spectrum remains challenging, however. At lower frequencies (around 10–30 Hz), the problem stems from vibrational noise in the mirrors. Although these mirrors are hefty objects – each one measures 34 cm across, is 20 cm thick and has a mass of around 40 kg – the incredible precision required to detect gravitational waves at these frequencies means that even the minute amount of energy they absorb from the laser beam is enough to knock them out of whack.
+At higher frequencies (150–2000 Hz), measurements are instead limited by quantum shot noise. This is caused by the random arrival time of photons at LIGO’s output photodetectors and is a fundamental consequence of the fact that the laser field is quantized.
+Jonathan Richardson, the physicist who led this latest study, explains that FROSTI is designed to reduce quantum shot noise by allowing the mirrors to cope with much higher levels of laser power. At its heart is a novel adaptive optics device that is designed to precisely reshape the surfaces of LIGO’s main mirrors under laser powers exceeding 1 megawatt (MW), which is nearly five times the power used at LIGO today.
+Though its name implies cooling, FROSTI actually uses heat to restore the mirror’s surface to its original shape. It does this by projecting infrared radiation onto test masses in the interferometer to create a custom heat pattern that “smooths out” distortions and so allows for fine-tuned, higher-order corrections.
+The single most challenging aspect of FROSTI’s design, and one that Richardson says shaped its entire concept, is the requirement that it cannot introduce even more noise into the LIGO interferometer. “To meet this stringent requirement, we had to use the most intensity-stable radiation source available – that is, an internal blackbody emitter with a long thermal time constant,” he tells Physics World. “Our task, from there, was to develop new non-imaging optics capable of reshaping the blackbody thermal radiation into a complex spatial profile, similar to one that could be created with a laser beam.”
+Richardson anticipates that FROSTI will be a critical component for future LIGO upgrades – upgrades that will themselves serve as blueprints for even more sensitive next-generation observatories like the proposed Cosmic Explorer in the US and the Einstein Telescope in Europe. “The current prototype has been tested on a 40-kg LIGO mirror, but the technology is scalable and will eventually be adapted to the 440-kg mirrors envisioned for Cosmic Explorer,” he says.
+ +Jan Harms, a physicist at Italy’s Gran Sasso Science Institute who was not involved in this work, describes FROSTI as “an ingenious concept to apply higher-order corrections to the mirror profile.” Though it still needs to pass the final test of being integrated into the actual LIGO detectors, Harms notes that “the results from the prototype are very promising”.
+Richardson and colleagues are continuing to develop extensions to their technology, building on the successful demonstration of their first prototype. “In the future, beyond the next upgrade of LIGO (A+), the FROSTI radiation will need to be shaped into an even more complex spatial profile to enable the highest levels of laser power (1.5 MW) ultimately targeted,” explains Richardson. “We believe this can be achieved by nesting two or more FROSTI actuators together in a single composite, with each targeting a different radial zone of the test mass surfaces. This will allow us to generate extremely finely-matched optical wavefront corrections.”
+The present study is detailed in Optica.
+The post New adaptive optics technology boosts the power of gravitational wave detectors appeared first on Physics World.
+]]>The post A SMART approach to treating lung cancers in challenging locations appeared first on Physics World.
+The standard treatment for non-small cell lung cancer (NSCLC) is stereotactic ablative body radiotherapy (SABR), which delivers intense radiation doses in just a few treatment sessions and achieves excellent local control. For ultracentral lung lesions, however – defined as having a planning target volume (PTV) that abuts or overlaps the proximal bronchial tree, oesophagus or pulmonary vessels – the high risk of severe radiation toxicity makes SABR highly challenging.
+A research team at GenesisCare UK, an independent cancer care provider operating nine treatment centres in the UK, has now demonstrated that stereotactic MR-guided adaptive radiotherapy (SMART)-based SABR may be a safer and more effective option for treating ultracentral metastatic lesions in patients with histologically confirmed NSCLC. They report their findings in Advances in Radiation Oncology.
+SMART uses diagnostic-quality MR scans to provide real-time imaging, 3D multiplanar soft-tissue tracking and automated beam control of an advanced linear accelerator. The idea is to use daily online volume adaptation and plan re-optimization to account for any changes in tumour size and position relative to organs-at-risk (OAR). Real-time imaging enables treatment in breath-hold with gated beam delivery (automatically pausing delivery if the target moves outside a defined boundary), eliminating the need for an internal target volume and enabling smaller PTV margins.
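A minimal sketch of the gating logic – with hypothetical names and numbers, not any vendor’s actual interface – might look like this: the beam is enabled only while the tracked target centroid stays within a defined boundary around its planned breath-hold position.

```python
from dataclasses import dataclass

@dataclass
class GatingBoundary:
    centre: tuple          # planned target centroid from the daily MR image (mm)
    margin: float          # allowed excursion before the beam is held (mm)

def beam_enabled(tracked_position, boundary):
    """Keep the beam on only while the tracked target stays inside the boundary."""
    distance = sum((p - c) ** 2 for p, c in zip(tracked_position, boundary.centre)) ** 0.5
    return distance <= boundary.margin

# Example: a 3 mm gating margin around the breath-hold target position.
boundary = GatingBoundary(centre=(0.0, 0.0, 0.0), margin=3.0)
print(beam_enabled((0.5, 1.0, 0.2), boundary))   # True  -> beam on
print(beam_enabled((2.5, 2.5, 1.0), boundary))   # False -> beam held
```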
+The approach offers potential to enhance treatment precision and target coverage while improving sparing of adjacent organs compared with conventional SABR, first author Elena Moreno-Olmedo and colleagues contend.
+The team conducted a study to assess the incidence of SABR-related toxicities in patients with histologically confirmed NSCLC undergoing SMART-based SABR. The study included 11 patients with 18 ultracentral lesions, the majority of whom had oligometastatic or oligoprogressive disease.
+ +Patients received five to eight treatment fractions, to a median dose of 40 Gy (ranging from 30 to 60 Gy). The researchers generated fixed-field SABR plans with dosimetric aims including a PTV V100% (the volume receiving at least 100% of the prescription dose) of 95% or above, a PTV V95% of 98% or above and a maximum dose of between 110% and 140%. PTV coverage was compromised where necessary to meet OAR constraints, with a minimum PTV V100% of at least 70%.
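For readers unfamiliar with dose-volume metrics, the snippet below (made-up voxel doses, not clinical software) shows how quantities such as V100% and V95% are read off a plan: VX% is simply the fraction of PTV voxels receiving at least X% of the prescription dose.

```python
import numpy as np

prescription = 40.0                                      # Gy, the study's median prescribed dose
rng = np.random.default_rng(2)
ptv_dose = rng.normal(loc=41.0, scale=1.5, size=5000)    # invented voxel doses within the PTV (Gy)

def v_percent(dose, prescription, x):
    """Fraction of the volume receiving at least x% of the prescription dose."""
    return float(np.mean(dose >= prescription * x / 100.0))

print("V100% =", round(100 * v_percent(ptv_dose, prescription, 100), 1), "%")
print("V95%  =", round(100 * v_percent(ptv_dose, prescription, 95), 1), "%")
print("Dmax  =", round(100 * ptv_dose.max() / prescription, 1), "% of prescription")
```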
+SABR was performed using a 6 MV 0.35 T MRIdian linac with gated delivery during repeated breath-holds, under continuous MR guidance. Based on daily MRI scans, online plan adaptation was performed for all of the 78 delivered fractions.
+The researchers report that both the PTV volume and PTV overlap with ultracentral OARs were reduced in SMART treatments compared with conventional SABR. The median SMART PTV was 10.1 cc, compared with 30.4 cc for the simulated SABR PTV, while the median PTV overlap with OARs was 0.85 cc for SMART (8.4% of the PTV) and 4.7 cc for conventional SABR.
+In terms of treatment-related side effects for SMART, the rates of acute and late grade 1–2 toxicities were 54% and 18%, respectively, with no grade 3–5 toxicities observed. This demonstrates the technique’s increased safety compared with non-adaptive SABR treatments, which have exhibited high rates of severe toxicity, including treatment-related deaths, in ultracentral tumours.
+Two-thirds of patients were alive at the median follow-up point of 28 months, and 93% were free from local progression at 12 months. The median progression-free survival was 5.8 months and median overall survival was 20 months.
+ +Acknowledging the short follow-up time frame, the researchers note that additional late toxicities may occur. However, they are hopeful that SMART will be considered as a favourable treatment option for patients with ultracentral NSCLC lesions.
+“Our analysis demonstrates that hypofractionated SMART with daily online adaptation for ultracentral NSCLC achieved comparable local control to conventional non-adaptive SABR, with a safer toxicity profile,” they write. “These findings support the consideration of SMART as a safer and effective treatment option for this challenging subgroup of thoracic tumours.”
+SMART-based SABR radiotherapy remains an emerging cancer treatment that’s not available yet in many cancer treatment centres. Despite the high risk for patients with ultracentral tumours, SABR is the standard treatment for inoperable NSCLC.
+The phase 1 clinical trial, Stereotactic radiation therapy for ultracentral NSCLC: a safety and efficacy trial (SUNSET), assessed the use of SBRT for ultracentral tumours in 30 patients with early-stage NSCLC treated at five Canadian cancer centres. In all cases, the PTVs touched or overlapped the proximal bronchial tree, the pulmonary artery, the pulmonary vein or the oesophagus. Led by Meredith Giuliani of the Princess Margaret Cancer Centre, the trial aimed to determine the maximum tolerated radiation dose associated with a less than 30% rate of grade 3–5 toxicity within two years of treatment.
+All patients received 60 Gy in eight fractions. Dose was prescribed to deliver a PTV V100% of 95%, a PTV V90% of 99% and a maximum dose of no more than 120% of the prescription dose, with OAR constraints prioritized over PTV coverage. All patients had daily cone-beam CT imaging to verify tumour position before treatment.
+At a median follow-up of 37 months, two patients (6.7%) experienced dose-limiting grade 3–5 toxicities – an adverse event rate within the prespecified acceptability criteria. The three-year overall survival was 72.5% and the three-year progression-free survival was 66.1%.
+In a subsequent dosimetric analysis, the researchers report that they did not identify any relationship between OAR dose and toxicity, within the dose constraints used in the SUNSET trial. They note that 73% of patients could be treated without compromise of the PTV, and where compromise was needed, the mean PTV D95 (the minimum dose delivered to 95% of the PTV) remained high at 52.3 Gy.
+As expected, plans that overlapped with central OARs were associated with worse local control, but PTV undercoverage was not. “[These findings suggest] that the approach of reducing PTV coverage to meet OAR constraints does not appear to compromise local control, and that acceptable toxicity rates are achievable using 60 Gy in eight fractions,” the team writes. “In the future, use of MRI or online adaptive SBRT may allow for safer treatment delivery by limiting dose variation with anatomic changes.”
++
The post A SMART approach to treating lung cancers in challenging locations appeared first on Physics World.
+]]>The post Spiral catheter optimizes drug delivery to the brain appeared first on Physics World.
+]]>Modern treatments for brain-related conditions including Parkinson’s disease, epilepsy, and tumours often involve implanting microfluidic catheters that deliver controlled doses of drug-infused fluids to highly localized regions of the brain. Today, these implants are made from highly flexible materials that closely mimic the soft tissue of the brain. This makes them far less invasive than previous designs.
+However, there is still much room for improvement, as Khlaifat explains. “Catheter design and function have long been limited by the neuroinflammatory response after implantation, as well as the unequal drug distribution across the catheter’s outlets,” she says.
+A key challenge with this approach is that each of the brain’s distinct regions has highly irregular shapes, which makes it incredibly difficult to target via single drug doses. Instead, doses must be delivered either through repeated insertions from a single port at the end of a catheter, or through single insertions across multiple co-implanted catheters. Either way, the approach is highly invasive, and runs the risk of further trauma to the brain.
+In their study, Khlaifat’s team explored how many of these problems stem from existing catheter designs, which tend to be simple tubes with a single input port at one end and a single output port at the other. Using fluid dynamics simulations, they started by investigating how drug outflow would change when multiple output ports are positioned along the length of the catheter.
+To ensure this outflow is delivered evenly, they carefully adjusted the diameter of each port to account for the change in fluid pressure along the catheter’s length – so that four evenly spaced ports could each deliver roughly one quarter of the total flow. Building on this innovation, the researchers then explored how the shape of the catheter itself could be adjusted to optimize delivery even further.
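The flow-balancing idea can be illustrated with a toy resistor-network model – a sketch under simplifying assumptions (Hagen-Poiseuille flow, invented dimensions and fluid viscosity), not the team’s actual design framework – in which the port diameters are iterated until each outlet carries roughly a quarter of the total flow:

```python
import numpy as np

mu = 1.0e-3            # fluid viscosity (Pa s), water-like (assumed)
d_lumen = 100e-6       # catheter lumen diameter (m), invented
L_seg = 2e-3           # spacing between outlet ports (m), invented
t_wall = 50e-6         # length of each port channel through the wall (m), invented
P_in = 5e3             # inlet pressure above tissue pressure (Pa), invented
n_ports = 4

def conductance(d, L):
    """Hagen-Poiseuille conductance Q/dp of a cylindrical channel."""
    return np.pi * d ** 4 / (128.0 * mu * L)

def port_flows(d_ports):
    """Solve the resistor network for the outflow through each port (tip closed)."""
    g_seg = conductance(d_lumen, L_seg)
    g_port = conductance(d_ports, t_wall)
    A = np.zeros((n_ports, n_ports))
    b = np.zeros(n_ports)
    for i in range(n_ports):
        A[i, i] += g_port[i] + g_seg          # port to tissue + segment towards inlet
        if i == 0:
            b[i] += g_seg * P_in              # first node connects to the inlet
        else:
            A[i, i - 1] -= g_seg
        if i < n_ports - 1:                   # segment towards the next port
            A[i, i] += g_seg
            A[i, i + 1] -= g_seg
    p = np.linalg.solve(A, b)                 # lumen pressure at each port
    return g_port * p                         # outflow through each port

# Start with identical ports, then rescale diameters (Q ~ d^4 for Poiseuille flow)
# until every port carries about one quarter of the total flow.
d = np.full(n_ports, 40e-6)
for _ in range(50):
    q = port_flows(d)
    d *= (q.mean() / q) ** 0.25
q = port_flows(d)
print("port diameters (um):", np.round(d * 1e6, 1))
print("flow fractions     :", np.round(q / q.sum(), 3))
```

In this toy model the lumen pressure falls towards the closed tip, so the iteration ends up slightly enlarging the downstream ports to keep the outflow even – one way of compensating for the pressure change along the catheter described above.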
+“We varied the catheter design from a straight catheter to a helix of the same small diameter, allowing for a larger area of drug distribution in the target implantation region with minimal invasiveness,” explains team member Khalil Ramadi. “This helical shape also allows us to resist buckling on insertion, which is a major problem for miniaturized straight catheters.”
+Based on their simulations, the team fabricated a helical catheter they call Strategic Precision Infusion for Regional Administration of Liquid, or SPIRAL. In their first set of experiments, they tested their simulations under controlled lab conditions and verified their prediction of even outflow rates across the catheter’s outlets.
+ +“Our helical device was also tested in mouse models alongside its straight counterpart to study its neuroinflammatory response,” Khlaifat says. “There were no significant differences between the two designs.”
+Having validated the safety of their approach, the researchers are now hopeful that SPIRAL could pave the way for new and improved methods for targeted drug delivery within the brain. With the ability to target entire regions of the brain with smaller, more controlled doses, this future generation of implanted catheters could ultimately prove to be both safer and more effective than existing designs.
+“These catheters could be optimized for each patient through our computational framework to ensure only regions that require dosing are exposed to therapy, all through a single insertion point in the skull,” describes team member Mahmoud Elbeh. “This tailored approach could improve therapies for brain disorders such as epilepsy and glioblastomas.”
+The research is described in the Journal of Neural Engineering.
+The post Spiral catheter optimizes drug delivery to the brain appeared first on Physics World.
+]]>The post Performance metrics and benchmarks point the way to practical quantum advantage appeared first on Physics World.
+]]>
From quantum utility today to quantum advantage tomorrow: incumbent technology companies – among them Google, Amazon, IBM and Microsoft – and a wave of ambitious start-ups are on a mission to transform quantum computing from applied research endeavour to mainstream commercial opportunity. The end-game: quantum computers that can be deployed at-scale to perform computations significantly faster than classical machines while addressing scientific, industrial and commercial problems beyond the reach of today’s high-performance computing systems.
+Meanwhile, as technology translation gathers pace across the quantum supply chain, government laboratories and academic scientists must maintain their focus on the “hard yards” of precompetitive research. That means prioritizing foundational quantum hardware and software technologies, underpinned by theoretical understanding, experimental systems, device design and fabrication – and pushing out along all these R&D pathways simultaneously.
+Equally important is the requirement to understand and quantify the relative performance of quantum computers from different manufacturers as well as across the myriad platform technologies – among them superconducting circuits, trapped ions, neutral atoms as well as photonic and semiconductor processors. A case study in this regard is a broad-scope UK research collaboration that, for the past four years, has been reviewing, collecting and organizing a holistic taxonomy of metrics and benchmarks to evaluate the performance of quantum computers against their classical counterparts as well as the relative performance of competing quantum platforms.
+Funded by the National Quantum Computing Centre (NQCC), which is part of the UK National Quantum Technologies Programme (NQTP), and led by scientists at the National Physical Laboratory (NPL), the UK’s National Metrology Institute, the cross-disciplinary consortium has taken on an endeavour that is as sprawling as it is complex. The challenge lies in the diversity of quantum hardware platforms in the mix; also the emergence of two different approaches to quantum computing – one being a gate-based framework for universal quantum computation, the other an analogue approach tailored to outperforming classical computers on specific tasks.
+“Given the ambition of this undertaking, we tapped into a deep pool of specialist domain knowledge and expertise provided by university colleagues at Edinburgh, Durham, Warwick and several other centres-of-excellence in quantum,” explains Ivan Rungger, a principal scientist at NPL, professor in computer science at Royal Holloway, University of London, and lead scientist on the quantum benchmarking project. That core group consulted widely within the research community and with quantum technology companies across the nascent supply chain. “The resulting study,” adds Rungger, “positions transparent and objective benchmarking as a critical enabler for trust, comparability and commercial adoption of quantum technologies, aligning closely with NPL’s mission in quantum metrology and standards.”
+
For context, a number of performance metrics used to benchmark classical computers can also be applied directly to quantum computers, such as the speed of operations, the number of processing units and the probability of errors occurring during the computation. That only goes so far, though, with all manner of dedicated metrics emerging in the past decade to benchmark the performance of quantum computers – ranging from their individual hardware components to entire applications.
+Complexity reigns, it seems, and navigating the extensive literature can prove overwhelming, while the levels of maturity for different metrics varies significantly. Objective comparisons aren’t straightforward either – not least because variations of the same metric are commonly deployed; also the data disclosed together with a reported metric value is often not sufficient to reproduce the results.
+“Many of the approaches provide similar overall qualitative performance values,” Rungger notes, “but the divergence in the technical implementation makes quantitative comparisons difficult and, by extension, slows progress of the field towards quantum advantage.”
+The task then is to rationalize the metrics used to evaluate the performance for a given quantum hardware platform to a minimal yet representative set agreed across manufacturers, algorithm developers and end-users. These benchmarks also need to follow some agreed common approaches to fairly and objectively evaluate quantum computers from different equipment vendors.
+With these objectives in mind, Rungger and colleagues conducted a deep-dive review that has yielded a comprehensive collection of metrics and benchmarks to allow holistic comparisons of quantum computers, assessing the quality of hardware components all the way to system-level performance and application-level metrics.
+Drill down further and there’s a consistent format for each metric that includes its definition, a description of the methodology, the main assumptions and limitations, and a linked open-source software package implementing the methodology. The software transparently demonstrates the methodology and can also be used in practical, reproducible evaluations of all metrics.
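One hypothetical way to picture such an entry – a sketch only, not the actual schema used in the NPL/NQCC collection, with placeholder names and URL – is as a small structured record that keeps the definition, methodology, assumptions and software link together:

```python
from dataclasses import dataclass

@dataclass
class MetricRecord:
    name: str              # e.g. "readout fidelity"
    definition: str        # precise statement of what is measured
    methodology: str       # how the benchmark is run and analysed
    assumptions: list      # main assumptions and known limitations
    software_url: str      # link to an open-source reference implementation

example = MetricRecord(
    name="readout fidelity",
    definition="probability of correctly identifying a prepared computational-basis state",
    methodology="prepare |0> and |1> repeatedly, measure, and build the confusion matrix",
    assumptions=["state-preparation errors are small compared with readout errors"],
    software_url="https://example.org/benchmark-suite",   # placeholder URL
)
print(example.name, "->", example.definition)
```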
+“As research on metrics and benchmarks progresses, our collection of metrics and the associated software for performance evaluation are expected to evolve,” says Rungger. “Ultimately, the repository we have put together will provide a ‘living’ online resource, updated at regular intervals to account for community-driven developments in the field.”
+Innovation being what it is, those developments are well under way. For starters, the importance of objective and relevant performance benchmarks for quantum computers has led several international standards bodies to initiate work on specific areas that are ready for standardization – work that, in turn, will give manufacturers, end-users and investors an informed evaluation of the performance of a range of quantum computing components, subsystems and full-stack platforms.
+What’s evident is that the UK’s voice on metrics and benchmarking is already informing the collective conversation around standards development. “The quantum computing community and international standardization bodies are adopting a number of concepts from our approach to benchmarking standards,” notes Deep Lall, a quantum scientist in Rungger’s team at NPL and lead author of the study. “I was invited to present our work to a number of international standardization meetings and scientific workshops, opening up widespread international engagement with our research and discussions with colleagues across the benchmarking community.”
+He continues: “We want the UK effort on benchmarking and metrics to shape the broader international effort. The hope is that the collection of metrics we have pulled together, along with the associated open-source software provided to evaluate them, will guide the development of standardized benchmarks for quantum computers and speed up the progress of the field towards practical quantum advantage.”
+That’s a view echoed – and amplified – by Cyrus Larijani, NPL’s head of quantum programme. “As we move into the next phase of NPL’s quantum strategy, the importance of evidence-based decision making becomes ever-more critical,” he concludes. “By grounding our strategic choices in robust measurement science and real-world data, we ensure that our innovations not only push the boundaries of quantum technology but also deliver meaningful impact across industry and society.”
+Deep Lall et al. 2025 A review and collection of metrics and benchmarks for quantum computers: definitions, methodologies and software https://arxiv.org/abs/2502.06717
+Quantum computing technology has reached the stage where a number of methods for performance characterization are backed by a large body of real-world implementation and use, as well as by theoretical proofs. These mature benchmarking methods will benefit from commonly agreed-upon approaches that are the only way to fairly, unambiguously and objectively benchmark quantum computers from different manufacturers.
+“Performance benchmarks are a fundamental enabler of technology innovation in quantum computing,” explains Konstantinos Georgopoulos, who heads up the NQCC’s quantum applications team and is responsible for the centre’s liaison with the NPL benchmarking consortium. “How do we understand performance? How do we compare capabilities? And, of course, what are the metrics that help us to do that? These are the leading questions we addressed through the course of this study.”
+If the importance of benchmarking is a given, so too is collaboration and the need to bring research and industry stakeholders together from across the quantum ecosystem. “I think that’s what we achieved here,” says Georgopoulos. “The long list of institutions and experts who contributed their perspectives on quantum computing was crucial to the success of this project. What we’ve ended up with are better metrics, better benchmarks, and a better collective understanding to push forward with technology translation that aligns with end-user requirements across diverse industry settings.”
++
End note: NPL retains copyright on this article.
+The post Performance metrics and benchmarks point the way to practical quantum advantage appeared first on Physics World.
+]]>The post Quantum computing and AI join forces for particle physics appeared first on Physics World.
+]]>My guest is Javier Toledo-Marín, and we spoke at the Perimeter Institute in Waterloo, Canada. As well as having an appointment at Perimeter, Toledo-Marín is also associated with the TRIUMF accelerator centre in Vancouver.
+Toledo-Marín and colleagues have recently published a paper called “Conditioned quantum-assisted deep generative surrogate for particle–calorimeter interactions”.
+This podcast is supported by Delft Circuits.
+As gate-based quantum computing continues to scale, Delft Circuits provides the i/o solutions that make it possible.
+The post Quantum computing and AI join forces for particle physics appeared first on Physics World.
+]]>The post Master’s programme takes microelectronics in new directions appeared first on Physics World.
+]]>
The microelectronics sector is known for its relentless drive for innovation, continually delivering performance and efficiency gains within ever more compact form factors. Anyone aspiring to build a career in this fast-moving field needs not just a thorough grounding in current tools and techniques, but also an understanding of the next-generation materials and structures that will propel future progress.
+That’s the premise behind a Master’s programme in microelectronics technology and materials at the Hong Kong Polytechnic University (PolyU). Delivered by the Department for Applied Physics, globally recognized for its pioneering research in technologies such as two-dimensional materials, nanoelectronics and artificial intelligence, the aim is to provide students with both the fundamental knowledge and practical skills they need to kickstart their professional future – whether they choose to pursue further research or to find a job in industry.
+“The programme provides students with all the key skills they need to work in microelectronics, such as circuit design, materials processing and failure analysis,” says programme leader Professor Zhao Jiong, whose research focuses on 2D ferroelectrics. “But they also have direct access to more than 20 faculty members who are actively investigating novel materials and structures that go beyond silicon-based technologies.”
+The course is also unusual in providing a combined focus on electronics engineering and materials science, giving students a thorough understanding of the underlying semiconductors and device structures as well as their use in mass-produced integrated circuits. That fundamental knowledge is reinforced through regular experimental work, providing the students with hands-on experience of fabricating and testing electronic devices. “Our cleanroom laboratory is equipped with many different instruments for microfabrication, including thin-film deposition, etching and photolithography, as well as advanced characterization tools for understanding their operating mechanisms and evaluating their performance,” adds Zhao.
+In a module focusing on thin-film materials, for example, students gain valuable experience from practical sessions that enable them to operate the equipment for different growth techniques, such as sputtering, molecular beam epitaxy, and both physical and chemical vapour deposition. In another module on materials analysis and characterization, the students are tasked with analysing the layered structure of a standard computer chip by making cross-sections that can be studied with a scanning electron microscope.
+
That practical experience extends to circuit design, with students learning how to use state-of-the-art software tools for configuring, simulating and analysing complex electronic layouts. “Through this experimental work students gain the technical skills they need to design and fabricate integrated circuits, and to optimize their performance and reliability through techniques like failure analysis,” says Professor Dai Jiyan, PolyU Associate Dean of Students, who also teaches the module on thin-film materials. “This hands-on experience helps to prepare them for working in a manufacturing facility or for continuing their studies at the PhD level.”
+Also integrated into the teaching programme is the use of artificial intelligence to assist key tasks, such as defect analysis, materials selection and image processing. Indeed, PolyU has established a joint laboratory with Huawei to investigate possible applications of AI tools in electronic design, providing the students with early exposure to emerging computational methods that are likely to shape the future of the microelectronics industry. “One of our key characteristics is that we embed AI into our teaching and laboratory work,” says Dai. “Two of the modules are directly related to AI, while the joint lab with Huawei helps students to experiment with using AI in circuit design.”
+Now in its third year, the Master’s programme was designed in collaboration with Hong Kong’s Applied Science and Technology Research Institute (ASTRI), established in 2000 to enhance the competitiveness of the region through the use of advanced technologies. Researchers at PolyU already pursue joint projects with ASTRI in areas like chip design, microfabrication and failure analysis. As part of the programme, these collaborators are often invited to give guest lectures or to guide the laboratory work. “Sometimes they even provide some specialized instruments for the students to use in their experiments,” says Zhao. “We really benefit from this collaboration.”
+Once primed with the knowledge and experience from the taught modules, the students have the opportunity to work alongside one of the faculty members on a short research project. They can choose whether to focus on a topic that is relevant to present-day manufacturing, such as materials processing or advanced packaging technologies, or to explore the potential of emerging materials and devices across applications ranging from solar cells and microfluidics to next-generation memories and neuromorphic computing.
+“It’s very interesting for the students to get involved in these projects,” says Zhao. “They learn more about the research process, which can make them more confident to take their studies to the next level. All of our faculty members are engaged in important work, and we can guide the students towards a future research field if that’s what they are interested in.”
+There are also plenty of progression opportunities for those who are more interested in pursuing a career in industry. As well as providing support and advice through its joint lab in AI, Huawei arranges visits to its manufacturing facilities and offers some internships to interested students. PolyU also organizes visits to Hong Kong’s Science Park, home to multinational companies such as Infineon as well as a large number of start-up companies in the microelectronics sector. Some of these might support a student’s research project, or offer an internship in areas such as circuit design or microfabrication.
+The international outlook offered by PolyU has made the Master’s programme particularly appealing to students from mainland China, but Zhao and Dai believe that the forward-looking ethos of the course should make it an appealing option for graduates across Asia and beyond. “Through the programme, the students gain knowledge about all aspects of the microelectronics industry, and how it is likely to evolve in the future,” says Dai. “The knowledge and technical skills gained by the students offer them a competitive edge for building their future career, whether they want to find a job in industry or to continue their research studies.”
+The post Master’s programme takes microelectronics in new directions appeared first on Physics World.
+]]>The post Resonant laser ablation selectively destroys pancreatic tumours appeared first on Physics World.
+]]>Thermal ablation techniques, such as radiofrequency, microwave or laser ablation, could provide a treatment option for patients with locally advanced PDAC, but existing methods risk damaging surrounding blood vessels and healthy pancreatic tissues. The new approach, described in Optica, uses the molecular fingerprint of pancreatic tumours to enable selective ablation.
+The technique exploits the fact that PDAC tissue contains a large amount of collagen compared with healthy pancreatic tissue. Amide-I collagen fibres exhibit a strong absorption peak at 6.1 µm, so the researchers surmised that tuning the treatment laser to this resonant wavelength could enable efficient tumour ablation with minimal collateral thermal damage. As such, they designed a femtosecond pulsed laser that can deliver 6.1 µm pulses with a power of more than 1 W.
+
“We developed a mid-infrared femtosecond laser system for the selective tissue ablation experiment,” says team leader Houkun Liang. “The system is tunable in the wavelength range of 5 to 11 µm, aligning with various molecular fingerprint absorption peaks such as amide proteins, cholesteryl ester, hydroxyapatite and so on.”
+Liang and colleagues first examined the ablation efficiency of three different laser wavelengths on two types of pancreatic cancer cells. Compared with non-resonant wavelengths of 1 and 3 µm, the collagen-resonant 6.1 µm laser was far more effective in killing pancreatic cancer cells, reducing cell viability to ranges of 0.27–0.32 and 0.37–0.38, at 0 and 24 h, respectively.
+ +The team observed similar results in experiments on ectopic PDAC tumours cultured on the backs of mice. Irradiation at 6.1 µm led to five to 10 times deeper tumour ablation than seen for the non-resonant wavelengths (despite using a laser power of 5 W for 1 µm ablation and just 500 mW for 6.1 and 3 µm), indicating that 6.1 µm is the optimal wavelength for PDAC ablation surgery.
+To validate the feasibility and safety of 6.1 µm laser irradiation, the team used the technique to treat PDAC tumours on live mice. Nine days after ablation, the tumour growth rate in treated mice was significantly suppressed, with an average tumour volume of 35.3 mm3. In contrast, tumour volume in a control group of untreated mice reached an average of 292.7 mm3, roughly eight times the size of the ablated tumours. No adverse symptoms were observed following the treatment.
+The researchers also used 6.1 µm laser irradiation to ablate pancreatic tissue samples (including normal tissue and PDAC) from 13 patients undergoing surgical resection. They used a laser power of 1 W and four scanning speeds (0.5, 1, 2 and 3 mm/s) with 10 ablation passes, examining 20 to 40 samples for each parameter.
+At the slower scanning speeds, excessive energy accumulation resulted in comparable ablation depths. At speeds of 2 or 3 mm/s, however, the average ablation depths in PDAC samples were 2.30 and 2.57 times greater than in normal pancreatic tissue, respectively, demonstrating the sought-after selective ablation. At 3 mm/s, for example, the ablation depth in tumour was 1659.09±405.97 µm, compared with 702.5±298.32 µm in normal pancreas.
+The findings show that by carefully controlling the laser power, scanning speed and number of passes, near-complete ablation of PDACs can be achieved, with minimal damage to surrounding healthy tissues.
+To further investigate the clinical potential of this technique, the researchers developed an anti-resonant hollow-core fibre (AR-HCF) that can deliver high-power 6.1 µm laser pulses deep inside the human body. The fibre has a core diameter of approximately 113 µm and low bending losses at radii under 10 cm. The researchers used the AR-HCF to perform 6.1 µm laser ablation of PDAC and normal pancreas samples. The ablation depth in PDAC was greater than in normal pancreas, confirming the selective ablation properties.
+“We are working together with a company to make a medical-grade fibre system to deliver the mid-infrared femtosecond laser. It consists of AR-HCF to transmit mid-infrared femtosecond pulses, a puncture needle and a fibre lens to focus the light and prevent liquid tissue getting into the fibre,” explains Liang. “We are also making efforts to integrate an imaging unit into the fibre delivery system, which will enable real-time monitoring and precise surgical guidance.”
+ +Next, the researchers aim to further optimize the laser parameters and delivery systems to improve ablation efficiency and stability. They also plan to explore the applicability of selective laser ablation to other tumour types with distinct molecular signatures, and to conduct larger-scale animal studies to verify long-term safety and therapeutic outcomes.
+“Before this technology can be used for clinical applications, highly comprehensive biological safety assessments are necessary,” Liang emphasizes. “Designing well-structured clinical trials to assess efficacy and risks, as well as navigating regulatory and ethical approvals, will be critical steps toward translation. There is a long way to go.”
+The post Resonant laser ablation selectively destroys pancreatic tumours appeared first on Physics World.
+]]>The post Doorway states spotted in graphene-based materials appeared first on Physics World.
+]]>Low-energy electron (LEE) emission from solids is used across a range of materials analysis and processing applications including scanning electron microscopy and electron-beam induced deposition. However, the precise physics of the emission process is not well understood.
+Electrons are ejected from a material when a beam of electrons is fired at its surface. Some of these incident electrons will impart energy to electrons residing in the material, causing some resident electrons to be emitted from the surface. In the simplest model, the minimum energy needed for this LEE emission is the electron binding energy of the material.
+In this new study, however, researchers have shown that exceeding the binding energy is not enough for LEE emission from graphene-based materials. Not only does the electron need this minimum energy, it must also be in a specific doorway state or it is unlikely to escape. The team compare this phenomenon to the predicament of a frog in a cardboard box with a window. Not only must the frog hop a certain height to escape the box, it must also begin its hop from a position that will carry it through the window (see figure).
+For most materials, the energy spectrum of LEE electrons is featureless. However, it was known that graphite’s spectrum has an “X state” at about 3.3 eV, where emission is enhanced. This state could be related to doorway states.
+To search for doorway states, the Vienna team studied LEE emission from graphite as well as from single-layer and bi-layer graphene. Graphene is a sheet of carbon just one atom thick. Sheets can stick together via the relatively weak van der Waals force to create multilayer graphene – and ultimately graphite, which comprises a large number of layers.
+Because electrons are mostly confined within the graphene layers, the electronic states of single-layer, bi-layer and multi-layer graphene are broadly similar. As a result, it was expected that these materials would have similar LEE emission spectra. However, the Vienna team found a surprising difference.
+The team made their discovery by firing a beam of relatively low energy electrons (173 eV) incident at 60° to the surface of single-layer and bi-layer graphene as well as graphite. The scattered electrons are then detected at the same angle of reflection. Meanwhile, a second detector is pointed normal to the surface to capture any emitted electrons. In quantum mechanics electrons are indistinguishable, so the modifiers scattered and emitted are illustrative, rather than precise.
+ +The team looked for coincident signals in both detectors and plotted their results as a function of energy in 2D “heat maps”. These plots revealed that bi-layer graphene and graphite each had doorway states – but at different energies. However, single-layer graphene did not appear to have any doorway states. By combining experiments with calculations, the team showed that doorway states emerge above a certain number of layers. As a result the researchers showed that graphite’s X state can be attributed in part to a doorway state that appears at about five layers of graphene.
+“For the first time, we’ve shown that the shape of the electron spectrum depends not only on the material itself, but crucially on whether and where such resonant doorway states exist,” explains Anna Niggas of the Vienna University of Technology (TU Wien).
+As well as providing important insights into how the electronic properties of graphene morph into those of graphite, the team says that their research could also shed light on the properties of other layered materials.
+The research is described in Physical Review Letters.
+The post Doorway states spotted in graphene-based materials appeared first on Physics World.
+]]>The post NASA’s Jet Propulsion Lab lays off a further 10% of staff appeared first on Physics World.
+]]>Managed by the California Institute of Technology in Pasadena, JPL oversees scientific missions such as the Psyche asteroid probe, the Europa Clipper and the Perseverance rover on Mars. The lab also operates the Deep Space Network that keeps Earth in communication with unmanned space missions. JPL bosses had already laid off about 530 staff – and 140 contractors – in February last year, followed by another 325 people in November 2024.
+JPL director Dave Gallagher insists, however, that the new layoffs are not related to the current US government shutdown that began on 1 October. “[They are] essential to securing JPL’s future by creating a leaner infrastructure, focusing on our core technical capabilities, maintaining fiscal discipline, and positioning us to compete in the evolving space ecosystem,” he says in a message to employees.
+Judy Chu, Democratic Congresswoman for the constituency that includes JPL, is less optimistic. “Every layoff devastates the highly skilled and uniquely talented workforce that has made these accomplishments possible,” she says. “Together with last year’s layoffs, this will result in an untold loss of scientific knowledge and expertise that threatens the very future of American leadership in space exploration and scientific discovery.”
+John Logsdon, professor emeritus at George Washington University and founder of the university’s Space Policy Institute, says that the cuts are a direct result of the Trump administration’s approach to science and technology. “The administration gives low priority to robotic science and exploration, and has made draconic cuts to the science budget; that budget supports JPL’s work,” he told Physics World. “With these cuts, there is not enough money to support a JPL workforce sized for more ambitious activities. Ergo, staff cuts.”
+The post NASA’s Jet Propulsion Lab lays off a further 10% of staff appeared first on Physics World.
+]]>The post How to solve the ‘future of physics’ problem appeared first on Physics World.
+]]>That enjoyment continued beyond school. I ended up doing a physics degree at the University of Oxford before working on the discovery of the gluon at the DESY lab in Hamburg for my PhD. Since then I have used physics in industry – first with British Oxygen/Linde and later with Air Products & Chemicals – to solve all sorts of different problems, build innovative devices and file patents.
+While some students have a similarly positive school experience and subsequent career path, not enough do. Quite simply, physics at school is the key to so many important, useful developments, both within and beyond physics. But we have a physics education problem, or to put it another way – a “future of physics” problem.
+There are just not enough school students enjoying and learning physics. On top of that there are not enough teachers enjoying physics and not enough students doing practical physics. The education problem is bad for physics and for many other subjects that draw on physics. Alas, it’s not a new problem but one that has been developing for years.
+Many good points about the future of physics learning were made by the Institute of Physics in its 2024 report Fundamentals of 11 to 19 Physics. The report called for more physics lessons to have a practical element and encouraged more 16-year-old students in England, Wales and Northern Ireland to take AS-level physics at 17 so that they carry their GCSE learning at least one step further.
+ +Doing so would furnish students who are aiming to study another science or a technical subject with the necessary skills and give them the option to take physics A-level. Another recommendation is to link physics more closely to T-levels – two-year vocational courses in England for 16–19 year olds that are equivalent to A-levels – so that students following that path get a background in key aspects of physics, for example in engineering, construction, design and health.
+But do all these suggestions solve the problem? I don’t think they are enough and we need to go further. The key change to fix the problem, I believe, is to have student groups invent, build and test their own projects. Ideally this should happen before GCSE level so that students have the enthusiasm and background knowledge to carry them happily forward into A-level physics. They will benefit from “pull learning” – pulling in knowledge and active learning that they will remember for life. And they will acquire wider life skills too.
+During my time in industry, I did outreach work with schools every few weeks and gave talks with demonstrations at the Royal Institution and the Franklin Institute. For many years I also ran a Saturday Science club in Guildford, Surrey, for pupils aged 8–15.
+Based on this, I wrote four Saturday Science books about the many playful and original demonstrations and projects that came out of it. Then at the University of Surrey, as a visiting professor, I had small teams of final-year students who devised extraordinary engineering – designing superguns for space launches, 3D printers for full-size buildings and volcanic power plants inter alia. A bonus was that other staff working with the students got more adventurous too.
+ +But that was working with students already committed to a scientific path. So lately I’ve been working with teachers to get students to devise and build their own innovative projects. We’ve had 14–15-year-old state-school students in groups of three or four, brainstorming projects, sketching possible designs, and gathering background information. We help them, and we bring in A-level students to help too, who gain teaching experience in the process. Students not only learn physics better but also pick up important life skills like brainstorming, team-working, practical work, analysis and presentations.
+We’ve seen lots of ingenuity and some great projects, such as an ultrasonic scanner to sense the wetness of cloth; a system to teach guitar by lighting up LEDs along the guitar neck; and a device that measures breathing using light passing through a band of Lycra worn around the patient below the ribs. We’ve also seen the value of failure, both mistakes and genuine technical problems.
+Best of all, we’ve also noticed what might be dubbed the “combination bonus” – students having to think about how they combine their knowledge of one area of physics with another. A project involving a sensor, for example, will often involve electronics as well as the physics of the sensor, and so students’ knowledge of both areas is enhanced.
+ +Some teachers may question how you mark such projects. The answer is don’t mark them! Project work and especially group work is difficult to mark fairly and accurately, and the enthusiasm and increased learning by students working on innovative projects will feed through into standard school exam results.
+Not trying to grade such projects will mean more students go on to study physics further, potentially to do a physics-related extended project qualification – equivalent to half an A-level where students research a topic to university level – and do it well. Long term, more students will take physics with them into the world of work, from physics to engineering or medicine, from research to design or teaching.
+Such projects are often fun for students and teachers. Teachers are often intrigued and amazed by students’ ideas and ingenuity. So, let’s choose to do student-invented project work at school and let’s finally solve the future of physics problem.
+The post How to solve the ‘future of physics’ problem appeared first on Physics World.
+]]>The post A recipe for quantum chaos appeared first on Physics World.
+]]>It is, however, an essential prerequisite for the design of quantum computing platforms and for the benchmarking of quantum simulators.
+A key concept here is that of quantum ergodicity. This is because quantum ergodic dynamics can be harnessed to generate highly entangled quantum states.
+In classical statistical mechanics, an ergodic system evolving over time will explore all possible microstates uniformly. Mathematically, this means that a sufficiently large collection of random samples from an ergodic process can represent the average statistical properties of the entire process.
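+In symbols – a textbook statement included here for orientation rather than a formula from the paper – ergodicity means that for an observable f evaluated along a trajectory x(t),

$$\lim_{T\to\infty}\frac{1}{T}\int_0^T f\big(x(t)\big)\,\mathrm{d}t=\langle f\rangle_{\mathrm{ensemble}},$$

+that is, following a single trajectory for long enough yields the same averages as sampling the entire ensemble of microstates.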
+Quantum ergodicity is simply the extension of this concept to the quantum realm.
+Closely related to this is the idea of chaos. A chaotic system is one that is very sensitive to its initial conditions. Small changes can be amplified over time, causing large changes in the future.
+The ideas of chaos and ergodicity are intrinsically linked as chaotic dynamics often enable ergodicity.
+Until now, it has been very challenging to predict which experimentally preparable initial states will trigger quantum chaos and ergodic dynamics over a reasonable time scale.
+In a new paper published in Reports on Progress in Physics, a team of researchers have proposed an ingenious solution to this problem using the Bose–Hubbard Hamiltonian.
+They took as an example ultracold atoms in an optical lattice (a typical choice for experiments in this field) to benchmark their method.
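+For orientation, the Bose–Hubbard Hamiltonian in its standard form reads (sign and notation conventions, and any extra terms used in the paper, may differ):

$$\hat{H}=-J\sum_{\langle i,j\rangle}\big(\hat{b}_i^{\dagger}\hat{b}_j+\hat{b}_j^{\dagger}\hat{b}_i\big)+\frac{U}{2}\sum_i\hat{n}_i(\hat{n}_i-1),$$

+where J is the tunnelling amplitude between neighbouring lattice sites, U is the on-site interaction energy and the number operator n̂ᵢ counts the atoms on site i.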
+The results show that there are certain tangible threshold values which must be crossed in order to ensure the onset of quantum chaos.
+These results will be invaluable for experimentalists working across a wide range of quantum sciences.
+Pausch et al. 2025 Rep. Prog. Phys. 88 057602
++
The post A recipe for quantum chaos appeared first on Physics World.
+]]>The post Neural simulation-based inference techniques at the LHC appeared first on Physics World.
+]]>These are often performed using statistical techniques such as the method of maximum likelihood. However, given the size of datasets generated, reduction techniques, such as grouping data into bins, are often necessary.
+These can lead to a loss of sensitivity, particularly in non-linear cases like off-shell Higgs boson production and effective field theory measurements. The non-linearity in these cases comes from quantum interference and traditional methods are unable to optimally distinguish the signal from background.
+In this paper, the ATLAS collaboration pioneered the use of a neural network based technique called neural simulation-based inference (NSBI) to combat these issues.
+A neural network is a machine learning model originally inspired by how the human brain works. It’s made up of layers of interconnected units called neurons, which process information and learn patterns from data. Each neuron receives input, performs a simple calculation, and passes the result to other neurons.
+NSBI uses these neural networks to analyse each particle collision event individually, preserving more information and improving accuracy.
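+To illustrate the core idea – that a classifier trained on simulated events can stand in for an otherwise intractable per-event likelihood ratio – here is a minimal, self-contained sketch. The one-dimensional Gaussian “simulations”, the variable names and the use of a simple logistic-regression classifier are all assumptions made for illustration; none of them come from the ATLAS analysis.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# Toy "simulated" events, one observable per event:
# hypothesis H0 (background only) and H1 (signal + background).
h0 = rng.normal(loc=0.0, scale=1.0, size=n)
h1 = rng.normal(loc=0.5, scale=1.0, size=n)

# Train a classifier to separate the two simulated samples.
X = np.concatenate([h0, h1]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])
clf = LogisticRegression().fit(X, y)

# For balanced samples the classifier output s(x) approximates p(H1|x),
# so the per-event likelihood ratio is r(x) = p(x|H1)/p(x|H0) = s/(1 - s).
x_test = np.array([[0.0], [1.0], [2.0]])
s = clf.predict_proba(x_test)[:, 1]
r_learned = s / (1.0 - s)

# For these Gaussian toys the exact ratio is known, so the
# approximation can be checked directly: r(x) = exp(0.5*x - 0.125).
r_exact = np.exp(0.5 * x_test.ravel() - 0.125)
print(np.round(r_learned, 3), np.round(r_exact, 3))

+In a real analysis the classifier is a deep neural network over many observables per event, but the logic – learn the ratio from simulations, then use it event by event in the likelihood – is the same.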
+The framework developed here can handle many sources of uncertainty and includes tools to measure how confident scientists can be in their results.
+The researchers benchmarked their method by using it to calculate the Higgs boson signal strength and compared it to previous methods, with impressive results.
+The greatly improved sensitivity gained from using this method will be invaluable in the search for physics beyond the Standard Model in future experiments at ATLAS and beyond.
+The ATLAS Collaboration, 2025 Rep. Prog. Phys. 88 067801
++
The post Neural simulation-based inference techniques at the LHC appeared first on Physics World.
+]]>The post Chip-integrated nanoantenna efficiently harvests light from diamond defects appeared first on Physics World.
+]]>
Nitrogen-vacancy (NV) centres are point defects that occur when one carbon atom in diamond’s lattice structure is replaced by a nitrogen atom next to an empty lattice site (a vacancy). Together, this nitrogen atom and its adjacent vacancy behave like a negatively charged entity with an intrinsic quantum spin.
+When excited with laser light, an electron in an NV centre can be promoted into an excited state. As the electron decays back to the ground state, it emits light. The exact absorption-and-emission process is complicated by the fact that both the ground state and the excited state of the NV centre have three sublevels (spin triplet states). However, by exciting an individual NV centre repeatedly and collecting the photons it emits, it is possible to determine the spin state of the centre.
+The problem, explains Boaz Lubotzky, who co-led this research effort together with his colleague Ronen Rapaport, is that NV centres radiate over a wide range of angles. Hence, without an efficient collection interface, much of the light they emit is lost.
+Lubotzky and colleagues say they have now solved this problem thanks to a hybrid nanostructure made from a PMMA dielectric layer above a silver grating. This grating is arranged in a precise bullseye pattern that accurately guides light in a well-defined direction thanks to constructive interference. Using a nanometre-accurate positioning technique, the researchers placed the nanodiamond containing the NV centres exactly at the optimal location for light collection: right at the centre of the bullseye.
+ +For standard optics with a numerical aperture (NA) of about 0.5, the team found that the system captures around 80% of the light emitted from the NV centres. When NA > 0.7, this value exceeds 90%, while for NA > 0.8, Lubotzky says it approaches unity.
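+A back-of-the-envelope estimate shows why such a directional interface matters. Assuming an isotropic point emitter in free space (a simplification, not a figure from the paper), the fraction of photons entering a lens of collection half-angle θ is

$$\eta=\frac{1-\cos\theta}{2},\qquad \mathrm{NA}=\sin\theta\;\Rightarrow\;\eta(\mathrm{NA}=0.5)=\frac{1-\cos30^{\circ}}{2}\approx7\%,$$

+so a bare emitter would deliver only a few per cent of its light into an NA = 0.5 lens; redirecting the emission with the bullseye antenna is what lifts that figure to around 80%.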
+“The device provides a chip-based, room-temperature interface that makes NV emission far more directional, so a larger fraction of photons can be captured by standard lenses or coupled into fibres and photonic chips,” he tells Physics World. “Collecting more photons translates into faster measurements, higher sensitivity and lower power, thereby turning NV centres into compact precision sensors and also into brighter, easier-to-use single-photon sources for secure quantum communication.”
+ +The researchers say their next priority is to transition their prototype into a plug-and-play, room-temperature module – one that is fully packaged and directly coupled to fibres or photonic chips – with wafer-level deterministic placement for arrays. “In parallel, we will be leveraging the enhanced collection for NV-based magnetometry, aiming for faster, lower-power measurements with improved readout fidelity,” says Lubotzky. “This is important because it will allow us to avoid repeated averaging and enable fast, reliable operation in quantum sensors and processors.”
+They detail their present work in APL Quantum.
+The post Chip-integrated nanoantenna efficiently harvests light from diamond defects appeared first on Physics World.
+]]>The post Illuminating quantum worlds: a Diwali conversation with Rupamanjari Ghosh appeared first on Physics World.
+]]>“Diwali comes from Deepavali, meaning a ‘row of lights’. It marks the triumph of light over dark; good over evil; and knowledge over ignorance,” Ghosh explains. “In science too, every discovery is a Diwali – a victory of knowledge over ignorance.”
+With 2025 being celebrated as the International Year of Quantum Science and Technology, a victory of knowledge over ignorance couldn’t ring truer. “It has taken us a hundred years since the birth of quantum mechanics to arrive at this point, where quantum technologies are poised to transform our lives,” says Ghosh.
+Ghosh has another reason to celebrate, having been named as this year’s Institute of Physics (IOP) Homi Bhabha lecturer. The IOP and the Indian Physical Association (IPA) jointly host the Homi Bhabha and Cockcroft Walton bilateral exchange of lecturers. Running since 1998, these international programmes aim to promote dialogue on global challenges through physics and provide physicists with invaluable opportunities for global exposure and professional growth. Ghosh’s online lecture, entitled “Illuminating quantum frontiers: from photons to emerging technologies”, will be aired at 3 p.m. GMT on Wednesday 22 October.
+Ghosh’s career in physics took off in the mid-1980s, when she and American physicist Leonard Mandel – who is often referred to as one of the founding fathers of quantum optics – demonstrated a new quantum source of twin photons through spontaneous parametric down-conversion: a process where a high-energy photon splits into two lower-energy, correlated photons (Phys. Rev. Lett. 59, 1903).
+ +“Before that,” she recalls, “no-one was looking for quantum effects in this nonlinear optical process. The correlations between the photons defied classical explanation. It was an elegant early verification of quantum nonlocality.”
+Those entangled photon pairs are now the building blocks of quantum communication and computation. “We’re living through another Diwali of light,” she says, “where theoretical understanding and experimental innovation illuminate each other.”
+During Diwali, lamps unite households in a shimmering network of connection, and so too does entanglement of photons. “Quantum entanglement reminds us that connection transcends locality,” Ghosh says. “In the same way, the lights of Diwali connect us across borders and cultures through shared histories.”
+Her own research extends that metaphor further. Ghosh’s team has worked on mapping quantum states of light onto collective atomic excitations. These “slow-light” techniques – using electromagnetically induced transparency or Raman interactions – allow photons to be stored and retrieved, forming the backbone of long-distance quantum communication (Phys. Rev. A 88 023852; EPL 105 44002).
+“Symbolically,” she adds, “it’s like passing the flame from one diya (lamp) to another. We’re not just spreading light – we’re preserving, encoding and transmitting it. Success comes through connection and collaboration.”
+
Ghosh is quick to note that in quantum physics, “darkness” is far from empty. “In quantum optics, even the vacuum is rich – with fluctuations that are essential to our understanding of the universe.”
+Her group studies the transition from quantum to classical systems, using techniques such as error correction, shielding and coherence-preserving materials. “Decoherence – the loss of quantum behaviour through environmental interaction – is a constant threat. To build reliable quantum technologies, we must engineer around this fragility,” Ghosh explains.
+There are also human-engineered shadows: some weaknesses in quantum communication devices aren’t due to the science itself – they come from mistakes or flaws in how humans built them. Hackers can exploit these “side channels” to get around security. “Security,” she warns, “is only as strong as the weakest engineering link.”
+Beyond the lab, Ghosh finds poetic meaning in these challenges. “Decoherence isn’t just a technical problem – it helps us understand the arrows of time, why the universe evolves irreversibly. The dark side has its own lessons.”
+ +For Ghosh, Diwali’s illumination is also a call for inclusivity in science. “No corner should remain dark,” she says. “Science thrives on diversity. Diverse teams ask broader questions and imagine richer answers. It’s not just morally right – it’s good for science.”
+She argues that equity is not sameness but recognition of uniqueness. “Innovation doesn’t come from conformity. Gender diversity, for example, brings varied cognitive and collaborative styles – essential in a field like quantum science, where intuition is constantly stretched.”
+The shadows she worries most about are not in the lab, but in academia itself. “Unconscious biases in mentorship or gatekeeping in opportunity can accumulate to limit visibility. Institutions must name and dismantle these hidden shadows through structural and cultural change.”
+Her vision of inclusion extends beyond gender. “We shouldn’t think of work and life as opposing realms to ‘balance’,” she says. “It’s about creating harmony among all dimensions of life – work, family, learning, rejuvenation. That’s where true brilliance comes from.”
+As the rows of diyas are lit this Diwali, Ghosh’s reflections remind us that light – whether classical or quantum – is both a physical and moral force: it connects, illuminates and endures. “Each advance in quantum science,” she concludes, “is another step in the age-old journey from darkness to light.”
+This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
+Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.
+Find out more on our quantum channel.
++
The post Illuminating quantum worlds: a Diwali conversation with Rupamanjari Ghosh appeared first on Physics World.
+]]>The post Influential theoretical physicist and Nobel laureate Chen-Ning Yang dies aged 103 appeared first on Physics World.
+]]>Born on 22 September 1922 in Hefei, China, Yang completed a BSc at the National Southwest Associated University in Kunming in 1942. After finishing an MSc in statistical physics at Tsinghua University two years later, in 1945 he moved to the University of Chicago in the US as part of a government-sponsored programme. He received his PhD in physics in 1948 working under the guidance of Edward Teller.
+In 1949 Yang moved to the Institute for Advanced Study in Princeton, where he made pioneering contributions to quantum field theory, working together with Robert Mills. In 1953 they proposed the Yang-Mills theory, which became a cornerstone of the Standard Model of particle physics.
+It was also at Princeton where Yang began a fruitful collaboration with Lee, who died last year aged 97. Their work on parity – a property of elementary particles that expresses their behaviour upon reflection in a mirror – led to the duo winning the Nobel prize.
+In the early 1950s, physicists had been puzzled by the decays of two subatomic particles, known as tau and theta, which are identical except that the tau decays into three pions with a net parity of -1, while a theta particle decays into two pions with a net parity of +1.
+There were two possible explanations: either the tau and theta are different particles, or parity is not conserved in the weak interaction. Yang and Lee proposed various ways to test the latter idea (Phys. Rev. 104 254).
+This “parity violation” was later proved experimentally by, among others, Chien-Shiung Wu at Columbia University. She carried out an experiment based on the radioactive decay of unstable cobalt-60 nuclei into nickel-60 – what became known as the “Wu experiment”. For their work, Yang, who was 35 at the time, shared the 1957 Nobel Prize for Physics with Lee.
+In 1965 Yang moved to Stony Brook University, becoming the first director of the newly founded Institute for Theoretical Physics, which is now known as the C N Yang Institute for Theoretical Physics. During this time he also contributed to advancing science and education in China, setting up the Committee on Educational Exchange with China – a programme that has sponsored some 100 Chinese scholars to study in the US.
+In 1997, Yang returned to Beijing where he became an honorary director of the Centre for Advanced Study at Tsinghua University. He then retired from Stony Brook in 1999, becoming a professor at Tsinghua University. During his time in the US, Yang obtained US citizenship, but renounced it in 2015.
+More recently, Yang was involved in debates over whether China should build the Circular Electron Positron Collider (CEPC) – a huge 100 km-circumference underground collider that would study the Higgs boson in unprecedented detail and be a successor to CERN’s Large Hadron Collider. Yang took a sceptical view, calling it “inappropriate” for a developing country that is still struggling with “more acute issues like economic development and environment protection”.
+Yang also expressed concern that the science performed on the CEPC is just “guesswork”, without guaranteed results. “I am not against the future of high-energy physics, but the timing is really bad for China to build such a super collider,” he noted in 2016. “Even if they see something with the machine, it’s not going to benefit the life of Chinese people any sooner.”
+As well as the Nobel prize, Yang won many other awards such as the US National Medal of Science in 1986, the Einstein Medal in 1995, which is presented by the Albert Einstein Society in Bern, and the American Physical Society’s Lars Onsager Prize in 1990.
+“The world has lost one of the most influential physicists of the modern era,” noted Stony Brook president Andrea Goldsmith in a statement. “His legacy will continue through his transformational impact on the field of physics and through the many colleagues and students influenced by his teaching, scholarship and mentorship.”
+The post Influential theoretical physicist and Nobel laureate Chen-Ning Yang dies aged 103 appeared first on Physics World.
+]]>The post ‘Science needs all perspectives – male, female and everything in-between’: Brazilian astronomer Thaisa Storchi Bergmann appeared first on Physics World.
+]]>At the time, Storchi Bergmann could not have imagined that one day this path would lead to cosmic discoveries and international recognition at the frontiers of astrophysics. “I always had the curiosity inside me,” she recalls. “It was something I carried since adolescence.”
+That curiosity almost got lost to another discipline. By the time Storchi Bergmann was about to enter university, she had been swayed by a cousin living with her family who was passionate about architecture. In 1974 she began studying architecture at the Federal University of Rio Grande do Sul (UFRGS). “But I didn’t really like technical drawing. My favourite part of the course were physics classes,” she says. Within a semester, she switched to physics.
+There she met Edemundo da Rocha Vieira, the first astrophysicist UFRGS ever hired – who later went on to structure the university’s astronomy department. He nurtured Storchi Bergmann’s growing fascination with the universe and introduced her to research.
+In 1977, newly married after graduation, Storchi Bergmann followed her husband to Rio de Janeiro, where she did a master’s degree and worked with William Kunkel, an American astronomer who was in Rio to help establish Brazil’s National Astrophysics Laboratory. She began working on data from a photometric system to measure star radiation. “But Kunkel said galaxies were a lot more interesting to study, and that stuck in my head,” she says.
+Three years after moving to Rio, she returned to Porto Alegre, in Rio Grande do Sul, to start her doctoral research and teach at UFRGS. Vital to her career was her decision to join the group of Miriani Pastoriza, one of the pioneers of extragalactic astrophysics in Latin America. “She came from Argentina, where [in the late 1970s and early 1980s] scientists were being strongly persecuted [by the country’s military dictatorship] at the time,” she recalls. Pastoriza studied galaxies with “peculiar nuclei” – objects later known to harbour supermassive black holes. Under Pastoriza’s guidance, she moved from stars to galaxies, laying the foundation for her career.
+Between 1986 and 1987, Storchi Bergmann often travelled to Chile to make observations and gather data for her PhD, using some of the largest telescopes available at the time. Then came a transformative period – a postdoc fellowship in Maryland, US, just as the Hubble Space Telescope was launched in 1990. “Each Thursday, I would drive to Baltimore for informal bag-lunch talks at the Space Telescope Science Institute, absorbing new results on active galactic nuclei (AGN) and supermassive black holes,” Storchi Bergmann recalls.
+In 1991, during an observing campaign, she and a collaborator saw something extraordinary in the galaxy NGC 1097: gas moving at immense speeds, captured by the galaxy’s central black hole. The work, published in 1993, became one of the earliest documented cases of what are now called “tidal disruption events”, in which a star or cloud gets too close to a black hole and is torn apart.
+Her research also contributed to one of the defining insights of the Hubble era: that every massive galaxy hosts a central black hole. “At first, we didn’t know if they were rare,” she explains. “But gradually it became clear: these objects are fundamental to galaxy evolution.”
+ +Another collaboration brought her into contact with Daniela Calzetti, whose work on the effects of interstellar dust led to the formulation of the widely used “Calzetti law”. These and other contributions placed Storchi Bergmann among the most cited scientists worldwide, recognition of which came in 2015 when she received the L’Oréal-UNESCO Award for Women in Science.
+Her scientific achievements, however, unfolded against personal and structural obstacles. As a young mother, she often brought her baby to observatories and conferences so she could breastfeed. This kind of juggling will be familiar to many women in science.
+“It was never easy,” Storchi Bergmann reflects. “I was always running, trying to do 20 things at once.” The lack of childcare infrastructure in universities compounded the challenge. She recalls colleagues who succeeded by giving up on family life altogether. “That is not sustainable,” she insists. “Science needs all perspectives – male, female and everything in-between. Otherwise, we lose richness in our vision of the universe.”
+When she attended conferences early in her career, she was often the only woman in the room. Today, she says, the situation has greatly improved, even if true equality remains distant.
+Now a tenured professor at UFRGS and a member of the Brazilian Academy of Sciences, Storchi Bergmann continues to push at the cosmic frontier. Her current focus is the Legacy Survey of Space and Time (LSST), about to begin at the Vera Rubin Observatory in Chile.
+Her group is part of the AGN science collaboration, developing methods to analyse the characteristic flickering of accreting black holes. With students, she is experimenting with automated pipelines and artificial intelligence to make sense of and manage the massive amounts of data.
+Yet this frontier science is not guaranteed. Storchi Bergmann is frustrated by the recent collapse in research scholarships. Historically, her postgraduate programme enjoyed a strong balance of grants from both of Brazil’s federal research funding agencies, CNPq (from the Ministry of Science) and CAPES (from the Ministry of Education). But cuts at CNPq, she says, have left students without support, and CAPES has not filled the gap.
+“The result is heartbreaking,” she says. “I have brilliant students ready to start, including one from Piauí (a state in north-eastern Brazil), but without a grant, they simply cannot continue. Others are forced to work elsewhere to support themselves, leaving no time for research.”
+She is especially critical of the policy of redistributing scarce funds away from top-rated programmes to newer ones without expanding the overall budget. “You cannot build excellence by dismantling what already exists,” she argues.
+For her, the consequences go beyond personal frustration. They risk undermining decades of investment that placed Brazil on the international astrophysics map. Despite these challenges, Storchi Bergmann remains driven and continues to mentor master’s and PhD students, determined to prepare them for the LSST era.
+At the heart of her research is a question as grand as any in cosmology: which came first – the galaxy or its central black hole? The answer, she believes, will reshape our understanding of how the universe came to be. And it will carry with it the fingerprint of her work: the persistence of a Brazilian scientist who followed her curiosity from a home-made lab to the centres of galaxies, overcoming obstacles along the way.
+The post ‘Science needs all perspectives – male, female and everything in-between’: Brazilian astronomer Thaisa Storchi Bergmann appeared first on Physics World.
+]]>The post Precision sensing experiment manipulates Heisenberg’s uncertainty principle appeared first on Physics World.
+]]>“Heisenberg’s principle says that if two operators – for example, position x and momentum, p – do not commute, then one cannot simultaneously measure both of them to absolute precision,” explains team leader Ting Rei Tan of the University of Sydney’s Nano Institute. “Our result shows that one can instead construct new operators – namely ‘modular position’ x̂ and ‘modular momentum’ p̂. These operators can be made to commute, meaning that we can circumvent the usual limitation imposed by the uncertainty principle.”
+The modular measurements, he says, give the true measurement of displacements in position and momentum of the particle if the distance is less than a specific length l, known as the modular length. In the new work, they measured x̂ = x mod lx and p̂ = p mod lp, where lx and lp are the modular lengths in position and momentum, respectively.
+ +“Since the two modular operators x̂ and p̂ commute, this means that they are now bounded by an uncertainty principle where the product is larger or equal to 0 (instead of the usual ℏ/2),” adds team member Christophe Valahu. “This is how we can use them to sense position and momentum below the standard quantum limit. The catch, however, is that this scheme only works if the signal being measured is within the sensing range defined by the modular lengths.”
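+A compact way to see why such modular observables can commute is through the periodic (displacement-type) operators they correspond to – a standard textbook argument, sketched here with conventions that may differ from those used in the paper:

$$e^{\,i2\pi\hat{x}/l_x}\,e^{\,i2\pi\hat{p}/l_p}=e^{\,i2\pi\hat{p}/l_p}\,e^{\,i2\pi\hat{x}/l_x}\,e^{-i\,4\pi^{2}\hbar/(l_x l_p)},$$

+so the residual phase factor vanishes – and the two measurements become compatible – whenever the modular lengths satisfy lx lp = 2πħ/m for an integer m. Measuring x mod lx and p mod lp amounts to measuring these two commuting exponentials, which is why the usual ħ/2 bound does not constrain them.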
+The researchers stress that Heisenberg’s uncertainty principle is in no way “broken” by this approach, but it does mean that when observables associated with these new operators are measured, the precision of these measurements is not limited by this principle. “What we did was to simply push the uncertainty to a sensing range that is relatively unimportant for our measurement to obtain a better precision at finer details,” Valahu tells Physics World.
+This concept, Tan explains, is related to an older method known as quantum squeezing that also works by shifting uncertainties around. The difference is that in squeezing, one reshapes the probability, reducing the spread in position at the cost of enlarging the spread of momentum, or vice versa. “In our scheme, we instead redistribute the probability, reducing the uncertainties of position and momentum within a defined sensing range, at the cost of an increased uncertainty if the signal is not guaranteed to lie within this range,” Tan explains. “We effectively push the unavoidable quantum uncertainty to places we don’t care about (that is, big, coarse jumps in position and momentum) so the fine details we do care about can be measured more precisely.
+“Thus, as long as we know the signal is small (which is almost always the case for precision measurements), modular measurements give us the correct answer.”
+The particle being measured in Tan and colleagues’ experiment was a 171Yb+ ion trapped in a so-called grid state, which is a class of error-correctable logical states for quantum bits, or qubits. The researchers then used a quantum phase estimation protocol to measure the signal they imprinted onto this state, which acts as a sensor.
+This measurement scheme is similar to one that is commonly used to measure small errors in the logical qubit state of a quantum computer. “The difference is that in this case, the ‘error’ corresponds to a signal that we want to estimate, which displaces the ion in position and momentum,” says Tan. “This idea was first proposed in a theoretical study.”
+The Sydney researchers hope their result will motivate the development of next-generation precision quantum sensors. Being able to detect extremely small changes is important for many applications of quantum sensing, including navigating environments where GPS isn’t effective (such as on submarines, underground or in space). It could also be useful for biological and medical imaging, materials analysis and gravitational systems.
+ +Their immediate goal, however, is to further improve the sensitivity of their sensor, which is currently about 14 × 10⁻²⁴ N/√Hz, and calculate its limit. “It would be interesting if we could push that to the 10⁻²⁷ N level (which, admittedly, will not be easy) since this level of sensitivity could be relevant in areas like the search for dark matter,” Tan says.
+Another direction for future research, he adds, is to extend the scheme to other pairs of observables. “Indeed, we have already taken some steps towards this: in the latter part of our present study, which is published in Science Advances, we constructed a modular number operator and a modular phase operator to demonstrate that the strategy can be extended beyond position and momentum.”
+The post Precision sensing experiment manipulates Heisenberg’s uncertainty principle appeared first on Physics World.
+]]>The post Eye implant restores vision to patients with incurable sight loss appeared first on Physics World.
+]]>AMD is the most common cause of incurable blindness in older adults. In its advanced stage, known as geographic atrophy, AMD can cause progressive, irreversible death of light-sensitive photoreceptors in the centre of the retina. This loss of photoreceptors means that light is not transduced into electrical signals, causing profound vision loss.
+The PRIMA system works by replacing these lost photoreceptors. The two-part system comprises the implant itself – a 2 × 2 mm array of 378 photovoltaic pixels – plus PRIMA glasses containing a video camera that captures images and, after processing, projects them onto the implant using near-infrared light. The pixels in the implant convert this light into electrical pulses, restoring the flow of visual information to the brain. Patients can use the glasses to focus and zoom the image that they see.
+The clinical study, led by Frank Holz of the University of Bonn in Germany, enrolled 38 participants at 17 hospital sites in five European countries. All participants had geographic atrophy due to AMD in both eyes, as well as loss of central sight in the study eye over a region larger than the implant (more than 2.4 mm in diameter), leaving only limited peripheral vision.
+Around one month after surgical insertion of the 30 μm-thick PRIMA array into one eye, the patients began using the glasses. All underwent training to learn to interpret the visual signals from the implant, with their vision improving over months of training.
+
After one year, 27 of the 32 patients who completed the trial could read letters and words (with some able to read pages in a book) and 26 demonstrated clinically meaningful improvement in visual acuity (the ability to read at least two extra lines on a standard eye chart). On average, participants could read an extra five lines, with one person able to read an additional 12 lines.
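+For context, on the logMAR-style charts typically used in such trials each line corresponds to a 0.1 change in the logarithm of the minimum resolvable angle – a generic conversion, not one stated in the study – so the quoted gains translate into

$$5\ \text{lines}=0.5\ \text{logMAR}\;\Rightarrow\;10^{0.5}\approx3.2\times\ \text{finer resolvable detail},\qquad 12\ \text{lines}=1.2\ \text{logMAR}\;\Rightarrow\;10^{1.2}\approx16\times.$$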
+Nineteen of the participants experienced side-effects from the surgical procedure, with 95% of adverse events resolving within two months. Importantly, their peripheral vision was not impacted by PRIMA implantation. The researchers note that the infrared light used by the implant is not visible to remaining photoreceptors outside the affected region, allowing patients to combine their natural peripheral vision with the prosthetic central vision.
+ +“Before receiving the implant, it was like having two black discs in my eyes, with the outside distorted,” Sheila Irvine, a trial patient treated at Moorfields Eye Hospital in the UK, says in a press statement. “I was an avid bookworm, and I wanted that back. There was no pain during the operation, but you’re still aware of what’s happening. It’s a new way of looking through your eyes, and it was dead exciting when I began seeing a letter. It’s not simple, learning to read again, but the more hours I put in, the more I pick up. It’s made a big difference.”
+The PRIMA system – originally designed by Daniel Palanker at Stanford University – is being developed and manufactured by Science Corporation. Based on these latest results, reported in the New England Journal of Medicine, the company has applied for clinical use authorization in Europe and the United States.
+The post Eye implant restores vision to patients with incurable sight loss appeared first on Physics World.
+]]>The post Single-phonon coupler brings different quantum technologies together appeared first on Physics World.
+]]>“One of the main advantages of phonons over photons is they interact with a lot of different things,” explains team leader Simon Gröblacher of the Kavli Institute of Nanoscience at Delft University of Technology. “So it’s very easy to make them interface with systems.”
+ +There are, however, a few elements still missing from the phononic circuitry developer’s toolkit. One such element is a reversible beam splitter that can either combine two phonon channels (which might be carrying quantum information transferred from different media) or split one channel into two, depending on its orientation.
+While several research groups have already investigated designs for such phonon splitters, these works largely focused on surface acoustic waves. This approach has some advantages, as waves of this type have already been widely explored and exploited commercially. Mobile phones, for example, use surface acoustic waves as filters for microwave signals. The problem is that these unconfined mechanical excitations are prone to substantial losses as phonons leak into the rest of the chip.
+Gröblacher and his collaborators chose instead to mimic the design of beam splitters used in photonic chips. They used a strip of thin silicon to fashion a waveguide for phonons that confined them in all dimensions but one, giving additional control and reducing loss. They then brought two waveguides into contact with each other so that one waveguide could “feel” the mechanical excitations in the other. This allowed phonon modes to be coupled between the waveguides – something the team demonstrated down to the single-phonon level. The researchers also showed they could tune the coupling between the two waveguides by altering the contact length.
+Although this is the first demonstration of single-mode phonon coupling in this kind of waveguide, the finite element method simulations Gröblacher and his colleagues ran beforehand made him pretty confident it would work from the outset. “I’m not surprised that it worked. I’m always surprised how hard it is to get it to work,” he tells Physics World. “Making it to look and do exactly what you design it to do – that’s the really hard part.”
+According to A T Charlie Johnson, a physicist at the University of Pennsylvania, US, whose research focuses on this area, that hard work paid off. “These very exciting new results further advance the prospects for phonon-based qubits in quantum technology,” says Johnson, who was not directly involved in the demonstration. “Integrated quantum phononics is one significant step closer.”
+As well as switching between different quantum media, the new single-phonon coupler could also be useful for frequency shifting. For instance, microwave frequencies are close to the frequencies of ambient heat, which makes signals at these frequencies much more prone to thermal noise. Gröblacher already has a company working on transducers to transform quantum information from microwave to optical frequencies with this challenge in mind, and he says a single-phonon coupler could be handy.
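+The scale of that thermal-noise problem can be seen from the Bose–Einstein occupation of a mode at room temperature – an order-of-magnitude illustration rather than a figure from the study:

$$\bar{n}=\frac{1}{e^{hf/k_BT}-1}:\qquad hf(5\ \mathrm{GHz})\approx21\ \mu\mathrm{eV}\ll k_BT(300\ \mathrm{K})\approx26\ \mathrm{meV}\;\Rightarrow\;\bar{n}\sim10^{3},\qquad hf(200\ \mathrm{THz})\approx0.8\ \mathrm{eV}\;\Rightarrow\;\bar{n}\approx e^{-32}\sim10^{-14},$$

+so a microwave mode is awash with thermal photons unless it is cooled to millikelvin temperatures, whereas an optical mode is essentially noise-free even at room temperature – the basic motivation for microwave-to-optical transduction.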
+ +One remaining challenge to overcome is dispersion, which occurs when phonon modes couple to other unwanted modes. This is usually due to imperfections in the nanofabricated device, which are hard to avoid. However, Gröblacher also has other aspirations. “I think the one component that’s missing for us to have the similar level of control over phonons as people have with photons is a phonon phase shifter,” he tells Physics World. This, he says, would allow on-chip interferometry to route phonons to different parts of a chip, and perform advanced quantum experiments with phonons.
+The study is reported in Optica.
+The post Single-phonon coupler brings different quantum technologies together appeared first on Physics World.
+]]>The post This jumping roundworm uses static electricity to attach to flying insects appeared first on Physics World.
+]]>The parasitic roundworm Steinernema carpocapsae, which lives in soil, is already known to leap some 25 times its body length into the air. It does this by curling into a loop and springing upwards, rotating hundreds of times a second.
+ +If the nematode lands successfully, it releases bacteria that kill the insect within a couple of days; the worm then feasts on the carcass and lays its eggs. If, however, it fails to attach to a host, it faces death itself.
+While static electricity plays a role in how some non-parasitic nematodes detach from large insects, little is known about whether it also helps their parasitic counterparts to attach to an insect.
+To investigate, researchers at Emory University and the University of California, Berkeley, conducted a series of experiments in which they used high-speed microscopy techniques to film the worms as they leapt onto a fruit fly.
+They did this by tethering a fly with a copper wire that was connected to a high-voltage power supply.
+They found that a potential of a few hundred volts on the fly – similar to that generated in the wild by an insect’s wings rubbing against ions in the air – induces a negative charge on the worm, creating an attractive force between the worm and the positively charged fly.
+Carrying out simulations of the worm jumps, they found that without any electrostatics, only 1 in 19 worm trajectories successfully reached their target. The greater the voltage, however, the greater the chance of landing. For 880 V, for example, the probability was 80%.
+The team also carried out experiments using a wind tunnel, finding that the presence of wind helped the nematodes drift and this also increased their chances of attaching to the insect.
+“Using physics, we learned something new and interesting about an adaptive strategy in an organism,” notes Emory physicist Ranjiangshang Ran. “We’re helping to pioneer the emerging field of electrostatic ecology.”
+The post This jumping roundworm uses static electricity to attach to flying insects appeared first on Physics World.
+]]>The post Wearable UVA sensor warns about overexposure to sunlight appeared first on Physics World.
+]]>
A flexible and wearable sensor that allows the user to monitor their exposure to ultraviolet (UV) radiation has been unveiled by researchers in South Korea. Based on a heterostructure of four different oxide semiconductors, the sensor’s flexible, transparent design could vastly improve the real-time monitoring of skin health.
+UV light in the A band has wavelengths of 315–400 nm and comprises about 95% of UV radiation that reaches the surface of the earth. Because of its relatively long wavelength, UVA can penetrate deep into the skin. There it can alter biological molecules, damaging tissue and even causing cancer.
+While covering up with clothing and using sunscreen are effective at reducing UVA exposure, researchers are keen on developing wearable sensors that can monitor UVA levels in real time. These can alert users when their UVA exposure reaches a certain level. So far, the most promising advances towards these designs have come from oxide semiconductors.
+“For the past two decades, these materials have been widely explored for displays and thin-film transistors because of their high mobility and optical transparency,” explains Seong Jun Kang at Kyung Hee University, who led the research. “However, their application to transparent ultraviolet photodetectors has been limited by high persistent photocurrent, poor UV–visible discrimination, and instability under sunlight.”
+ +While these problems can be avoided in more traditional UV sensors based on materials such as gallium nitride and zinc oxide, those materials are opaque and rigid – making them completely unsuitable for use in wearable sensors.
+In their study, Kang’s team addressed these challenges by introducing a multi-junction heterostructure, made by stacking multiple ultrathin layers of different oxide semiconductors. The four semiconductors they selected each had wide bandgaps, which made them more transparent in the visible spectrum but responsive to UV light.
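+The bandgap requirement follows from a simple photon-energy estimate (generic numbers, not values reported by the team):

$$E_{\mathrm{photon}}\approx\frac{1240\ \mathrm{eV\,nm}}{\lambda}\;\Rightarrow\;E(400\ \mathrm{nm})\approx3.1\ \mathrm{eV},\qquad E(315\ \mathrm{nm})\approx3.9\ \mathrm{eV},$$

+so an oxide whose bandgap sits a little above about 3.1 eV can absorb UVA photons, which carry enough energy to excite electrons across the gap, while letting lower-energy visible light pass straight through – exactly the combination of transparency and UV response exploited here.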
+The structure included zinc and tin oxide layers as n-type semiconductors (doped with electron-donating atoms) and cobalt and hafnium oxide layers as p-type semiconductors (doped with electron-accepting atoms, which create positively charged holes). Within the heterostructure, this selection created three types of interface: p–n junctions between hafnium and tin oxide; n–n junctions between tin and zinc oxide; and p–p junctions between cobalt and hafnium oxide.
+When the team illuminated their heterostructure with UVA photons, the electron–hole charge separation was enhanced by the p–n junction, while the n–n and p–p junctions allowed for more efficient transport of electrons and holes respectively, improving the design’s response speed. When the illumination was removed, the electron–hole pairs could quickly decay, avoiding any false detections.
+ +To test their design’s performance, the researchers integrated their heterostructure into a wearable detector. “In collaboration with UVision Lab, we developed an integrated Bluetooth circuit and smartphone application, enabling real-time display of UVA intensity and warning alerts when an individual’s exposure reaches the skin-type-specific minimal erythema dose (MED),” Kang describes. “When connected to the Bluetooth circuit and smartphone application, it successfully tracked real-time UVA variations and issued alerts corresponding to MED limits for various skin types.”
+As well as maintaining over 80% transparency, the sensor proved highly stable and responsive, even in direct outdoor sunlight and across repeated exposure cycles. Based on this performance, the team is now confident that their design could push the capabilities of oxide semiconductors beyond their typical use in displays and into the fast-growing field of smart personal health monitoring.
+“The proposed architecture establishes a design principle for high-performance transparent optoelectronics, and the integrated UVA-alert system paves the way for next-generation wearable and Internet-of-things-based environmental sensors,” Kang predicts.
+The research is described in Science Advances.
+The post Wearable UVA sensor warns about overexposure to sunlight appeared first on Physics World.
+]]>The post Astronauts could soon benefit from dissolvable eye insert appeared first on Physics World.
+]]>While eye conditions can generally be treated with medication, delivering drugs in space is not a straightforward task. Eye drops simply don’t work without gravity, for example. To address this problem, researchers in Hungary are developing a tiny dissolvable eye insert that could deliver medication directly to the eye. The size of a grain of rice, the insert has now been tested by an astronaut on the International Space Station.
+This episode of the Physics World Weekly podcast features two of those researchers – Diána Balogh-Weiser of Budapest University of Technology and Economics and Zoltán Nagy of Semmelweis University – who talk about their work with Physics World’s Tami Freeman.
+The post Astronauts could soon benefit from dissolvable eye insert appeared first on Physics World.
+]]>The post Scientists obtain detailed maps of earthquake-triggering high-pressure subsurface fluids appeared first on Physics World.
+]]>“With a clear three-dimensional image of where supercritical fluids are located and how they move, we can identify promising drilling targets and design safer and more efficient development plans,” Tsuji says. “This could have direct implications for expanding geothermal power generation, reducing dependence on fossil fuels, and contributing to carbon neutrality and energy security in Japan and globally.”
+In their study, Tsuji and colleagues focused on a region known as the brittle-ductile transition zone, which is where rocks go from being seismically active to mostly inactive. This zone is important for understanding volcanic activity and geothermal processes because it lies near an impermeable sealing band that allows fluids such as water to accumulate in a high-pressure, supercritical state. When these fluids undergo phase transitions, earthquakes may follow. However, such fluids could also produce more geothermal energy than conventional systems, making their location important for energy production as well.
+Many previous electromagnetic and magnetotelluric surveys suffered from low spatial resolution and were limited to regions relatively close to the Earth’s surface. In contrast, the techniques used in the latest study enabled Tsuji and colleagues to create a clear high-resolution “digital map” of deep geothermal reservoirs – something that has never been achieved before.
+To make their map, the researchers used three-dimensional multichannel seismic surveys to image geothermal structures in the Kuju volcanic group, which is located on the Japanese island of Kyushu. They then analysed these images using a method they developed known as extended Common Reflection Surface (CRS) stacking. This allowed them to visualize deeper underground features such as magma-related structures, fracture-controlled fluid pathways and rock layers that “seal in” supercritical fluids.
+“In addition to this, we applied advanced seismic tomography and machine-learning based analyses to determine the seismic velocity of specific structures and earthquake mechanisms with high accuracy,” explains Tsuji. “It was this integrated approach that allowed us to image a deep geothermal system in unprecedented detail.” He adds that the new technique is also better suited to mountainous geothermal regions where limited road access makes it hard to deploy the seismic sources and receivers used in conventional surveys.
+Tsuji and colleagues chose to study the Kuju area because it is home to several volcanoes that were active roughly 1600 years ago and have erupted intermittently in recent years. The region also hosts two major geothermal power plants, Hatchobaru and Otake. The former has a capacity of 110 MW and is the largest geothermal facility in Japan.
+The heat source for both plants is thought to be located beneath Mt Kuroiwa and Mt Sensui, and the region is considered a promising site for supercritical geothermal energy production. Its geothermal reservoir appears to consist of water that initially fell as precipitation (so-called meteoric water) and was heated underground before migrating westward through the fault system. Until now, though, no detailed images of the magmatic structures and fluid pathways had been obtained.
+Tsuji says he has long wondered why geothermal power is not more widely used in Japan, despite the country’s abundant volcanic and thermal resources. “Our results now provide the scientific and technical foundation for next-generation supercritical geothermal power,” he tells Physics World.
+The researchers now plan to try out their technique using portable seismic sources and sensors deployed in mountainous areas (not just along roads) to image the shallower parts of geothermal systems in greater detail as well. “We also plan to extend our surveys to other geothermal fields to test the general applicability of our method,” Tsuji says. “Ultimately, our goal is to provide a reliable scientific basis for the large-scale deployment of supercritical geothermal power as a sustainable energy source.”
+The present work is detailed in Communications Earth & Environment.
+The post Scientists obtain detailed maps of earthquake-triggering high-pressure subsurface fluids appeared first on Physics World.
+]]>The post Researchers visualize blood flow in pulsating artificial heart appeared first on Physics World.
+]]>The Linköping University (LiU) team used 4D flow MRI to examine the internal processes of a mechanical heart prototype created by Västerås-based technology company Scandinavian Real Heart. The researchers evaluated blood flow patterns and compared them with similar measurements taken in a native human heart, outlining their results in Scientific Reports.
+“As the pulsatile total artificial heart contains metal parts, like the motor, we used 3D printing [to replace most metal parts] and a physiological flow loop so we could run it in the MRI scanner under representable conditions,” says first author Twan Bakker, a PhD student at the Center for Medical Image Science and Visualization at LiU.
+According to Bakker, this is the first time that a 3D-printed MRI-compatible artificial heart has been built and successfully evaluated using 4D flow MRI. The team was pleased to discover that the results corroborate the findings of previous computational fluid dynamics simulations indicating “low shear stress and low stagnation”. Overall flow patterns also suggest there is no elevated risk for blood complications compared with hearts in healthy humans and those suffering from valvular disease.
+“[The] patterns of low blood flow, a risk for thrombosis, were in the same range as for healthy native human hearts. Patterns of turbulent flow, a risk for activation of blood platelets, which can contribute to thrombosis, were lower than those found in patients with valvular disease,” says Bakker.
+“4D flow MRI allows us to measure the flow field without altering the function of the total artificial heart, which is therefore a valuable tool to complement computer simulations and blood testing during the development of the device. Our measurements provided valuable information to the design team that could improve the artificial heart prototype further,” he adds.
+A key advantage of 4D flow MRI over alternative measurement techniques – such as particle image velocimetry and laser Doppler anemometry – is that it doesn’t require the creation of a fully transparent model. This is an important distinction for Bakker, since some components in the artificial heart are made with materials possessing unique mechanical properties, meaning that replication in a see-through version would be extremely challenging.
+“With 4D flow MRI we had to move the motor away from the scanner bore, but the material in contact with the blood and the motion of the device remained as the original design,” says Bakker.
+According to Bakker, the velocity measurements can also be used for visualization and analysis of hemodynamic parameters – such as turbulent kinetic energy and wall shear stress – both in the heart and in the larger vessels of the body.
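+As a rough illustration of one such parameter, turbulent kinetic energy is often estimated in the 4D flow MRI literature from the standard deviations of the three velocity components in each voxel. The minimal Python sketch below uses illustrative values, not numbers from this study:
+import numpy as np
+
+rho = 1060.0                           # approximate density of blood, kg/m^3
+sigma = np.array([0.05, 0.04, 0.06])   # illustrative per-voxel velocity standard deviations, m/s
+
+# Turbulent kinetic energy density: TKE = (rho/2) * sum of the velocity variances
+tke = 0.5 * rho * np.sum(sigma**2)
+print(f"TKE in this voxel: {tke:.1f} J/m^3")   # ~4.1 J/m^3 for these values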
+“By studying the flow dynamics in patients and healthy subjects, we can better understand its role in health and disease, which can then support improved diagnostics, interventions and surgical therapies,” he explains.
+Moving forward, Bakker says that the research team will continue to evaluate the improved heart design, which was recently granted designation as a Humanitarian Use Device (HUD) by the US Food and Drug Administration (FDA).
+“This makes it possible to apply for designation as a Humanitarian Device Exemption (HDE) – which may grant the device limited marketing rights and paves the way for the pre-clinical and clinical studies,” he says.
+“In addition, we are currently developing tools to compute blood flow using simulations. This may provide us with a deeper understanding of the mechanisms that cause the formation of thrombosis and haemolysis,” he tells Physics World.
+The post Researchers visualize blood flow in pulsating artificial heart appeared first on Physics World.
+]]>The post Evo CT-Linac eases access to online adaptive radiation therapy appeared first on Physics World.
+]]>Elekta, the company behind the Unity MR-Linac, believes that in time, all radiation treatments will incorporate ART as standard. Towards this goal, it brings its broad knowledge base from the MR-Linac to the new Elekta Evo, a next-generation CT-Linac designed to improve access to ART. Evo incorporates AI-enhanced cone-beam CT (CBCT), known as Iris, to provide high-definition imaging, while its Elekta ONE Online software automates the entire workflow, including auto-contouring, plan adaptation and end-to-end quality assurance.
+In February of this year, Matthias Lampe and his team at the private centre DTZ Radiotherapy in Berlin, Germany became the first in the world to treat patients with online ART (delivering daily plan updates while the patient is on the treatment couch) using Evo. “To provide proper tumour control you must be sure to hit the target – for that, you need online ART,” Lampe tells Physics World.
+The ability to visualize and adapt to daily anatomy enables reduction of the planning target volume, increasing safety for nearby organs-at-risk (OARs). “It is highly beneficial for all treatments in the abdomen and pelvis,” says Lampe. “My patients with prostate cancer report hardly any side effects.”
+Lampe selected Evo to exploit the full flexibility of its C-arm design. He notes that for the increasingly prevalent hypofractionated treatments, a C-arm configuration is essential. “CT-based treatment planning and AI contouring opened up a new world for radiation oncologists,” he explains. “When Elekta designed Evo, they enabled this in an achievable way with an extremely reliable machine. The C-arm linac is the primary workhorse in radiotherapy, so you have the best of everything.”
+While online ART can take longer than conventional treatments, Evo’s use of automation and AI limits the additional time requirement to just five minutes – increasing the overall workflow from 12 to 17 minutes and remaining within the clinic’s standard time slots.
+The workflow begins with patient positioning and CBCT imaging, with Evo’s AI-enhanced Iris imaging significantly improving image quality, crucial when performing ART. The radiation therapist then matches the cone-beam and planning CTs and performs any necessary couch shift.
+Simultaneously, Elekta ONE Online performs AI auto-contouring of OARs, which are reviewed by the physician, and the target volume is copied in. The physicist then simulates the dose distribution on the new contours, followed by a plan review. “Then you can decide whether to adapt or not,” says Lampe. “This is an outstanding feature.” The final stage is treatment delivery and online dosimetry.
+When DTZ Berlin first began clinical treatments with Evo, some of Lampe’s colleagues were apprehensive as they were attached to the conventional workflow. “But now, with CBCT providing the chance to see what will be treated, every doctor on my team has embraced the shift and wouldn’t go back,” he says.
+The first treatments were for prostate cancer, a common indication that’s relatively easy to treat. “I also thought that if the Elekta ONE workflow struggled, I could contour this on my own in a minute,” says Lampe. “But this was never necessary, the process is very solid. Now we also treat prostate cancer patients with lymph node metastases and those with relapse after radiotherapy. It’s a real success story.”
+Lampe says that older and frailer patients may benefit the most from online ART, pointing out that while published studies often include relatively young, healthy patients, “our patients are old, they have chronic heart disease, they’re short of breath”.
+For prostate cancer, for example, patients are instructed to arrive with a full bladder and an empty rectum. “But if a patient is in his eighties, he may not be able to do this and the volumes will be different every day,” Lampe explains. “With online adaptive, you can tell patients: ‘if this is not possible, we will handle it, don’t stress yourself’. They are very thankful.”
+At UMC Utrecht in the Netherlands, the radiotherapy team has also added CT-Linac online adaptive to its clinical toolkit.
+UMC Utrecht is renowned for its development of MR-guided radiotherapy, with physicists Bas Raaymakers and Jan Lagendijk pioneering the development of a hybrid MR-Linac. “We come from the world of MR-guidance, so we know that ART makes sense,” says Raaymakers. “But if we only offer MR-guided radiotherapy, we miss out on a lot of patients. We wanted to bring it to the wider community.”
+At the time of speaking to Physics World, the team was treating its second patient with CBCT-guided ART, and had delivered about 30 fractions. Both patients were treated for bladder cancer, with future indications to explore including prostate, lung and breast cancers and bone metastases.
+“We believe in ART for all patients,” says medical physicist Anette Houweling. “If you have MR and CT, you should be able to choose the optimal treatment modality based on image quality. For below the diaphragm, this is probably MR, while for the thorax, CT might be better.”
+Houweling says that ART delivery has taken 19 minutes on average. “We record the CBCT, perform image fusion and then the table is moved, that’s all standard,” she explains. “Then the adaptive part comes in: delineation on the CBCT and creating a new plan with Elekta ONE Planning as part of Elekta ONE Online.”
+When plan adaptation is selected, it takes roughly four minutes to create a clinical-grade volumetric-modulated arc therapy (VMAT) plan. With the soon-to-be-installed next-generation optimizer, this is expected to drop to less than one minute.
+“As you start with the regular workflow, you can still decide not to choose adaptive treatment, and do a simple couch shift, up until the last second,” says Raaymakers. “It’s very close to the existing workflow, which makes adoption easier. Also, the treatment slots are comparable to standard slots. Now with CBCT it takes 19 minutes and we believe we can get towards 10. That’s one of the drivers for cone-beam adaptive.”
+Shorter treatment times will impact the decision as to which patients receive ART. If fully automated adaptive treatment is deliverable in a 10-minute time slot, it could be available to all patients. “From the physics side, our goal is to have no technological limitations to delivering ART. Then it’s up to the radiation oncologists to decide which patients might benefit,” Raaymakers explains.
+Looking to the future, Raaymakers predicts that simulation-free radiotherapy will be adopted for certain standard treatments. “Why do you need days of preparation if you can condense the whole process to the moment when the patient is on the table?” he says. “That would be very much helped by online ART.”
+“Scroll forward a few years and I expect that ART will be automated and fast such that the user will just sign off the autocontours and plan in one, maybe tune a little, and then go ahead,” adds Houweling. “That will be the ultimate goal of ART. Then there’s no reason to perform radiotherapy the traditional way.”
+The post Evo CT-Linac eases access to online adaptive radiation therapy appeared first on Physics World.
+]]>The post Jesper Grimstrup’s <em>The Ant Mill</em>: could his anti-string-theory rant do string theorists a favour? appeared first on Physics World.
+]]>According to the book, you used to be inventive, perceptive and dashing. Then you started hanging out with the wrong crowd, and became competitive, self-involved and incapable of true friendship. Your ex struggles to turn you around; failing, they leave. The book, though, is so over-the-top that by the end you stop cringing and find it a hoot.
+That’s how I think most Physics World readers will react to The Ant Mill: How Theoretical High-energy Physics Descended into Groupthink, Tribalism and Mass Production of Research. Its author and self-publisher is the Danish mathematician-physicist Jesper Grimstrup, whose previous book was Shell Beach: the Search for the Final Theory.
+After receiving his PhD in theoretical physics at the Technical University of Vienna in 2002, Grimstrup writes, he was “one of the young rebels” embarking on “a completely unexplored area” of theoretical physics, combining elements of loop quantum gravity and noncommutative geometry. But there followed a decade of rejected articles and lack of opportunities.
+Grimstrup became “disillusioned, disheartened, and indignant” and in 2012 left the field, selling his flat in Copenhagen to finance his work. Grimstrup says he is now a “self-employed researcher and writer” who lives somewhere near the Danish capital. You can support him either through Ko-fi or PayPal.
+The Ant Mill opens with a copy of the first page of the letter that Grimstrup’s fellow Dane Niels Bohr sent in 1917 to the University of Copenhagen successfully requesting a four-storey building for his physics institute. Grimstrup juxtaposes this incident with the rejection of his funding request, almost a century later, by the Danish Council for Independent Research.
+Today, he writes, theoretical physics faces a situation “like the one it faced at the time of Niels Bohr”, but structural and cultural factors have severely hampered it, making it impossible to pursue promising new ideas. These include Grimstrup’s own “quantum holonomy theory, which is a candidate for a fundamental theory”. The Ant Mill is his diagnosis of how this came about.
+Theoretical physics, according to Grimstrup, is now dominated by influential groups that squeeze out other approaches.
+A major culprit, in Grimstrup’s eyes, was the Standard Model of particle physics. Its completion finished the structure that theorists had been trained to be the architects of, and should have led to the flourishing of a new crop of theoretical ideas. But it had the opposite effect. The field, according to Grimstrup, is now dominated by influential groups that squeeze out other approaches.
+The biggest and most powerful is string theory, with loop quantum gravity its chief rival. Neither member of the coterie can make testable predictions, yet because they control jobs, publications and grants they intimidate young researchers and create what Grimstrup calls an “undercurrent of fear”. (I leave assessment of this claim to young theorists.)
+Half the chapters begin with an anecdote in which Grimstrup describes an instance of rejection by a colleague, editor or funding agency. In the book’s longest chapter Grimstrup talks about his various rejections – by the Carlsberg Foundation, The European Physics Journal C, International Journal of Modern Physics A, Classical and Quantum Gravity, Reports on Mathematical Physics, Journal of Geometry and Physics, and the Journal of Noncommutative Geometry.
+Grimstrup says that the reviewers and editors of these journals told him that his papers variously lacked concrete physical results, were exercises in mathematics, seemed the same as other papers, or lacked “relevance and significance”. Grimstrup sees this as the coterie’s handiwork, for such journals are full of string theory papers open to the same criticism.
+“Science is many things,” Grimstrup writes at the end. “[S]imultaneously boring and scary, it is both Indiana Jones and anonymous bureaucrats, and it is precisely this diversity that is missing in the modern version of science”. What the field needs is “courage…hunger…ambition…unwillingness to compromise…anarchy.”
+Grimstrup hopes that his book will have an impact, helping to inspire young researchers to revolt, and to make all the scientific bureaucrats and apparatchiks and bookkeepers and accountants “wake up and remember who they truly are”.
+The Ant Mill is an example of what I have called “rant literature” or rant-lit. Evangelical, convinced that exposing truth will make sinners come to their senses and change their evil ways, rant lit can be fun to read, for it is passionate and full of florid metaphors.
+Theoretical physicists, Grimstrup writes, have become “obedient idiots” and “technicians”. He slams theoretical physics for becoming a “kingdom”, a “cult”, a “hamster wheel” and an “ant mill”, in which the ants march around in a pre-programmed “death spiral”.
++Grimstrup hammers away at theories lacking falsifiability, but his vehemence invites you to ask: “Is falsifiability really the sole criterion for deciding whether to accept or fail to pursue a theory?”
+An attentive reader, however, may come away with a different lesson. Grimstrup calls falsifiability the “crown jewel of the natural sciences” and hammers away at theories lacking it. But his vehemence invites you to ask: “Is falsifiability really the sole criterion for deciding whether to accept or fail to pursue a theory?”
+In his 2013 book String Theory and the Scientific Method, for instance, the Stockholm University philosopher of science Richard Dawid suggested rescuing the scientific status of string theory by adding such non-empirical criteria to evaluating theories as clarity, coherence and lack of alternatives. It’s an approach that both rescues the formalistic approach to the scientific method and undermines it.
+Dawid, you see, is making the formalism follow the practice rather than the other way around. In other words, he is able to reformulate how we make theories because he already knows how theorizing works – not because he only truly knows what it is to theorize after he gets the formalism right.
+Grimstrup’s rant, too, might remind you of the birth of the Yang–Mills theory in 1954. Developed by Chen Ning Yang and Robert Mills, it was a theory of nuclear binding that integrated much of what was known about elementary particle theory but implied the existence of massless force-carrying particles that, at the time, were known not to exist. In fact, at one seminar Wolfgang Pauli unleashed a tirade against Yang for proposing so obviously flawed a theory.
+The theory, however, became central to theoretical physics two decades later, after theorists learned more about the structure of the world. The Yang-Mills story, in other words, reveals that theory-making does not always conform to formal strictures and does not always require a testable prediction. Sometimes it just articulates the best way to make sense of the world apart from proof or evidence.
+The lesson I draw is that becoming the target of a rant might not always make you feel repentant and ashamed. It might inspire you into deep reflection on who you are in a way that is insightful and vindicating. It might even make you more rather than less confident about why you’re doing what you’re doing.
+Your ex, of course, would be horrified.
+The post Jesper Grimstrup’s <em>The Ant Mill</em>: could his anti-string-theory rant do string theorists a favour? appeared first on Physics World.
+]]>The post Further evidence for evolving dark energy? appeared first on Physics World.
+]]>Dark energy is now a well established concept and forms a key part of the standard model of Big Bang cosmology, the Lambda-CDM model.
+The trouble is, we’ve never really been able to explain exactly what dark energy is, or why it has the value that it does.
+Even worse, new data acquired by cutting-edge telescopes have suggested that dark energy might not even exist as we had imagined it.
+This is where the new work by Mukherjee and Sen comes in. They combined two of these datasets, while making as few assumptions as possible, to understand what’s going on.
+The first of these datasets came from baryon acoustic oscillations. These are patterns in the distribution of matter in the universe, created by sound waves in the early universe.
+The second dataset is based on a survey of supernova data from the past five years. Both sets of data can be used to track the expansion history of the universe by measuring distances at different snapshots in time.
+The team’s results are in tension with the Lambda-CDM model at low redshifts. Put simply, the results disagree with the current model at recent times. This provides further evidence for the idea that dark energy, previously considered to have a constant value, is evolving over time.
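+Although Mukherjee and Sen deliberately make as few assumptions as possible, a common way to express an evolving dark energy – and the quantities that BAO and supernova data constrain – is the w0–wa parametrization, with the cosmological constant of Lambda-CDM recovered for w0 = −1 and wa = 0. The relations below (assuming a flat universe) are a generic illustration, not the authors’ actual analysis:
+\begin{align}
+w(z) &= w_0 + w_a\,\frac{z}{1+z},\\
+\frac{H^2(z)}{H_0^2} &= \Omega_m(1+z)^3 + \Omega_{\rm DE}\exp\!\left[3\int_0^z \frac{1+w(z')}{1+z'}\,\mathrm{d}z'\right],\\
+d_L(z) &= (1+z)\,c\int_0^z \frac{\mathrm{d}z'}{H(z')}.
+\end{align}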
+This is far from the end of the story for dark energy. New observational data and new analyses such as this one are urgently required to provide a clearer picture.
+However, where there’s uncertainty, there’s opportunity. Understanding dark energy could hold the key to understanding quantum gravity, the Big Bang and the ultimate fate of the universe.
+Mukherjee and Sen, 2025 Rep. Prog. Phys. 88 098401
+The post Further evidence for evolving dark energy? appeared first on Physics World.
+]]>The post Searching for dark matter particles appeared first on Physics World.
+]]>The Standard Model of particle physics does not contain any dark matter particles but there have been several proposed extensions of how they might be included. Several of these are very low mass particles such as the axion or the sterile neutrino.
+Detecting these hypothesised particles is very challenging, however, due to the extreme sensitivity required.
+Electromagnetic resonant systems, such as cavities and LC circuits, are widely used for this purpose, as well as to detect high-frequency gravitational waves.
+When an external signal matches one of these systems’ resonant frequencies, the system responds with a large amplitude, making the signal possible to detect. However, there is always a trade-off between the sensitivity of the detector and the range of frequencies it is able to detect (its bandwidth).
+A natural way to overcome this compromise is to consider multi-mode resonators, which can be viewed as coupled networks of harmonic oscillators. Their scan efficiency can be significantly enhanced beyond the standard quantum limit of simple single-mode resonators.
+In a recent paper, the researchers demonstrated how multi-mode resonators can achieve the advantages of both sensitive and broadband detection. By connecting adjacent modes inside the resonant cavity, and tuning these interactions to comparable magnitudes, off-resonant (i.e. unwanted) frequency shifts are effectively cancelled, increasing the overall response of the system.
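+A minimal way to picture such a network is a chain of N coupled modes, with the weak signal s(t) driving the first mode. The coupled-mode Hamiltonian below is written in generic notation purely as an illustration – it is not necessarily the exact form used by Chen et al.:
+\begin{equation}
+H/\hbar = \sum_{j=1}^{N}\omega_j\,a_j^{\dagger}a_j + \sum_{j=1}^{N-1}g_j\left(a_j^{\dagger}a_{j+1} + a_{j+1}^{\dagger}a_j\right) + \kappa\,s(t)\left(a_1 + a_1^{\dagger}\right).
+\end{equation}
+Choosing the couplings to be of comparable magnitude is what allows the unwanted off-resonant shifts to cancel, so the network responds strongly over a wider band than any single mode could.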
+Their method allows us to search for these elusive dark matter particles in a faster, more efficient way.
+Chen et al. 2025 Rep. Prog. Phys. 88 057601
The post Searching for dark matter particles appeared first on Physics World.
+]]>The post Physicists explain why some fast-moving droplets stick to hydrophobic surfaces appeared first on Physics World.
+]]>“If the droplet moves too slowly, it sticks,” explains Jamie McLauchlan, a PhD student at the University of Bath, UK who led the new research effort with Bath’s Adam Squires and Anton Souslov of the University of Cambridge. “Too fast, and it sticks again. Only in between is bouncing possible, where there is enough momentum to detach from the surface but not so much that it collapses back onto it.”
+As well as this new velocity-dependent condition, the researchers also discovered a size effect in which droplets that are too small cannot bounce, no matter what their speed. This size limit, they say, is set by the droplets’ viscosity, which prevents the tiniest droplets from leaving the surface once they land on it.
+While academic researchers and industrialists have long studied single-droplet impacts, McLauchlan says that much of this earlier work focused on millimetre-sized drops that took place on millisecond timescales. “We wanted to push this knowledge to smaller sizes of micrometre droplets and faster speeds, where higher surface-to-volume ratios make interfacial effects critical,” he says. “We were motivated even further during the COVID-19 pandemic, when studying how small airborne respiratory droplets interact with surfaces became a significant concern.”
+Working at such small sizes and fast timescales is no easy task, however. To record the outcome of each droplet landing, McLauchlan and colleagues needed a high-speed camera that effectively slowed down motion by a factor of 100 000. To produce the droplets, they needed piezoelectric droplet generators capable of dispensing fluid via tiny 30-micron nozzles. “These dispensers are highly temperamental,” McLauchlan notes. “They can become blocked easily by dust and fibres and fail to work if the fluid viscosity is too high, making experiments delicate to plan and run. The generators are also easy to break and expensive.”
+The researchers used this experimental set-up to create and image droplets between 30‒50 µm in diameter as they struck water-repelling surfaces at speeds of 1‒10 m/s. They then compared their findings with calculations based on a simple mathematical model that treats a droplet like a tiny spring, taking into account three main parameters in addition to its speed: the stickiness of the surface; the viscosity of the droplet liquid; and the droplet’s surface tension.
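+That balance is often discussed in terms of two standard dimensionless groups: the Weber number (impact energy versus surface energy) and the Ohnesorge number (viscous damping versus inertia and capillarity). The Python sketch below plugs in mid-range values from the experiment purely as an illustration – it is not the authors’ actual model:
+import math
+
+rho = 1000.0    # density of water, kg/m^3
+sigma = 0.072   # surface tension of water, N/m
+mu = 1.0e-3     # dynamic viscosity of water, Pa s
+D = 40e-6       # droplet diameter, m (mid-range of the 30-50 micron droplets studied)
+v = 5.0         # impact speed, m/s (mid-range of the 1-10 m/s studied)
+
+weber = rho * v**2 * D / sigma               # kinetic energy relative to surface energy
+ohnesorge = mu / math.sqrt(rho * sigma * D)  # viscous damping relative to inertia and capillarity
+
+print(f"Weber number: {weber:.1f}")          # ~14 for these values
+print(f"Ohnesorge number: {ohnesorge:.3f}")  # ~0.019; grows as D shrinks, so viscosity pins the smallest drops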
+Previous research had shown that on perfectly non-wetting surfaces, bouncing does not depend on velocity. Other studies showed that on very smooth surfaces, droplets can bounce on a thin air layer. “Our work has explored a broader range of hydrophobic surfaces, showing that bouncing occurs due to a delicate balance of kinetic energy, viscous dissipation and interfacial energies,” McLauchlan tells Physics World.
+This is exciting, he adds, because it reveals a previously unexplored regime for bounce behaviour: droplets that are too small, or too slow, will always stick, while sufficiently fast droplets can rebound. “This finding provides a general framework that explains bouncing at the micron scale, which is directly relevant for aerosol science,” he says.
+McLauchlan thinks that by linking bouncing to droplet velocity, size and surface properties, the new framework could make it easier to engineer microdroplets for specific purposes. “In agriculture, for example, understanding how spray velocities interact with plant surfaces with different hydrophobicity could help determine when droplets deposit fully versus when they bounce away, improving the efficiency of crop spraying,” he says.
+Such a framework could also be beneficial in the study of airborne diseases, since exhaled droplets frequently bump into surfaces while floating around indoors. While droplets that stick are removed from the air, and can no longer transmit disease via that route, those that bounce are not. Quantifying these processes in typical indoor environments will provide better models of airborne pathogen concentrations and therefore disease spread, McLauchlan says. For example, in healthcare settings, coatings could be designed to inhibit or promote bouncing, ensuring that high-velocity respiratory droplets from sneezes either stick to hospital surfaces or recoil from them, depending on which mode of potential transmission (airborne or contact-based) is being targeted.
+The researchers now plan to expand their work on aqueous droplets to droplets with more complex soft-matter properties. “This will include adding surfactants, which introduce time-dependent surface tensions, and polymers, which give droplets viscoelastic properties similar to those found in biological fluids,” McLauchlan reveals. “These studies will present significant experimental challenges, but we hope they broaden the relevance of our findings to an even wider range of fields.”
+The present work is detailed in PNAS.
+The post Physicists explain why some fast-moving droplets stick to hydrophobic surfaces appeared first on Physics World.
+]]>The post Quantum computing on the verge: a look at the quantum marketplace of today appeared first on Physics World.
+]]>We don’t know how amazed Deutsch, a pioneer of quantum computing, would have been had he attended a meeting at the Royal Society in London in February on “the future of quantum information”. But it was tempting to conclude from the event that quantum computing has now well and truly arrived, with working machines that harness quantum mechanics to perform computations being commercially produced and shipped to clients. Serving as the UK launch of the International Year of Quantum Science and Technology (IYQ) 2025, it brought together some of the key figures of the field to spend two days discussing quantum computing as something like a mature industry, even if one in its early days.
+Werner Heisenberg – who worked out the first proper theory of quantum mechanics 100 years ago – would surely have been amazed to find that the formalism he and his peers developed to understand the fundamental behaviour of tiny particles had generated new ways of manipulating information to solve real-world problems in computation. So far, quantum computing – which exploits phenomena such as superposition and entanglement to potentially achieve greater computational power than the best classical computers can muster – hasn’t tackled any practical problems that can’t be solved classically.
+Although the fundamental quantum principles are well-established and proven to work, there remain many hurdles that quantum information technologies have to clear before this industry can routinely deliver resources with transformative capabilities. But many researchers think that moment of “practical quantum advantage” is fast approaching, and an entire industry is readying itself for that day.
+So what are the current capabilities and near-term prospects for quantum computing?
+The first thing to acknowledge is that a booming quantum-computing market exists. Devices are being produced for commercial use by a number of tech firms, from the likes of IBM, Google, Canada-based D-Wave and Rigetti, who have been in the field for a decade or more, to relative newcomers like Nord Quantique (Canada), IQM (Finland), Quantinuum (UK and US), Orca (UK), PsiQuantum (US) and Silicon Quantum Computing (Australia) – see the box below, “The global quantum ecosystem”.
+We are on the cusp of a second quantum revolution, with quantum science and technologies growing rapidly across the globe. This includes quantum computers; quantum sensing (ultra-high precision clocks, sensors for medical diagnostics); as well as quantum communications (a quantum internet). Indeed, according to the State of Quantum 2024 report, a total of 33 countries around the world currently have government initiatives in quantum technology, of which more than 20 have national strategies with large-scale funding. As of this year, worldwide investments in quantum tech – by governments and industry – exceed $55.7 billion, and the market is projected to reach $106 billion by 2040. With the multitude of ground-breaking capabilities that quantum technologies bring globally, it’s unsurprising that governments all over the world are eager to invest in the industry.
+With data from a number of international reports and studies, quantum education and skills firm QURECA has summarized key programmes and efforts around the world. These include total government funding spent through 2025, as well as future commitments spanning 2–10 year programmes, varying by country. These initiatives generally represent government agencies’ funding announcements, related to their countries’ advancements in quantum technologies, excluding any private investments and revenues.
+A supply chain is also organically developing, which includes manufacturers of specific hardware components, such as Oxford Instruments and Quantum Machines, and software developers like Riverlane, based in Cambridge, UK, and QC Ware in Palo Alto, California. Supplying the last link in this chain are a range of eager end-users, from finance companies such as J P Morgan and Goldman Sachs to pharmaceutical companies such as AstraZeneca and engineering firms like Airbus. Quantum computing is already big business, with around 400 active companies and current global investment estimated at around $2 billion.
+But the immediate future of all this buzz is hard to assess. When the chief executive of computer giant Nvidia announced at the start of 2025 that “truly useful” quantum computers were still two decades away, the previously burgeoning share prices of some leading quantum-computing companies plummeted. They have since recovered somewhat, but such volatility reflects the fact that quantum computing has yet to prove its commercial worth.
+The field is still new and firms need to manage expectations and avoid hype while also promoting an optimistic enough picture to keep investment flowing in. “Really amazing breakthroughs are being made,” says physicist Winfried Hensinger of the University of Sussex, “but we need to get away from the expectancy that [truly useful] quantum computers will be available tomorrow.”
+The current state of play is often called the “noisy intermediate-scale quantum” (NISQ) era. That’s because the “noisy” quantum bits (qubits) in today’s devices are prone to errors for which no general and simple correction process exists. Current quantum computers can’t therefore carry out practically useful computations that could not be done on classical high-performance computing (HPC) machines. It’s not just a matter of better engineering either; the basic science is far from done.
+“We are right on the cusp of scientific quantum advantage – solving certain scientific problems better than the world’s best classical methods can,” says Ashley Montanaro, a physicist at the University of Bristol who co-founded the quantum software company Phasecraft. “But we haven’t yet got to the stage of practical quantum advantage, where quantum computers solve commercially important and practically relevant problems such as discovering the next lithium-ion battery.” It’s no longer if or how, but when that will happen.
+As the quantum-computing business is such an emerging area, today’s devices use wildly different types of physical systems for their qubits (see the box below, “Comparing computing modalities: from qubits to architectures”). There is still no clear sign as to which of these platforms, if any, will emerge as the winner. Indeed many researchers believe that no single qubit type will ever dominate.
+The top-performing quantum computers, like those made by Google (with its 105-qubit Willow chip) and IBM (which has made the 1121-qubit Condor), use qubits in which information is encoded in the wavefunction of a superconducting material. Until recently, the strongest competing platform seemed to be trapped ions, where the qubits are individual ions held in electromagnetic traps – a technology being developed into working devices by the US company IonQ, spun out from the University of Maryland, among others.
+Much like classical computers, quantum computers have a core processor and a control stack – the difference being that the core depends on the type of qubit being used. Currently, quantum computing is not based on a single platform, but rather a set of competing hardware approaches, each with its own physical basis for creating and controlling qubits and keeping them stable.
+The data above – taken from the August 2025 report Quantum Computing at an Inflection Point: Who’s Leading, What They Own, and Why IP Decides Quantum’s Future by US firm Patentvest – shows the key “quantum modalities”, which refers to the different types of qubits and architectures used to build these quantum systems. Differing qubits each have their own pros and cons, with varying factors including the temperature at which they operate, coherence time, gate speed, and how easy they might be to scale up.
+But over the past few years, neutral trapped atoms have emerged as a major contender, thanks to advances in controlling the positions and states of these qubits. Here the atoms are excited into high-lying electronic states – becoming so-called Rydberg atoms – which can be entangled with one another over a few microns. A Harvard startup called QuEra is developing this technology, as is the French start-up Pasqal. In September a team from the California Institute of Technology announced a 6100-qubit array made from neutral atoms. “Ten years ago I would not have included [neutral-atom] methods if I were hedging bets on the future of quantum computing,” says Deutsch’s Oxford colleague, the quantum information theorist Andrew Steane. But like many, he thinks differently now.
+Some researchers believe that optical quantum computing, using photons as qubits, will also be an important platform. One advantage here is that photonic signals travelling to or from the processing units over existing telecommunications networks need no complex conversion, which is also handy for photonic interconnections between chips. What’s more, photonic circuits can work at room temperature, whereas trapped ions and superconducting qubits need to be cooled. Photonic quantum computing is being developed by firms like PsiQuantum, Orca, and Xanadu.
+Other efforts, for example at Intel and Silicon Quantum Computing in Australia, make qubits from either quantum dots (Intel) or precision-placed phosphorus atoms (SQC), both in good old silicon, which benefits from a very mature manufacturing base. “Small qubits based on ions and atoms yield the highest quality processors”, says Michelle Simmons of the University of New South Wales, who is the founder and CEO of SQC. “But only atom-based systems in silicon combine this quality with manufacturability.”
+And it’s not impossible that entirely new quantum computing platforms might yet arrive. At the start of 2025, researchers at Microsoft’s laboratories in Washington State caused a stir when they announced that they had made topological qubits from semiconducting and superconducting devices, which are less error-prone than those currently in use. The announcement left some scientists disgruntled because it was not accompanied by a peer-reviewed paper providing the evidence for these long-sought entities. But in any event, most researchers think it would take a decade or more for topological quantum computing to catch up with the platforms already out there.
+Each of these quantum technologies has its own strengths and weaknesses. “My personal view is that there will not be a single architecture that ‘wins’, certainly not in the foreseeable future,” says Michael Cuthbert, founding director of the UK’s National Quantum Computing Centre (NQCC), which aims to facilitate the transition of quantum computing from basic research to an industrial concern. Cuthbert thinks the best platform will differ for different types of computation: cold neutral atoms might be good for quantum simulations of molecules, materials and exotic quantum states, say, while superconducting and trapped-ion qubits might be best for problems involving machine learning or optimization.
+Given these pros and cons of different hardware platforms, one difficulty in assessing their merits is finding meaningful metrics for making comparisons. Should we be comparing error rates, coherence times (basically how long qubits remain entangled), gate speeds (how fast a single computational step can be conducted), circuit depth (how many steps a single computation can sustain), number of qubits in a processor, or what? “The metrics and measures that have been put forward so far tend to suit one or other platform more than others,” says Cuthbert, “such that it becomes almost a marketing exercise rather than a scientific benchmarking exercise as to which quantum computer is better.”
+The NQCC evaluates the performance of devices using a factor known as the “quantum operation” (QuOp). This is simply the number of quantum operations that can be carried out in a single computation, before the qubits lose their coherence and the computation dissolves into noise. “If you want to run a computation, the number of coherent operations you can run consecutively is an objective measure,” Cuthbert says. If we want to get beyond the NISQ era, he adds, “we need to progress to the point where we can do about a million coherent operations in a single computation. We’re now at the level of maybe a few thousand. So we’ve got a long way to go before we can run large-scale computations.”
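+The arithmetic behind such a figure of merit is straightforward: roughly, how many gate-length operations fit within the time for which the qubits stay coherent. The numbers in the Python sketch below are purely illustrative – they are not measurements of any particular machine, nor the NQCC’s exact definition:
+coherence_time = 100e-6   # assumed effective coherence time, seconds
+gate_time = 50e-9         # assumed duration of one gate operation, seconds
+
+quop_estimate = coherence_time / gate_time
+print(f"~{quop_estimate:.0f} coherent operations per computation")   # ~2000, i.e. "a few thousand"
+print(f"Factor still needed to reach 1e6: ~{1e6 / quop_estimate:.0f}x")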
+One important issue is how amenable the platforms are to making larger quantum circuits. Cuthbert contrasts the issue of scaling up – putting more qubits on a chip – with “scaling out”, whereby chips of a given size are linked in modular fashion. Many researchers think it unlikely that individual quantum chips will have millions of qubits like the silicon chips of today’s machines. Rather, they will be modular arrays of relatively small chips linked at their edges by quantum interconnects.
+Having made the Condor, IBM now plans to focus on modular architectures (scaling out) – a necessity anyway, since superconducting qubits are micron-sized, so a chip with millions of them would be “bigger than your dining room table”, says Cuthbert. But superconducting qubits are not easy to scale out because microwave frequencies that control and read out the qubits have to be converted into optical frequencies for photonic interconnects. Cold atoms are easier to scale up, as the qubits are small, while photonic quantum computing is easiest to scale out because it already speaks the same language as the interconnects.
+To build so-called “fault-tolerant” quantum computers, quantum platforms must solve the issue of error correction, which will enable more extensive computations without the results becoming degraded into mere noise.
+In part two of this feature, we will explore how this is being achieved and meet the various firms developing quantum software. We will also look into the potential high-value commercial uses for robust quantum computers – once such devices exist.
+This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
+Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.
+Find out more on our quantum channel.
+The post Quantum computing on the verge: a look at the quantum marketplace of today appeared first on Physics World.
+]]>The post Physicists achieve first entangled measurement of W states appeared first on Physics World.
+Physicists usually measure entangled particles using a technique known as quantum tomography. In this method, many identical copies of a particle are prepared, and each copy is measured at a different angle. The results of these measurements are then combined to reconstruct its full quantum state. To visualize this, imagine being asked to take a family photo. Instead of taking one group picture, you have to photograph each family member individually and then combine all the photos into a single portrait. Now imagine instead taking the photo properly: one photograph of the entire family. This is essentially what happens in an entangled measurement, where all particles are measured simultaneously rather than separately. This approach allows for significantly faster and more efficient measurements.
+So far, for three-particle systems, entangled measurements have only been performed on Greenberger–Horne–Zeilinger (GHZ) states, in which the qubits (quantum bits) are in a superposition of all being in one state and all being in the other. Until now, no one had carried out an entangled measurement for a more complicated set of states known as W states, which do not share this all-or-nothing property. In their experiment, the researchers at Kyoto University and Hiroshima University specifically used the simplest type of W state, made up of three photons, where each photon’s polarization (horizontal or vertical) is represented by one qubit.
+“In a GHZ state, if you measure one qubit, the whole superposition collapses. But in a W state, even if you measure one particle, entanglement still remains,” explains Shigeki Takeuchi, corresponding author of the paper describing the study. This robustness makes the W state particularly appealing for quantum technologies.
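+For concreteness, the two families can be written in their standard three-qubit forms (textbook definitions, not notation specific to this experiment):
+\begin{align}
+|\mathrm{GHZ}\rangle &= \tfrac{1}{\sqrt{2}}\bigl(|000\rangle + |111\rangle\bigr),\\
+|W\rangle &= \tfrac{1}{\sqrt{3}}\bigl(|001\rangle + |010\rangle + |100\rangle\bigr).
+\end{align}
+Discarding one qubit of the GHZ state destroys the superposition entirely, whereas the two remaining qubits of a W state are still entangled – the robustness Takeuchi describes.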
+The team took advantage of the fact that different W states look almost identical but differ by a tiny phase shift, which acts as a hidden label that distinguishes one state from another. Using a tool called a discrete Fourier transform (DFT) circuit, the researchers were able to “decode” this phase and tell the states apart.
+The DFT exploits a special type of symmetry inherent to W states. Since the method relies on symmetry, in principle it can be extended to systems containing any number of photons. The researchers prepared photons in controlled polarization states and ran them through the DFT, which provided each state’s phase label. The photons were then sent through polarizing beam splitters that separated them into vertically and horizontally polarized groups. By counting both sets of photons, and combining this with information from the DFT, the team could identify the W state.
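+One way to picture the hidden label is as a relative phase between the three single-photon terms. With ω = e^{2πi/3}, an orthonormal set of W-like states can be written as below – an illustrative choice of basis, since the exact states used in the experiment may be defined differently:
+\begin{equation}
+|W_k\rangle = \tfrac{1}{\sqrt{3}}\bigl(|100\rangle + \omega^{k}|010\rangle + \omega^{2k}|001\rangle\bigr),\qquad k = 0,1,2 .
+\end{equation}
+In a mode-space DFT, the phase label k shows up in the pattern of output ports from which the photons emerge, which – combined with the polarization counts – is what identifies the state.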
+The experiment identified the correct W state about 87% of the time, well above the 15% success rate typically achieved using tomography-based measurements. Maintaining this level of performance was a challenge, as tiny fluctuations in optical paths or photon loss can easily destroy the fragile interference pattern. The fact that the team could maintain stable performance long enough to collect statistically reliable data marks an important technical milestone.
+“Our device is not just a single-shot measurement: it works with 100% efficiency,” Takeuchi adds. “Most linear optical protocols are probabilistic, but here the success probability is unity.” Although demonstrated with three photons, this procedure is directly scalable to larger systems, as the key insight is the symmetry that the DFT can detect.
+“In terms of applications, quantum communication seems the most promising,” says Takeuchi. “Because our device is highly efficient, our protocol could be used for robust communication between quantum computer chips. The next step is to build all of this on a tiny photonic chip, which would reduce errors and photon loss and help make this technology practical for real quantum computers and communication networks.”
+The post Physicists achieve first entangled measurement of W states appeared first on Physics World.
+]]>The post Physicists apply quantum squeezing to a nanoparticle for the first time appeared first on Physics World.
+]]>Oscillating objects that are smaller than a few microns in diameter have applications in many areas of quantum technology. These include optical clocks and superconducting devices as well as quantum sensors. Such objects are small enough to be affected by Heisenberg’s uncertainty principle, which places a limit on how precisely we can simultaneously measure the position and momentum of a quantum object. More specifically, the product of the measurement uncertainties in the position and momentum of such an object must be greater than or equal to ħ/2, where ħ is the reduced Planck constant.
+In these circumstances, the only way to decrease the uncertainty in one variable – for example, the position – is to boost the uncertainty in the other. This process has no classical equivalent and is called squeezing because reducing uncertainty along one axis of position-momentum space creates a “bulge” in the other, like squeezing a balloon.
+In the new work, which is detailed in Science, a team led by Kiyotaka Aikawa studied a single, charge-neutral nanoparticle levitating in a periodic intensity pattern formed by the interference of criss-crossed laser beams. Such patterns are known as optical lattices, and they are ideal for testing the quantum mechanical behaviour of small-scale objects because they can levitate the object. This keeps it isolated from other particles and allows it to sustain its fragile quantum state.
+After levitating the particle and cooling it to its motional ground state, the team rapidly varied the intensity of the lattice laser. This had the effect of changing the particle’s oscillation frequency, which in turn changed the uncertainty in its momentum. To measure this change (and prove they had demonstrated quantum squeezing), the researchers then released the nanoparticle from the trap and let it propagate for a short time before measuring its velocity. By repeating these time-of-flight measurements many times, they were able to obtain the particle’s velocity distribution.
+The telltale sign of quantum squeezing, the physicists say, is that the velocity distribution they measured for the nanoparticle was narrower than the uncertainty in velocity for the nanoparticle at its lowest energy level. Indeed, the measured velocity variance was narrower than that of the ground state by 4.9 dB, which they say is comparable to the largest mechanical quantum squeezing obtained thus far.
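+Expressed as a ratio, the quoted figure corresponds to the measured velocity variance being roughly a third of the ground-state (zero-point) value:
+\begin{equation}
+10\log_{10}\!\left(\frac{\langle\Delta v^2\rangle_{\rm squeezed}}{\langle\Delta v^2\rangle_{\rm ground}}\right) = -4.9\ \mathrm{dB}
+\quad\Longrightarrow\quad
+\frac{\langle\Delta v^2\rangle_{\rm squeezed}}{\langle\Delta v^2\rangle_{\rm ground}} = 10^{-0.49}\approx 0.32,
+\end{equation}
+with the conjugate position variance correspondingly enlarged so that Heisenberg’s bound is still respected.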
+“Our system will enable us to realize further exotic quantum states of motions and to elucidate how quantum mechanics should behave at macroscopic scales and become classical,” Aikawa tells Physics World. “This could allow us to develop new kinds of quantum devices in the future.”
+The post Physicists apply quantum squeezing to a nanoparticle for the first time appeared first on Physics World.
+]]>The post Theoretical physicist Michael Berry wins 2025 Isaac Newton Medal and Prize appeared first on Physics World.
+]]>The theoretical physicist Michael Berry from the University of Bristol has won the 2025 Isaac Newton Medal and Prize for his “profound contributions across mathematical and theoretical physics in a career spanning over 60 years”. Presented by the Institute of Physics (IOP), which publishes Physics World, the international award is given annually for “world-leading contributions to physics by an individual of any nationality”.
+Born in 1941 in Surrey, UK, Berry earned a BSc in physics from the University of Exeter in 1962 and a PhD from the University of St Andrews in 1965. He then moved to Bristol, where he has remained for the rest of his career.
+Berry is best known for his work in the 1980s in which he showed that, under certain conditions, quantum systems can acquire what is known as a geometric phase. He was studying quantum systems in which the Hamiltonian describing the system is slowly changed so that it eventually returns to its initial form.
+Berry showed that the adiabatic theorem widely used to describe such systems was incomplete and that a system acquires a phase factor that depends on the path followed, but not on the rate at which the Hamiltonian is changed. This geometric phase factor is now known as the Berry phase.
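+In modern notation, the phase that a state |n(R)⟩ picks up when its parameters R are carried slowly around a closed loop C is
+\begin{equation}
+\gamma_n(C) = i\oint_C \langle n(\mathbf{R})|\nabla_{\mathbf{R}}|n(\mathbf{R})\rangle\cdot\mathrm{d}\mathbf{R},
+\end{equation}
+which depends on the geometry of the loop but not on how slowly it is traversed.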
+Over his career, Berry has written some 500 papers across a wide number of topics. In physics, Berry’s ideas have applications in condensed matter, quantum information and high-energy physics, as well as optics, nonlinear dynamics, and atomic and molecular physics. In mathematics, meanwhile, his work forms the basis for research in analysis, geometry and number theory.
+Berry told Physics World that the award is “unexpected recognition for six decades of obsessive scribbling…creating physics by seeking ‘claritons’ – elementary particles of sudden understanding – and evading ‘anticlaritons’ that annihilate them” as well as “getting insights into nature’s physics” such as studying tidal bores, tsunamis, rainbows and “polarised light in the blue sky”.
+Over the years, Berry has won a wide number of other honours, including the IOP’s Dirac Medal and the Royal Medal from the Royal Society, both awarded in 1990. He was also given the Wolf Prize for Physics in 1998 and the 2014 Lorentz Medal from the Royal Netherlands Academy of Arts and Sciences. In 1996 he received a knighthood for his services to science.
+Berry will also be a speaker at the IOP’s International Year of Quantum celebrations on 4 November.
+Berry’s latest honour forms part of the IOP’s wider 2025 awards, which recognize everyone from early-career scientists and teachers to technicians and subject specialists. Other winners include Julia Yeomans, who receives the Dirac Medal and Prize for her work highlighting the relevance of active physics to living matter.
+Lok Yiu Wu, meanwhile, receives the Jocelyn Bell Burnell Medal and Prize for her work on the development of a novel magnetic radical filter device, and for ongoing support of women and underrepresented groups in physics.
+In a statement, IOP president Michele Dougherty congratulated all the winners. “It is becoming more obvious that the opportunities generated by a career in physics are many and varied – and the potential our science has to transform our society and economy in the modern world is huge,” says Dougherty. “I hope our winners appreciate they are playing an important role in this community, and know how proud we are to celebrate their successes.”
+The full list of 2025 award winners is available here.
+The post Theoretical physicist Michael Berry wins 2025 Isaac Newton Medal and Prize appeared first on Physics World.
+]]>The post Phase shift in optical cavities could detect low-frequency gravitational waves appeared first on Physics World.
+]]>GWs were first observed a decade ago and since then the LIGO–Virgo–KAGRA detectors have spotted GWs from hundreds of merging black holes and neutron stars. These detectors work in the 10 Hz–30 kHz range. Researchers have also had some success at observing a GW background at nanohertz frequencies using pulsar timing arrays.
+However, GWs have yet to be detected in the milli-Hz band, which should include signals from binary systems of white dwarfs, neutron stars, and stellar-mass black holes. Many of these signals would emanate from the Milky Way.
+Several projects are now in the works to explore these frequencies, including the space-based interferometers LISA, Taiji, and TianQin; as well as satellite-borne networks of ultra-precise optical clocks. However, these projects are still some years away.
+Joining these efforts was a collaboration called QSNET, which was part of the UK’s Quantum Technology for Fundamental Physics (QTFP) programme. “The QSNET project was a network of clocks for measuring the stability of fundamental constants,” explains Giovanni Barontini at the University of Birmingham. “This programme brought together physics communities that normally don’t interact, such as quantum physicists, technologists, high energy physicists, and astrophysicists.”
+QTFP ended this year, but not before Barontini and colleagues had made important strides in demonstrating how milli-Hz GWs could be detected using optical cavities.
+Inside an ultrastable optical cavity, light at specific resonant frequencies bounces constantly between a pair of opposing mirrors. When this resonant light is produced by a specific atomic transition, the frequency of the light in the cavity is very precise and can act as the ticking of an extremely stable clock.
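As a rough illustration of this resonance condition, the sketch below computes the free spectral range and nearest resonant mode for a simple two-mirror cavity; the cavity length and laser frequency are assumed, illustrative values rather than parameters from the QSNET work.

```python
# Minimal sketch of the cavity resonance condition described above.
# The cavity length and laser frequency are assumed, illustrative values.
c = 299_792_458.0            # speed of light (m/s)
L = 0.30                     # assumed cavity length (m)

fsr = c / (2 * L)            # free spectral range: spacing between resonant frequencies
f_laser = 4.3e14             # assumed optical frequency (Hz), roughly visible light

n = round(f_laser / fsr)     # integer number of half-waves that fit between the mirrors
f_res = n * fsr              # nearest cavity resonance to the laser frequency

print(f"free spectral range: {fsr / 1e6:.1f} MHz")
print(f"mode number n = {n}, resonance at {f_res / 1e12:.4f} THz")
```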
+“Ultrastable cavities are a main component of modern optical atomic clocks,” Barontini explains. “We demonstrated that they have reached sufficient sensitivities to be used as ‘mini-LIGOs’ and detect gravitational waves.”
+When such a GW passes through an optical cavity, the spacing between its mirrors does not change in any detectable way. However, QSNET results have led Barontini’s team to conclude that milli-Hz GWs alter the phase of the light inside the cavity. What is more, they conclude that this effect would be detectable in the most precise optical cavities currently available.
+“Methods from precision measurement with cold atoms can be transferred to gravitational-wave detection,” explains team member Vera Guarrera. “By combining these toolsets, compact optical resonators emerge as credible probes in the milli-Hz band, complementing existing approaches.”
+Their compact detector would comprise two optical cavities at 90° to each other – each operating at a different frequency – and an atomic reference at a third frequency. The phase shift caused by a passing gravitational wave is revealed in a change in how the three frequencies interfere with each other. The team proposes linking multiple detectors to create a global, ground-based network. This, they say, could detect a GW and also locate the position of its source in the sky.
+By harnessing this existing technology, the researchers now hope that future studies could open up a new era of discovery of GWs in the milli-Hz range, far sooner than many projects currently in development.
+“This detector will allow us to test astrophysical models of binary systems in our galaxy, explore the mergers of massive black holes, and even search for stochastic backgrounds from the early universe,” says team member Xavier Calmet at the University of Sussex. “With this method, we have the tools to start probing these signals from the ground, opening the path for future space missions.”
+Barontini adds, “Hopefully this work will inspire the build-up of a global network of sensors that will scan the skies in a new frequency window that promises to be rich in sources – including many from our own galaxy.”
+The research is described in Classical and Quantum Gravity.
++
The post Phase shift in optical cavities could detect low-frequency gravitational waves appeared first on Physics World.
+]]>The post The physics behind why cutting onions makes us cry appeared first on Physics World.
+]]>While it is known that a volatile chemical released from the onion – propanethial S-oxide – irritates the nerves in the cornea to produce tears, how such chemical-laden droplets reach the eyes, and whether they are influenced by the knife or cutting technique, remains less clear.
+To investigate, Sunghwan Jung from Cornell University and colleagues built a guillotine-like apparatus and used high-speed video to observe the droplets released from onions as they were cut by steel blades.
+ +“No one had visualized or quantified this process,” Jung told Physics World. “That curiosity led us to explore the mechanics of droplet ejection during onion cutting using high-speed imaging and strain mapping.”
+They found that droplets, which can reach up to 60 cm high, were released in two stages – the first being a fast mist-like outburst that was followed by threads of liquid fragmenting into many droplets.
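A quick back-of-the-envelope check on that 60 cm figure: assuming simple drag-free projectile motion (a crude approximation for such small droplets), the minimum launch speed works out to a few metres per second.

```python
# Rough estimate of the launch speed implied by the ~60 cm droplet height
# quoted above, assuming drag-free projectile motion (a crude lower bound).
import math

g = 9.81                     # gravitational acceleration (m/s^2)
h = 0.60                     # quoted maximum droplet height (m)

v = math.sqrt(2 * g * h)     # from energy conservation: v^2 = 2 g h
print(f"minimum launch speed ≈ {v:.1f} m/s")   # ≈ 3.4 m/s
```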
+The most energetic droplets were released during the initial contact between the blade and the onion’s skin.
+When they began varying the sharpness of the blade and the cutting speed, they discovered that a greater number of droplets were released by blunter blades and faster cutting speeds.
+“That was even more surprising,” notes Jung. “Blunter blades and faster cuts – up to 40 m/s – produced significantly more droplets with higher kinetic energy.”
+Another surprise was that refrigerating the onions prior to cutting also produced an increased number of droplets of similar velocity, compared to unchilled vegetables.
+So if you want to reduce the chances of welling up when making dinner, sharpen your knives, cut slowly and perhaps don’t keep the bulbs in the fridge.
+The researchers say there are many more layers to the work and now plan to study how different onion varieties respond to cutting as well as how cutting could influence the spread of airborne pathogens such as salmonella.
+The post The physics behind why cutting onions makes us cry appeared first on Physics World.
+]]>The post Motion blur brings a counterintuitive advantage for high-resolution imaging appeared first on Physics World.
+]]>
Images captured by moving cameras are usually blurred, but researchers at Brown University in the US have found a way to sharpen them up using a new deconvolution algorithm. The technique could allow ordinary cameras to produce gigapixel-quality photos, with applications in biological imaging and archival/preservation work.
+“We were interested in the limits of computational photography,” says team co-leader Rashid Zia, “and we recognized that there should be a way to decode the higher-resolution information that motion encodes onto a camera image.”
+Conventional techniques to reconstruct high-resolution images from low-resolution ones involve relating low-res to high-res via a mathematical model of the imaging process. The effectiveness of these techniques is limited, however, as they produce only relatively small increases in resolution. If the initial image is blurred due to camera motion, this also limits the maximum resolution possible.
+Together with Pedro Felzenszwalb of Brown’s computer science department, Zia and colleagues overcame these problems, successfully reconstructing a high-resolution image from one or several low-resolution images produced by a moving camera. The algorithm they developed to do this takes the “tracks” left by light sources as the camera moves and uses them to pinpoint precisely where the fine details must have been located. It then reconstructs these details on a finer, sub-pixel grid.
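The sketch below is a generic, one-dimensional illustration of this principle – combining several low-resolution frames taken at known sub-pixel offsets via a least-squares solve on a finer grid – rather than the Brown team’s actual deconvolution algorithm; the forward model and all parameters are assumptions.

```python
# Minimal 1-D sketch: several low-resolution frames, each taken at a known
# sub-pixel camera shift, are combined by least squares to recover detail
# on a finer grid. A generic forward model, not the Brown team's algorithm.
import numpy as np

rng = np.random.default_rng(0)
f = 4                                     # high-res pixels per low-res pixel
n_low, n_high = 16, 16 * f

# A smooth toy "scene" on the fine grid (real scenes are not white noise).
x_true = np.convolve(rng.random(n_high), np.ones(7) / 7, mode="same")

def forward(shift):
    """One low-res frame: shift the scene by `shift` fine pixels (camera
    motion), then average every f fine pixels into one coarse pixel."""
    A = np.zeros((n_low, n_high))
    for i in range(n_low):
        A[i, (i * f + shift + np.arange(f)) % n_high] = 1.0 / f
    return A

shifts = [0, 1, 2, 3]                                   # known sub-pixel shifts
A = np.vstack([forward(s) for s in shifts])
y = A @ x_true + 1e-3 * rng.standard_normal(len(A))     # noisy low-res data

x_rec, *_ = np.linalg.lstsq(A, y, rcond=None)           # fine-grid reconstruction
naive = np.repeat(y[:n_low], f)                         # one frame, simply upsampled

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"multi-frame reconstruction error: {err(x_rec):.3f}")
print(f"naive single-frame upsampling error: {err(naive):.3f}")
```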
+ +“There was some prior theoretical work that suggested this shouldn’t be possible,” says Felzenszwalb. “But we show that there were a few assumptions in those earlier theories that turned out not to be true. And so this is a proof of concept that we really can recover more information by using motion.”
+When they tried the algorithm out, they found that it could indeed exploit the camera motion to produce images with much higher resolution than those without the motion. In one experiment, they used a standard camera to capture a series of images in a grid of high-resolution (sub-pixel) locations. In another, they took one or more images while the sensor was moving. They also simulated recording single images or sequences of pictures while vibrating the sensor and while moving it along a linear path. These scenarios, they note, could be applicable to aerial or satellite imaging. In both, they used their algorithm to construct a single high-resolution image from the shots captured by the camera.
+“Our results are especially interesting for applications where one wants high resolution over a relatively large field of view,” Zia says. “This is important at many scales from microscopy to satellite imaging. Other areas that could benefit are super-resolution archival photography of artworks or artifacts and photography from moving aircraft.”
+ +The researchers say they are now looking into the mathematical limits of this approach as well as practical demonstrations. “In particular, we hope to soon share results from consumer camera and mobile phone experiments as well as lab-specific setups using scientific-grade CCDs and thermal focal plane arrays,” Zia tells Physics World.
+“While there are existing systems that cameras use to take motion blur out of photos, no one has tried to use that to actually increase resolution,” says Felzenszwalb. “We’ve shown that’s something you could definitely do.”
+The researchers presented their study at the International Conference on Computational Photography and their work is also available on the arXiv pre-print server.
+The post Motion blur brings a counterintuitive advantage for high-resolution imaging appeared first on Physics World.
+]]>The post Hints of a boundary between phases of nuclear matter found at RHIC appeared first on Physics World.
+]]>Team member Frank Geurts at Rice University in the US tells Physics World that these findings could confirm that the “generic physics properties of phase diagrams that we know for many chemical substances apply to our most fundamental understanding of nuclear matter, too.”
+A phase diagram maps how a substance transforms between solid, liquid, and gas. For everyday materials like water, the diagram is familiar, but the behaviour of nuclear matter under extreme heat and pressure remains a mystery.
+Atomic nuclei are made of protons and neutrons tightly bound together. These protons and neutrons are themselves made of quarks that are held together by gluons. When nuclei are smashed together at high energies, the protons and neutrons “melt” into a fluid of quarks and gluons called a quark–gluon plasma. This exotic high-temperature state is thought to have filled the universe just microseconds after the Big Bang.
+The quark–gluon plasma is studied by accelerating heavy ions like gold nuclei to nearly the speed of light and smashing them together. “The advantage of using heavy-ion collisions in colliders such as RHIC is that we can repeat the experiment many millions, if not billions, of times,” Geurts explains.
+By adjusting the collision energy, researchers can control the temperature and density of the fleeting quark–gluon plasma they create. This allows physicists to explore the transition between ordinary nuclear matter and the quark–gluon plasma. Within this transition, theory predicts the existence of a critical point where gradual change becomes abrupt.
+Now, the STAR Collaboration has focused on measuring the minute fluctuations in the number of protons produced in each collision. These “proton cumulants,” says Geurts, are statistical quantities that “help quantify the shape of a distribution – here, the distribution of the number of protons that we measure”.
+In simple terms, the first two cumulants correspond to the average and width of that distribution, while higher-order cumulants describe its asymmetry and sharpness. Ratios of these cumulants are tied to fundamental properties known as susceptibilities, which become highly sensitive near a critical point.
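As a concrete illustration, the snippet below computes the first four cumulants and their ratios for a toy sample of event-by-event proton counts; the Poisson toy data stand in for real STAR measurements, and for a Poisson baseline all three ratios come out close to one.

```python
# Minimal sketch of the cumulants and ratios described above, computed for
# a toy event-by-event proton-number sample (not STAR data).
import numpy as np

rng = np.random.default_rng(1)
protons = rng.poisson(lam=12.0, size=100_000)   # toy proton counts per event

d = protons - protons.mean()
C1 = protons.mean()              # mean of the distribution
C2 = np.mean(d**2)               # variance (width)
C3 = np.mean(d**3)               # related to skewness (asymmetry)
C4 = np.mean(d**4) - 3 * C2**2   # related to kurtosis (sharpness of the peak)

# Ratios such as C2/C1, C3/C2 and C4/C2 cancel trivial volume effects and are
# the quantities compared with susceptibility ratios from theory.
print(C2 / C1, C3 / C2, C4 / C2)   # for a Poisson distribution all three ≈ 1
```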
+Over three years of experiments, the STAR team studied gold–gold collisions at a wide range of energies, using sophisticated detectors to track and identify the protons and antiprotons created in each event. By comparing how the number of these particles changed with energy, the researchers discovered something unexpected.
+As the collision energy decreased, the fluctuations in proton numbers did not follow a smooth trend. “STAR observed what it calls non-monotonic behaviour,” Geurts explains. “While at higher energies the ratios appear to be suppressed, STAR observes an enhancement at lower energies.” Such irregular changes, he said, are consistent with what might happen if the collisions pass near the critical point — the boundary separating different phases of nuclear matter.
+For Volodymyr Vovchenko, a physicist at the University of Houston who was not involved in the research, the new measurements represent “a major step forward”. He says that “the STAR Collaboration has delivered the most precise proton-fluctuation data to date across several collision energies”.
+Still, interpretation remains delicate. The corrections required to extract pure physical signals from the raw data are complex, and theoretical calculations lag behind in providing precise predictions for what should happen near the critical point.
+“The necessary experimental corrections are intricate,” Vovchenko said, and some theoretical models “do not yet implement these corrections in a fully consistent way.” That mismatch, he cautions, “can blur apples-to-apples comparisons.”
+The STAR team is now studying new data from lower-energy collisions, focusing on the range where the signal appears strongest. The results could reveal whether the observed pattern marks the presence of a nuclear matter critical point or stems from more conventional effects.
+Meanwhile, theorists are racing to catch up. “The ball now moves largely to theory’s court,” Vovchenko says. He emphasizes the need for “quantitative predictions across energies and cumulants of various order that are appropriate for apples-to-apples comparisons with these data.”
+ +Future experiments, including RHIC’s fixed-target program and new facilities such as the FAIR accelerator in Germany, will extend the search even further. By probing lower energies and producing vastly larger datasets, they aim to map the transition between ordinary nuclear matter and quark–gluon plasma with unprecedented precision.
+Whether or not the critical point is finally revealed, the new data are a milestone in the exploration of the strong force and the early universe. As Geurts put it, these findings trace “landmark properties of the most fundamental phase diagram of nuclear matter,” bringing physicists one step closer to charting how everything – from protons to stars – first came to be.
+The research is described in Physical Review Letters.
+The post Hints of a boundary between phases of nuclear matter found at RHIC appeared first on Physics World.
+]]>The post From quantum curiosity to quantum computers: the 2025 Nobel Prize for Physics appeared first on Physics World.
+]]>That circuit was a superconducting device called a Josephson junction and their work in the 1980s led to the development of some of today’s most promising technologies for quantum computers.
+To chat about this year’s laureates, and the wide-reaching scientific and technological consequences of their work, I am joined by Ilana Wisby – a quantum physicist, deep-tech entrepreneur and former CEO of UK-based Oxford Quantum Circuits. We chat about the trio’s breakthrough and its influence on today’s quantum science and technology.
+
This podcast is supported by American Elements, the world’s leading manufacturer of engineered and advanced materials. The company’s ability to scale laboratory breakthroughs to industrial production has contributed to many of the most significant technological advancements since 1990 – including LED lighting, smartphones, and electric vehicles.
The post From quantum curiosity to quantum computers: the 2025 Nobel Prize for Physics appeared first on Physics World.
+]]>The post The power of physics: what can a physicist do in the nuclear energy industry? appeared first on Physics World.
+]]>The UK currently has nine operational reactors across five power stations, which together provided 12% of the country’s electricity in 2024. But the government wants that figure to reach 25% by 2050 as part of its goal to move away from fossil fuels and reach net zero. Some also think that nuclear energy will be vital for powering data centres for AI in a clean and efficient way.
+While many see fusion as the future of nuclear power, it is still in the research and development stages, so fission remains where most job opportunities lie. Although eight of the current fleet of nuclear reactors are to be retired by the end of this decade, the first of the next generation are already under construction. At Hinkley Point C in Somerset, two new reactors are being built with costs estimated to reach £46bn; and in July 2025, Sizewell C in Suffolk got the final go-ahead.
+Rolls-Royce, meanwhile, has just won a government-funded bid to develop small modular reactors (SMRs) in the UK. Although SMRs are currently an unproven technology, the hope is that they will be cheaper and quicker to build than traditional plants, with proponents saying that each reactor could produce enough affordable emission-free energy to power about 600,000 homes for at least 60 years.
+The renaissance of the nuclear power industry has led to employment in the sector growing by 35% between 2021 and 2024, with the workforce reaching over 85,000. However – as highlighted in a 2025 members survey by the Nuclear Institute – there are concerns about a skills shortage. In fact, the Nuclear Skills Delivery Group set out the Nuclear Skills Plan in 2024 with the aim of addressing this problem.
+Supported by an investment of £763m by 2030 from the UK government and industry, the plan’s objectives include quadrupling the number of PhDs in nuclear fission, and doubling the number of graduates entering the workforce. It also aims to provide opportunities for people to “upskill” and join the sector mid-career. The overall hope is to fill 40,000 new jobs by the end of the decade.
+Having a degree in physics can open the door to any part of the nuclear-energy industry, from designing, operating or decommissioning a reactor, to training staff, overseeing safety or working as a consultant. We talk to six nuclear experts who all studied physics at university but now work across the sector, for a range of companies – including EDF Energy and Great British Energy–Nuclear. They give a quick snapshot of their “nuclear journeys”, and offer advice to those thinking of following in their footsteps.
+Michael Hodgson, lead engineer, Rolls-Royce SMR
+My interest in nuclear power started when I did a project on energy at secondary school. I learnt that there were significant challenges around the world’s future energy demands, resource security, and need for clean generation. Although at the time these were not topics commonly talked about, I could see they were vital to work on, and thought nuclear would play an important role.
+I went on to study physics at the University of Surrey, with a year at Michigan State University in the US and another at CERN. After working for a couple of years, I returned to Surrey to do a part-time masters in radiation detection and instrumentation, followed a few years later by a PhD in radiation-hard semiconductor neutron detectors.
+Up until recently, my professional work has mainly been in the supply chain for nuclear applications, working for Thermo Fisher Scientific, Centronic and Exosens. Nuclear power isn’t made by one company, it’s a combination of thousands of suppliers and sub-suppliers, the majority of which are small to medium-sized enterprises that need to operate across multiple industries. My job was primarily a technical design authority for manufacturers of radiation detectors and instruments, used in applications such as reactor power monitoring, health physics, industrial controls, and laboratory equipment, to name but a few. Now I work at Rolls-Royce SMR as a lead engineer for the control and instrumentation team. This role involves selecting and qualifying the thousands of different detectors and control instruments that will support the operation of small modular reactors.
+++Logical, evidence-based problem solving is the cornerstone of science and a powerful tool in any work setting
+
Beyond the technical knowledge I’ve gained throughout my education, studying physics has also given me two important skills. Firstly, learning how to learn – this is critical in academia but it also helps you step into any professional role. The second skill is the logical, evidence-based problem solving that is the cornerstone of science, which is a powerful tool in any work setting.
+A career in nuclear energy can take many forms. The industry comprises a range of sectors and thousands of organizations that together form a complex support structure. My advice for any role is that knowledge is important, but experience is critical. While studying, try to look for opportunities to gain professional experience – this may be industry placements, research projects, or even volunteering. And it doesn’t have to be in your specific area of interest – cross-disciplinary experience breeds novel thinking. Utilizing these opportunities can guide your professional interests, set your CV apart from your peers, and bring pragmatism to your future roles.
+Katie Barber, nuclear reactor operator and simulator instructor at Sizewell B, EDF
+I studied physics at the University of Leicester simply because it was a subject I enjoyed – at the time I had no idea what I wanted to do for a career. I first became interested in nuclear energy when I was looking for graduate jobs. The British Energy (now EDF) graduate scheme caught my eye because it offered a good balance of training and on-the-job experience. I was able to spend time in multiple different departments at different power stations before I decided which career path was right for me.
+At the end of my graduate scheme, I worked in nuclear safety for several years. This involved reactor physics testing and advising on safety issues concerning the core and fuel. It was during that time I became interested in the operational response to faults. I therefore applied for the company’s reactor operator training programme – a two-year course that was a mixture of classroom and simulator training. I really enjoyed being a reactor operator, particularly during outages when the plant would be shut down, cooled, depressurised and disassembled for refuelling before reversing the process to start up again. But after almost 10 years in the control room, I wanted a new challenge.
+Now I develop and deliver the training for the control-room teams. My job, which includes simulator and classroom training, covers everything from operator fundamentals (such as reactor physics and thermodynamics) and normal operations (e.g. start up and shutdown), through to accident scenarios.
+My background in physics gives me a solid foundation for understanding the reactor physics and thermodynamics of the plant. However, there are also a lot of softer skills essential for my role. Teaching others requires the ability to present and explain technical material; to facilitate a constructive debrief after a simulator scenario; and to deliver effective coaching and feedback. The training focuses as much on human performance as it does technical knowledge, highlighting the importance of effective teamwork, error prevention and clear communications.
+++A graduate training scheme is an excellent way to get an overview of the business, and gain experience across many different departments and disciplines
+
With Hinkley Point C construction progressing well and the recent final investment decision for Sizewell C, now is an exciting time to join the nuclear industry. A graduate training scheme is an excellent way to get an overview of the business, and gain experience across many different departments and disciplines, before making the decision about which area is right for you.
+Jacob Plummer, principal nuclear safety inspector, Office for Nuclear Regulation
+I’d been generally interested in nuclear science throughout my undergraduate physics degree at the University of Manchester, but this really accelerated after studying modules in applied nuclear and reactor physics. The topic was engaging, and the nuclear industry offered a way to explore real-world implementation of physics concepts. This led me to do a masters in nuclear science and technology, also at Manchester (under the Nuclear Technology Education Consortium), to develop the skills the UK nuclear sector required.
+My first job was as a graduate nuclear safety engineer at Atkins (now AtkinsRealis), an engineering consultancy. It opened my eyes to the breadth of physics-related opportunities in the industry. I worked on new and operational power station projects for Hitachi-GE and EDF, as well as a variety of defence new-build projects. I primarily worked in hazard analysis, using modelling and simulation tools to generate evidence on topics like fire, blast and flooding to support safety case claims and inform reactor designs. I was also able to gain experience in project management, business development, and other energy projects, such as offshore wind farms. The analytical and problem solving skills I had developed during my physics studies really helped me to adapt to all of these roles.
+Currently I work as a principal nuclear safety inspector at the Office for Nuclear Regulation. My role is quite varied. Day to day I might be assessing safety case submissions from a prospective reactor vendor; planning and delivering inspections at fuel and waste sites; or managing fire research projects as part of an international programme. A physics background helps me to understand complex safety arguments and how they link to technical evidence; and to make reasoned and logical regulatory judgements as a result.
+++Physics skills and experience are valued across the nuclear industry, from hazards and fault assessment to security, safeguards, project management and more
+
It’s a great time to join the nuclear industry with a huge amount of activity and investment across the nuclear lifecycle. I’d advise early-career professionals to cast the net wide when looking for roles. There are some obvious physics-related areas such as health physics, fuel and core design, and criticality safety, but physics skills and experience are valued across the nuclear industry, from hazards and fault assessment to security, safeguards, project management and more. Don’t be limited by the physicist label.
+Becky Houghton, principal consultant, Galson Sciences Ltd
+My interest in a career in nuclear energy sparked mid-way through my degree in physics and mathematics at the University of Sheffield, when I was researching “safer nuclear power” for an essay. Several rabbit holes later, I had discovered a myriad of opportunities in the sector that would allow me to use the skills and knowledge I’d gained through my degree in an industrial setting.
+My first job in the field was as a technical support advisor on a graduate training scheme, where I supported plant operations on a nuclear licensed site. Next, I did a stint working in strategy development and delivery across the back end of the fuel cycle, before moving into consultancy. I now work as a principal consultant for Galson Sciences Ltd, part of the Egis group. Egis is an international multi-disciplinary consulting and engineering firm, within which Galson Sciences provides specialist nuclear decommissioning and waste management consultancy services to nuclear sector clients worldwide.
+Ultimately, my role boils down to providing strategic and technical support to help clients make decisions. My focus these days tends to be around radioactive waste management, which can mean anything from analysing radioactive waste inventories to assessing the environmental safety of disposal facilities.
+In terms of technical skills needed for the role, data analysis and the ability to provide high-quality reports on time and within budget are at the top of the list. Physics-wise, an understanding of radioactive decay, criticality mechanisms and the physico-chemical properties of different isotopes are fairly fundamental requirements. Meanwhile, as a consultant, some of the most important soft skills are being able to lead, teach and mentor less experienced colleagues; develop and maintain strong client relationships; and look after the well-being and deployment of my staff.
+++Whichever part of the nuclear fuel cycle you end up in, the work you do makes a difference
+
+My advice to anyone looking to go into the nuclear energy industry is to go for it. There are lots of really interesting things happening right now across the industry, all the way from building new reactors and operating the current fleet, to decommissioning, site remediation and waste management activities. Whichever part of the nuclear fuel cycle you end up in, the work you do makes a difference, whether that’s by cleaning up the legacy of years gone by or by helping to meet the UK’s energy demands. Don’t be afraid to say “yes” to opportunities even if they’re outside your comfort zone, keep learning, and keep being curious about the world around you.
+Mark Savage, nuclear licensing manager, Urenco UK
+As a child, I remember going to the visitors’ centre at the Sellafield nuclear site – a large nuclear facility in the north-west of England that’s now the subject of a major clean-up and decommissioning operation. At the centre, there was a show about splitting the atom that really sparked my interest in physics and nuclear energy.
+I went on to study physics at Durham University, and did two summer placements at Sellafield, working with radiometric instruments. I feel these placements helped me get a place on the Rolls-Royce nuclear engineering graduate scheme after university. From there I joined Urenco, an international supplier of uranium enrichment services and fuel cycle products for the civil nuclear industry.
+While at Urenco, I have undertaken a range of interesting roles in nuclear safety and radiation physics, including criticality safety assessment and safety case management. Highlights have included being the licensing manager for a project looking to deploy a high-temperature gas-cooled reactor design, and presenting a paper at a nuclear industry conference in Japan. These roles have allowed me to directly apply my physics background – such as using Monte Carlo radiation transport codes to model nuclear systems and radiation sources – as well as develop broader knowledge and skills in safety, engineering and project management.
+My current role is nuclear licensing manager at the Capenhurst site in Cheshire, where we operate a number of nuclear facilities including three uranium enrichment plants, a uranium chemical deconversion facility, and waste management facilities. I lead a team who ensure the site complies with regulations, and achieves the required approvals for our programme of activities. Key skills for this role include building relationships with internal and external stakeholders; being able to understand and explain complex technical issues to a range of audiences; and planning programmes of work.
+++I would always recommend anyone interested in working in nuclear energy to look for work experience
+
Some form of relevant experience is always advantageous, so I would always recommend anyone interested in working in nuclear energy to look for work experience visits, summer placements or degree schemes that include working with industry.
+
Saralyn Thomas, skills lead, Great British Energy – Nuclear
+During my physics degree at the University of Bristol, my interest in energy led me to write a dissertation on nuclear power. This inspired me to do a masters in nuclear science and technology at the University of Manchester under the Nuclear Technology Education Consortium. The course opened doors for me, such as a summer placement with the UK National Nuclear Laboratory, and my first role as a junior safety consultant with Orano.
+I worked in nuclear safety for roughly 10 years, progressing to principal consultant with Abbott Risk Consulting, but decided that this wasn’t where my strengths and passions lay. During my career, I volunteered for the Nuclear Institute (NI), and worked with the society’s young members group – the Young Generation Network (YGN). I ended up becoming chair of the YGN and a trustee of the NI, which involved supporting skills initiatives including those feeding into the Nuclear Skills Plan. Having a strategic view of the sector and helping to solve its skills challenges energized me in a new way, so I chose to change career paths and moved to Great British Energy – Nuclear (GBE-N) as skills lead. In this role I plan for what skills the business and wider sector will need for a nuclear new build programme, as well as develop interventions to address skills gaps.
+GBE-N’s current remit is to deliver Europe’s first fleet of small modular reactors, but there is relatively limited experience of building this technology. Problem-solving skills from my background in physics have been essential to understanding what assumptions we can put in place at this early stage, learning from other nuclear new builds and major infrastructure projects, to help set us up for the future.
+++The UK’s nuclear sector is seeing significant government commitment, but there is a major skills gap
+
To anyone interested in nuclear energy, my advice is to get involved now. The UK’s nuclear sector is seeing significant government commitment, but there is a major skills gap. Nuclear offers a lifelong career with challenging, complex projects – ideal for physicists who enjoy solving problems and making a difference.
++
The post The power of physics: what can a physicist do in the nuclear energy industry? appeared first on Physics World.
+]]>The post A record-breaking anisotropic van der Waals crystal? appeared first on Physics World.
+]]>However, recent research has shown that this is not the case for all materials. In some cases, their optical permittivity is directional. This is commonly known as in-plane optical anisotropy. A larger difference between optical permittivity in different directions means a larger anisotropy.
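As a simple illustration of what in-plane anisotropy means quantitatively, the sketch below converts two different in-plane permittivities into a birefringence; the numbers are placeholders, not the measured Ta2NiSe5 values.

```python
# Illustrative sketch: in-plane optical anisotropy expressed as the
# birefringence between two crystal axes. The permittivities below are
# placeholders, not the measured Ta2NiSe5 data.
eps_xx = 12.0                  # assumed relative permittivity along the atomic chains
eps_yy = 6.0                   # assumed relative permittivity across the chains

n_x = eps_xx ** 0.5            # refractive index along x (lossless approximation)
n_y = eps_yy ** 0.5            # refractive index along y
delta_n = n_x - n_y            # in-plane birefringence

print(f"n_x = {n_x:.2f}, n_y = {n_y:.2f}, birefringence Δn = {delta_n:.2f}")
```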
+Materials with very large anisotropies have applications in a wide range of fields from photonics and electronics to medical imaging. However, for most materials available today, the value remains relatively low.
+These potential applications, combined with the current limitation, have driven a large amount of research into novel anisotropic materials.
+In this latest work, a team of researchers studied the quasi-one-dimensional van der Waals crystal: Ta2NiSe5.
+Van der Waals (vdW) crystals are made up of chains, ribbons, or layers of atoms that stick together through weak van der Waals forces.
+In quasi-one-dimensional vdW crystals, the atoms are strongly connected along one direction, while the connections in the other directions are much weaker, making their properties very direction-dependent.
+This structure makes quasi-one-dimensional vdW crystals a good place to search for large optical anisotropy values. The researchers studied the new crystal using a range of measurement techniques, such as ellipsometry and spectroscopy, as well as state-of-the-art first-principles computer simulations.
+The results show that Ta2NiSe5 has a record-breaking in-plane optical anisotropy across the visible to infrared spectral region, representing the highest value reported among van der Waals materials to date.
+The study therefore has large implications for next-generation devices in photonics and beyond.
+Giant in-plane anisotropy in novel quasi-one-dimensional van der Waals crystal – IOPscience
+Zhou et al. 2025 Rep. Prog. Phys. 88 050502
++
+
The post A record-breaking anisotropic van der Waals crystal? appeared first on Physics World.
+]]>The post Unlocking the limits of quantum security appeared first on Physics World.
+]]>A bipartite quantum state is a system shared between two parties (often called Alice and Bob) that may exhibit entanglement. If they successfully distil a secret key, they can encrypt and decrypt messages securely, using the key like a shared password known only to them.
+To achieve this, Alice and Bob use point-to-point quantum channels and perform local operations, meaning each can only manipulate their own part of the system. They also rely on one-way classical communication, where Alice sends messages to Bob, but Bob cannot reply. This constraint reflects realistic limitations in quantum networks and helps researchers identify the minimum requirements for secure key generation.
+This paper investigates how many secret bits can be extracted under these conditions. The authors introduce a resource-theoretic framework based on unextendible entanglement, a form of entanglement that cannot be shared with additional parties. This framework allows them to derive efficiently computable upper bounds on secret-key rates, helping determine how much security is achievable with limited resources.
+Their results apply to both one-shot scenarios, where the quantum system is used only once, and asymptotic regimes, where the same system is used repeatedly and statistical patterns emerge. Notably, they extend their approach to quantum channels assisted by forward classical communication, resolving a long-standing open problem about the one-shot forward-assisted private capacity.
+Finally, they show that error rates in private communication can decrease exponentially with repeated channel use, offering a scalable and practical path toward building secure quantum messaging systems.
+Extendibility limits quantum-secured communication and key distillation
+Vishal Singh and Mark M Wilde 2025 Rep. Prog. Phys. 88 067601
++
Distribution of entanglement in large-scale quantum networks by S Perseguers, G J Lapeyre Jr, D Cavalcanti, M Lewenstein and A Acín (2013)
+The post Unlocking the limits of quantum security appeared first on Physics World.
+]]>The post Optical gyroscope detects Earth’s rotation with the highest precision yet appeared first on Physics World.
+]]>The Earth rotates once every day, but there are tiny fluctuations, or wobbles, in its axis of rotation. These fluctuations are caused by several factors, including the gravitational forces of the Moon and Sun and, to a lesser extent, the neighbouring planets in our Solar System. Other, smaller fluctuations stem from the exchange of momentum between the solid Earth and the oceans, atmosphere and ice sheets. The Earth’s shape, which is not a perfect sphere but is flattened at the poles and thickened at the equator, also contributes to the wobble.
+These different types of fluctuations produce effects known as precession and nutation that cause the extension of the Earth’s axis to trace a wrinkly circle in the sky. At the moment, this extended axis is aligned precisely with the North Star. In the future, it will align with other stars before returning to the North Star again in a cycle that lasts 26,000 years.
+ +Most studies of the Earth’s rotation involve combining data from many sources. These sources include very long baseline radio-astronomy observations of quasars; global satellite navigation systems (GNSS); and GNSS observations combined with satellite laser ranging (SLR) and Doppler orbitography and radiopositioning integrated by satellite (DORIS). These techniques are based on measuring the travel time of light, and because it is difficult to combine them, only one such measurement can be made per day.
+The new gyroscope, which is detailed in Science Advances, is an optical interferometer that operates using the Sagnac effect. At its heart is an optical cavity that guides a light beam around a square path 16 m long. Depending on the rate of rotation it experiences, this cavity selects two different frequencies from the beam to be coherently amplified. “The two frequencies chosen are the only ones that have an integer number of waves around the cavity,” explains team leader Ulrich Schreiber of the Technische Universität München (TUM). “And because of the finite velocity of light, the co-rotating beam ‘sees’ a slightly larger cavity, while the anti-rotating beam ‘sees’ a slightly shorter one.”
+The frequency shift in the interference pattern produced by the co-rotating beam is projected onto an external detector and is strictly proportional to the Earth’s rotation rate. Because the accuracy of the measurement depends, in part, on the mechanical stability of the set-up, the researchers constructed their gyroscope from a glass ceramic that does not expand much with temperature. They also set it up horizontally in an underground laboratory, the Geodetic Observatory Wettzell in southern Bavaria, to protect it as much as possible from external vibrations.
+The instrument can sense the Earth’s rotation to within an accuracy of 48 parts per billion (ppb), which corresponds to picoradians per second. “This is about a factor of 100 better than any other rotation sensor,” says Schreiber, “and, importantly, is less than an order of magnitude away from the regime in which relativistic effects can be measured – but we are not quite there yet.”
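For scale, the sketch below applies the standard Sagnac formula for a ring laser to a horizontal 4 m × 4 m square cavity (16 m beam path, as described above); the laser wavelength and site latitude are assumed values, since neither is given here.

```python
# Minimal sketch of the Sagnac scaling for a horizontal square ring cavity
# with a 16 m beam path (4 m sides). Wavelength and latitude are assumed
# values (HeNe laser, approximate Wettzell latitude), not from the article.
import math

side = 4.0                         # m, giving the 16 m perimeter quoted above
P = 4 * side                       # beam path length (m)
A = side ** 2                      # enclosed area (m^2)
lam = 632.8e-9                     # assumed laser wavelength (m)
omega_earth = 7.2921e-5            # Earth's rotation rate (rad/s)
latitude = math.radians(49.1)      # assumed latitude of the Wettzell site

# Sagnac beat frequency between the co- and counter-rotating beams:
delta_f = (4 * A / (lam * P)) * omega_earth * math.sin(latitude)
print(f"Sagnac beat frequency ≈ {delta_f:.0f} Hz")     # of order a few hundred Hz

# A 48 ppb measurement of the rotation rate corresponds to resolving changes
# of roughly 48e-9 * delta_f in this beat frequency.
print(f"48 ppb of that beat ≈ {48e-9 * delta_f * 1e6:.1f} µHz")
```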
+ +An increase in the measurement accuracy and stability of the ring laser by a factor of 10 would, Schreiber adds, allow the researchers to measure the space-time distortion caused by the Earth’s rotation. For example, it would permit them to conduct a direct test for the Lense-Thirring effect — that is, the “dragging” of space by the Earth’s rotation – right at the Earth’s surface.
+To reach this goal, the researchers say they would need to amend several details of their sensor design. One example is the composition of the thin-film coatings on the mirrors inside their optical interferometer. “This is neither easy nor straightforward,” explains Schreiber, “but we have some ideas to try out and hope to progress here in the near future.
+“In the meantime, we are working towards implementing our measurements into a routine evaluation procedure,” he tells Physics World.
+The post Optical gyroscope detects Earth’s rotation with the highest precision yet appeared first on Physics World.
+]]>The post Susumu Kitagawa, Richard Robson and Omar Yaghi win the 2025 Nobel Prize for Chemistry appeared first on Physics World.
+]]>The award includes a SEK 11m prize ($1.2m), which is shared equally by the winners. The prize will be presented at a ceremony in Stockholm on 10 December.
+The prize was announced this morning by members of the Royal Swedish Academy of Science. Speaking on the phone during the press conference, Kitagawa noted that he was “deeply honoured and delighted” that his research had been recognized.
+Beginning in the late 1980s and for the next couple of decades, the trio, who are all trained chemists, developed a new form of molecular architecture in which metal ions function as cornerstones linked by long organic carbon-based molecules.
+Together, the metal ions and molecules form crystals that contain large cavities through which gases and other chemicals can flow.
+“It’s a little like Hermione’s handbag – small on the outside, but very large on the inside,” noted Heiner Linke, chair of the Nobel Committee for Chemistry.
+Yet the trio had to overcome several challenges before the materials could be used, such as making them stable and flexible, which Kitagawa noted “was very tough”.
+These porous materials are now called metal-organic frameworks (MOFs). By varying the building blocks used in the MOFs, researchers can design them to capture and store specific substances as well as drive chemical reactions or conduct electricity.
+ +“Metal-organic frameworks have enormous potential, bringing previously unforeseen opportunities for custom-made materials with new functions,” added Linke.
+Following the laureates’ work, chemists have built tens of thousands of different MOFs.
+3D MOFs are an important class of materials that could be used in applications as diverse as sensing, gas storage, catalysis and optoelectronics.
+MOFs are now able to capture water from air in the desert, sequester carbon dioxide from industry effluents, store hydrogen gas, recover rare-earth metals from waste, break down oil contamination as well as extract “forever chemicals” such as PFAS from water.
+“My dream is to capture air and to separate air into CO2, oxygen and water and convert them to usable materials using renewable energy,” noted Kitagawa.
+Their 2D versions might even be used as flexible material platforms to realize exotic quantum phases, such as topological and anomalous quantum Hall insulators.
+Kitagawa was born in 1951 in Kyoto, Japan. He obtained a PhD from Kyoto University, Japan, in 1979 and then held positions at Kindai University before joining Tokyo Metropolitan University in 1992. He then joined Kyoto University in 1998 where he is currently based.
+Robson was born in 1937 in Glusburn, UK. He obtained a PhD from the University of Oxford in 1962. After postdoc positions at the California Institute of Technology and Stanford University, in 1966 he moved to the University of Melbourne, where he remained for the rest of his career.
+Yaghi was born in 1965 in Amman, Jordan. He obtained a PhD from the University of Illinois Urbana-Champaign, US, in 1990. He then held positions at Arizona State University, the University of Michigan and the University of California, Los Angeles, before joining the University of California, Berkeley, in 2012, where he is currently based.
+The post Susumu Kitagawa, Richard Robson and Omar Yaghi win the 2025 Nobel Prize for Chemistry appeared first on Physics World.
+]]>The post Machine learning optimizes nanoparticle design for drug delivery to the brain appeared first on Physics World.
+]]>The work focuses on nanoparticles that can cross the BBB and provide a promising platform for enhancing drug transport into the brain. But designing specific nanoparticles to target specific brain regions is a complex and time-consuming task; there’s a need for improved design frameworks to identify potential candidates with desirable bioactivity profiles. For this, the team – comprising researchers from the University of the Basque Country (UPV/EHU) in Spain and Tulane University in the USA, led by the multicentre CHEMIF.PTML Lab – turned to machine learning.
+Machine learning uses molecular and clinical data to detect trends that may lead to novel drug delivery strategies with improved efficiency and reduced side effects. In contrast to slow and costly trial-and-error or physical modelling approaches, machine learning could provide efficient initial screening of large combinations of nanoparticle compositions. Traditional machine learning, however, can be hindered by the lack of suitable data sets.
+To address this limitation, the CHEMIF.PTML Lab team developed the IFE.PTML method – an approach that integrates information fusion, Python-based encoding and perturbation theory with machine learning algorithms, describing the model in Machine Learning: Science and Technology.
+ +“The main advantage of our IFE.PTML method lies in its ability to handle heterogeneous nanoparticle data,” corresponding author Humberto González-Díaz explains. “Standard machine learning approaches often struggle with disperse and multi-source datasets from nanoparticle experiments. Our approach integrates information fusion to combine diverse data types – such as physicochemical properties, bioassays and so on – and applies perturbation theory to model these uncertainties as probabilistic perturbations around baseline conditions. This results in more robust, generalizable predictions of nanoparticle behaviour.”
+To build the predictive models, the researchers created a database containing physicochemical and bioactivity parameters for 45 different nanoparticle systems across 41 different cell lines. They used these data to train IFE.PTML models with three machine learning algorithms – random forest, extreme gradient boosting and decision tree – to predict the drug delivery behaviour of various nanomaterials. The random forest-based model showed the best overall performance, with accuracies of 95.1% and 89.7% on training and testing data sets, respectively.
+To illustrate the real-world applicability of the random forest-based IFE.PTML model, the researchers synthesized two novel magnetite nanoparticle systems (the 31 nm-diameter Fe3O4_A and the 26 nm-diameter Fe3O4_B). Magnetite-based nanoparticles are biocompatible, can be easily functionalized and have a high surface area-to-volume ratio, making them efficient drug carriers. To make them water-soluble, the nanoparticles were coated with either PMAO (poly(maleic anhydride-alt-1-octadecene)) or PMAO plus PEI (poly(ethyleneimine)).
+
The team characterized the structural, morphological and magnetic properties of the four nanoparticle systems and then used the optimized model to predict their likelihood of favourable bioactivity for drug delivery in various human brain cell lines, including models of neurodegenerative disease, brain tumour models and a cell line modelling the BBB.
+As inputs for their model, the researchers used a reference function based on the bioactivity parameters for each system, plus perturbation theory operators for various nanoparticle parameters. The IFE.PTML model calculated key bioactivity parameters, focusing on indicators of toxicity, efficacy and safety. These included the 50% cytotoxic, inhibitory, lethal and toxic concentrations (at which 50% of the biological effect is observed) and the zeta potential, which affects the nanoparticles’ capacity to cross the BBB. For each parameter, the model output a binary result: “0” for undesired and “1” for desired bioactivities.
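The snippet below is a heavily simplified sketch of this kind of modelling step – a random-forest classifier mapping nanoparticle descriptors to a binary desired/undesired label. The feature names, the toy labelling rule and the synthetic data are all placeholders, not the actual IFE.PTML descriptors or results.

```python
# Hedged sketch: a random-forest classifier trained to map nanoparticle/assay
# descriptors to a binary "desired (1) / undesired (0)" bioactivity label.
# Features and data are synthetic placeholders, not the IFE.PTML inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.normal(30, 10, n),      # hypothetical: core diameter (nm)
    rng.normal(0, 25, n),       # hypothetical: zeta potential (mV)
    rng.integers(0, 2, n),      # hypothetical: coating type (0 = PMAO, 1 = PMAO-PEI)
    rng.normal(0, 1, n),        # hypothetical: perturbation-theory operator
])
# Toy rule standing in for the unknown structure-activity relationship.
y = ((X[:, 1] < 0) & (X[:, 0] < 40)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, model.predict(X_te)))
```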
+The model identified PMAO-coated nanoparticles as the most promising candidates for BBB and neuronal applications, due to their potentially favourable stability and biocompatibility. Nanoparticles with PMAO-PEI coatings, on the other hand, could prove optimal for targeting brain tumour cells.
+ +The researchers point out that, where comparisons were possible, the trends predicted by the RF-IFE.PTML model agreed with the experimental findings, as well as with previous studies reported in the literature. As such, they conclude that their model is efficient and robust and offers valuable predictions on nanoparticle–coating combinations designed to act on specific targets.
+“The present study focused on the nanoparticles as potential drug carriers. Therefore, we are currently implementing a combined machine learning and deep learning methodology with potential drug candidates for neurodegenerative diseases,” González-Díaz tells Physics World.
+The post Machine learning optimizes nanoparticle design for drug delivery to the brain appeared first on Physics World.
+]]>The post Advances in quantum error correction showcased at Q2B25 appeared first on Physics World.
+]]>Among the quantum computing topics was quantum error correction (QEC) – something that will be essential for building tomorrow’s fault-tolerant machines. Indeed, it could even be the technology’s most important and immediate challenge, according to the speakers on the State of Quantum Error Correction Panel: Paul Hilaire of Telecom Paris/IP Paris, Michael Vasmer of Inria, Quandela’s Boris Bourdoncle, Riverlane’s Joan Camps and Christophe Vuillot from Alice & Bob.
+As was clear from the conference talks, quantum computers are undoubtedly advancing in leaps and bounds. One of their most important weak points, however, is that their fundamental building blocks (quantum bits, or qubits) are highly prone to errors. These errors are caused by interactions with the environment – also known as noise – and correcting them will require innovative software and hardware. Today’s machines are only capable of running on average a few hundred operations before an error occurs, but in the future we will need quantum computers capable of processing a million error-free quantum operations (known as a MegaQuOp) or even a trillion error-free operations (a TeraQuOp).
+QEC works by distributing one quantum bit of information – called a logical qubit – across several different physical qubits, such as superconducting circuits or trapped atoms. Each physical qubit is noisy, but they work together to preserve the quantum state of the logical qubit – at least for long enough to perform a calculation. It was Peter Shor who first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. A technique known as syndrome decoding is then used to diagnose which error was the likely source of corruption on an encoded state. The error can then be reversed by applying a corrective operation depending on the syndrome.
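As a classical toy model of this idea, the sketch below implements syndrome decoding for the three-bit repetition code – a much simpler relative of Shor’s nine-qubit code – and estimates the resulting logical error rate.

```python
# Classical toy version of syndrome decoding for the three-bit repetition
# code (a much simpler cousin of Shor's nine-qubit code).
import random

def encode(bit):
    return [bit, bit, bit]                 # one logical bit -> three physical bits

def noisy(bits, p):
    return [b ^ (random.random() < p) for b in bits]   # flip each bit with prob. p

def decode(bits):
    s1 = bits[0] ^ bits[1]                 # parity check on bits 0 and 1
    s2 = bits[1] ^ bits[2]                 # parity check on bits 1 and 2
    # The syndrome (s1, s2) points to the single most likely flipped bit.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get((s1, s2))
    if flip is not None:
        bits[flip] ^= 1                    # apply the corrective operation
    return bits[0]

p, trials = 0.05, 100_000
failures = sum(decode(noisy(encode(0), p)) != 0 for _ in range(trials))
print(f"physical error rate {p}, logical error rate ≈ {failures / trials:.4f}")
# ≈ 3p² for small p: errors are suppressed while p stays small enough.
```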
+
While error correction should become more effective as the number of physical qubits in a logical qubit increases, adding more physical qubits to a logical qubit also adds more noise. Much progress has been made in addressing this and other noise issues in recent years, however.
+“We can say there’s a ‘fight’ when increasing the length of a code,” explains Hilaire. “Doing so allows us to correct more errors, but we also introduce more sources of errors. The goal is thus being able to correct more errors than we introduce. What I like with this picture is the clear idea of the concept of a fault-tolerant threshold below which fault-tolerant quantum computing becomes feasible.”
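A back-of-the-envelope way to see this “fight” is the standard heuristic scaling for the logical error rate, p_L ≈ A(p/p_th)^((d+1)/2), sketched below with purely illustrative numbers rather than values for any particular code or hardware.

```python
# Illustrative sketch of the threshold behaviour described above: below an
# assumed threshold p_th the logical error rate falls as the code distance d
# grows; above it, adding qubits makes things worse. Constants are made up.
p_th = 0.01                                  # assumed threshold error rate

def logical_error(p, d, A=0.1):
    # Heuristic scaling: p_L ≈ A * (p / p_th) ** ((d + 1) // 2)
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (0.005, 0.015):                     # one point below, one above threshold
    rates = [f"d={d}: {logical_error(p, d):.2e}" for d in (3, 5, 7)]
    print(f"p = {p}: " + ", ".join(rates))
```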
+Speakers at the Q2B25 meeting shared a comprehensive overview of the most recent advancements in the field – and they are varied. First up, concatenated error-correction codes. Prevalent in the early days of QEC, these fell by the wayside in favour of codes like the surface code, but recent work has brought them back: concatenated codes can achieve constant encoding rates, and a concatenated-code quantum computer operating with linear, nearest-neighbour connectivity was recently put forward. Directional codes, the likes of which are being developed by Riverlane, are also being studied. These leverage native transmon qubit logic gates – for example, iSWAP gates – and could potentially outperform surface codes in some aspects.
+The panellists then described bivariate bicycle codes, being developed by IBM, which offer better encoding rates than surface codes. While their decoding can be challenging for real-time applications, IBM’s “relay belief propagation” (relay BP) has made progress here by simplifying decoding strategies that previously involved combining BP with post-processing. The good thing is that this decoder is actually very general and works for all “low-density parity-check codes” – one of the most studied classes of high-performance QEC codes, which also includes, for example, surface codes and directional codes.
+There is also renewed interest in decoders that can be parallelized and operate locally within a system, they said. These have shown promise for codes like the 1D repetition code, which could revive the concept of self-correcting or autonomous quantum memory. Another possibility is the increased use of the graphical language ZX calculus as a tool for optimizing QEC circuits and understanding spacetime error structures.
+The panel stressed that to achieve robust and reliable quantum systems, we will need to move beyond so-called hero experiments. For example, the demand for real-time decoding at megahertz frequencies with microsecond latencies is an important and unprecedented challenge. Indeed, breaking down the decoding problem into smaller, manageable bits has proven difficult so far.
+There are also issues with qubit platforms themselves that need to be addressed: trapped ions and neutral atoms allow for high fidelities and long coherence times, but they are roughly 1000 times slower than superconducting and photonic qubits and therefore require algorithmic or hardware speed-ups. And that is not all: solid-state qubits (such as superconducting and spin qubits) suffer from a “yield problem”, with dead qubits on manufactured chips. Improved fabrication methods will thus be crucial, said the panellists.
+
+
The discussions then moved towards the subject of collaboration between academia and industry. In the field of QEC, such collaboration is highly productive today, with joint PhD programmes and shared conferences like Q2B, for example. Large companies also now boast substantial R&D departments capable of funding high-risk, high-reward research, blurring the lines between fundamental and application-oriented research. Both sectors also use similar foundational mathematics and physics tools.
+At the moment there’s an unprecedented degree of openness and cooperation in the field. This situation might change, however, as commercial competition heats up, noted the panellists. In the future, for example, researchers from both sectors might be less inclined to share experimental chip details.
+Last, but certainly not least, the panellists stressed the urgent need for more PhDs trained in quantum mechanics to address the talent deficit in both academia and industry. So, if you were thinking of switching to another field, perhaps now could be the time to jump.
+The post Advances in quantum error correction showcased at Q2B25 appeared first on Physics World.
+]]>The post A low vibration wire scanner fork for free electron lasers appeared first on Physics World.
+]]>
UK-based firm UHV Design has developed a new high-performance wire scanner fork for measuring beam profiles in the latest generation of free electron lasers (FELs). Produced using technology licensed from the Paul Scherrer Institute (PSI) in Switzerland, the device could be customized for different FELs and low-emittance accelerators around the world. It builds on the company's PLSM range, which allows heavy objects to be moved very smoothly and with minimal vibrations.
+The project began 10 years ago when the PSI was starting to build the Swiss Free Electron Laser and equipping the facility, explains Jonty Eyres. The remit for UHV Design was to provide a stiff, very smooth, bellows sealed, ultra-high vacuum compatible linear actuator that could move a wire fork without vibrating it adversely. The fork, designed by PSI, can hold wires in two directions and can therefore scan the intensity of the beam profile in both X and Y planes using just one device as opposed to two or more as in previous such structures.
+“We decided to employ an industrial integrated ball screw and linear slide assembly with a very stiff frame around it, the construction of which provides the support and super smooth motion,” he says. “This type of structure is generally not used in the ultra-high vacuum industry.”
+The position of the wire fork is determined by a (radiation-hard) side-mounted linear optical encoder in conjunction with the PSI's own motor and gearbox assembly. A power-off brake is also incorporated to avoid any issues with back driving under vacuum load if electrical power were to be lost to the PLSM. All electrical connections were terminated with UTO-style connectors to PSI specification.
+Long-term reliability was important to avoid costly and unnecessary downtime, particularly between planned FEL maintenance shutdowns. The industrial ball screw and slide assembly was therefore a natural choice, used in conjunction with a bellows assembly rated for 500,000 cycles, with an option to increase this to 1 million cycles.
+Eyres and his UHV design team began by building a prototype that the PSI tested themselves with a high-speed camera. Once validated, the UHV engineers then built a batch of 20 identical units to prove that the device could be replicated in terms of constraints and tolerances.
+The real challenge in constructing this device, says Eyres, was minimizing the amount of vibration on the wire, which, for PSI, is typically between 5 and 25 microns thick. This is only possible if the vibration of the wire during a scan is small compared to the cross section of the wire – that is, about a micron for a 25-micron wire. "Otherwise, you are just measuring noise," explains Eyres. "The small vibration we achieved can be corrected for in calculations, so providing an accurate value for the beam profile intensity."
+UHV Design holds the intellectual property rights for the linear actuator and PSI those for the fork. Following the success of the project and a subsequent agreement between the two, it was recently agreed that UHV Design would buy the licence to promote the wire fork, allowing the company to sell the device or a version of it to any institution or company operating a FEL or low-emittance accelerator. "The device is customizable and can be adapted to different types of fork, wires, motors or encoders," says Eyres. "The heart of the design remains the same: a very stiff structure and its integrated ball screw and linear slide assembly. But, it can be tailored to meet the requirements of different beam lines in terms of stroke size, specific wiring and the components employed."
+UHV Design’s linear actuator was installed on the Swiss FEL in 2016 and has been performing very well since, says Eyres.
+A final and important point to note, he adds, is that UHV Design built an identical copy of the actuator when it took on the licence agreement, to prove that it could still reproduce the same performance. "We built an exact copy of the wire scanner, including the PSI fork assembly, and sent it to the PSI, who then used the very same high-speed camera rig that they'd employed in 2015 to directly compare the new actuator with the original ones supplied. They reported that the results were indeed comparable, meaning that if fitted to the Swiss FEL today, it would perform in the same way."
+For more information: https://www.uhvdesign.com/products/linear-actuators/wire-scanner/
+The post A low vibration wire scanner fork for free electron lasers appeared first on Physics World.
+]]>To learn about calendar-aging challenges in next-generation Si-based Li-ion batteries, and how to measure and improve calendar life
+The post Rapid calendar life screening of electrolytes for silicon anodes using voltage holds appeared first on Physics World.
+]]>
+Silicon-based lithium-ion batteries exhibit severe time-based degradation, resulting in poor calendar lives. In this webinar, we will talk about how calendar aging is measured, why traditional measurement approaches are time-intensive, and why new approaches are needed to optimize materials for next-generation silicon-based systems. Using this new approach, we also screen multiple new electrolyte systems that could improve calendar life in Si-containing batteries.
+An interactive Q&A session follows the presentation.
+ +
+Ankit Verma's expertise is in physics-based and data-driven modeling of lithium-ion and next-generation lithium-metal batteries. His interests lie in unraveling the coupled reaction-transport-mechanics behavior in these electrochemical systems with experiment-driven validation to provide predictive insights for practical advancements. Predominantly, he's working on improving the energy density and calendar life of silicon anodes as part of the Silicon Consortium Project, understanding solid-state battery limitations and upcycling end-of-life electrodes as part of the ReCell Center.
+Verma's past work includes the optimization of lithium-ion battery anodes and cathodes for high-power and fast-charge applications, and understanding electrodeposition stability in metal anodes.
++


The post Rapid calendar life screening of electrolytes for silicon anodes using voltage holds appeared first on Physics World.
+]]>The post John Clarke, Michel Devoret and John Martinis win the 2025 Nobel Prize for Physics appeared first on Physics World.
+]]>The award includes a SEK 11m prize ($1.2m), which is shared equally by the winners. The prize will be presented at a ceremony in Stockholm on 10 December.
+The prize was announced this morning by members of the Royal Swedish Academy of Sciences. Olle Eriksson of Uppsala University and chair of the Nobel Committee for Physics commented, "There is no advanced technology today that does not rely on quantum mechanics."
+Göran Johansson of Chalmers University of Technology explained that the three laureates took quantum tunnelling from the microscopic world and onto superconducting chips, allowing physicists to study quantum physics and ultimately create quantum computers.
+ +Speaking on the telephone, John Clarke said of his win, “To put it mildly, it was the surprise of my life,” adding “I am completely stunned. It had never occurred to me that this might be the basis of a Nobel prize.” On the significance of the trio’s research, Clarke said, “The basis of quantum computing relies to quite an extent on our discovery.”
+As well as acknowledging the contributions of Devoret and Martinis, Clarke also said that their research was made possible by Anthony Leggett and Brian Josephson, who laid the groundwork for their work on tunnelling in superconducting circuits. Leggett and Josephson are previous Nobel winners.
+As well as having scientific significance, the trio’s work has led to the development of nascent commercial quantum computers that employ superconducting circuits. Physicist and tech entrepreneur Ilana Wisby, who co-founded Oxford Quantum Circuits, told Physics World, “It’s such a brilliant and well-deserved recognition for the community”.
+Clarke was born in 1942 in Cambridge, UK. He received his BA in physics from the University of Cambridge in 1964 before completing a PhD there in 1968. He then moved to the University of California, Berkeley, to carry out a postdoc before joining the physics faculty in 1969, where he has remained since.
+Devoret was born in Paris, France in 1953. He graduated from Ecole Nationale Superieure des Telecommunications in Paris in 1975 before earning a PhD from the University of Paris, Orsay, in 1982. He then moved to the University of California, Berkeley, to work in Clarke’s group collaborating with Martinis who was a graduate student at the time. In 1984 Devoret returned to France to start his own research group at the Commissariat à l’Energie Atomique in Saclay (CEA-Saclay) before heading to the US to Yale University in 2002. In 2024 he moved to the University of California, Santa Barbara, and also became chief scientist at Google Quantum AI.
+Martinis was born in the US in 1958. He received a BS in physics in 1980 and a PhD in physics both from the University of California, Berkeley. He then carried out postdocs at CEA-Saclay, France, and the National Institute of Standards and Technology in Boulder, Colorado, before moving to the University of California, Santa Barbara, in 2004. In 2014 Martinis and his team joined Google with the aim of building the first useful quantum computer before he moved to Australia in 2020 to join the start-up Silicon Quantum Computing. In 2022 he co-founded the company Qolab, of which he is currently the chief technology officer.
+The trio did its prizewinning work in the mid-1980s at the University of California, Berkeley. At the time Devoret was a postdoctoral fellow and Martinis was a graduate student – both working for Clarke. They were looking for evidence of macroscopic quantum tunnelling (MQT) in a device called a Josephson junction. This comprises two pieces of superconductor that are separated by an insulating barrier. In 1962 the British physicist Brian Josephson predicted how the Cooper pairs of electrons that carry current in a superconductor can tunnel across the barrier unscathed. This Josephson effect was confirmed experimentally in 1963.
+The lowest-energy (ground) state of a superconductor is a macroscopic quantum state in which all Cooper pairs are described by a single quantum-mechanical wavefunction. In the late 1970s, the British–American physicist Anthony Leggett proposed that the tunnelling of this entire macroscopic state could be observed in a Josephson junction.
+The idea is to put the system into a metastable state in which electrical current flows without resistance across the junction – resulting in zero voltage across the junction. If the system is indeed a macroscopic quantum state, then it should be able to occasionally tunnel out of this metastable state, resulting in a voltage across the junction.
+This tunnelling can be observed by increasing the current through the junction and measuring the current at which a voltage occurs – obtaining an average value over many such measurements. As the temperature of the device is reduced, this average current increases – something that is expected regardless of whether the system is in a macroscopic quantum state.
+However, at very low temperatures the average current becomes independent of temperature, which is the signature of macroscopic quantum tunnelling that Martinis, Devoret and Clarke were seeking. Their challenge was to reduce the noise in their experimental apparatus, because noise has a similar effect to tunnelling on their measurements.
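+A textbook-level gloss (my own, not part of the Nobel citation) shows why temperature independence is the smoking gun. Escape from the metastable zero-voltage state by thermal activation over the barrier follows an Arrhenius law, whereas escape by quantum tunnelling does not depend on temperature, so the two regimes cross over at a temperature set by the junction's plasma frequency:

```latex
\Gamma_{\text{thermal}} \sim \frac{\omega_p}{2\pi}\, e^{-\Delta U/k_B T},
\qquad
\Gamma_{\text{quantum}} \sim \frac{\omega_p}{2\pi}\, e^{-a\,\Delta U/\hbar\omega_p},
\qquad
T_{\text{cross}} \approx \frac{\hbar\omega_p}{2\pi k_B},
```

+where ΔU is the barrier height, ω_p is the junction's plasma frequency and a is a numerical factor of order unity (about 7.2 for a cubic barrier at zero temperature). Below the crossover temperature the escape rate – and hence the average switching current – stops changing with temperature, which is precisely the plateau the trio observed.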
+As well as observing the signature of tunnelling, they were also able to show that the macroscopic quantum state exists in several different energy states. Such a multilevel system is essentially a macroscopic version of an atom or nucleus, with its own spectroscopic structure.
+The noise-control techniques developed by the trio to observe MQT and the fact that a Josephson junction can function as a macroscopic multilevel quantum system have led to the development of superconducting quantum bits (qubits) that form the basis of some nascent quantum computers.
+The post John Clarke, Michel Devoret and John Martinis win the 2025 Nobel Prize for Physics appeared first on Physics World.
+]]>The post Is materials science the new alchemy for the 21st century? appeared first on Physics World.
+]]>One area that never fails to fascinate me is the development of new and advanced materials. I’m not a materials scientist – my expertise lies in creating monitoring systems for engineering – so I apologize for any over-simplification in what follows. But I do want to give you a sense of just how impressive, challenging and rewarding the field of materials science is.
+ +It’s all too easy to take advanced materials for granted. We are in constant contact with them in everyday life, whether it’s through applications in healthcare, electronics and computing or energy, transport, construction and process engineering. But what are the most important materials innovations right now – and what kinds of novel materials can we expect in future?
+There are several – and all equally important – drivers when it comes to materials development. One is the desire to improve the performance of products we’re already familiar with. A second is the need to develop more sustainable materials, whether that means replacing less environmentally friendly solutions or enabling new technology. Third, there’s the drive for novel developments, which is where some of the most ground-breaking work is occurring.
+On the environmental front, we know that there are many products with components that could, in principle, be recycled. However, the reality is that many products end up in landfill because of how they’ve been constructed. I was recently reminded of this conundrum when I heard a research presentation about the difficulties of recycling solar panels.
+
Photovoltaic cells become increasingly inefficient with time and most solar panels aren’t expected to last more than about 30 years. Trouble is, solar panels are so robustly built that recycling them requires specialized equipment and processes. More often than not, solar panels just get thrown away despite mostly containing reusable materials such as glass, plastic and metals – including aluminium and silver.
+It seems ironic that solar panels, which enable sustainable living, could also contribute significantly to landfill. In fact, the problem could escalate rapidly if left unaddressed. There are already an estimated 1.8 million solar panels in use in the UK, and potentially billions around the world, with a rapidly increasing install base. Making solar panels more sustainable is surely a grand challenge in materials science.
+Another vital issue concerns our addiction to new tech, which means we rarely hang on to objects until the end of their life; I mean, who hasn’t been tempted by a shiny new smartphone even though the old one is perfectly adequate? That urge for new objects means we need more materials and designs that can be readily re-used or recycled, thereby reducing waste and resource depletion.
+As someone who works in the aerospace industry, I know first-hand how companies are trying to make planes more fuel efficient by developing composite materials that are stronger and can survive higher temperatures and pressures – for example carbon fibre and composite matrix ceramics. The industry also uses “additive manufacturing” to enable more intricate component design with less resultant waste.
+Plastics are another key area of development. Many products are made from single-type, recyclable materials, such as polyethylene or polypropylene, which benefit from being light, durable and capable of withstanding chemicals and heat. Trouble is, while polyethylene and polypropylene can be recycled, they both create the tiny "microplastics" that, as we know all too well, are not good news for the environment.
+
Bio-based materials are becoming more common for everyday items. Think about polylactic acid (PLA), which is a plant-based polymer derived from renewable resources such as cornstarch or sugar cane. Typically used for food or medical packaging, it’s usually said to be “compostable”, although this is a term we need to view with caution.
+Sadly, PLA does not degrade readily in natural environments or landfill. To break it down, you need high-temperature, high-moisture industrial composting facilities. So whilst PLA comes from natural plants, it is not straightforward to recycle, which is why single-use disposable items, such as plastic cutlery, drinking straws and plates, are no longer permitted to be made from it.
+Thankfully, we’re also seeing greater use of more sustainable, natural fibre composites, such as flax, hemp and bamboo (have you tried bamboo socks or cutlery?). All of which brings me to an interesting urban myth, which is that in 1941 legendary US car manufacturer Henry Ford built a car apparently made entirely of a plant-based plastic – dubbed the “soybean” car (see box).
+
+Henry Ford's 1941 "soybean" car, apparently built entirely of a plant-based plastic, was motivated by a need to make vehicles lighter (and therefore more fuel efficient), less reliant on steel (which was in high demand during the Second World War) and safer too. The exact ingredients of the plastic are, however, not known since no records were kept.
+Speculation is that it was a combination of soybeans, wheat, hemp, flax and ramie (a kind of flowering nettle). Lowell Overly, a Ford designer who had major involvement in creating the car, said it was “soybean fibre in a phenolic resin with formaldehyde used in the impregnation”. Despite being a mix of natural and synthetic materials – and not entirely made of soybeans – the car was nonetheless a significant advancement for the automotive industry more than eight decades ago.
+So what technology developments do we need to take materials to the next level? The key will be to avoid what I have coined the "solar-panel trap" and find materials that are sustainable from cradle to grave. We have to create an environmentally sustainable economic system that's based on the reuse and regeneration of materials or products – what some dub the "circular economy".
+Sustainable composites will be essential. We’ll need composites that can be easily separated, such as adhesives that dissolve in water or a specific solvent, so that we can cleanly, quickly and cheaply recover valuable materials from complex products. We’ll also need recycled composites, using recycled carbon fibre, or plastic combined with bio-based resins made from renewable sources like plant-based oils, starches and agricultural waste (rather than fossil fuels).
+Vital too will be eco-friendly composites that combine sustainable composite materials (such as natural fibres) with bio-based resins. In principle, these could be used to replace traditional composite materials and to reduce waste and environmental impact.
+Another important trend is developing novel metals and complex alloys. As well as enhancing traditional applications, these are addressing future requirements for what may become commonplace applications, such as wide-scale hydrogen manufacture, transportation and distribution.
+Then there are “soft composites”. These are advanced, often biocompatible materials that combine softer, rubbery polymers with reinforcing fibres or nanoparticles to create flexible, durable and functional materials that can be used for soft robotics, medical implants, prosthetics and wearable sensors. These materials can be engineered for properties like stretchability, self-healing, magnetic actuation and tissue integration, enabling innovative and patient-friendly healthcare solutions.
+
And have you heard of e-textiles, which integrate electronic components into everyday fabrics? These materials could be game-changing for healthcare applications by offering wearable, non-invasive monitoring of physiological information such as heart rate and respiration.
+Further applications could include advanced personal protective equipment (PPE), smart bandages and garments for long-term rehabilitation and remote patient care. Smart textiles could revolutionize medical diagnostics, therapy delivery and treatment by providing personalized digital healthcare solutions.
+I realize I have only scratched the surface of materials science – an amazing cauldron of ideas where physics, chemistry and engineering work hand in hand to deliver groundbreaking solutions. It's a hugely important discipline. With far greater success than the original alchemists, materials scientists are adept at creating the "new gold".
+Their discoveries and inventions are making major contributions to our planet's sustainable economy, from the design, deployment and decommissioning of everyday items to novel solutions that will positively impact the way we live today. Surely it's an area we should celebrate and, as physicists, become more closely involved in.
+The post Is materials science the new alchemy for the 21st century? appeared first on Physics World.
+]]>The post Perovskite detector could improve nuclear medicine imaging appeared first on Physics World.
+]]>Nuclear medicine imaging techniques like single-photon emission computed tomography (SPECT) work by detecting the gamma rays emitted by a short-lived radiotracer delivered to a specific part of a patient’s body. Each gamma ray can be thought of as being a pixel of light, and after millions of these pixels have been collected, a 3D image of the region of interest can be built up by an external detector.
+ +Such detectors are today made from either semiconductors like CZT or scintillators such as NaI:TI, CsI and LYSO, but CZT detectors are expensive – often costing hundreds of thousands to millions of dollars. CZT crystals are also brittle, making the detectors difficult to manufacture. While NaI is cheaper than CZT, detectors made of this material end up being bulky and generate blurrier images.
+To overcome these problems, researchers led by Mercouri Kanatzidis and Yihui He studied the lead halide perovskite crystal CsPbBr3. They already knew that this was an efficient solar cell material and recently, they discovered that it also showed promise for detecting X-rays and gamma rays.
+In the new work, detailed in Nature Communications, the team grew high-quality crystals of CsPbBr3 and fabricated them into detector devices. “When a gamma-ray photon enters the crystal, it interacts with the material and produces electron–hole pairs,” explains Kanatzidis. “These charge carriers are collected as an electrical signal that we can measure to determine both the energy of the photon and its point of interaction.”
+The researchers found that their detectors could resolve individual gamma rays at the energies used in SPECT imaging with high resolution. They could also sense extremely weak signals from the medical tracer technetium-99m, which is routinely employed in hospital settings. They were thus able to produce sharp images that could distinguish features as small as 3.2 mm. This fine sensitivity means that patients could undergo shorter scans or receive smaller doses of radiation than with NaI or CZT detectors.
+“Importantly, a parallel study published in Advanced Materials the same week as our Nature Communications paper directly compared perovskite performance with CZT, the only commercial semiconductor material available today for SPECT, which showed that perovskites can even surpass CZT in certain aspects,” says Kanatzidis.
+“The result was possible thanks to our efforts over the last 10 years in optimizing the crystal growth of CsPbBr3, improving the electrode contacts in the detectors and carrier transport and nuclear electronics therein,” adds He. “Since the first demonstration of high spectral resolution by CsPbBr3 in our previous work, it has gradually been recognized as the most promising competitor to CZT.”
+ +Looking forward, the Northwestern–Soochow team is now busy scaling up detector fabrication and improving its long-term stability. “We are also trying to better understand the fundamental physics of how gamma rays interact in perovskites, which could help optimize future materials,” says Kanatzidis. “A few years ago, we established a new company, Actinia, with the goal of commercializing this technology and moving it toward practical use in hospitals and clinics,” he tells Physics World.
+“High-quality nuclear medicine shouldn’t be limited to hospitals that can afford the most expensive equipment,” he says. “With perovskites, we can open the door to clearer, faster, safer scans for many more patients around the world. The ultimate goal is better scans, better diagnoses and better care for patients.”
+The post Perovskite detector could improve nuclear medicine imaging appeared first on Physics World.
+]]>The post Radioactive BEC could form a ‘superradiant neutrino laser’ appeared first on Physics World.
+]]>Neutrinos – the ghostly particles produced in beta decay – are notoriously difficult to detect or manipulate because of the weakness of their interaction with matter. They cannot be used to produce a conventional laser because they would pass straight through mirrors unimpeded. More fundamentally, neutrinos are fermions rather than bosons such as photons. This prevents neutrinos forming a two-level system with a population inversion as only one neutrino can occupy each quantum state in a system.
+However, another quantum phenomenon called superradiance can also increase the intensity and coherence of emitted radiation. This occurs when the emitters are sufficiently close together to become indistinguishable. The emission then comes not from any single entity but from the collective ensemble. As it does not require the emitted particles to be quantum degenerate, this is not theoretically forbidden for fermions. "There are devices that use superradiance to make light sources, and people call them superradiant lasers – although that's actually a misnomer," explains neutrino physicist Benjamin Jones of the University of Texas at Arlington and a visiting professor at the University of Manchester. "There's no stimulated emission."
+In their new work, Jones and colleague Joseph Formaggio of the Massachusetts Institute of Technology propose that, in a BEC of radioactive atoms, superradiance could enhance the neutrino emission rate and therefore speed up beta decay, with an initial burst before the expected exponential decay commences. "That has not been seen for nuclear systems so far – only for electronic ones," says Formaggio. Rubidium was used to produce the first ever condensate in 1995 by Carl Wieman and Eric Cornell of the University of Colorado Boulder, and conveniently, one of its isotopes decays by beta emission with a half-life of 86 days.
+The presence of additional hyperfine states would make direct laser cooling of rubidium-83 more challenging than for the rubidium-87 isotope used by Wieman and Cornell, but not significantly more so than the condensation of rubidium-85, which has also been achieved. Alternatively, the researchers propose that a dual condensate could be created in which rubidium-83 is cooled sympathetically with rubidium-87. The bigger challenge, says Jones, is the Bose–Einstein condensation of a radioactive atom, which has yet to be achieved: "It's difficult to handle in a vacuum system," he explains. "You have to be careful to make sure you don't contaminate your laboratory with radioactive vapour."
+If such a condensate were produced, the researchers predict that superradiance would increase with the size of the BEC. In a BEC of 106 atoms, for example, more than half the atoms would decay within three minutes. The researchers now hope to test this prediction. “This is one of those experiments that does not require a billion dollars to fund,” says Formaggio. “It is done in university laboratories. It’s a hard experiment but it’s not out of reach, and I’d love to see it done and be proven right or wrong.”
+If the prediction were proved correct, the researchers suggest it could eventually lead towards a benchtop neutrino source. As the same physics applies to neutrino capture, this could theoretically assist the detection of neutrinos that decoupled from the hot plasma of the universe just seconds after the Big Bang – hundreds of thousands of years before photons in the cosmic microwave background. The researchers emphasize, however, that this would not currently be feasible.
+Neutrino physicist Patrick Huber of Virginia Tech is impressed by the work. “I think for a first, theoretical study of the problem this is very good,” he says. “The quantum mechanics seems to be sound, so the question is if you try to build an experiment what kind of real-world obstacles are you going to encounter?” He predicts that, if the experiment works, other researchers would quite likely find hitherto unforeseen applications.
+ +Atomic, molecular and optical physicist James Thompson of University of Colorado Boulder is sceptical, however. He says several important aspects are either glossed over or simply ignored. Most notably, he calculates that the de Broglie wavelength of the neutrinos would be below the Bohr radius – which would prevent a BEC from feasibly satisfying the superradiance criterion that the atoms be indistinguishable.
+“I think it’s a really cool, creative idea to think about,” he concludes, “but I think there are things we’ve learned in atomic physics that haven’t really crept into [the neutrino physics] community yet. We learned them the hard way by building experiments, having them not work and then figuring out what it takes to make them work.”
+The proposal is described in Physical Review Letters.
+The post Radioactive BEC could form a ‘superradiant neutrino laser’ appeared first on Physics World.
+]]>The post Bayes’ rule goes quantum appeared first on Physics World.
+]]>Bayes’ rule is named after Thomas Bayes who first defined it for conditional probabilities in “An Essay Towards Solving a Problem in the Doctrine of Chances” in 1763. It describes the probability of an event based on prior knowledge of conditions that might be related to the event. One area in which it is routinely used is to update beliefs based on new evidence (data). In classical statistics, the rule can be derived from the principle of minimum change, meaning that the updated beliefs must be consistent with the new data while only minimally deviating from the previous belief.
+In mathematical terms, the principle of minimum change minimizes the distance between the joint probability distributions of the initial and updated belief. Simply put, this is the idea that for any new piece of information, beliefs are updated in the smallest possible way that is compatible with the new facts. For example, when a person tests positive for Covid-19, they may have suspected that they were ill, but the new information confirms this. Bayes' rule is therefore a way to calculate the probability of having contracted Covid-19 based not only on the test result and the chance of the test giving a false result, but also on the patient's initial suspicions.
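+A minimal numerical sketch of that update (the prevalence and test-accuracy figures below are illustrative assumptions, not values from the study) shows how the prior suspicion is combined with the test's likelihoods to give the updated, posterior probability of infection:

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(infected | positive test) via Bayes' rule."""
    p_pos_given_infected = sensitivity           # true-positive rate
    p_pos_given_healthy = false_positive_rate    # false-positive rate
    p_pos = prior * p_pos_given_infected + (1 - prior) * p_pos_given_healthy
    return prior * p_pos_given_infected / p_pos

# Illustrative numbers only: 10% prior suspicion, 90% sensitivity, 5% false positives
print(bayes_posterior(prior=0.10, sensitivity=0.90, false_positive_rate=0.05))
# -> about 0.67: the positive result raises the belief from 10% to roughly 67%
```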
+Quantum versions of Bayes’ rule have been around for decades, but the approach through the minimum change principle had not been tried before. In the new work, a team led by Ge Bai, Francesco Buscemi and Valerio Scarani set out to do just that.
+ +“We found which quantum Bayes’ rule is singled out when one maximizes the fidelity (which is equivalent to minimizing the change) between two processes,” explains Bai. “In many cases, the solution is the ‘Petz recovery map’, proposed by Dénes Petz in the 1980s and which was already considered as being one of the best candidates for the quantum Bayes’ rule. It is based on the rules of information processing, crucial not only for human reasoning, but also for machine learning models that update their parameters with new data.”
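+For readers who want to see the object under discussion, the Petz recovery map for a quantum channel E and a prior (reference) state σ is conventionally written as follows (the standard textbook form, quoted here for context rather than taken from the new paper):

```latex
\mathcal{R}_{\sigma,\mathcal{E}}(\rho)
  \;=\;
  \sigma^{1/2}\,
  \mathcal{E}^{\dagger}\!\left(
     \mathcal{E}(\sigma)^{-1/2}\,\rho\,\mathcal{E}(\sigma)^{-1/2}
  \right)
  \sigma^{1/2},
```

+where the dagger denotes the adjoint (Heisenberg-picture) map. When all the operators involved commute, this expression collapses to the familiar classical Bayes' rule for conditional probabilities.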
+Quantum theory is counter-intuitive, and the mathematics is hard, says Bai. "Our work provides a mathematically sound way to update knowledge about a quantum system, rigorously derived from simple principles of reasoning," he tells Physics World. "It demonstrates that the mathematical description of a quantum system – the density matrix – is not just a predictive tool, but is genuinely useful for representing our understanding of an underlying system. It effectively extends the concept of gaining knowledge, which mathematically corresponds to a change in probabilities, into the quantum realm."
+The “simple principles of reasoning” encompass the minimum change principle, adds Buscemi. “The idea is that while new data should lead us to update our opinion or belief about something, the change should be as small as possible, given the data received.
+“It’s a conservative stance of sorts: I’m willing to change my mind, but only by the amount necessary to accept the hard facts presented to me, no more.”
+“This is the simple (yet powerful) principle that Ge mentioned,” he says, “and it guides scientific inference by preventing unwanted biases from entering the reasoning process.”
+While several quantum versions of Bayes' rule have been put forward before now, these were mostly based on having properties analogous to the classical counterpart, adds Scarani. "Recently, Francesco and one co-author proposed an axiomatic approach to the most frequently used quantum Bayes rule, the one using the Petz recovery map. Our work is the first to derive a quantum Bayes rule from an optimization principle, which works very generally for classical information, but which has been used here for the first time in quantum information."
+The result is very intriguing, he says: "We recover the Petz map in many cases, but not all. If we take it that our new approach is the correct way to define a quantum Bayes rule, then previous constructions based on analogies were correct very often, but not quite always; and one or more of the axioms are not to be enforced after all. Our work is therefore a major advance, but it is not the end of the road – and this is nice."
+Indeed, the researchers say they are now busy further refining their quantum Bayes’ rule. They are also looking into applications for it. “Beyond machine learning, this rule could be powerful for inference—not just for predicting the future but also retrodicting the past,” says Bai. “This is directly applicable to problems in quantum communication, where one must recover encoded messages, and in quantum tomography, where the goal is to infer a system’s internal state from observations.
+“We will be using our results to develop new, hopefully more efficient, and mathematically well-founded methods for these tasks,” he concludes.
+The present study is detailed in Physical Review Letters.
+The post Bayes’ rule goes quantum appeared first on Physics World.
+]]>The post The top five physics Nobel prizes of the 21st century revealed appeared first on Physics World.
+]]>Quantum physics is our hot favourite this time round – it’s the International Year of Quantum Science and Technology and the Nobel Committee for Physics aren’t immune to wider events. But whoever wins, you know that the prize will have been very carefully considered by committee members.
+Over the 125 years since the prize was first awarded, almost every seminal finding in physics has been honoured – from the discovery of the electron, neutrino and positron to the development of quantum mechanics and the observation of high-temperature superconductivity.
+But what have been the most significant physics prizes of the 21st century? I’m including 2000 as part of this century (ignoring pedants who say it didn’t start till 1 January 2001). During that time, the Nobel Prize for Physics has been awarded 25 times and gone to 68 different people, averaging out at about 2.7 people per prize.
+Now, my choice is entirely subjective, but I reckon the most significant prizes are those that:
+So with that in mind, here’s my pick of the five top physics Nobel prizes of the 21st century. You’ll probably disagree violently with my choice so e-mail us with your thoughts.
+
+Coming in at number five in our list of top physics Nobels of the 21st century is the discovery of neutrino oscillation, which went to Takaaki Kajita and Art McDonald in 2015. The neutrino was first hypothesized by Wolfgang Pauli back in 1930 as "a desperate remedy" to the fact that energy didn't seem to be conserved when a nucleus emits an electron via beta decay. Fred Reines had won a Nobel prize in 1995 for the original discovery – made with the late Clyde Cowan – of neutrinos themselves, which are chargeless particles that interact with matter via the weak force and are fiendishly hard to detect.
+But what Kajita (at the Super-Kamiokande experiment in Japan) and McDonald (at the Sudbury Neutrino Observatory in Canada) had done was to see them switch, or "oscillate", from one type to another. Their work proved that these particles, which physicists had assumed to be massless, do have mass after all. This was at odds with the Standard Model of particle physics – and isn't it fun when physics upends conventional wisdom?
+What's more, the discovery of neutrino oscillation explained why Ray Davis and John Bahcall had seen only a third of the solar neutrinos predicted by theory in the famous experiment they proposed in 1964. This discrepancy arose because solar neutrinos oscillate between flavours as they travel to the Earth – and the experiment had detected only a third because it was sensitive mainly to electron neutrinos, not the other types.
+
At number four in our list of the best physics Nobel prizes of the 21st century is the 2001 award, which went to Eric Cornell, Wolfgang Ketterle and Carl Wieman for creating the first Bose–Einstein condensates (BECs). I love the idea that Cornell and Wieman created a new state of matter – in which particles are locked together in their lowest quantum state – at exactly 10.54 a.m. on Monday 5 June 1995 at the JILA laboratory in Boulder, Colorado.
+First envisaged by Satyendra Nath Bose and Albert Einstein in 1924, a BEC was finally realized when Cornell and Wieman cooled 2000 rubidium-87 atoms to 170 nK using the then new techniques of laser and evaporative cooling. Within a few months, Wolfgang Ketterle over at the Massachusetts Institute of Technology also made a BEC, from 500,000 sodium-23 atoms at 2 μK.
+Since then hundreds of groups around the world have created BECs, which have been used for everything from slowing light to making “atom lasers” and even modelling the behaviour of black holes. Moreover, the interactions between the atoms can be finely controlled, meaning BECs can be used to simulate properties of condensed-matter systems that are extremely difficult – or impossible – to probe in real materials.
+
Coming in at number three is the 2013 prize, which went to François Englert and the late Peter Higgs for discovering the mechanism by which subatomic particles get mass. Their work was confirmed in 2012 by the discovery of the so-called Higgs boson at the ATLAS and CMS experiments at CERN’s Large Hadron Collider.
+Higgs and Englert didn't, of course, win for detecting the Higgs boson, although the Nobel citation does credit the ATLAS and CMS teams. What they were being credited for was work done back in the early 1960s, when they published papers independently of each other that provided a mechanism by which particles can have the masses we observe.
+ +Higgs had been studying spontaneous symmetry breaking, which led to the notion of massless, force-carrying particles, known as Goldstone bosons. But what Higgs realized was that Goldstone bosons don’t necessarily occur when a symmetry is spontaneously broken – they could be reinterpreted as an additional quantum (polarization) state of a force-carrying particle.
+The leftover terms in the equations represented a massive particle – the Higgs boson – avoiding the need for a massless unobserved particle. Writing in his now-famous 1964 paper (Phys. Rev. Lett. 13 508), Higgs highlighted the possibility of a massive spin-zero boson, which is what was discovered at CERN in 2012.
+That work probably got more media attention than all Nobel prizes this century, because who doesn’t love a huge international collaboration tracking down a particle on the biggest physics experiment of all time? Especially as the Standard Model doesn’t predict what its mass should be so it’s hard to know where to look. But it doesn’t take top slot in my book because it “only” confirmed what we had expected and we’re still on the look-out for “new physics” beyond the Standard Model.
+
+Taking second place in our list is the discovery that the expansion of the universe is not slowing down – but accelerating – thanks to studies of exploding stars called supernovae. As with so many Nobel prizes these days, the 2011 award went to three people: Brian Schmidt, who led the High-Z Supernova Search Team, his colleague Adam Riess, and Saul Perlmutter, who led the rival Supernova Cosmology Project.
+Theirs was a pretty sensational finding that implied that about three-quarters of the mass–energy content of the universe must consist of some weird, gravitationally repulsive substance, dubbed “dark energy”, about which even now we still know virtually nothing. It had previously been assumed that the universe would – depending on how much matter it contains – either collapse eventually in a big crunch or go on expanding forever, albeit at an ever more gentle pace.
+The teams had been studying type Ia supernovae, which always blow up in the same way when they reach the same mass, meaning that they can be used as "standard candles" to accurately measure distance in the universe. Such supernovae are very rare and the two groups had to carry out painstaking surveys using ground-based telescopes and the Hubble Space Telescope to find enough of them.
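+As a back-of-the-envelope reminder of how a standard candle yields a distance (a standard astronomy relation, not spelled out in the article): because every type Ia supernova peaks at roughly the same intrinsic brightness, comparing its known absolute magnitude M with the measured apparent magnitude m fixes the luminosity distance d_L through

```latex
m - M \;=\; 5\,\log_{10}\!\left(\frac{d_L}{10\ \text{pc}}\right),
```

+so supernovae that appear fainter than expected must lie farther away than anticipated.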
+The teams thought they'd find that the expansion of the universe is decelerating, but as more and more data piled up, the results only appeared to make sense if the universe has a force pushing matter apart. The Royal Swedish Academy of Sciences said the discovery was "as significant" as the 2006 prize, which had gone to John Mather and the late George Smoot for their discovery in 1992 of the minute temperature variations in the cosmic microwave background – the fossil seeds of the large-scale structures in today's universe.
+But to me, the accelerating expansion has the edge as the implications are even more profound, pointing as they do to the composition and fate of the cosmos.
+
And finally, the winner of the greatest Nobel Prize for Physics of the 21st century is the 2017 award, which went to Barry Barish, Kip Thorne and the late Rainer Weiss for the discovery of gravitational waves. Not only is it the most recent prize on my list, it’s also memorable for being a genuine first – discovering the “ripples in space–time” originally predicted by Einstein. The two LIGO detectors in Livingston, Louisiana, and Hanford, Washington, are also astonishing feats of engineering, capable of detecting changes in distance tinier than the radius of the proton.
+The story of how gravitational waves were first observed is now well known. It was in the early hours of Monday 14 September 2015, just after staff who had been calibrating the LIGO detector in Livingston had gone to bed, that gravitational waves created by the collision of two black holes 1.3 billion light-years away hit the LIGO detectors in the US. The historic measurement, dubbed GW150914, hit the headlines around the world.
+ +More than 200 gravitational-wave events have so far been detected – and observing these ripples, which had long been on many physicists’ bucket lists, has over the last decade become almost routine. Most gravitational-wave detections have been binary black-hole mergers, though there have also been a few neutron-star/black-hole collisions and some binary neutron-star mergers too. Gravitational-wave astronomy is now a well-established field not just thanks to LIGO but also Virgo in Italy and KAGRA in Japan. There are also plans for an even more advanced Einstein Telescope, which could detect in a day what it took LIGO a decade to spot.
+Gravitational waves also opened the whole new field of “multimessenger astronomy” – the idea that you observe a cosmic event with gravitational waves and then do follow-up studies using other instruments, measuring it with cosmic rays, neutrinos and photons. Each of these cosmic messengers is produced by distinct processes and so carries information about different mechanisms within its source.
+The messengers also differ widely in how they carry this information to the astronomer: for example, gravitational waves and neutrinos can pass through matter and intergalactic magnetic fields, providing an unobstructed view of the universe at all wavelengths. Combining observations of different messengers will therefore let us see more and look further.
+The post The top five physics Nobel prizes of the 21st century revealed appeared first on Physics World.
+]]>The post ASTRO 2025: expanding the rules of radiation therapy appeared first on Physics World.
+]]>Yashar was speaking at a news briefing arranged to highlight a select few high-impact abstracts. And in accord with the ASTRO 2025 meeting’s theme of “rediscovering radiation medicine and exploring new indications”, the chosen presentations included examples of innovative techniques and less common indications, including radiotherapy treatments of non-malignant disease and a novel combination of external-beam radiation with radioligand therapy.
+Ventricular tachycardia (VT) is a life-threatening heart rhythm disorder that’s usually treated with medication, implantation of a cardiac device and then catheter ablation, an invasive procedure in which a long catheter is inserted via a leg vein into the heart to destroy abnormal cardiac tissue. A research team at Washington University School of Medicine has now shown that stereotactic arrhythmia radiation therapy (STAR) could provide an equally effective and potentially safer treatment alternative.
+
STAR works by delivering precision beams of radiation to the scarred tissue that drives the abnormal heart rhythm, without requiring invasive catheters or anaesthesia.
+“Over the past several years, STAR has emerged as a novel non-invasive treatment for patients with refractory VT,” said Shannon Jiang, who presented the team’s findings at ASTRO. “So far, there have been several single-arm studies showing promising results for STAR, but there are currently no data that directly compare STAR to catheter ablation, and that’s the goal for our study.”
+Jiang and colleagues retrospectively analysed data from 43 patients with recurrent refractory VT (which no longer responds to treatment). Patients were treated with either STAR or repeat catheter ablation at a single institution. The team found that both treatments were similarly effective at controlling arrhythmia, but patients receiving radiation had far fewer serious side effects.
+Within one year of the procedure, eight patients (38%) in the ablation group experienced treatment-related serious adverse events, compared with just two (9%) in the STAR group. These complications occurred sooner after ablation (median six days) than after radiation (10 months). In four cases, patients receiving ablation died within a month of treatment, soon after experiencing an adverse event, and one patient did not survive the procedure. In contrast, in the STAR group, there were no deaths attributed to treatment-related side effects. One year after treatment, overall survival was 73% following radiation and 58% after ablation; at three years (the median follow-up time), it was 45% in both groups.
+ +“Despite the fact that this is a retrospective, non-randomized analysis, our study provides some important preliminary data that support the use of STAR as a potentially safer and equally effective treatment option for patients with high-risk refractory VT,” Jiang concluded.
+Commenting on the study, Kenneth Rosenzweig from Icahn School of Medicine at Mount Sinai emphasizes that the vast majority of patients with VT will be well cared for by standard cardiac ablation, but that radiation can help in certain situations. “This study shows that for patients where the ablation just isn’t working anymore, there’s another option. Some patients will really need the help of radiation medicine to get them through, and work like this will help us figure out who those patients are and what we can do to improve their quality-of-life.”
+A clinical trial headed up at the University of California, Los Angeles, has shown that adding radioligand therapy to metastasis-directed radiation therapy more than doubles progression-free survival in men with oligometastatic prostate cancer, without increasing toxicity.
+“When we pair external-beam radiation directed to tumours we can see with a radiopharmaceutical to reach microscopic disease we can’t see, patients can experience a notably longer interval before progression,” explained principal investigator Amar Kishan.
+Patients with oligometastatic prostate cancer (up to five metastases outside the prostate after initial therapy) are increasingly treated with metastasis-directed stereotactic body radiation therapy (SBRT). While this treatment can delay progression and the need for hormone therapy, in most patients the cancer recurs, likely due to the presence of undetectable microscopic disease.
+
Radioligand therapy uses a radiopharmaceutical drug to deliver precise radiation doses directly to tumours. For prostate cancer, the drug combines radioactive isotope lutetium-177 with a ligand that targets the prostate-specific membrane antigen (PSMA) found on cancer cells. Following its promising use in men with advanced prostate cancer, the team examined whether adding radioligand therapy to SBRT could also improve progression-free survival in men with early metastatic disease.
+The phase II LUNAR trial included 92 men with oligometastatic prostate cancer and one to five distant lesions as seen on a PSMA PET/CT scan. The patients were randomized to receive either SBRT alone (control arm) or two cycles of the investigational PSMA-targeting drug 177Lu-PNT2002, eight weeks apart, followed by SBRT.
+At a median follow-up of 22 months, adding radioligand therapy improved median progression-free survival from 7.4 to 17.3 months. Hormone therapy was also delayed, from 14.1 months in the control group to 24.3 months. Of 65 progression events observed, 64 were due to new lesions rather than regrowth at previously treated sites. Both treatments were well tolerated, with no difference in severe side effects between the two groups.
+“We conclude that adding two cycles of 177Lu-PNT2002 to SBRT significantly improves progression-free survival in men with oligorecurrent prostate cancer, presumably by action on occult metastatic disease, without an increase in toxicity,” said Kishan. “Ultimately, while this intervention worked well, 64% of patients even on the investigational arm still had some progression, so we could further optimize the dose and cycle and other variables for these patients.”
+Osteoarthritis is a painful joint disease that arises when the cartilage cushioning the ends of bones wears down. Treatments include pain medication, which can cause significant side effects with long-term use, or invasive joint replacement surgery. Byoung Hyuck Kim from Seoul National University College of Medicine described how low-dose radiotherapy (LDRT) could help bridge this treatment gap.
+
LDRT could provide a non-invasive alternative treatment for knee osteoarthritis, a leading cause of disability, Kim explained. But while it is commonly employed in Europe to treat joint pain, its use in other countries is limited by low awareness and a lack of high-quality randomized evidence. To address this shortfall, Kim and colleagues performed a randomized, placebo-controlled trial designed to provide sufficient evidence to incorporate LDRT into clinical standard-of-care.
+“There’s a clinical need for moderate interventions between weak pain medications and aggressive surgery, and we think radiation may be a suitable option for those patients, especially when drugs and injections are poorly tolerated,” said Kim.
+The multicentre trial included 114 patients with mild to moderate knee osteoarthritis. Participants were randomized to receive one of three treatments: 0.3 Gy radiotherapy in six fractions; 3 Gy in six fractions; or sham irradiation where the treatment system did not deliver radiation – an approach that had not been tested in previous studies.
+The use of pain medication was limited, to avoid masking effects from the radiation itself. Response was considered positive if the patients (who did not know which treatment they had received) exhibited improvements in pain levels, physical function and overall condition.
+"Interestingly, at one month [after treatment], the response rates were very similar across all groups, which reflects a strong placebo effect from the sham group," said Kim. "At four months, after the placebo effect had diminished, the 3 Gy group demonstrated a significantly higher response rate compared to the sham control group; however, the 0.3 Gy group did not."
+ +The response rates at four months were 70.3%, 58.3% and 41.7%, for the 3 Gy, 0.3 Gy and sham groups, respectively. As expected, with radiation doses less than 5% of those typically used for cancer treatments, no radiation-related side effects were observed.
+“Our study shows that a single course of low-dose radiotherapy improves knee osteoarthritis symptoms and function at four months, with no treatment-related toxicity observed,” Kim concluded. “So our trial could provide objective evidence and suggest that LDRT is a non-pharmacologic scalable option that merits further trials.”
+“While small, [the study] was really well executed in terms of being placebo controlled. It clearly showed that the 3 Gy arm was superior to the placebo control arm and there was a 30% benefit,” commented Kristina Mirabeau-Beale from GenesisCare. “So I think we can say definitively that the benefit is from radiation more than just the placebo effect of interacting with our healthcare system.”
+The post ASTRO 2025: expanding the rules of radiation therapy appeared first on Physics World.
+]]>The post Quantum information or metamaterials: our predictions for this year’s Nobel Prize for Physics appeared first on Physics World.
+]]>
On Tuesday 7 October the winner(s) of the 2025 Nobel Prize for Physics will be announced. The process of choosing the winners is highly secretive, so looking for hints about who will be this year’s laureates is futile. Indeed, in the immediate run-up to the announcement, only members of the Nobel Committee for Physics and the Class for Physics at the Royal Swedish Academy of Sciences know who will be minted as the latest Nobel laureates. What is more, recent prizes provide little guidance because the deliberations and nominations are kept secret for 50 years. So we really are in the dark when it comes to predicting who will be named next week.
+If you would like to learn more about how the Nobel Prize for Physics is awarded, check out this profile of Lars Brink, who served on the Nobel Committee for Physics on eight occasions.
+But this level of secrecy doesn’t stop people like me from speculating about this year’s winners. Before I explain the rather lovely infographic that illustrates this article – and how it could be used to predict future Nobel winners – I am going to share my first prediction for next week.
+Inspired by last year’s physics Nobel prize, which went to two computer scientists for their work on artificial intelligence, I am predicting that the 2025 laureates will be honoured for their work on quantum information and algorithms. Much of the pioneering work in this field was done several decades ago, and has come to fruition in functioning quantum computers and cryptography systems. So the time seems right for an award and I have four people in mind. They are Peter Shor, Gilles Brassard, Charles Bennett and David Deutsch. However, only three can share the prize.
+Moving on to our infographic, which gives a bit of pseudoscientific credibility to my next predictions! It charts the history of the physics Nobel prize in terms of field of endeavour. One thing that is apparent from the infographic is that since about 1990 there have been clear gaps between awards in certain fields. If you look at “atomic, molecular and optical physics”, for example, there are gaps between awards of about 5–10 years. One might conclude, therefore, that the Nobel committee considers the field of an award and tries to avoid bunching together awards in the same field.
+Looking at the infographic, it seems we are long overdue a prize in nuclear and particle physics – the last was awarded 10 years ago. However, we haven’t had many big breakthroughs in this field lately. Two areas of particle physics that have been very fruitful in the 21st century are the study of the quark–gluon plasma formed when heavy nuclei collide, and the precise study of antimatter – observing how it behaves under gravity, for example. But I think it might be a bit too early for Nobels in these fields.
+One possibility for a particle-physics Nobel is the development of the theory of cosmic inflation, which seeks to explain the observed nature of the current universe by invoking an exponential expansion of space in its very early history. If an award were given for inflation, it would most certainly go to Alan Guth and Andrei Linde. A natural for the third slot would have been Alexei Starobinsky, who sadly died in 2023 – and Nobels are not awarded posthumously. If there were a third winner for inflation, it would probably be Paul Steinhardt.
+2016 was the last year when we had a Nobel prize in condensed-matter physics, so what work in that field would be worthy of an award this year? There has been a lot of very interesting research done in the field of metamaterials – materials that are engineered to have specific properties, particularly in terms of how they interact with light or sound.
+A Nobel prize for metamaterials would surely go to the theorist John Pendry, who pioneered the concept of transformation optics. This simplifies our understanding of how light interacts with metamaterials and helps with the design of objects and devices with amazing properties. These include invisibility cloaks – the first of which was built in 2006 by the experimentalist David Smith, who I think is also a contender for this year’s Nobel prize. Smith’s cloak works at microwave frequencies, but my nomination for the third slot has done an amazing amount of work on developing metamaterials for practical applications in optics. If you follow this field, you know that I am thinking of the applied physicist Federico Capasso – who is also known for the invention of the quantum cascade laser.
+The post Quantum information or metamaterials: our predictions for this year’s Nobel Prize for Physics appeared first on Physics World.
+]]>The post US scientific societies blast Trump administration’s plan to politicize grants appeared first on Physics World.
+]]>The executive order – Improving Oversight of Federal Grantmaking – calls on each agency head to “designate a senior appointee” to review new funding announcements and to “review discretionary grants to ensure that they are consistent with agency priorities and the national interest.”
+ +The order outlines several previous grants that it says have not aligned with the Trump administration’s current policies, claiming that in 2024 more than a quarter of new National Science Foundation (NSF) grants went to diversity, equity, and inclusion and what it calls “other far-left initiatives”.
+“These NSF grants included those to educators that promoted Marxism, class warfare propaganda, and other anti-American ideologies in the classroom, masked as rigorous and thoughtful investigation,” the order states. “There is a strong need to strengthen oversight and coordination of, and to streamline, agency grantmaking to address these problems, prevent them from recurring, and ensure greater accountability for use of public funds more broadly.”
+In response, the 58 societies – including the American Physical Society, the American Astronomical Society, the Biophysical Society, the American Geophysical Union and SPIE – have written to the majority and minority leaders of the US Senate and House of Representatives, to voice their concerns that the order “raises the possibility of politicization” in federally funded research.
+“Our nation’s federal grantmaking ecosystem serves as the gold standard for supporting cutting-edge research and driving technological innovation worldwide,” the letter states. “Without the oversight traditionally applied by appropriators and committees of jurisdiction, this [order] will significantly increase administrative burdens on both researchers and agencies, slowing, and sometimes stopping altogether, vital scientific research that our country needs.”
+The letter says more review and oversight is required by the US Congress before the order should go into effect, adding that the scientific community “is eager” to work with Congress and the Trump administration “to strengthen our scientific enterprise”.
+The post US scientific societies blast Trump administration’s plan to politicize grants appeared first on Physics World.
+]]>The post The curious history of Nobel prizes: from lighthouses to gravitational waves appeared first on Physics World.
+]]>We also look back to two early Nobel prizes, which were given for very puzzling reasons. One was awarded in 1908 to Gabriel Lippmann for an impractical colour-photography technique that was quickly forgotten; and the other in 1912 to Gustaf Dalén for the development of several technologies used in lighthouses.
+It’s a mug’s game, we know, but we couldn’t resist including a few predictions of who could win this year’s physics Nobel. Perhaps a prize for quantum algorithms could be announced on Tuesday, so stay tuned.
+And finally, we round off this episode with a fun Nobel quiz. Do you know how old Lawrence Bragg was when he became the youngest person to win the physics prize?
+Articles mentioned in this podcast:
+ + +“Inside the Nobels: Lars Brink reveals how the world’s top physics prize is awarded”
+
This podcast is supported by American Elements, the world’s leading manufacturer of engineered and advanced materials. The company’s ability to scale laboratory breakthroughs to industrial production has contributed to many of the most significant technological advancements since 1990 – including LED lighting, smartphones, and electric vehicles.
The post The curious history of Nobel prizes: from lighthouses to gravitational waves appeared first on Physics World.
+]]>The post Nobel prizes you’ve never heard of: how a Swedish inventor was honoured for a technology that nearly killed him appeared first on Physics World.
+]]>
The winner of the 1912 Nobel Prize for Physics was, by some margin, the unlikeliest physics Nobel laureate in history. He wasn’t a physicist, for starters. He wasn’t even a chemist. He was an inventor by the name of Nils Gustaf Dalén, and the invention that won him the prize was closely connected – in more ways than one – to the industrial accident that almost cost him his life.
+To understand why members of the Royal Swedish Academy of Sciences plumped for Dalén in 1912 over his more famous contemporaries (including such luminaries as Max Planck and Albert Einstein), it helps to know a bit about the man himself. Like Alfred Nobel, Dalén was Swedish, born in 1869 in the small farming community of Stenstorp. Located around 140 km north-east of Gothenburg, Stenstorp is now home to a museum in Dalén’s honour, but as a young man, he did not seem like museum material. On the contrary, he was incredibly lazy – so lazy, in fact, that he invented a machine to make coffee and turn the light on for him in the mornings.
+This ingenious device brought Dalén some local notoriety, but his big break came when Sweden’s most famous inventor at the time, Gustaf de Laval, saw him demonstrate a device for measuring milk fat content. Encouraged by de Laval to attend university, Dalén sold his family’s farm and enrolled at what is now the Chalmers University of Technology. In 1896 he earned his doctorate, and after a year at ETH Zürich in Switzerland, he returned to Sweden to set up his first engineering firm.
+The engineering challenge that set Dalén on the path to the Nobel was hugely important in a country like Sweden with a long, complex coastline. Years before the advent of GPS, or even reliable radio communications, lighthouses were the main way of warning ships away from danger. However, they were extremely expensive and hard to maintain. As well as needing 24-hour attention from skilled and hardy humans, they required huge amounts of propane fuel, necessitating frequent (and frequently dangerous) resupply trips.
+ +The obvious way of reducing these costs was to make lighthouses burn something else. Acetylene was attractive because it could be manufactured in industrial quantities, and it produced a bright light when burned. Unfortunately, it was also highly explosive, meaning it couldn’t be safely bottled or shipped.
+To tame the acetylene dragon, Dalén developed three separate inventions. The first was a combination of asbestos and diatomaceous earth that Dalén called “agamassan” after his company (Aktiebolaget Gasaccumulator) and the Swedish word for compound, massan. By filling a container with agamassan, wetting it with acetone and then forcing acetylene into the container under pressure, Dalén showed that the acetylene would dissolve in the acetone and become trapped within the agamassan like water in a sponge. Under these conditions, it could be shipped, stored and even dropped without exploding.
+Having made acetylene safe to use, Dalén turned his hand to making it economical. His second invention was a device that automatically turned the acetylene supply on and off. This saved fuel and enabled the light to flash (distinguishing it from other light sources on the shore) without the need for cumbersome rotation mechanisms.
+
Dalén’s third invention enabled even greater automation. Rather than relying on human lighthouse-keepers to switch acetylene burners on at night and off in the morning, Dalén developed a valve that could do it automatically. This valve worked by means of a set of metal rods, one of which was blackened while the others were polished. When the blackened rod absorbed enough heat from the Sun, it would expand and close the valve. At dusk, or in foggy conditions, the blackened rod would return to the temperature of the others, contract, and open the valve.
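+To get a feel for the scale of the effect (the numbers here are illustrative assumptions, not Dalén’s actual specifications): a metal rod of length L with linear expansion coefficient α changes length by ΔL = αLΔT. For a 30 cm brass rod (α ≈ 1.9 × 10⁻⁵ K⁻¹) warming by 30 °C in the Sun, ΔL ≈ 1.9 × 10⁻⁵ × 0.3 m × 30 K ≈ 0.17 mm, a small but entirely usable displacement for actuating a valve.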
+While Dalén was perfecting the use of acetylene gas for lighthouses, the Nobel Committee for Physics was getting on with its usual business of recommending candidates for the prize. In 1909 the committee suggested the radio pioneer Guglielmo Marconi and his academic counterpart Karl Ferdinand Braun. The wider Academy accepted this choice. In 1910 the committee recommended the father of modern molecular science, Johannes Diderik van der Waals, and he also won the Academy’s approval. In 1911 the quantum theorist Wilhelm Wien, whose joint nomination with Max Planck in 1908 provoked such bitter disputes that neither of them got the prize, finally got the nod from both the committee and the Academy (Planck would have to wait for his prize until 1918).
+By the early autumn of 1912, there was every indication that the Academy would again accept the committee’s recommendation, which was Heike Kamerlingh Onnes, who had liquefied helium for the first time in 1908 and subsequently used it to discover superconductivity. Although Dalén had also been nominated, Mats Larsson, a physicist at Stockholm University who served on the committee between 2016 and 2023, says he wasn’t a serious contender.
+“It’s clear from the report from the Nobel committee to the Academy that they recognize there is an importance to Dalén’s inventions, but it doesn’t reach the standard for a Nobel prize,” says Larsson. With only a single nomination from a member of the Academy’s technical section, Larsson adds, “Dalén is not even on the shortlist.”
+Then, before the Academy could vote, tragedy struck. On 27 September 1912, during an experiment so risky it was performed in a quarry rather than in Aktiebolaget Gasaccumulator’s Stockholm factory, an explosion left Dalén seriously injured. The next day, Sweden’s national paper of record, Dagens Nyheter, put the accident on its front page, describing Dalén’s face as “unrecognizable” and his right side as “horribly massacred and burned”. Though conscious and talking when taken to hospital, he was not expected to survive.
+
Nobel prizes cannot be awarded posthumously. If Dalén had died of his injuries, it is unlikely that his colleagues would have voted to honour him. But though Dalén’s doctors could not save his eyesight, they did save his life. By the time the Academy convened to vote on the 1912 Nobel prizes, he was recovering in the care of his family and very much on the minds of his sympathetic colleagues.
+We don’t know exactly what happened next. “The material [in the Nobel archives] is very meagre,” Larsson explains. “It just says there was a vote and Dalén won the prize.”
+Still, it’s easy to imagine that someone in the Academy must have pled Dalén’s cause. “This is our national hero who fought the war against ignorance and against darkness,” agrees Karl Grandin, who directs the Academy’s Center for History of Science. “And he loses his sight in the purpose of bringing light to the world. It was a symbolic thing.”
+Dalén was too unwell to attend the usual Nobel prize celebrations in Stockholm. Instead, he sent his brother, a physician, to accept the prize on his behalf. Eventually, though, he recovered well enough to resume his duties at Aktiebolaget Gasaccumulator. In time, he even returned to inventing. And herein lies the final twist in his story.
+ +During his convalescence, the blind Dalén noticed something that had apparently escaped his attention when he could still see. His wife, Elma, worked very hard around the house, and cooking for him and their four children was especially tiresome. It would be much easier, Dalén decided, if she had a device that could cook several dishes at once, at different temperatures.
+In 1922, ten years after losing his sight and winning the Nobel prize, Dalén unveiled the invention that would become his most enduring. Named, like agamassan, after the initials of his company, the AGA cooker is still sold today, bringing warmth to kitchens just as its inventor brought safe, effective and economical illumination to lighthouses. Gustaf Dalén may be the least likely physics Nobel laureate in history, but it would be facile to dismiss him as undeserving. After all, how many other physics laureates can boast of saving hundreds of thousands of lives at sea, while also relieving the drudgery of hundreds of thousands back home?
+The post Nobel prizes you’ve never heard of: how a Swedish inventor was honoured for a technology that nearly killed him appeared first on Physics World.
+]]>The post Kirigami-inspired parachute falls on target appeared first on Physics World.
+]]>
Inspired by the Japanese art of kirigami, researchers in Canada and France have designed a parachute that can safely and accurately deliver its payloads when dropped directly above its target. Tested in realistic outdoor conditions, the parachute’s deformable design stabilizes the airflow around its porous structure, removing the need to drift as it falls. With its simple and affordable design, the parachute could have especially promising uses in areas including drone delivery and humanitarian aid.
+When a conventional parachute is deployed, it cannot simply fall vertically towards its target. To protect itself from turbulence, which can cause its canopy to collapse, it glides at an angle that breaks the symmetry of the airflow around it, stabilizing the parachute against small perturbations.
+But this necessity comes at a cost. When dropping a payload from a drone or aircraft, this gliding angle means parachutes will often drift far from their intended targets. This can be especially frustrating and potentially dangerous for operations such as humanitarian aid delivery, where precisely targeted airdrops are often vital to success.
+To address this challenge, researchers led by David Mélançon at Polytechnique Montréal looked to kirigami, whereby paper is cut and folded to create elaborate 3D designs. “Previously, kirigami has been used to morph flat sheets into 3D shapes with programmed curvatures,” Mélançon explains. “We proposed to leverage kirigami’s shape morphing capability under fluid flow to design new kinds of ballistic parachutes.”
+As well as kirigami, the team drew inspiration from nature. Instead of relying on a gliding angle, many wind-dispersed seeds are equipped with structures that stabilize the airflow around them: including the feathery bristles of dandelion seeds, which create a stabilized vortex in their wake; and the wings of sycamore and maple seeds, which cause them to rapidly spin as they fall. In each case, these mechanisms provide plants with passive control over where their seeds land and germinate.
+For their design, Mélançon’s team created a parachute that can deform into a shape pre-programmed by a pattern of kirigami cuts, etched into a flexible disc using a laser cutter. “Our parachutes are simple flat discs, with circumferential slits inspired by a kirigami motif called a closed loop,” Mélançon describes. “Instead of attaching the payload with strings at the outer edge of the disk, we directly mount it at its centre.”
+When the parachute was dropped, a combination of air resistance and the weight of the free-falling payload deformed it into an inverted, porous bell shape. “The slits in the kirigami pattern are stretched, forcing air through its multitude of small openings,” Mélançon continues. “This ensures that the air flows in an orderly manner without any major chaotic turbulence, resulting in a predictable trajectory.”
+The researchers tested their parachute extensively using numerical simulations combined with wind tunnel experiments and outdoor tests, where they used the parachute to drop a water bottle from a hovering drone. In this case, the parachute delivered its payload safely to the ground from a height of 60 m directly above its target.
+Mélançon’s team tested their design with a variety of parachute sizes and kirigami patterns, demonstrating that designs with lower load-to-area ratios and more deformable patterns can reach comparable terminal velocity to conventional parachutes – with far greater certainty over where they will land. Compared with conventional parachutes, which are often both complex and costly to manufacture, kirigami-based designs will be far easier to fabricate.
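+The underlying trade-off can be seen from the textbook terminal-velocity relation (a generic estimate, not the model used in the paper): a falling body settles at v_t = √(2mg/ρC_dA), where m is the payload mass, A the parachute area, ρ the air density and C_d the drag coefficient. For a fixed drag coefficient, the terminal velocity therefore scales as the square root of the load-to-area ratio m/A, the parameter the team varied alongside the kirigami pattern.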
+ +“Little hand labour is necessary,” Mélançon says. “We have made parachutes out of sheets of plastic, paper or cardboard. We need a sheet of material with a certain rigidity, that’s all.”
+By building on their design, the researchers hope that future studies will pave the way for new improvements in package home delivery. It could even advance efforts to deliver urgently needed aid during conflicts and natural disasters to those who need it most.
+The parachute is described in Nature.
+The post Kirigami-inspired parachute falls on target appeared first on Physics World.
+]]>The post Nobel prizes you’ve never heard of: how an obscure version of colour photography beat quantum theory to the most prestigious prize in physics appeared first on Physics World.
+]]>
By the time Gabriel Lippmann won the Nobel Prize for Physics, his crowning scientific achievement was already obsolete – and he probably knew it. Four days after receiving the 1908 prize “for his method of reproducing colours photographically based on the phenomenon of interference”, Lippmann, a Frenchman with a waxed moustache that would shame a silent film villain, ended his Nobel lecture with the verbal equivalent of a Gallic shrug.
+After nearly 20 years of work, he admitted, the minimum exposure time for his method – one minute in full sunlight – was still “too long for the portrait”. Though further improvements were possible, he concluded, “Life is short and progress is slow.”
+Why did Lippmann win a Nobel prize for a method that not even he seemed to believe in? It certainly wasn’t for a lack of alternatives. The early 1900s were a heady time for physics discoveries and inventions, and other Nobels of the era reflect this. In 1906 the Royal Swedish Academy of Sciences awarded the physics prize to J J Thomson for discovering the electron. In 1907 its members voted for Albert Michelson of the aether-defying Michelson–Morley experiment. So what made the Academy choose, in 1908, a version of colour photography that wouldn’t even let you take a selfie?
+Let’s start with the method itself. Unlike other imaging processes, Lippmann photography directly records the entire colour spectrum of an object. It does this by using standing waves of light to produce interference fringes in a light-sensitive emulsion backed by a mirrored surface. The longer the wavelength of light given off by the object, the larger the separation between the fringes. It’s an elegant application of classical wave theory. It’s easy to see why Edwardian-era physicists loved it.
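+The physics can be captured in a single number (standard interference optics, with an assumed refractive index): light of wavelength λ reflecting back on itself inside an emulsion of refractive index n forms standing waves whose bright planes are separated by λ/2n. For green light (λ ≈ 550 nm) in a gelatin emulsion with n ≈ 1.5, that spacing is roughly 550/(2 × 1.5) ≈ 180 nm, which is the fineness of detail a Lippmann plate has to record through its depth.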
+
Lippmann’s method also has an important practical advantage. Because his photographs don’t require pigments, they retain their colour over time. Consequently, the images Lippmann showed off in his Nobel lecture look as brilliant today as they did in 1908.
+The method’s disadvantages, though, are numerous. As well as needing long exposure times, the colours in Lippmann photographs are hard to see. Because they are virtual, like a hologram, they are only accurate when viewed face-on, in perpendicular light. Lippmann’s original method also required highly toxic liquid mercury to make the mirrored back surface of each photographic plate. Though modern versions have eliminated this, it’s not surprising that Lippmann’s method is now largely the domain of hobbyists and artists.
+If technical merit can’t explain Lippmann’s Nobel, could it perhaps have been due to politics? The easiest way to answer this question is to look in the Nobel archives. Although the names of Nobel prize nominees and the people who nominated them are initially secret, this secrecy is lifted after 50 years. The nomination records for Lippmann’s era are therefore very much available, and they show that he was a popular candidate. Between 1901 and 1908, he received 23 nominations from 12 different people – including previous laureates, foreign members of the Academy, and scientists from prestigious universities invited to make nominations in specific years.
+Funnily enough, though, all of them were French.
+Faced with this apparent conspiracy to stamp the French tricolour on the Nobel medal, Karl Grandin, who directs the Academy’s Center for History of Science, concedes that such nationalistic campaigns were “quite common in the first years”. However, this doesn’t mean they were successful: “Sometimes when all the members of the French Academy have signed a nomination, it might be impressive at one point, but it might also be working in the opposite way,” he says.
+Because Nobel Foundation statutes stipulate that discussions and vote numbers from the prize-awarding meeting of the Academy are not recorded, Grandin can’t say exactly how Lippmann came out on top in 1908. He does, however, have access to an illuminating article written in 1981 by a theoretical physicist, Bengt Nagel.
+ +Drawing on the private letters and diaries of Academy members as well as the Nobel archives, Nagel showed that personal biases played a significant role in the awarding of the 1908 prize. It’s a complicated story, but the most important strand of it centres on Svante Arrhenius, the Swedish physical chemist who’d won the Nobel Prize for Chemistry five years earlier.
+Today, Arrhenius is best known for predicting that putting carbon dioxide in the Earth’s atmosphere will affect the climate. In his own lifetime, though, Grandin says that Arrhenius was also known for having a long-running personality conflict with a wealthy Swedish mathematician called Gustaf Mittag-Leffler.
+“Stockholm at the time was a small place,” Grandin explains. “Everyone knew each other, and it wasn’t big enough to host both Arrhenius and Mittag-Leffler.”
+
Arrhenius wasn’t the chair of the Nobel physics committee in 1908. That honour fell to Knut Ångström, son of Anders Ångström, after whom the unit is named. Still, Arrhenius’ prestige and outsized personality gave him considerable influence. After much debate, the committee agreed to recommend his preferred choice for the prize, Max Planck, to the full Academy.
+This choice, however, was not problem-free. Planck’s theory of the quantization of matter was still relatively new in 1908, and his work was not demonstrably guiding experiments. If anything, it was the other way around. In principle, the committee could have dealt with this by recommending that Planck share the prize with a quantum experimentalist. Unfortunately, no such person had been nominated.
+That was awkward, and it gave Mittag-Leffler the ammunition he needed. When the matter went to the Academy for a vote, he used members’ doubts about quantum theory to argue against Arrhenius’ choice. It worked. In Mittag-Leffler’s telling, Planck got only 13 votes. Lippmann, the committee’s second choice, got 46.
+Afterwards, Mittag-Leffler boasted about his victory. “Arrhenius wanted to give it to Planck…but his report, which he had nevertheless managed to have unanimously accepted by the committee, was so stupid that I could easily have crushed it,” he wrote to a French colleague. “Two members even declared that after hearing me, they changed their opinion and voted for Lippmann. I would have had nothing against sharing the prize between [quantum theorist Wilhelm] Wien and Planck,” Mittag-Leffler added, “but to give it to Planck alone would have been to reward ideas that are still very obscure and require verification by mathematics and experimentation.”
+
+Lippmann’s work posed no such difficulties, and that seems to have swung it for him. In a letter to a colleague after the dust had settled, Ångström called Lippmann “obviously a prizeworthy candidate who did not give rise to any objections”. However, Ångström added, he “could not deny that the radiation laws constitute a more important advance in physical science than Lippmann’s colour photography”.
+ +Much has been written about excellent scientists getting overlooked for prizes because of biases against them. The flip side of this – that merely good scientists sometimes win prizes because of biases in their favour – is usually left unacknowledged. Nevertheless, it happens, and in 1908 it happened to Gabriel Lippmann – a good scientist who won a Nobel prize not because he did the most important work, but because his friends clubbed together to support him; because Academy members were wary of his quantum rivals; and above all because a grudge-holding mathematician and an egotistical chemist had a massive beef with each other.
+And then, four years later, it happened again, to someone else.
+The post Nobel prizes you’ve never heard of: how an obscure version of colour photography beat quantum theory to the most prestigious prize in physics appeared first on Physics World.
+]]>The post Destroyers of the world: the physicists who built nuclear weapons appeared first on Physics World.
+]]>Aimed at non-physicist readers with a strong interest in science, though undoubtedly appealing to physicists too, the book seeks to explain the highly complex physics and chemistry that led to the atomic bomb – a term first coined by H G Wells in his 1914 science-fiction novel The World Set Free. It also describes the contributions of numerous gifted scientists to the development of those weapons.
+ +Close draws mainly on numerous published sources from this deeply analysed period, including Richard Rhodes’s seminal 1988 study The Making of the Atomic Bomb. He starts with Wilhelm Röntgen’s discovery of X-rays in 1895, before turning to the discovery of radioactivity by Henri Becquerel in 1896 – described by Close as “the first pointer to nuclear energy [that was] so insignificant that it was almost missed”. Next, he highlights the work on radium by Marie and Pierre Curie in 1898.
+After discussing the emergence of nuclear physics, Close goes on to talk about the Allies’ development of the nuclear bomb. A key figure in this history was Enrico Fermi, who abandoned Fascist Italy in 1938 and emigrated to the US, where he worked on the Manhattan Project and built the first nuclear reactor, in Chicago, in 1942.
++Fermi showed his legendary ability to estimate a physical phenomenon’s magnitude by shredding a sheet of paper into small pieces and throwing them into the air
+
Within seconds of seeing Trinity’s blast in the desert in 1945, Fermi showed his legendary ability to estimate a physical phenomenon’s magnitude by shredding a sheet of paper into small pieces and throwing them into the air. The bomb’s shock wave blew this “confetti” (Close’s word) a few metres away. After measuring the exact distance, Fermi immediately estimated that the blast was equivalent to about 10,000 tonnes of TNT. This figure was not far off the 18,000 tonnes determined a week later following a detailed analysis by the project team.
+The day after the Trinity test, a group of 70 scientists, led by Leo Szilard, sent a petition to US President Harry Truman, requesting him not to use the bomb against Japan. Albert Einstein agreed with the petition but did not sign it, having been excluded from the Manhattan Project on security grounds (though in 1939 he famously backed the bomb’s development, fearing that Nazi Germany might build its own device). Despite the protests, atomic bombs were dropped on Hiroshima and Nagasaki less than a month later – a decision that Close neither defends nor condemns.
+Other key figures in the Manhattan Project were emigrants to the UK, who had fled Germany in the mid-1930s because of Nazi persecution of Jews, and later joined the secret British Tube Alloys bomb project. The best known are probably the nuclear physicists Otto Frisch and Rudolf Peierls, who initially worked together at the University of Birmingham for Tube Alloys before joining the Manhattan Project. They both receive their due from Close.
+Oddly, however, he neglects to mention their fellow émigré Franz (Francis) Simon by name, despite acknowledging the importance of his work in demonstrating a technique to separate fissionable uranium-235 from the more stable uranium-238. In 1940 Simon, then working at the Clarendon Laboratory in wartime Oxford, showed that separation could be achieved by gaseous diffusion of uranium hexafluoride through a porous barrier, which he initially demonstrated by hammering his wife’s kitchen sieve flat to make the barrier.
++The Manhattan Project set an example for the future of science as a highly collaborative, increasingly international albeit sometimes dangerous adventure
+
As Close ably documents and explains, numerous individuals and groups eventually ensured the success of the Manhattan Project. In addition to ending the Second World War and preserving freedom against Fascism, there is an argument that it also set an example for the future of science as a highly collaborative, increasingly international albeit sometimes dangerous adventure.
+ +Close finishes the book with a shorter discussion of the two decades of Cold War rivalry between scientists from the US and the Soviet Union to develop and test the hydrogen bomb. It features physicists such as Edward Teller and Andrei Sakharov, who led the efforts to build the American “Super Bomb” and the Soviet “Tsar Bomba”, respectively.
+The book ends in around 1965, after the 1963 partial test-ban treaty signed by the US, the Soviet Union and the UK, which halted atmospheric tests of hydrogen bombs for fear of their likely devastating effects on Earth’s atmosphere. As Close writes, the Tsar Bomba was the most powerful device ever detonated, yet it was still dwarfed by the meteorite impact 65 million years ago that wreaked global change and killed the dinosaurs, which had ruled for 150 million years.
+“Within just one per cent of that time, humans have produced nuclear arsenals capable of replicating such levels of destruction,” Close warns. “The explosion of a gigaton weapon would signal the end of history. Its mushroom cloud ascending towards outer space would be humanity’s final vision.”
+The post Destroyers of the world: the physicists who built nuclear weapons appeared first on Physics World.
+]]>The post A breakthrough in the hunt for dark matter appeared first on Physics World.
+]]>A leading theory suggests that dark matter consists of extremely light, elusive particles called axions. Traditional axion searches rely on narrow-band resonance techniques, which require slow, step-by-step scanning across possible axion masses, making the process time-consuming.
+In this study, researchers introduce a new broadband quantum sensing approach using an alkali–21Ne spin system, which works like a very sensitive antenna listening for signals from dark matter. They identify two distinct ways the system behaves under different conditions.
+At low frequencies, the spin system naturally adjusts itself to cancel out noise and unwanted effects. This self-compensation makes the system stable and sensitive, even without fine-tuning. It’s like a car that automatically balances itself on a bumpy road: you don’t need to steer constantly.
+At higher frequencies, the system enters a state where the spins of different atoms resonate together. This resonance boosts the signal, making it easier to detect the tiny effects caused by dark matter. Like two musical instruments playing in harmony, the combined sound is louder and clearer. This allows the researchers to significantly expand the search bandwidth without sacrificing sensitivity.
+
Their experiment covers a vast frequency range, from very slow oscillations (0.01 Hz) to very fast ones (1000 Hz), enabling a comprehensive search for axion-like dark matter. They set new constraints on how axions might interact with neutrons and protons. For neutrons, they reached a sensitivity that beats previous astrophysical limits in some frequency ranges. For protons, they achieved the best lab-based constraints in specific frequency bands.
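+To put that frequency band into particle-physics terms, the oscillation frequency of an axion-like field is its Compton frequency, f = mc²/h. The short sketch below (illustrative only; the endpoint frequencies come from the text above and the conversion is the standard relation, not the authors’ code) translates the band into an approximate mass range:

```python
# Convert an oscillation frequency into the corresponding axion mass,
# using the standard relation E = m*c^2 = h*f.
# Illustrative sketch only, not code from the paper.
H_PLANCK_EV_S = 4.135667696e-15  # Planck constant in eV s

def axion_mass_ev(frequency_hz):
    """Return the axion mass (in eV/c^2) whose Compton frequency is frequency_hz."""
    return H_PLANCK_EV_S * frequency_hz

for f_hz in (0.01, 1000.0):
    print(f"{f_hz:>8} Hz  ->  {axion_mass_ev(f_hz):.1e} eV/c^2")

# Output: roughly 4.1e-17 eV/c^2 at 0.01 Hz and 4.1e-12 eV/c^2 at 1000 Hz,
# i.e. the search spans about five orders of magnitude in ultralight axion mass.
```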
+This work not only advances the search for dark matter but also opens new frontiers in atomic physics, quantum sensing, and particle physics, offering a powerful new strategy to explore the invisible fabric of the universe.
+Dark matter search with a resonantly-coupled hybrid spin system
+Kai Wei et al 2025 Rep. Prog. Phys. 88 057801
++
Dark matter local density determination: recent observations and future prospects by Pablo F de Salas and A Widmark (2021)
+The post A breakthrough in the hunt for dark matter appeared first on Physics World.
+]]>The post A step towards bridging gravity and quantum physics appeared first on Physics World.
+]]>A central question in this context is: is gravity a force? Newtonian mechanics says yes, gravity pulls masses together. Einstein’s relativity says no, it’s the curvature of space-time that guides motion. Quantum field theory suggests gravity may be a force mediated by hypothetical particles called gravitons.
+The researchers behind this work propose that gravity can be treated as a gauge interaction, similar to electromagnetism. This approach implies gravity is a force mediated by a field and governed by the same kinds of symmetries as the other fundamental interactions.
+They introduce unified gravity, a novel framework that reformulates gravity using the compact symmetries of quantum field theory. Working with an eight-dimensional spinor model, they define a space-time dimension field to recover familiar four-dimensional space-time. By applying four U(1) symmetries, they derive a gauge theory of gravity that mirrors the Standard Model, with the stress-energy-momentum tensor emerging naturally from these symmetries.
+Their theory reproduces teleparallel gravity through a special geometric condition and describes gravity in flat Minkowski space-time by another geometric condition, making it compatible with quantum field theory. They develop Feynman rules and show the theory is renormalizable at 1-loop, meaning it handles quantum corrections without mathematical breakdown. Finally, they demonstrate that the theory respects BRST symmetry, which ensures gauge consistency in quantum field theory.
+While this remains a mathematical theory, it prompts us to reassess how we conceptualize gravity, not as a curvature of space-time, but as a gauge interaction like the other fundamental forces. If validated experimentally, unified gravity could reshape our understanding of the universe and mark a major turning point in theoretical physics.
+Gravity generated by four one-dimensional unitary gauge symmetries and the Standard Model
+Mikko Partanen and Jukka Tulkki 2025 Rep. Prog. Phys. 88 057802
++
How far are we from the quantum theory of gravity? by R P Woodard (2009)
+The post A step towards bridging gravity and quantum physics appeared first on Physics World.
+]]>The post Leo Cancer Care launches first upright photon therapy system appeared first on Physics World.
+]]>Upright treatments have a host of potential advantages over conventional radiotherapy, where patients typically lie on their back during treatment. Studies have shown that the more natural upright posture could deliver more consistent anatomical positioning and organ stability, as well as enabling more comfortable treatment positions, with patients who have experienced the technology reporting improved comfort and greater patient–therapist connection.
+ +A fixed treatment beam also simplifies system design, reduces space and shielding requirements, and lowers infrastructure costs. And for proton therapy in particular, removing the need for a bulky and expensive gantry could help increase global access to advanced cancer treatments. Indeed, a partnership between Leo Cancer Care and Mevion Medical Systems led to the development of the MEVION S250-FIT, an ultracompact upright proton therapy system that fits inside a linac vault.
+Moving on from Leo Cancer Care’s initial focus on proton therapy, the new Grace system will deliver conventional X-ray radiation therapy with patients positioned upright. Grace – named after American computer scientist and US Navy rear admiral Grace Hopper – comprises an upright patient positioning system (with six degrees of freedom and 360° continuous rotation) in front of a stationary 6 MV photon linac.
+“Our future innovation, Grace, will take a proven technology, photon therapy, and rethink the way it can be delivered,” Sophie Towe, the company’s director of marketing, tells Physics World. “Upright treatment isn’t just about comfort; it’s about consistency, stability and ultimately accessibility. By integrating advanced CT imaging, faster beam delivery and a more natural patient position, we are opening the door to more adaptive and affordable care. Our goal is to show that innovation in radiotherapy doesn’t always mean bigger or more complex; it can mean smarter and more human.”
+The system features a fan-beam CT scanner at the treatment isocentre, enabling planning-quality imaging throughout the entire treatment workflow. It also incorporates a large, ultrafast multileaf collimator that, in combination with the stationary photon beam delivery system, is designed to optimize dose conformity and treatment efficiency.
+ +“Leo Cancer Care is already known for delivering upright particle therapy technology, and over the past few years we have seen a real paradigm shift as a result,” says co-founder and CEO Stephen Towe in a press statement. “Grace represents a return to our original company focus of delivering more cost-effective photon treatments to a global stage without sacrificing on treatment quality. Our technology has always been bold, but we are pioneering with purpose and that purpose is to put the patient truly back at the centre of their treatments.”
+The company will install the first pre-commercial Grace systems at healthcare institutions within the Upright Photon Alliance research collaboration, which include Centre Léon Bérard, Cone Health, IHH Healthcare, Mayo Clinic and OncoRay.
+The post Leo Cancer Care launches first upright photon therapy system appeared first on Physics World.
+]]>The post NASA criticized over its management of $3.3bn Dragonfly mission to Titan appeared first on Physics World.
+]]>NASA chose Dragonfly in June 2019 as the next mission under its New Frontiers programme. Managed by the Johns Hopkins University Applied Physics Laboratory, it is a nuclear-powered, car-sized craft with eight rotors. Dragonfly will spend over three years studying potential landing sites before collecting data on Titan’s unique liquid environment and looking for signs that it could support life.
+The audit, carried out by NASA’s Inspector General, took no issue with NASA’s tests of the rotors’ performance, which were carried out via simulations. Indeed, the mission team is already planning formal testing of the system to start in January. But the audit criticized NASA for letting Dragonfly’s development “proceed under less than ideal circumstances”, including with “lower than optimum project cost reserves”.
+Its report now aims to prevent those problems from affecting future New Frontiers missions. Specifically, it calls on Nicky Fox, NASA’s associate administrator for its science mission directorate, to document lessons learned from NASA’s decision to start work on the project before establishing a baseline commitment.
+It also says that NASA should maintain adequate levels of “unallocated future expenses” for the project and make sure that “the science community is informed of updates to the expected scope and cadence for future New Frontiers missions”. A NASA spokesperson told Physics World that NASA management agrees with the recommendations in the report, adding that the agency “will use existing resources to address [them]”.
+The post NASA criticized over its management of $3.3bn Dragonfly mission to Titan appeared first on Physics World.
+]]>The post Antiferromagnets could be better than ferromagnets for some ultrafast, high-density memories appeared first on Physics World.
+]]>
While antiferromagnets show much promise for spintronics applications, they have proved more difficult to control compared to ferromagnets. Researchers in Japan have now succeeded in switching an antiferromagnetic manganese–tin nanodot using electric current pulses as short as 0.1 ns. Their work shows that these materials can be used to make efficient high-speed, high-density, memories that operate at gigahertz frequencies, so outperforming ferromagnets in this range.
+In antiferromagnets, spins can flip quickly, potentially reaching frequencies well beyond the gigahertz. Such rapid spin flips are possible because neighbouring spins in antiferromagnets align antiparallel to each other thanks to strong interactions among the spins. This is different from ferromagnets, which have parallel electron spins.
+Another of their advantages is that antiferromagnets display almost no macroscopic magnetization, meaning that bits can be potentially packed densely onto a chip. And that is not all: the values of bits in antiferromagnetic memory devices are generally unaffected by the presence of external magnetic fields. However, this insensitivity can be a disadvantage because it makes the bits difficult to control.
+In the new work, a team led by Shunsuke Fukami of Tohoku University made a nanoscale dot device from the chiral antiferromagnet Mn3Sn. They were able to rapidly and coherently rotate the antiparallel spins in the material using electric currents with a pulse length of just 0.1 ns at zero magnetic field. This is faster than is possible in any existing ferromagnetic device, they say.
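+To put the pulse length in context (simple arithmetic from the figure quoted): a 0.1 ns write pulse corresponds to an operation rate of 1/(0.1 ns) = 10 GHz, comfortably within the gigahertz regime where, as noted above, antiferromagnets are expected to outperform ferromagnets.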
+ +The device is also capable of 1000 error-free switching cycles – a level of reliability not possible in ferromagnets, they add.
+This result is possible because, unlike conventional antiferromagnets, Mn₃Sn exhibits a large change in electrical resistance thanks to the unique symmetry of its internal spin texture, explains Yutaro Takeuchi, who is lead author of a paper describing the study. “This effect provides us with an easy method for electrically detecting (reading out) the antiferromagnetic state. Doing this is usually difficult because antiferromagnets are externally ‘invisible’ (remember, they have zero net magnetization), which means their spin ordering cannot be easily read out.”
+Until now, Mn₃Sn had mainly been studied in bulk samples, but in 2019, Fukami’s group succeeded in growing epitaxial thin films of the material. “This allowed us to perform clear-cut experiments using antiferromagnetic thin films and finally answer the question: can antiferromagnets really outperform their ferromagnetic cousins?” says Takeuchi. “Moreover, in this study, we took on the additional challenge of integrating antiferromagnetic thin films into nanoscale devices.”
+Fukami and colleagues have been working on spintronics using ferromagnets for more than 20 years. “Although the fabrication of antiferromagnets was initially difficult, we finally managed to produce high-quality Mn₃Sn nanodot devices and demonstrated high-speed and high-efficiency control of the antiferromagnetic state,” Takeuchi tells Physics World. “We would say that our work represents a fusion of our two key strengths: a new method for depositing antiferromagnetic thin films and our conventional core technology in the nanofabrication of magnetic materials.”
+As for potential applications, the most likely would be a high-performance non-volatile memory (MRAM), he says. “While MRAM technology is now commercially available, its applications remain limited. By further improving its high-speed and low power consumption, we anticipate a broader range of markets, including data centres and AI chips.”
+ +The research, which is detailed in Science, has also highlighted some dynamical aspects of antiferromagnets not seen before in ferromagnets. “In particular, we found that the rotation frequency of an antiferromagnet can be modulated by an applied current, thanks to the unique dynamical equation it obeys,” explains Yuta Yamane, who did the theoretical modelling part of the study. “This distinct property may open the door to new types of devices, such as frequency-tuneable oscillators, and emerging concepts like probabilistic computing.”
+Looking ahead, the team will now focus on improving the readout performance of antiferromagnets and pursuing new functionalities. “Thanks to their unique transport properties, chiral antiferromagnets allow us to detect spin ordering in experimental settings, but the readout performance has still not reached the level of ferromagnets,” says Takeuchi. “A breakthrough will be required to overcome this gap.”
+The post Antiferromagnets could be better than ferromagnets for some ultrafast, high-density memories appeared first on Physics World.
+]]>The post How the slowest experiment in the world became a fast success appeared first on Physics World.
+]]>Placed on a shelf at Trinity, the funnel was largely ignored by generations of students passing by. But anyone who looked closely would have seen a drop forming slowly at the bottom of the funnel, preparing to join older drops that had fallen roughly once a decade. Then, in 2013 this ultimate example of “slow science” went viral when a webcam recorded a video of a tear-drop blob of pitch falling into the beaker below.
+The video attracted more than two million hits on YouTube (a huge figure back then) and the story was covered on the main Irish evening TV news. We also had a visit from German news magazine Der Spiegel, while Discover named it as one of the top 100 science stories of 2013. As one of us (SH) described in a 2014 Physics World feature, the iconic experiment became “the drop heard round the world”.
+Inspired by that interest, we decided to create custom-made replicas of the experiment to send to secondary schools across Ireland as an outreach initiative. It formed part of our celebrations of 300 years of physics at Trinity, which dates back to 1724 when the college established the Erasmus Smith’s Professorship in Natural and Experimental Philosophy.
+ +An outreach activity that takes 10 years for anything to happen is obviously never going to work. Technical staff at Trinity’s School of Physics, who initiated the project, therefore experimented for months with different tar samples. Their goal was a material that appears solid but will lead to a falling drop every few months – not every decade.
+After hitting upon a special mix of two types of bitumen in just the right proportion, the staff also built a robust experimental set-up consisting of a stand, a funnel and flask to hold any fallen drops. Each was placed on a wooden base and contained inside a glass bell jar. There were also a thermometer and a ruler for data-taking along with a set of instructions.
++On 27 November 2024 we held a Zoom call with all participating schools, culminating in the official call to remove the funnel stopper
+
Over 100 schools – scattered all over Ireland – applied for one of the set-ups, with a total of 37 selected to take part. Most kits were personally hand-delivered to schools, which were also given a video explaining how to unpack and assemble the set-ups. On 27 November 2024 we held a Zoom call with all participating schools, culminating in the official call to remove the funnel stopper. The race was on.
+Each school was asked to record the temperature and length of the thread of pitch slowly emerging from the funnel. They were also given a guide to making a time-lapse video of the drop and provided with information about additional experiments to explore the viscosity of other materials.
+To process incoming data, we set up a website, maintained by yet another one of our technical staff. It contained interactive graphs showing the increase in drop length for every school, together with the temperature when the measurement was taken. All data were shared between schools.
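+As a flavour of the sort of analysis the data invite, the minimal sketch below plots drop length against time, coloured by temperature (the file name and column names are hypothetical placeholders, not the project’s actual data format):

```python
# Minimal sketch: plot logged pitch-drop length against date, coloured by
# temperature. "pitch_drop.csv" and its column names are hypothetical.
import csv
from datetime import datetime
import matplotlib.pyplot as plt

dates, lengths, temps = [], [], []
with open("pitch_drop.csv", newline="") as f:
    for row in csv.DictReader(f):
        dates.append(datetime.strptime(row["date"], "%Y-%m-%d"))
        lengths.append(float(row["length_mm"]))
        temps.append(float(row["temperature_c"]))

points = plt.scatter(dates, lengths, c=temps, cmap="viridis")
plt.colorbar(points, label="Temperature (°C)")
plt.xlabel("Date")
plt.ylabel("Drop length (mm)")
plt.title("Pitch drop: length of emerging thread over time")
plt.tight_layout()
plt.show()
```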
+After about four months, four schools had recorded a pitch drop and we decided to take stock at a half-day event at Trinity in March 2025. The event was attended by more than 80 pupils aged 12–18 and teachers from 17 schools, and we were amazed by how much excitement our initiative had created. It had spawned huge levels of engagement, with lots of colourful posters.
+By the end of the school year, most had recorded a drop, showing our tar mix had worked well. Some schools had also done experiments testing other viscous materials, such as syrup, honey, ketchup and oil, examining the effect of temperature on flow rate. Others had studied the flow of granular materials, such as salt and seeds. One school had even captured on video the moment their drop fell, although sadly nobody was around to see it in person.
+ +Some schools displayed the kits in their school entrance, others in their trophy cabinet. One group of students appeared on their local radio station; another streamed the set-up live on YouTube. The pitch-drop experiment has been a great way for students to learn basic scientific skills, such as observation, data-taking, data analysis and communication.
+As for teachers, the experiment is an innovative way for them to introduce concepts such as viscosity and surface tension. It lets them explore the notion of multiple variables, measurement uncertainty and long-time-scale experiments. Some are now planning future projects on statistical analysis using the publicly available dataset or by observing the pitch drop in a more controlled environment.
+Wouldn’t it be great if other physics departments followed our lead?
+The post How the slowest experiment in the world became a fast success appeared first on Physics World.
+]]>The post Cosmic microwave background pioneer George Smoot dies aged 80 appeared first on Physics World.
+]]>Born in Yukon, Florida on 20 February 1945, Smoot studied mathematics and physics at the Massachusetts Institute of Technology (MIT), graduating with a dual major. He then completed a PhD in particle physics at MIT in 1970.
+Smoot then moved to the University of California, Berkeley, and the Lawrence Berkeley National Laboratory, where he began working on the NASA-funded High Altitude Particle Physics Experiment. The instrument was designed to search for particle interactions at higher energies than accelerators could produce at the time.
+ +After devising other balloon-borne detectors to search for antimatter, in 1973 Smoot switched to studying the CMB, which had been discovered by Arno Penzias and Robert Wilson in 1964.
+Smoot and colleagues conceived several experiments to detect possible variations in the CMB, which at the time was thought to be isotropic. This included using a differential microwave radiometer (DMR) aboard a Lockheed U-2 plane that could measure differences in temperature as small as one-thousandth of a degree in the microwave radiation between two points.
+Smoot then proposed a space-based mission to measure possible anisotropies. The probe eventually became NASA’s Cosmic Background Explorer (COBE) satellite, which went into space in 1989 containing a DMR instrument that Smoot led.
+Following two years of observations, in April 1992 the COBE team announced that the CMB still bore the black-body signature, albeit at a much lower temperature (2.7 K) due to the ongoing expansion of the universe. The COBE researchers also announced that they had detected tiny temperature fluctuations – as small as one part in 100 000 – in the CMB.
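+Put in absolute terms (simple arithmetic from the figures quoted), a fluctuation of one part in 100 000 on a 2.7 K background corresponds to a temperature difference of roughly 2.7 K × 10⁻⁵ ≈ 27 μK.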
+For this work, Smoot, together with John Mather, who worked on another instrument aboard COBE, shared the 2006 Nobel Prize for Physics “for their discovery of the blackbody form and anisotropy of the cosmic microwave background radiation”.
+After COBE, Smoot led another balloon experiment – the Millimeter Anisotropy eXperiment IMaging Array – that refined the measurements of the anisotropies of the CMB.
+Smoot also collaborated with the journalist Keay Davidson on the 1993 book Wrinkles in Time, which chronicled efforts to measure variations in the CMB.
+After winning the prize, Smoot continued his studies of the CMB as one of the founders of the European Space Agency’s Planck satellite, which launched in April 2009. He also worked in other areas of cosmology such as the study of gamma-ray bursts.
+In 2007 he became founding director of the Berkeley Center for Cosmological Physics, in which he used the money from his Nobel prize as seed cash. Two years later he joined Université Paris-Diderot VII (now known as the Université Paris-Cité) where he founded the Paris Center for Cosmological Physics.
+Smoot also made several media appearances throughout his career, including playing himself on the hit TV show The Big Bang Theory and in a TV commercial for Intuit TurboTax. He also appeared on the TV show Are You Smarter Than a 5th Grader?, where he bagged the top $1m prize.
+ +The post Cosmic microwave background pioneer George Smoot dies aged 80 appeared first on Physics World.
+]]>The post Ask me anything: Scott Bolton – ‘It’s exciting to be part of a team that’s seeing how nature works for the first time’ appeared first on Physics World.
+]]>As a planetary scientist, I use mathematics, physics, geology and atmospheric science. But as the principal investigator of Juno, I also have to manage the Juno team, and interface with politicians, people at NASA headquarters and other administrators. In that capacity, I need to be able to talk about topics at various technical levels, because many of the people I’m speaking with are not actively researching planetary science. I need a broad range of skills, but one of the most important is to be able to recognize when I don’t have the right expertise and need to find someone who can help.
+
I really love being part of a mission that’s discovering new information and new ideas about how the universe works. It’s exciting to be at the edge of something, where you are part of a team that’s seeing an image or an aspect of how nature works for the first time. The discovery element is truly inspirational. I also love seeing how a mixture of scientists with different expertise, skills and backgrounds can come together to understand something new. Watching that process unfold is very exciting to me.
+ +Some tasks I like least are related to budget exercises, administrative tasks and documentation. Some government rules and regulations can be quite taxing and require a lot of time to ensure forms and documents are completed correctly. Occasionally, an urgent action item will appear that requires an immediate response, forcing me to drop current work to fit in the new task. As a result, my normal work gets delayed, and this can be frustrating. I consider it one of my main jobs to shelter the team from these extraneous tasks so they can get their work done.
+The most important thing I know now is that if you really believe in something, you should stick to it. You should not give up. You should keep trying, keep working at it, and find people who can collaborate with you to make it happen. Early on, I didn’t realize how important it was to combine forces with people who complemented my skills in order to achieve goals.
+The other thing I wish I had known is that taking time to figure out the best way to approach a challenge, question or problem is beneficial to achieving one’s goals. That was a very valuable lesson to learn. We should resist the temptation to rush into finding the answer – instead, it’s worthwhile to take the time to think about the question and develop an approach.
+The post Ask me anything: Scott Bolton – ‘It’s exciting to be part of a team that’s seeing how nature works for the first time’ appeared first on Physics World.
+]]>The post Be a part of our quantum celebration appeared first on Physics World.
+]]>With the Institute of Physics (IOP) being one of the IYQ’s six founding members, we have already seen a packed agenda – including the UK’s opening meeting hosted by the Royal Society in February; a week-long parliamentary exhibition on quantum run by the IOP in June; plus numerous hackathons and careers events. It has been a very busy year.
+As the IYQ comes to a close, the UK is giving it a worthy send-off with an entire Quantum Week on 3–7 November. The IOP and the National Physical Laboratory will host conferences and public events, including a talk on “A new quantum world: ‘spooky’ physics to tech revolution” by quantum scientist and TV presenter Jim Al-Khalili.
+ +The highlight of the week for quantum physicists based in the UK will be the IOP’s two-day conference – Quantum Science and Technology: The First 100 Years; Our Quantum Future – at the Royal Institution in London. Day one, organized by the IOP’s History of Physics group, will look back on the first 100 years of quantum mechanics. Speakers will revisit foundational breakthroughs, while charting the evolution of quantum theory from its early abstract framework to the central pillar of modern physics it has become. Day two – led by the IOP’s quantum Business Innovation and Growth group – will look to the future of quantum tech and its expanding role in society, as quantum computing, sensing and communications become part of our world.
+Although we are celebrating a century of quantum advances, it is striking that most physicists remain undecided on some of the most foundational aspects of quantum theory. Even 100 years on, we cannot agree on which interpretation of quantum mechanics is correct; whether the wavefunction is merely a mathematical tool or a true representation of reality; or what effect an observer has on a quantum state.
+Indeed, some of the biggest open questions in physics – where exactly is the boundary between the quantum and the classical world; and how do we reconcile gravity and quantum mechanics – lie at the very heart of these conundrums. As we all gather at the IOP’s conference, to look back and ahead, perhaps some answers to these puzzles will become apparent.
+Be sure to register for the event as soon as possible so that you are in the room as we perhaps crack the quantum code to our universe.
++
This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
+Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.
+Find out more on our quantum channel.
++
The post Be a part of our quantum celebration appeared first on Physics World.
+]]>The post Schwinger effect appears in a 2D superfluid appeared first on Physics World.
+]]>
Vacuum tunnelling – an exotic process by which empty space can become temporarily filled with virtual particles when an extremely strong electric or magnetic field is applied to it – has never been observed in an experiment. This is because the field required to produce this “Schwinger effect” in the laboratory is simply too high and is usually only generated during intense astrophysical events. Theoretical physicists at the University of British Columbia (UBC) in Canada are now saying that an analogous effect could occur in a much simpler, tabletop system. In their model, a film of superfluid helium can be substituted for the vacuum, and the superfluid flow of this helium for the enormous applied field.
+The physicist Julian Schwinger was the first to put forward the effect that now bears his name. In 1951, he hypothesised that applying a uniform electric field to a vacuum, which is theoretically devoid of matter, would cause electron–positron pairs to spring into existence there. The problem is that this field needs to be, literally, astronomically high – on the order of 10¹⁸ V/m.
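+For context, the critical field Schwinger derived follows from the electron’s mass and charge alone; the expression below is the standard textbook form, quoted here for orientation rather than taken from the UBC work:
```latex
E_{\mathrm{c}} = \frac{m_e^{2} c^{3}}{e \hbar} \approx 1.3 \times 10^{18}\ \mathrm{V\,m^{-1}}
```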
+A team led by Philip Stamp says that a similar type of spontaneous pair production can occur in superfluid helium-4 just a few atomic layers thick and cooled to very low temperatures. In this liquid, which behaves essentially like a perfect, frictionless quantum vacuum state, pairs of quantized vortices/anti-vortices (spinning in opposite directions to each other) should occur in the presence of strong fluid flow. This process should be analogous to the Schwinger mechanism of vacuum tunnelling.
+ +“The helium-4 film provides a nice analogue to several cosmic phenomena, such as the vacuum in deep space, quantum black holes and even the early universe itself (phenomena we can’t ever approach in any direct experimental way),” says Stamp. “However, the real interest of this work may lie less in analogues (which may or may not accurately portray the ‘real thing’) and more in the way it alters our understanding of superfluids and of phase transitions in two-dimensional systems.”
+“These are real physical systems in their own right, not just analogues. And we can do experiments on these.”
+According to physicist Warwick Bowen of the University of Queensland in Australia, who was not involved in this study, the new work is “very interesting” and “exciting” because it describes a new mechanism to produce vortices. “This description might even tell us more about the microscopic origins of turbulence and represents a new kind of quantum phase transition,” he tells Physics World. “Importantly, the effect appears to be accessible with extensions to existing experimental techniques used to study thin superfluid helium films.”
+Physicist Emil Varga of Prague’s Charles University in the Czech Republic, who was not involved in this study either, adds: “The work seems quite rigorous and might help clean up some outstanding discrepancies between theory and experiment. And the possible analogy with the Schwinger effect is, as far as I can tell, new and quite interesting and fits well into the emerging field of using superfluid helium-4 as a model system for high-energy and/or astrophysics.”
+ +Stamp and colleagues say they would now like to better understand the vortex effective mass and look at analogues in full quantum gravity with no “semiclassical approximations”. They will also be focusing on how the effect they propose will lead to phenomena like quantum avalanches – which are different to quantum turbulence – and in particular, how it modifies the so-called “Kosterlitz-Thouless” picture of 2D transitions.
+They report their present work in PNAS.
+The post Schwinger effect appears in a 2D superfluid appeared first on Physics World.
+]]>The post Meniscus size and shape affect how liquid waves move through barriers appeared first on Physics World.
+]]>When the upper surface of a liquid comes into contact with the container it is in or with another object, the layer of liquid at the interface curves upwards. This well-known capillary effect, produced by surface tension, is known as the meniscus.
+ +In the new study, a team led by Likun Zhang at the National Center for Physical Acoustics and the Department of Physics at the University of Mississippi wanted to find out how the size and the shape of the meniscus affect the way waves move across it. In their experiments, the researchers filled a tank measuring 106 cm × 6.8 cm × 11 cm with distilled water to a height of 9.2 cm. They then placed a thin acrylic sheet 6.8 cm wide on the surface of the water to create the meniscus. Next, they sent surface waves with a frequency of about 15 Hz through the set-up using a paddle wavemaker and measured the resulting ripples on the surface.
+By varying both the frequency of the surface waves and the height and surface properties of the acrylic barrier (thanks to a surface coating to make it hydrophobic or hydrophilic), they were able to steadily adjust the meniscus very precisely – in steps of just 0.1 mm.
+The researchers found that a slightly curved meniscus allows more wave energy to pass through the barrier. Conversely, if the meniscus curves more steeply, it reduces the energy transported by the fluid.
+This is a counterintuitive result – we expect a barrier to block waves, explains Zhang. Instead, they observed that certain meniscus shapes can allow waves to pass through more easily. “Indeed, an adjustment of just a few millimetres can change the wave transmission by up to 60%, either going up or down depending on the meniscus shape,” he tells Physics World. “This is exciting because it’s the first time this effect has been observed in an experiment.”
+ +The discovery could open up new ways to control fluids more precisely – just by adjusting the meniscus, he adds. “This could be useful in open fluid channels, where liquids flow with a free surface exposed to air instead of being in a closed pipe. Such channels are common in nature and are also important in engineered systems, for example, in microfluidic devices, thermal control, and even technologies employed in space.”
+The researchers, who report their work in Physical Review Letters, say they now plan to develop theoretical models to better explain the effect they have observed. “For example, why do waves transmit less when the meniscus height is tall, but more when it is short?” ponders Zhang. “In the longer term, our goal is to exploit this knowledge to design better ways of controlling fluids for practical applications.”
+The post Meniscus size and shape affect how liquid waves move through barriers appeared first on Physics World.
+]]>The post Cosmic muons monitor river sediments surrounding Shanghai tunnel appeared first on Physics World.
+]]>
Researchers in China say that they are the first to use cosmic-ray muography to monitor the region surrounding a tunnel. Described as a lightweight, robust and affordable scintillator setup, the technology was developed by Kim Siang Khaw at Shanghai Jiao Tong University and colleagues. They hope that their approach could provide a reliable and non-invasive method for the real-time monitoring of subterranean infrastructure.
+Monitoring the structural health of tunnels and other underground infrastructure is challenging because of the lack of access. Inspection often relies on techniques such as borehole drilling, sonar scanning, and multibeam echo sounders to determine when maintenance is needed. These methods can be invasive, low resolution and involve costly and disruptive shutdowns. As a result there is often a trade-off between the quality of inspections and the frequency at which they are done.
+This applies to the Shanghai Outer Ring Tunnel: a major travel artery in China’s largest city, which runs for almost 3 km beneath the Huangpu River. Completed in 2023, the submerged section of the tunnel is immersed in water-saturated sediment, creating a unique set of challenges for structural inspection.
+In particular, different layers of sediment surrounding the tunnel can vary widely in their density, permeability, and cohesion. As they build up above the tunnel, they can impart uneven, time-varying stresses, making it incredibly challenging for existing techniques to accurately assess when maintenance is needed.
+ +To address these challenges, a multi-disciplinary team was formed to explore possible solutions. “During these talks, the [Shanghai Municipal Bureau of Planning and Natural Resources] emphasized the practical challenges of monitoring sediment build-up around critical infrastructure, such as the Shanghai Outer Ring Tunnel, without causing disruptive and costly shutdowns,” Khaw describes.
+Among the most promising solutions they discussed was muography, which involves detecting the muons created when high-energy cosmic rays interact with Earth’s upper atmosphere. These muons can penetrate deep beneath Earth’s surface and are absorbed at highly predictable rates depending on the density of the material they pass through.
+A simple version of muography involves placing a muon detector on the surface of an object and another detector beneath the object. By comparing the muon fluxes in the two detectors, the density of the object can be determined. By measuring the flux attenuation along different paths through the object, an image of the interior density of the object can be obtained.
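+As a rough illustration of that inversion step – not the analysis pipeline used by Khaw’s team – the sketch below assumes a simple exponential attenuation model in which the transmitted flux falls off with the areal density (density × path length) along the muon’s path; the attenuation coefficient and flux values are made-up numbers:
```python
import numpy as np

def density_from_flux_ratio(flux_open, flux_through, path_length_m, attenuation_coeff=0.005):
    """Toy estimate of the mean density along a muon path.

    Assumes flux_through = flux_open * exp(-attenuation_coeff * opacity),
    where opacity = density * path_length (areal density, kg/m^2). Real
    muography analyses integrate over the cosmic-ray energy spectrum rather
    than using a single coefficient; this only illustrates the inversion.
    """
    opacity = -np.log(flux_through / flux_open) / attenuation_coeff  # kg/m^2
    return opacity / path_length_m  # mean density, kg/m^3

# Hypothetical example: the muon rate halves over a 10 m path through material
print(density_from_flux_ratio(flux_open=100.0, flux_through=50.0, path_length_m=10.0))
```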
+Muography has been used for several decades in areas as diverse as archaeology, volcanology and monitoring riverbanks. So far, however, its potential for monitoring underground infrastructure has gone largely untapped.
+“We took this ‘old-school’ technique and pioneered its use in a completely new scenario: dynamically monitoring low-density, watery sediment build-up above a submerged, operational tunnel,” Khaw explains. “Our approach was not just in the hardware, but in integrating the detector data with a simplified tunnel model and validating it against environmental factors like river tides.”
+With its durable, lightweight, and affordable design, the scintillator features a dual-layer configuration that suppresses background noise while capturing cosmic muons over a broad range of angles. Crucially, it is portable and could be discreetly positioned inside an underground tunnel to carry out real-time measurements, even as traffic flows.
+To test the design, Khaw’s team took measurements along the full length of the Shanghai Outer Ring Tunnel while it was undergoing maintenance, allowing them to map out a profile of the sediment surrounding the tunnel. They then compared their muon flux measurements with model predictions based on sediment profiles for the Huangpu River measured in previous years. They were pleased to obtain results that were better than anticipated.
+ +“We didn’t know the actual tidal height until we completed the measurement and checked tidal gauge data,” Khaw describes. “The most surprising and exciting discovery was a clear anti-correlation between muon flux and the tidal height of the Huangpu River.” Unexpectedly, the detector was also highly effective at measuring the real-time height of water above the tunnel, with its detected flux closely following the ebb and flow of the tides.
+Reassuringly, the team’s measurements confirmed that there are no as-yet unmapped obstructions or gaps in the sediment above the tunnel, attesting to the structure’s safety.
+“Additionally, we have effectively shown a dual-purpose technology: it offers a reliable, non-invasive method for sediment monitoring and also reveals a new technique for tidal monitoring,” says Khaw. “This opens the possibility of using muon detectors as multi-functional sensors for comprehensive urban infrastructure and environmental oversight.”
+The research is described in the Journal of Applied Physics.
+The post Cosmic muons monitor river sediments surrounding Shanghai tunnel appeared first on Physics World.
+]]>The post Discovery of the Higgs boson at CERN inspires new stained-glass artwork appeared first on Physics World.
+]]>Born in Ukraine, Kondratyeva has a PhD in the theory of architecture and has an artist residency at the Romont Glass Museum (Vitromusée Romont) in Switzerland, where Discovery is currently exhibited.
+In 2023 Kondratyeva travelled to visit the LHC at CERN, which she notes represents “more than a laboratory [but] a gateway to the unknown”.
+ +“Discovery draws inspiration from the awe I felt standing at the frontier of human knowledge, where particles collide at unimaginable energies and new forms of matter are revealed,” Kondratyeva told Physics World.
+Kondratyeva says that the focal point of the artwork – a circle structured with geometric precision – represents the collision of two high-energy protons.
+The surrounding lead lines in the panel trace the trajectories of particle decays as they move through a magnetic field: right-curved lines represent positively charged particles, left-curved lines indicate negatively charged ones, while straight lines signify neutral particles unaffected by the magnetic field.
+The geometric composition within the central circle reflects the hidden symmetries of physical laws – patterns that only emerge when studying the behaviour of particle interactions.
+Kondratyeva says that the use of mouth-blown flashed glass adds further depth to the piece, with colours and subtle shades moving from hot and luminous at the centre to cooler, more subdued tones toward the edges.
+“Through glass, light and colour I sought to express the invisible forces and delicate symmetries that define our universe – ideas born in the realm of physics, yet deeply resonant in artistic expression,” notes Kondratyeva. “The work also continues a long tradition of stained glass as a medium of storytelling, reflecting the deep symmetries of nature and the human drive to find order in chaos.”
+In 2022 Kondratyeva teamed up with Rigetti Computing to create a piece of art inspired by the packaging for a quantum chip. Entitled Per scientiam ad astra (through science to the stars), the artwork was displayed at the 2024 British Glass Biennale at the Ruskin Glass Centre in Stourbridge, UK.
+The post Discovery of the Higgs boson at CERN inspires new stained-glass artwork appeared first on Physics World.
+]]>The post Imagining alien worlds: we explore the science and fiction of exoplanets appeared first on Physics World.
+]]>Weird and wonderful planets are also firmly entrenched in the world of science fiction, and the interplay between imagined and real planets is explored in the new book Amazing Worlds of Science Fiction and Science Fact. Its author Keith Cooper is my guest in this episode of the Physics World Weekly podcast and our conversation ranges from the amazing science of “hot Jupiter” exoplanets to how the plot of a popular Star Trek episode could inform our understanding of how life could exist on distant exoplanets.
+The post Imagining alien worlds: we explore the science and fiction of exoplanets appeared first on Physics World.
+]]>The post Gyroscopic backpack improves balance for people with movement disorder appeared first on Physics World.
+In development for over a decade, GyroPack is the brainchild of a team of neurologists, biomechanical engineers and rehabilitation specialists at the Radboud University Medical Centre, Delft University of Technology (TU Delft) and Erasmus Medical Centre. The first tests of its ability to improve balance in adults with ataxia, described in npj Robotics, produced encouraging enough results to continue the GyroPack’s development as a portable robotic wearable for individuals with neurological conditions.
+Degenerative ataxias, a variety of diseases of the nervous system, cause progressive cerebellar dysfunction manifesting as symptoms including lack of coordination, imbalance when standing and difficulty walking. Ataxia can afflict people of all ages, including young children. Managing the progressive symptoms may require lifetime use of cumbersome, heavily weighted walkers as mobility aids and to prevent falling.
+The 6 kg version of the GyroPack tested in this study contains two control moment gyroscopes (CMGs), which are attitude control devices that control orientation to a specific inertial frame-of-reference. Each CMG consists of a flywheel and a gimbal, which together generate the change in angular momentum that’s exerted onto the wearer to resist unintended torso rotations. Each CMG also contains an inertial measurement unit to determine the orientation and angular rate of change of the CMG.
+ +The backpack also holds two independent, 1.5 kg miniaturized actuators designed by the team that convert energy into motion. The system is controlled by a laptop and powered through a separate power box that filters and electrically separates electrical signals for safety. All activities can be immediately terminated when an emergency stop button is pushed.
+Lead researcher Jorik Nonnekes of Radboud UMC describes how the system works: “The change of orientation imposed by the gimbal motor, combined with the angular momentum of the flywheels, causes a free moment, or torque, that is exerted onto the system the CMG is attached to – which in this study is the human upper body,” he explains. “A cascaded control scheme reliably deals with actuator limitations without causing undesired disturbances on the user. The gimbals are controlled in such a way that the torque exerted on the trunk is proportional and opposite to the trunk’s angular velocity, which effectively lets the system damp rotational motion of the wearer. This damping has been shown to make balancing easier for unimpaired subjects and individuals post-stroke.”
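+A minimal sketch of that damping law – with illustrative gain and torque-limit values rather than parameters taken from the GyroPack study – might look like this:
```python
def damping_torque(trunk_angular_velocity_rad_s, damping_gain=2.0, torque_limit_nm=5.0):
    """Command a torque proportional and opposite to the trunk's angular
    velocity, clipped to the actuator limit. Gain and limit are illustrative,
    not values from the published device."""
    torque = -damping_gain * trunk_angular_velocity_rad_s  # N·m, opposes trunk rotation
    return max(-torque_limit_nm, min(torque_limit_nm, torque))

# Example: trunk rotating at +1.5 rad/s -> commanded torque of -3 N·m
print(damping_torque(1.5))
```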
+
For the study, 14 recruits diagnosed with degenerative ataxia performed five tasks: standing still with feet together and arms crossed for up to 30 s; walking on a treadmill for 2 min without using the handrail; making a clockwise and a counterclockwise 360° turn-in-place; performing a tandem stance with the heel of one foot touching the toes of the other for up to 30 s; and testing reactive balance by applying two forward and two backward treadmill perturbations.
+The participants performed these tasks under three conditions, two whilst wearing the backpack and one without as a baseline. In one scenario, the backpack was operated in assistive mode to investigate its damping power and torque profiles. In the other, the backpack was in “sham mode”, without assistive control but with sound and motor vibrations indistinguishable from normal operation.
+The researchers report that when fully operational, the GyroPack increased the user’s average standing time compared with not wearing the backpack at all. When used during walking, it reduced the variability of trunk angular velocity and the extrapolated centre-of-mass, two common indicators of gait stability. The trunk angular velocity variability also showed a significant reduction when comparing assistive to sham GyroPack modes. However, the performance of turn-in-place and perturbation recovery tasks were similar for all three scenarios.
+ +Interestingly, wearing the backpack in the sham scenario improved walking tasks compared with not wearing a backpack at all. The researchers attributed this either to the extra weight in the torso area improving body stabilization or to a placebo effect.
+Next, the team plans to redesign the device to make it lighter and quieter. “It’s not yet suitable for everyday use,” says Nonnekes in a press statement. “But in the future, it could help people with ataxia participate more freely in daily life, like attending social events without needing a walker, which many find bulky and inconvenient. This could greatly enhance their mobility and overall quality of life.”
+The post Gyroscopic backpack improves balance for people with movement disorder appeared first on Physics World.
+]]>The post Quarter of UK physics departments face closure, finds IOP report appeared first on Physics World.
+The survey findings are published in a new report – Physics Matters: Funding the Foundations of Growth – that says UK university physics is a “major strength” of the UK university system and vital to “national security and technological sovereignty”. The UK currently has about 17,000 physics undergraduates and more than 6000 physics department staff, with about 1 in 20 jobs in the UK using physics-related knowledge and skills.
+ +However, the report adds that this strength cannot be taken for granted and points to “worrying signs” that university physics has started to “punch below its weight”. This is compounded, the IOP says, by a drop in the number of students studying physics at UK universities and flat grant funding for UK physics departments over the past decade.
+In addition, UK universities are being hit by financial challenges and funding shortfalls caused by inflationary pressure and a drop in international student numbers. Given that physics comes with high teaching costs, the report states this threatens a “perfect storm” for university physics departments.
+The survey of 31 departmental heads, which was carried out in August, found that three unnamed departments face imminent closure, with a further 11 anticipating shutting courses. When asked to look ahead over the next two years, eight say they expect to face closure, with 18 anticipating course closures.
+ +One head of physics at a UK university told the IOP, which publishes Physics World, that they are concerned they are “close to breaking point”. “Our university has a £30m deficit,” the anonymous head said. “Staff recruitment is frozen, morale is low. Yet colleagues in our school continue to deliver with less and less and under increasing pressure.”
+Jonte Hance, a quantum physicist at Newcastle University, told Physics World that the threat of closures is “horrifying”. In 2004, Newcastle closed its physics department before reopening it over a decade later. “Worryingly, this approach – ignoring, or even cutting, any departments that don’t make a massive short-term profit – doesn’t just seem to be a panicked knee-jerk response on the part of vice-chancellors, but part of a concerted and planned strategy, aiming to turn universities into business incubators,” adds Hance.
+The IOP is now calling on the UK government to commit additional funding for science and engineering departments to help with the operation, maintenance, refurbishment and building of labs and technical facilities. It also wants an “early-warning system” created for departments at risk as well as changes to visa policy to remove international students from net migration figures, retain the graduate visa in its current form, and make “global talent and skilled worker” visas more affordable.
++While we understand the pressures on public finances, it would be negligent not to sound the alarm
+Keith Burnett
In addition, the IOP wants the UK government to develop a decade-long plan that includes reform of higher-education funding so universities can fund the cost of teaching “important subjects such as physics”. Keith Burnett, the outgoing IOP president, warns that without such action, the UK is “walking towards a cliff edge”, although he believes there is still time to “avert a crisis”.
+“While we understand the pressures on public finances, it would be negligent not to sound the alarm for a national capability fundamental to our wellbeing, competitiveness and the defence of the realm,” says Burnett, who is former vice-chancellor at the University of Sheffield and former chair of physics at the University of Oxford. “Physics researchers and talented physics students are our future, but if action isn’t taken now to stabilise, strengthen and sustain one of our greatest national assets, we risk leaving them high and dry.”
+The post Quarter of UK physics departments face closure, finds IOP report appeared first on Physics World.
+]]>The post AI-powered algorithms help provide rapid, accurate contouring of brain metastases appeared first on Physics World.
+]]>“There are two challenges that we face in the clinic,” explains Evrim Tezcanli, professor of radiation oncology at Acibadem Atasehir Hospital in Turkey. “First, we want to treat all the lesions. But very small lesions, particularly those under 0.1 cc, can easily be missed by untrained eyes. Larger metastases, meanwhile, are more challenging to contour – you want to cover the whole lesion without missing a pixel, but don’t want to spill radiation over into the brain tissue. It’s time-consuming work, especially if there are multiple lesions.”
+To address these challenges, Siemens Healthineers has developed an AI-powered software tool that automates the contouring of brain metastases. The software – integrated into the company’s syngo.via RT Image Suite and AI-Rad Companion Organs RT packages – employs advanced deep-learning algorithms to rapidly analyse a patient’s MR images and contour and label metastatic lesions. Alongside, it delineates key organs at risk, such as the brainstem and optic structures.
+“One of the main strengths of this software is that it reduces the manual workloads really well,” says Tezcanli.
+To evaluate the accuracy and time efficiency of the new software tool, Tezcanli and her team compared AI-based delineation with the performance of two experienced radiation oncologists. The study included data from 10 patients with between three and 17 brain metastases. The radiation oncologists manually contoured all lesions (82 in total) based on patients’ contrast-enhanced MRI scans; the same images were also processed by the AI software to automatically contour the metastases.
+Tezcanli reports that the software performed remarkably well. “One of the most significant findings was that the manual contours and the AI-generated contours showed strong agreement, especially for lesions larger than 0.1 cc. In terms of geometric similarity, the AI-generated boundaries were well within our clinically acceptable levels,” she says.
+Comparing the manual and AI-generated contours revealed a median Dice similarity coefficient of 0.83, increasing to 0.91 when excluding very small lesions, and a median Hausdorff distance (the maximum distance between the two contours) of 0.3 mm.
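+For readers unfamiliar with the metric, the Dice similarity coefficient compares two segmentation masks as 2|A∩B|/(|A|+|B|); the short sketch below, using made-up masks rather than patient data, shows how it is computed:
```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|). Equals 1.0 for identical masks, 0.0 if disjoint."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Illustrative 3x3 "contours" sharing two voxels: Dice = 2*2/(3+2) = 0.8
manual = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
auto = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])
print(dice_coefficient(manual, auto))
```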
++AI will definitely have a place because of the time savings and accuracy it delivers
+Evrim Tezcanli
To quantify the overall time efficiency, the researchers timed the contouring process for the radiation oncologists and the AI tool. They also measured the time taken for expert review of the AI-generated results, in which a radiation oncologist checks the contours and performs any necessary adjustments before they are approved for treatment planning.
+The AI software completed the contouring for each patient in just one to two minutes, reducing the workload by an average of 75%, and in some cases saving over 30 minutes per patient. “We still needed to review the AI contours, but the correction time was only three to four minutes,” says Tezcanli, emphasizing that expert review remains essential when using AI. “One case required nine minutes, but even with that patient we had a time saving of 75%.”
+As well as saving time for the oncology staff, AI-based contouring has a lot to offer from the patient’s perspective. Spending less time on demanding manual contouring frees up the physician to spend more time with the patient.
+For their study, the researchers analysed post-contrast T1 MPRAGE sequences recorded using a 3 Tesla MRI scanner. To maximize lesion enhancement, they acquired images several minutes after contrast injection, though Tezcanli notes that this timing may vary between treatment centres. They also used image slices of 1 mm or less. “This is a very precise treatment and we want to make sure everything is accurate,” she adds.
+
The study deliberately included patients with varying numbers of different sized metastases, to assess the algorithms under diverse clinical scenarios. In terms of lesion detection, the software exhibited an overall sensitivity of 94% – finding 77 of the 82 metastases. The five missed lesions were extremely small, 0.01 to 0.03 cc, a volume that’s challenging even for physicians to detect. The software did, however, find three additional lesions that were not originally identified and which were later confirmed as brain metastases.
+The false positive rate was 8.5%, with the software mistakenly identifying seven vascular structures as metastases. “Because the algorithms work with contrast enhancement, any vascular enhancements that mimic the tumour can be mistaken,” says Tezcanli. “Here we needed to use a dedicated MRI sequence to define whether it was a metastasis or not. That’s just one thing to be cautious about. Other than that, we were very satisfied with the software’s ability to detect small lesions and find ones that we hadn’t detected.”
+The contours generated by the AI software are exported in DICOM RT Struct format, enabling direct transfer into the treatment planning system. At Acibadem Atasehir Hospital, this next step is performed using HyperArc, a radiosurgery-specific software module within the Eclipse treatment planning infrastructure. HyperArc performs automated treatment planning and delivery, enabling fast and efficient SRS on the Varian TrueBeam and Edge linacs.
+“HyperArc has proven to be highly effective, even when treating patients with multiple brain metastases,” says Burcin Ispir, a medical physicist working alongside Tezcanli. “One of its biggest powers is its ability to perform single isocentre, automated planning for multiple targets, which significantly reduces planning time while maintaining excellent plan quality. In our experience, HyperArc-generated plans offer high conformity and steep dose gradients, which are critical for sparing normal brain tissue.”
+Unlike conventional radiotherapy where homogeneity is desirable, SRS plans intentionally allow controlled heterogeneity within the target volume to improve sparing of normal tissue. HyperArc also offers automation of the beam geometry, including collimator and couch angles, ensuring consistent, fast and highly reproducible plans. “For selected cases, we have found this enables a same-day workflow where contouring, planning and treatment can all be completed within a single day,” Ispir explains.
+ +The automation in AI contouring and HyperArc planning speeds up the treatment planning process, and when compared to traditional workflows, potentially allows patients to commence radiation therapy treatments earlier. The ability to commence treatment as soon as possible after the MRI scan is imperative when treating brain metastases. Most patients will also be receiving systemic therapies, which need to be delivered on schedule. But perhaps more importantly, the high spatial precision of SRS makes the technique sensitive to even small anatomical changes within lesions. If the delay between MR imaging and radiotherapy treatment is too long, any changes occurring during that time could decrease targeting accuracy.
+“We are in an era where we are using the technology to have even same-day treatments,” says Tezcanli. “We have rapid contouring with AI, a quick review of a few minutes by the expert radiation oncologist, treatment planning with HyperArc, and then a few hours later the patient is treated. This is where the technology is taking us.”
+Continuing improvements in cancer treatment techniques mean that patients are living longer, but this also increases the likelihood of metastases developing. In addition, higher quality MRI scans and enhanced imaging protocols lead to more metastases being detected. These factors combine to increase the workload on centres treating multiple metastases with SRS.
+“I think we will be treating brain metastasis more and more,” says Tezcanli. “And I think radiosurgery will be the main treatment modality in the future. AI will definitely have a place because of the time savings and accuracy it delivers. And this is only the first version of the software; I’m sure it can be improved to find even smaller lesions or differentiate vascular structures.”
+Following the initial software evaluation, the team has not yet fully integrated it into their clinical routine, but Tezcanli tells Physics World that they would be happy to use the software in every one of their brain metastases treatments. “I think we will be using it routinely in the future in all of our clinical cases,” she says.
+The post AI-powered algorithms help provide rapid, accurate contouring of brain metastases appeared first on Physics World.
+]]>The post NASA launches IMAP mission to provide real-time space weather forecasts appeared first on Physics World.
+]]>The solar wind is a stream of charged particles emitted by the Sun into space that helps to form the heliosphere. IMAP will study the solar wind and its interaction with the interstellar medium to better understand the heliosphere and its boundaries, which begin about 14 billion kilometres from Earth. This boundary offers protection from harsh radiation from space and is key to creating and maintaining a habitable solar system.
+ +IMAP, which is 2.4 m in diameter and almost 1 m high, will also support real-time observations of the solar wind and energetic particles that can harm satellites as well as disrupt global communications and electrical grids on Earth. From L1, IMAP will provide a 30-minute warning to astronauts and spacecraft near Earth of harmful radiation.
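+That 30-minute figure is consistent with a back-of-the-envelope estimate, assuming the Sun–Earth L1 point sits roughly 1.5 million km upstream of Earth and that a fast solar-wind stream travels at around 800 km/s (round numbers, not mission specifications):
```latex
t \approx \frac{d_{\mathrm{L1}}}{v_{\mathrm{sw}}}
  = \frac{1.5 \times 10^{6}\ \mathrm{km}}{800\ \mathrm{km\,s^{-1}}}
  \approx 1.9 \times 10^{3}\ \mathrm{s} \approx 30\ \mathrm{min}
```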
+To do so, IMAP contains 10 instruments that capture data on energetic neutral atoms, the solar wind and interstellar dust.
+They include a high-energy ion telescope and an electron instrument, as well as a magnetometer that has been developed by Imperial College London. It will measure the strength and direction of magnetic fields in space, providing crucial data to improve our understanding of space weather.
+“Our magnetic field instrument will help us understand how particles are accelerated at shock waves and travel through the solar system,” notes Imperial’s Timothy Horbury. “I’m especially excited that our data will be made public within minutes of being measured over a million miles away, supporting real-time space weather forecasts. It’s a great example of how scientific measurements can positively impact society.”
+The IMAP mission is led by Princeton University and managed by the Johns Hopkins Applied Physics Laboratory with contributions from 25 institutions across six countries.
+The post NASA launches IMAP mission to provide real-time space weather forecasts appeared first on Physics World.
+]]>The post Environmental physics should be on a par with quantum physics or optics appeared first on Physics World.
+]]>The challenges are considerable and the crisis is urgent. But we know that physics has already contributed enormously to society – and I believe that environmental physics can make a huge difference by identifying, addressing and alleviating the problems at stake. However, physicists will only be able to make a difference if we put environmental physics at the centre of our university teaching.
+Environmental physics is defined as the study of how living organisms respond to their environment, within the framework of physics principles and processes. It examines the interactions within and between the biosphere, the hydrosphere, the cryosphere, the lithosphere, the geosphere and the atmosphere. Stretching from geophysics, meteorology and climate change to renewable energy and remote sensing, it also covers soils and vegetation, the urban and built environment, and the survival of humans and animals in extreme environments.
+Environmental physics was pioneered in the UK in the 1950s by the physicists Howard Penman and John Monteith, who were based at the Rothamsted Experimental Station, which is one of the oldest agricultural research institutions in the world. In recent decades, environmental physics has become more prevalent in universities across the world.
+Some UK universities either teach environmental physics in their undergraduate physics degrees or have elements of it within environmental science degrees. That’s the approach taken, for example, by University College London as well as the universities of Cambridge, Leicester, Manchester, Oxford, Reading, Strathclyde and Warwick.
+ +When it comes to master’s degrees in environmental physics, there are 17 related courses in the UK, including nuclear and environmental physics at Glasgow and radiation and environmental protection at Surrey. Even the London School of Economics has elements of environmental physics in some of its business, geography and economics degrees via a “physics of climate” course.
+But we need to do more. The interdisciplinary nature of environmental physics means it overlaps with not just physics and maths but agriculture, biology, chemistry, computing, engineering, geology and health science too.
+Indeed, recent developments in machine learning, digital technology and artificial intelligence (AI) have had an impact on environmental physics – for example, through the use of drones in environmental monitoring and simulations – while AI algorithms can catalyse modelling and weather forecasting. AI could also in future be used to predict natural disasters, such as earthquakes, tsunamis, hurricanes and volcanic eruptions, and to assess the health implications of environmental pollution.
+Environmental physics is exciting and challenging, and has solid foundations in mathematics and the sciences, with experiments both in the lab and in the field. Environmental measurements are a great way to learn about the use of uncertainties, monitoring and modelling, while providing scope for project work and teamwork. A grounding in environmental physics can also open the door to lots of exciting career opportunities, with continuing environmental change meaning that environmental research will remain vital.
+ +Solving major regional and global environmental problems is a key part of sociopolitics and so environmental physics has a special role to play in the public arena. It gives students the chance to develop presentational and interpersonal skills that can be used to influence decision makers at local and national government level.
+Taken together, I believe a module on environmental physics should be a component of every undergraduate degree as a minimum, ideally having the same weight as quantum or statistical physics or optics. Students of environmental physics have the potential to be enabled, engaged and, ultimately, to be empowered to meet the demands that the future holds.
+The post Environmental physics should be on a par with quantum physics or optics appeared first on Physics World.
+]]>The post Unlocking the full potential of organic solar cells appeared first on Physics World.
+]]>A key metric for evaluating solar cell performance is the fill factor, which measures how close a cell comes to delivering its theoretical maximum power. Fill factors are influenced by both the materials used and the design of the device. Historically, recombination losses have been considered the primary limitation to maximising fill factors. This is where electrons and holes recombine before contributing to the electrical current.
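+In equation form – the standard definition rather than anything specific to this study – the fill factor compares the power at the maximum-power point with the product of the open-circuit voltage and short-circuit current density:
```latex
\mathrm{FF} = \frac{P_{\mathrm{max}}}{V_{\mathrm{oc}}\, J_{\mathrm{sc}}}
            = \frac{V_{\mathrm{mp}}\, J_{\mathrm{mp}}}{V_{\mathrm{oc}}\, J_{\mathrm{sc}}}
```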
+Recent research has highlighted another critical variable called transport resistance. In organic semiconductors, which typically have low electrical conductivity, charge carriers move slowly through the material. This slow transport increases the likelihood of recombination before the charges reach the electrodes, leading to significant fill factor losses even in highly efficient devices. If the electrons and holes were runners in a race, recombination losses would be runners giving up before they finish. In organic materials, charge transport occurs via hopping: it is as if the runners are moving through mud – it is much harder, increasing the transport resistance, and they are more likely to give up before the end.
+To address this, the authors developed an analytical model that incorporates the nature of this slow transport and energetic disorder, and systematically evaluates transport resistance using experimental data. By analysing current-voltage characteristics and light intensity-dependent open-circuit voltage across a range of temperatures, the model distinguishes between losses due to recombination and those due to charge transport.
+This refined approach enables more accurate predictions of fill factors and offers practical strategies to minimise transport-related losses. Improving fill factors not only enhances the performance of organic solar cells but also provides insights applicable to other emerging photovoltaic technologies, helping to guide the future design of third-generation solar power.
+Transport resistance strikes back: unveiling its impact on fill factor losses in organic solar cells
+Maria Saladina and Carsten Deibel 2025 Rep. Prog. Phys. 88 038001
++
Efficient charge generation at low energy losses in organic solar cells: a key issues review by Ye Xu, Huifeng Yao, Lijiao Ma, Jingwen Wang and Jianhui Hou (2020)
+The post Unlocking the full potential of organic solar cells appeared first on Physics World.
+]]>The post Floquet engineering made easy appeared first on Physics World.
+]]>These Floquet systems provide versatile platforms to investigate new physical phenomena such as time crystals, and can also be used to create fault-tolerant states for quantum computing.
+What’s important here is the ability to precisely control the behaviour of the quantum system by designing its effective Hamiltonian – the mathematical object that governs how the system evolves over time.
+When researchers want a system to behave in a very specific way, they engineer the Hamiltonian to match a desired target. This is called Floquet engineering.
+Unfortunately, it’s not possible to create a simple (analytical) Floquet Hamiltonian for any given system, and mathematical tools such as the Magnus expansion are usually required to get a Hamiltonian that is sufficiently precise.
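+For a drive with period T, the leading terms of the Magnus expansion take the standard textbook form shown below; this is generic notation for orientation, not the specific construction developed by Xu and Guo:
```latex
H_{\mathrm{eff}} \approx \frac{1}{T} \int_{0}^{T} H(t)\,\mathrm{d}t
  \;+\; \frac{1}{2 \mathrm{i} \hbar T} \int_{0}^{T} \mathrm{d}t_{1} \int_{0}^{t_{1}} \mathrm{d}t_{2}\, \bigl[ H(t_{1}), H(t_{2}) \bigr] \;+\; \cdots
```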
+However, when you engineer a Hamiltonian using approximations, you get errors – not great for most applications and especially quantum computing.
+Mitigating these errors is possible to some degree, although until now it has been done one system at a time. What we really need is a systematic approach for mitigating these errors for any given system.
+This is the problem that the latest paper by researchers Xu and Guo tries to address.
+They used symmetries (like rotational or mirror symmetry) to simplify the design of these correction terms. This makes the calculations more manageable and the system more predictable.
+They also provided a numerical method to calculate these corrections efficiently, which is important for practical implementation.
+They validated their method by creating Hamiltonians that are directly relevant for quantum computers.
+The authors expect to further refine their method in the future, but this represents a big step forward towards practically engineering arbitrary Floquet Hamiltonians.
+Perturbative framework for engineering arbitrary Floquet Hamiltonian
+Xu and Guo, 2025 Rep. Prog. Phys. 88 037602
++
The post Floquet engineering made easy appeared first on Physics World.
+]]>The post Negative time observed in photon-atom interaction appeared first on Physics World.
+]]>Quantum mechanics has produced a lot of weird results, and the latest originated in 2022 with an experiment conducted by physicists at the University of Toronto, Canada. Led by Aephraim Steinberg, they found that when a photon passing through a cloud of atoms excites an electron in one of the atoms, it seems to spend a similar amount of time in this atomic excitation as a photon that passes straight through the cloud, apparently without exciting an atom at all.
+To understand the theory behind this counterintuitive result, Steinberg and colleagues worked with researchers from the Massachusetts Institute of Technology in the US, Griffith University in Australia, and the Indian Institute of Science Education and Research. The framework they developed, which they now describe in APL Quantum, involves a single photon being sent into an atom cloud that is continuously monitored with a so-called “weak probe” that detects the presence of an atomic excitation anywhere in the cloud. Integrating this weak-probe signal over time thus provides a measure of how long the photon spends in an excited atomic state before it leaves the atom.
+ +After crunching the numbers, the researchers made a prediction that surprised even them: the average excitation time can be negative. They also found that this excitation time should be the same as another, more familiar, time known as the group delay.
+The team’s lead theorist, Griffith’s Howard Wiseman, says it is important to distinguish between these two times. A negative group delay, he observes, can be explained in a relatively intuitive way. Because the front of the photon pulse exits the atom cloud before the peak of the pulse enters it, and the peak never exits because most of the photons are scattered, there is, he says, an “illusion” that the photons leave the medium before they arrive.
+However, he continues, what this framework actually measures is the time a transmitted photon spends in the atom cloud. It says nothing about whether such a photon excites an atom on its way through the cloud. Although it is normally assumed that any photon that excites an atom gets randomly scattered and never reaches the detector, says Wiseman, “We now say that this is not true and forward-scattered photons actually contribute a lot to the average measurement”.
+To test this theory, Steinberg and colleagues set up a new experiment that sends two counter-propagating laser beams into a cloud of ⁸⁵Rb atoms that have been cooled to 60–70 µK. The first beam contains the photons that may give rise to atomic excitation and may be either transmitted or scattered. The second beam is used for the weak measurements and detects the presence of an excitation via tiny shifts in its phase. These measurements required a high level of stability and a low level of interference in all parts of the setup.
+After refining their system, the researchers measured average atomic excitation times for transmitted photons ranging from (–0.82 ± 0.31)𝜏0 for the most narrowband pulse to (0.54 ± 0.28)𝜏0 for the most broadband pulse. Here, 𝜏0 is the excitation time averaged over both scattered and transmitted photons, which is always positive and ranges from 10–20 ns, depending on various parameters. This result shows that negative excitation times do indeed have a physical reality in quantum measurements.
+According to Steinberg, while he and his colleagues previously knew that negative numbers could pop out of the mathematics, they tended to sweep them under the rug and make excuses for them, assuming that while they correctly described the location of a peak, they weren’t physically relevant. “I am now led to revisit this and say: those negative numbers appear to have more physical significance that we would previously have attributed to them,” he tells Physics World. As a result, he hopes to “begin to investigate more deeply what we think the meaning of a ‘negative time’ is”.
+ +Jonte Hance, a quantum physicist at Newcastle University, UK, who was not involved in this research, warns that interpreting negative time too literally can lead to paradoxes that aren’t necessary for the physics to work. Nevertheless, he says, the “anomalous” values recorded in the weak measurement “point to something interesting and quantum happening”.
+Hance explains that in his view, a negative value for the mean atomic excitation time for transmitted photons implies contextuality – a property of quantum systems whereby measuring the system in different ways can make it look like it has incompatible properties if we assume that measuring the system does nothing to it. “Contextuality seems to be one of the tell-tale signs a quantum scenario may provide us with an advantage at a certain task over all possible classical ways of doing that task,” he says. “And so it makes me excited for what this could be used for.”
+The post Negative time observed in photon-atom interaction appeared first on Physics World.
+]]>The post Training for the stars: Rosemary Coogan on becoming an astronaut appeared first on Physics World.
+]]>Coogan explains what astronaut training really entails: classroom sessions packed with technical knowledge, zero-gravity parabolic flights, and underwater practice in Houston’s neutral buoyancy pool. Born in Northern Ireland, Coogan reflects on her personal journey. From a child dreaming of space, she went on to study physics and astrophysics at Durham University, then completed a PhD on the evolution of distant galaxies.
+When not preparing for lift off, Coogan counts sci-fi among her interests – she loves getting lost in the world of possibilities. She’s also candid about the psychological side of astronaut training, and how she’s learned to savour the learning process itself rather than obsess over launch dates. Hosted by Andrew Glester, this episode captures both the challenge and wonder of preparing for an imminent journey to space.
+The post Training for the stars: Rosemary Coogan on becoming an astronaut appeared first on Physics World.
+]]>The post Bridging the gap between scientists, policy makers and industry to build the quantum ecosystem appeared first on Physics World.
+]]>Our journey into science policy engagement started almost by chance. Back in 2022 we received an e-mail from Imperial‘s Centre for Quantum Engineering Science and Technology (QuEST) advertising positions for PhD students to support evidence-based policy-making. Seeing it as an opportunity to contribute beyond the lab, we both took up the challenge. It became an integral part of our PhD experience. What started as a part-time role alongside our PhDs turned into something much more than that.
+
Elizabeth Pasatembou started her PhD in 2021, working with the particle-physics group and Centre for Cold Matter at Imperial College London. Her research focused on quantum sensing for fundamental physics as part of the Atom Interferometer Observatory and Network (AION) project. She will soon start as postdoctoral fellow working on quantum communications with the Cyprus Quantum Communications Infrastructure (CyQCI) team at the Cyprus University of Technology, which is part of the pan-European Quantum Communication Infrastructure (EuroQCI) project.
+Her interest in science policy engagement started out of curiosity and the desire to make a more immediate impact during her PhD. “Research can feel slow,” she says. “Taking up this role and getting involved in policy gave me the chance to use my expertise in a way that felt directly relevant, and develop new skills along the way. I also saw this as an opportunity to challenge myself and try something new.”
+Pasatembou also worked on a collaborative project between Imperial’s Institute for Deep Tech Entrepreneurship and QuEST, conducting interviews with investors to inform the design of a tailored curriculum on quantum technologies for the investor community.
+Dimitrie Cielecki joined Imperial’s Complex Nanophotonics group as a PhD candidate in 2021. The opportunity to work in science policy came at a time when his research was evolving in new directions. “The first year of my PhD was not straightforward, with my project taking unexpected, yet exciting, turns in the realm of photonics, but shifting away from quantum,” explains Cielecki, whose PhD topic was spatio-temporal light shaping for metamaterials.
+After seeing an advert for a quantum-related policy fellowship, he decided to jump in. “I didn’t even know what supporting policy-making meant at that point,” he says. “But I quickly became driven by the idea that my actions and opinions could have a quick impact in this field.”
+Cielecki is now a quantum innovation researcher at the Institute for Deep Tech Entrepreneurship in the Imperial Business School, where he is conducting research on the correlations between technical progress, investors’ confidence and commercial success in the emerging quantum sector.
++
We joined QuEST and the Imperial Policy Forum – the university’s policy engagement programme – in 2022 and were soon sitting at the table with leading voices in the nascent quantum technology field. We had many productive conversations with senior figures from most quantum technology start-ups in the UK. We also found ourselves talking to leaders of the National Quantum Technology Programme (including its chair, Sir Peter Knight); to civil servants from the Office for Quantum in the Department of Science, Innovation and Technology (DSIT); and to members of both the House of Commons and the House of Lords.
+Sometimes we would carry out tasks such as identifying the relevant stakeholders for an event or a roundtable discussion with policy implications. Other times we would do desk research and contribute to reports used in the policy-making process. For example, we responded to the House of Commons written evidence inquiry on Commercialising Quantum Technologies (2023) and provided analysis and insights for the Regulatory Horizons Council report Regulating Quantum Technology Applications (2024). We also moderated a day of roundtable discussions with quantum specialists for the Parliamentary Office of Science and Technology’s briefing note Quantum Computing, Sensing and Communications (2025).
+When studying science, we tend to think of it as a purely intellectual exercise, divorced from the real world. But we know that the field is applied to many areas of life, which is why countries, governments and institutions need policies to decide how science should be regulated, taught, governed and so on.
+Science policy has two complementary sides. First, it’s about how governments and institutions support and shape the practice of science through, for example, how funding is allocated. Second, science policy looks at how scientific knowledge informs and guides policy decisions in society, which also links to the increasingly important area of evidence-informed policy-making. These two dimensions are of course linked – science policy connects the science and its applications to regulation, economics, strategy and public value.
+ +Quantum policy specifically focuses on the frameworks, strategies and regulations that shape how governments, industries and research institutions develop and deploy quantum technologies. Many countries have published national quantum strategies, which include technology roadmaps tied to government investments. These outline the infrastructure needed to speed up the adoption of quantum technology – such as facilities, supply chains and a skilled workforce.
+In the UK, the National Quantum Technology Programme (NQTP) – a government-led initiative that brings together industry, academia and government – has pioneered the idea of co-ordinated national efforts for the development of quantum technologies. Set up in 2014, the programme has influenced other countries to adopt a similar approach. The NQTP has been immensely successful in bringing together different groups from both the public and private sectors to create a productive environment that advances quantum science and technology. Co-operation and communication have been at the core of this programme, which has led to the UK’s 10-year National Quantum Strategy. Launched in 2023, this details specific projects to help accelerate technological progress and make the country a leading quantum-enabled economy. But that won’t happen unless we have mechanisms to help translate science into innovation, resilient supply chains, industry-led standardization, stable regulatory frameworks and a trained workforce.
+
Quantum technologies can bring benefits for national security, from advanced sensing to secure communications. But their dual-use nature also poses potential threats as the technology matures, particularly with the prospect of cryptographically relevant quantum computers – machines powerful enough to break encryption. To mitigate these risks in a complex geopolitical landscape, governments need tailored regulations, whether that’s preparing for the transition to post-quantum cryptography (making communication safe from powerful code-cracking quantum computers) or controlling exports of sensitive products that could compromise security.
+As with artificial intelligence (AI) and other emerging technologies, there are also ethical considerations to take into account when developing quantum technologies. In particular, we need policies to ensure transparency, inclusivity and equitable access. International organizations such as UNESCO and the World Economic Forum have already started integrating quantum into their policy agendas. But as quantum technology is such a rapidly evolving new field, we need to strike a balance between innovation and regulation. Too many rules can stifle innovation but, on the other hand, policy needs to keep up with innovation to avoid any future serious incidents.
+Policy engagement involves collaborating with three sets of stakeholders – academia; industry and investors; and policy-makers. But as we started to work with these groups, we noticed each had a different way of communicating, creating a kind of language barrier. Scientists love throwing around equations, data and figures, often using highly technical terminology. Industry leaders and investors, on the other hand, talk in terms of how innovations could affect business performance and profitability, and what the risk for their investments could be. As for policy-makers, they focus more on how to distinguish between reality and hype, and look at budgets and regulations.
+We found ourselves acting as cross-sector translators, seeking to bridge the gap between the three groups. We had to listen to each stakeholder’s requirements and understand what they needed to know. We then had to reframe technical insights and communicate them in a relevant and useful way – without simplifying the science. Once we grasped everyone’s needs and expectations, we offered relevant information, putting it into context for each group so everyone was on the same page.
+To help us do this, we considered the stakeholders as “inventor”, “funder”, “innovator” or “regulator”. As quantum technology is such a rapidly growing sector, the groupings of academia, industry and policy-makers are so entangled that the roles are often blurred. This alternative framework helped us to identify the needs and objectives of the people we were working with and to effectively communicate our science or evidence-backed messages.
+During our time as policy fellows, we were lucky to have mentors to teach us how to navigate this quantum landscape. In terms of policy, Craig Whittall from the Imperial Policy Forum was our guide on protocol and policy scoping. We worked closely with QuEST management – Peter Haynes and Jess Wade – to organize discussions, collect evidence from researchers, generate policy leads, and formulate insights or recommendations. We also had the pleasure of working with other PhD students, including Michael Ho, Louis Chen and Victor Lovic, who shared the same passion for bridging quantum research and policy.
+Having access to world-leading scientists and a large pool of early-career researchers spread across all departments and faculties, facilitated by the network in QuEST, made it easier for us to respond to policy inquiries. Early on, we mapped out what quantum-related research is going on at Imperial and created a database of the researchers involved. This helped inform the university’s strategy regarding quantum research, and let us identify who should contribute to the various calls for evidence by government or parliament offices.
+
+PhD students are often treated as learners rather than contributors. But our experience showed that with the right support and guidance, early-career researchers (ECRs) such as ourselves can make a real impact by offering fresh perspectives and expertise. We are the scientists, innovators or funders of the future, so there is value in training people like us to understand the bigger picture as we embark on our careers.
+To encourage young researchers to get involved in policy, QuEST and DSIT recently organized two policy workshops for ECR quantum tech specialists. Civil servants from the Office for Quantum explained their efforts and priorities, while we answered questions about our experience – the aim being to help ECRs to engage in policy-making, or choose it as a career option.
+In April 2025 QuEST also launched an eight-week quantum primer for policy-makers. The course was modelled on a highly successful equivalent for AI, and aimed to help policy-makers make more technically informed policy decisions. The first cohort welcomed civil servants from across government, and it was so highly reviewed that a second course will run from October 2025.
+Our experience with QuEST has shown us the importance of scientists taking an active role in policy-making. With the quantum sector evolving at a formidable rate, it is vital that a framework is in place to take research from the lab to society. Scientists, industry, investors and policy-makers need to work together to create regulations and policies that will ensure the responsible use of quantum technologies that will benefit us all.
+The post Bridging the gap between scientists, policy makers and industry to build the quantum ecosystem appeared first on Physics World.
+]]>The post Harnessing quantum duality for object imaging appeared first on Physics World.
+]]>
The theory of quantum mechanics was born out of the need to explain how an object could behave both as a particle and as a wave. Since then, researchers have been attempting to understand and quantify the degree of “waveness” and “particleness” of quantum systems.
+Now, Pawan Khatiwada and Xiaofeng Qian, both based at the Stevens Institute of Technology in the US, have published a paper in Physical Review Research unveiling the missing piece of the puzzle in the unique relationship between the wave and particle nature of a quantum object. The key piece is coherence, which describes the statistical phase relationship between the possible states a quantum system can adopt. If the phase relationship is well-defined, that is, stable and consistent, the system is coherent and therefore has the potential to exhibit interference, a wave-like property. If the system is incoherent, the phase relationship is random and variable. Coherence therefore characterizes the total capacity or potential of a quantum system to exhibit wave-like behaviour. The degree of waveness that is then realised and can be observed is known as the visibility and is characterized by the intensity of the maxima and minima in the interference pattern. The amount of particleness, known as the predictability, is a measure of how well we can predict the path a particle will take and is defined by the degree to which the particle is taking only one of the two paths.
+ +The amount of interference that an object exhibits (the visibility) may only be a fraction of its total capacity to exhibit wave-like behaviour (its coherence) and can change according to how we choose to observe the system. For example, if we decide to track the position of the photon through a double slit, no interference effects are observed, and the photon will behave as a particle. However, if we remove this tracking, the usual patterns of dark and light bands that are characteristic of interference emerge, and the photon behaves as a wave. In the first situation the visibility is low, but the coherence can nonetheless be high; it is our access to information about the path of the photon that destroys the interference pattern.
+Half a century ago, the relationship between a quantum object’s visibility and predictability was characterized by an inequality restricting the sum of the squares of the visibility and predictability to be at most one. However, this inequality could not have been telling the full story, since it failed to capture the mutual exclusivity of wave-like and particle-like behaviour – if an object is more wave-like, it should display fewer particle-like properties, and vice versa. The inequality instead permitted both wave and particle properties to increase or decrease simultaneously.
+Qian and Khatiwada recast this inequality into an exact equality by introducing coherence as an additional variable. The resulting relation describes a trade-off between coherence, visibility and predictability: the square of the predictability plus the square of the visibility, expressed as a fraction of the total coherence, must equal one. Geometrically this is the equation of an ellipse whose eccentricity is determined by the coherence – maximum coherence yields a circle and partial coherence an ellipse – and it is thus known as a duality ellipse relation.
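+To make the relations above concrete – writing P for predictability, V for visibility and C for coherence, our own shorthand rather than notation taken from the paper – the old bound and the new duality ellipse relation can be sketched as
+$$P^2 + V^2 \le 1 \qquad\longrightarrow\qquad P^2 + \left(\frac{V}{C}\right)^2 = 1.$$
+Setting C = 1 (full coherence) turns the second expression into a unit circle in the (V, P) plane, while C < 1 squashes it into an ellipse, which is why the eccentricity of the curve encodes the coherence.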
+Qian applies this formula to a technique for measuring quantum objects called quantum imaging with undetected photons (QIUP) first conceived by Nobel laureate Anton Zeilinger’s research group. One of a pair of entangled photons interacts with the object and depending on its shape, the coherence will change and can be tracked by following the behaviour of the second photon, without ever detecting the first. Information about the object is then given by the second photon even though it has never interacted with it. The more coherence is present at a specified location in the object, the more circular the duality ellipse relation (see figure).
+Finding the eccentricity of the duality ellipse relation at each point in the object thus provides a map of its shape, and therefore an image of the object. Since the coherence is related to the visibility and predictability of a quantum object, Qian explains that their formula then becomes a crucial link between ‘fundamental properties of a quantum system like waveness and particleness and operational properties which hold information about an object’.
+ +Of course, most experimental scenarios are not ideal, and Qian accounts for these experimental imperfections in a modified relation. Remarkably, the overall pattern of the elliptical duality relation remains the same and therefore this imaging technique proves to be robust even under such conditions.
+Qian explains that ‘in a similar way quantum entanglement is a useful resource in quantum information and quantum computing, quantum duality also proves to be too for certain quantum tasks’. His group are now working on uncovering further avenues through which quantum duality can act as a quantum resource.
+The post Harnessing quantum duality for object imaging appeared first on Physics World.
+]]>The post Compact diamond magnetometer detects metastatic tumours appeared first on Physics World.
+]]>“It’s really bad news when tumour cells spread from their original site, and so it’s very important to detect this metastatic cancer as soon as possible,” says physicist Gavin Morley, who led this research effort together with his doctoral student Alex Newman. “The new cancers are often lodged in the lymph nodes and our device could be used to detect these cancers early when they are still small.”
+Existing techniques to detect metastatic tumours include MRI and CT, but these technologies can only detect tumours that are at least 2 mm across. While alternatives like sentinel lymph node biopsy can detect tumours with a volume that is 1000 times smaller, this technique typically involves the use of radioactive tracer fluids that require special safety precautions, or blue dyes, which cause an allergic reaction in one in a hundred people.
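+For a rough sense of scale – assuming an approximately spherical tumour, a simplification of our own rather than anything stated by the researchers – a 1000-fold smaller detectable volume corresponds to roughly a 10-fold smaller diameter, since
+$$d \propto V^{1/3} \quad\Rightarrow\quad d_{\min} \approx \frac{2\ \mathrm{mm}}{\sqrt[3]{1000}} = 0.2\ \mathrm{mm}.$$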
+Medical device company Endomag recently developed a clinical technique that involves the surgeon injecting a magnetic tracer into a breast cancer tumour, explains Morley. “The tracer fluid travels to the lymph nodes and the surgeon can then identify the metastatic cells there and remove them.”
+ +While this approach is efficient for breast cancer, the magnetometers employed today to detect the tracer are too large for use in keyhole surgery or endoscopy, he explains. “We wanted to create a device that can be used to detect the metastatic tumours and so built a version that’s smaller. The surgeons we’ve spoken to say that colorectal cancer could be the best place for us to focus on first for our magnetometer.”
+Morley’s group has been working on magnetic field sensors using diamonds and lasers for ten years now. The diamonds are grown by the company Element Six in Oxford and they contain quantum defects known as nitrogen-vacancy (NV) centres. These are created when a pair of adjacent carbon atoms in the diamond lattice is replaced by a nitrogen atom, leaving one lattice site vacant. An NV centre is basically an isolated spin that is highly sensitive to an external magnetic field and it emits fluorescent light in a way that depends on the intensity and direction of this field. Measuring this light allows the NV centre to be used as a magnetic sensor.
+“Our speciality is using optical fibres to send laser light into the diamond and detect the red light that comes back,” says Morley.
+In this work, reported in Physical Review Applied, it was Newman who built the new sensor, Morley tells Physics World. “Alex likes fixing old sports cars and I liked the way he applied that thinking to this new technology. He tries different strategies and has built new types of diamond sensors that no-one has managed to build before.”
+The Warwick researchers are now working on a number of applications for their sensors: as well as use within healthcare, they could be employed in space applications and future fusion power plants, says Morley. “Indeed, for Alex’s project, we were working on detecting damage in steel to help the National Nuclear Laboratory who have nuclear waste stored in steel containers. I then met Stuart Robertson, who is a breast cancer surgeon at the University Hospitals Coventry and Warwickshire: he told me how useful the Endomag solution is for breast cancer metastatic cells and I thought we could build a magnetometer that would help.”
+ +Working with several surgeons, Morley, Newman and colleagues are now developing this work as part of the UK Quantum Biomedical Sensing Research Hub (Q-BIOMED). “For example, Jamie Murphy in the Cleveland Clinic in London is an expert on keyhole surgery, with a big interest in colorectal cancer,” says Morley. “And Conor McCann is an expert on gut health at the UCL Great Ormond Street Institute of Child Health. We’re interested in spinning out a company ourselves to take this forward alongside other applications of our diamond sensors.”
+The researchers are also busy making the sensor even smaller. “At the moment the probe is 1 cm across, but we think we can get it down to be only 3 mm,” says Morley. “While 1 cm is small enough for keyhole surgery and endoscopy, getting it even smaller would make it useful for even more types of surgeries.”
+The post Compact diamond magnetometer detects metastatic tumours appeared first on Physics World.
+]]>The post Delft Circuits: cryogenic RF cable innovations offer a flexible path to quantum scalability appeared first on Physics World.
+]]>
As manufacturers in the nascent quantum supply chain turn their gaze towards at-scale commercial opportunities in quantum computing, the scenic city of Delft in the Netherlands is emerging as a heavyweight player in quantum science, technology and innovation. At the heart of this regional quantum ecosystem is Delft Circuits, a Dutch manufacturer of specialist I/O cabling solutions, which is aligning its product development roadmap to deliver a core enabling technology for the scale-up and industrial deployment of next-generation quantum computing, communications and sensing systems.
+
In brief, the company’s Cri/oFlex® cryogenic RF cables comprise a stripline (a type of transmission line) based on planar microwave circuitry – essentially a conducting strip encapsulated in dielectric material and sandwiched between two conducting ground planes. The use of the polyimide Kapton® as the dielectric ensures Cri/oFlex® cables remain flexible in cryogenic environments (which are necessary to generate quantum states, manipulate them and read them out), with silver or superconducting NbTi providing the conductive strip and ground layer. The standard product comes as a multichannel flex (eight channels per flex) with a range of I/O channel configurations tailored to the customer’s application needs, including flux bias lines, microwave drive lines, signal lines or read-out lines.
+“As quantum computers evolve – think more and more qubits plus increasingly exacting requirements on gate fidelity – system developers will reach a point where current coax cabling technology doesn’t cut it anymore,” explains Daan Kuitenbrouwer, co-founder of Delft Circuits. “The key to our story is that Cri/oFlex® allows us to increase the I/O cabling density easily – and by a lot – to scale the number of channels in a single system while guaranteeing high gate fidelities [minimizing noise and heating] as well as market-leading uptime and reliability.”
+To put some hard-and-fast performance milestones against that claim, Kuitenbrouwer and colleagues have just published a granular product development roadmap that aligns Cri/oFlex® cabling specifications against the anticipated evolution of quantum computing systems – from 150+ qubits today out to 40,000 qubits and beyond in 2029 (see figure, “Quantum alignment”).
+
“Our roadmap is all about enabling, from an I/O perspective, the transition of quantum technologies out of the R&D lab into at-scale practical applications,” says Kuitenbrouwer. “As such, we studied the development roadmaps of more than 10 full-stack quantum computing vendors to ensure that our ‘guiding principles’ align versus the aggregate view of quantity and quality of qubits targeted by the system developers over time.”
+Notwithstanding the emphasis on technology innovation and continuous product improvement, Delft Circuits is also “coming of age” in line with the wider quantum community. Most notably, the company’s centre of gravity is shifting inexorably from academic end-users to servicing vendors large and small in the quantum supply chain. “What we see are full-stack quantum computing companies starting to embrace horizontal thinking – which, in our case, means a technology partner able to solve their entire I/O cabling challenge,” explains Kuitenbrouwer.
+To gain traction, however, systems integrators at the sub-stack level must, as a given, design their product offering with industrial metrics front-and-centre – for example, scalability, manufacturability, reliability, cost per I/O channel and second-sourcing. Equally important is the need to forge long-term vendor-customer relationships that often move beyond the transactional into the realm of co-development and collaboration – though all against a standardized package of cabling options.
+“We integrate Cri/oFlex® with cryostats that have relatively standard vacuum feedthroughs and thermalization – more or less the same across the board,” says Kuitenbrouwer. What changes is the type of qubit – superconducting, spin, photonic – which in turn determines the configuration of the I/O line and where to place the attenuators, low-pass filters and IR filters. “This is something we can adjust relatively easily – at high volume and high reliability – with the whole I/O package installed and tested at the customer premises,” he adds.
+Commercially, Delft Circuits is already making real headway, getting “in the door” with many of the leading developers of quantum computing systems in North America and Europe. One of the main reasons for that is the ability to respond to customer requirements in an agile and timely fashion, argues Sal Bosman, a fellow co-founder of Delft Circuits.
+
“We work on the basis of a very structured design process, playing to our strengths in superconductor fabrication, integrated microwave components and cryogenic engineering,” Bosman notes. “We have also developed our own in-house software to simulate the performance of Cri/oFlex® cabling in full-stack quantum systems. No other vendor can match this level of customer support and attention to detail.”
+Right now, though, it’s all about momentum as Delft Circuits seeks to capitalize on its first-mover advantage and on what Bosman claims is the unique value proposition of its Cri/oFlex® technology: a complete and inherently scalable I/O solution with integrated flex cables incorporating filters and high-density interconnects to quantum chips or control electronics.
+With this in mind, the company is busy constructing a new 750 m² clean-room (with an option to double that footprint) alongside its existing 1000 m² in-house pilot-production and test facility. “Currently, we are the only industrial supplier able to deliver flexible circuits of superconducting materials at scale,” concludes Bosman.
+“Over the next two to three years,” he adds, “we have a credible opportunity to grab significant market share when it comes to cabling I/O for quantum. Watch this space: a lot of customers are already coming to us saying ‘we don’t want to buy more coax, we want to work with you.’”
+
Delft Circuits sits within a thriving regional cluster for quantum science and technology called Quantum Delta Delft, which is centred around the canal-ringed city of Delft between The Hague and Rotterdam.
+Formed in 2017 and initially located at the Faculty of Applied Sciences at Delft University of Technology (TU Delft), Delft Circuits has since grown as an independent company and is now based in the historic Cable District, where its facilities include a dedicated fabrication, pilot-production and testing area.
+TU Delft is itself home to a high-profile interfaculty research institute called QuTech, a collaboration with the Netherlands Organisation for Applied Scientific Research (TNO) that’s tasked with developing full-stack hardware and software layers (including enhanced qubit technologies) for quantum computing and quantum communications systems.
+Alongside this academic powerhouse, the Delft region has seen the emergence of other quantum tech start-ups like QuantWare (quantum chips), Qblox (control electronics) and Orange Quantum Systems (test solutions). All three companies work closely with Delft Circuits as part of the ImpaQT UA cooperative, a joint effort to develop open standards and interoperable technologies that enable system integrators to build quantum computing hardware stacks from off-the-shelf components.
+“The ImpaQT UA story is ongoing,” explains Kuitenbrouwer. “As partners, we are super-complementary and collaborate closely to shape the future of quantum computing.” That’s why the new development roadmap is so important for Delft Circuits: to communicate a vision from the “component layer” up the value chain to the full-stack quantum computing companies.
+As well as the talent pipeline that comes with proximity to TU Delft and QuTech, Quantum Delta Delft is home to TNO’s Quantum Information Technology Testing (QITT) Facility, which enables European companies to evaluate their cryogenic or non-cryogenic quantum devices and software in a full-stack quantum computing set-up.
++
The post Delft Circuits: cryogenic RF cable innovations offer a flexible path to quantum scalability appeared first on Physics World.
+]]>The post ‘Father of the Internet’ Vint Cerf expresses concern about the longevity of digital information appeared first on Physics World.
+]]>For individuals, failures like this are irritating. But for the wider digital ecosystem, they’re a real problem – so much so, in fact, that Vint Cerf, who’s known as one of the “fathers of the Internet”, made them the subject of his talk at last week’s Heidelberg Laureate Forum (HLF) in Heidelberg, Germany.
+“My big worry is that all this digital stuff won’t be there when we would like it to be there, or when our descendants would like to have it,” Cerf said.
+Historically, the best ways of preserving information involved writing it on durable materials such as clay tablets, high-quality paper, or a form of animal skin known as vellum. These media, Cerf observed, “have one thing in common: they don’t require electricity to be stored and preserved.”
+ +Digital media, in contrast, are much less robust. “Many of them are magnetic, and the magnetic material wears away after a while,” Cerf explained. Consequently, some old tapes are now so fragile that attempting to read them can actually lift the magnetic material off the surface: “You read it once and that’s it. It’s now transparent tape,” he said.
+Being able to read data is just the beginning, though. As my broken computer game shows, you also need programs and equipment that can persuade those data to do things. “That’s often the thing that goes first,” Cerf told me in a press conference after his talk. For example, when Cerf recently tried to retrieve data from an old three-and-a-half-inch floppy disk, he discovered that doing so would require three additional components: a drive that could read the disk, a program that could open the files stored on the disk and an old computer that could run the program. “I needed a whole lot of software help and several stages in order to make that digital content useful,” Cerf said.
+As for how to fix this problem and create a digital version of vellum, Cerf, who has been the “Chief Internet Evangelist” at Google since 2005, listed three ideas that he finds interesting. The first involves a New Jersey, US-based company called SPhotonix that does research and development work in the UK and Switzerland. It’s using lasers to write bits of data into chunks of quartz crystal, which is a very long-lasting medium. However, each crystal is roughly the size of a hockey puck, and Cerf thinks that “real work” still needs to be done to organize the information the material holds.
+The second idea is partly inspired by the clay tablets that proved so successful at preserving cuneiform writing from ancient Mesopotamia. Cerabyte, a start-up with facilities in Austria, Germany and the US, has developed a ceramic material that its founders claim could “store all data virtually forever”.
+The third idea, and the one that seems to appeal most to Cerf, is to write digital information into DNA. That might sound like an inherently fragile medium, but as Cerf pointed out, “It’s actually a very robust molecule – otherwise, life wouldn’t have persisted for several billion years.” Provided you dehydrate the DNA first, he added, it lasts for “quite a long time”.
+ +The question of how to read such information is not an easy one, and Cerf doesn’t have an answer to it. He is, however, hopeful that someone will find one. At the HLF, where he is such a revered figure that even the journalists want to take photos with him, he issued a call to arms for the young researchers in the audience. “I want you to appreciate the scope of the work that is required to preserve digital things,” Cerf told them. Without that work, he added, “recreating a digital environment in 100 years is not going to be a trivial matter.”
+The post ‘Father of the Internet’ Vint Cerf expresses concern about the longevity of digital information appeared first on Physics World.
+]]>The post Unconventional approach to dark energy problem gives observed neutrino masses appeared first on Physics World.
+]]>The Dark Energy Spectroscopic Instrument (DESI) is located on the Nicholas U Mayall four-metre Telescope at Kitt Peak National Observatory in Arizona. Its raison d’être is to shed more light on the “dark universe” – the 95% of the mass and energy in the universe that we know very little about. Dark energy is a hypothetical entity invoked to explain why the rate of expansion of the universe is (mysteriously) increasing – something that was discovered at the end of the last century.
+ +According to standard theories of cosmology, matter is thought to comprise cold dark matter (CDM) and normal matter (mostly baryons and neutrinos). DESI can observe baryonic acoustic oscillations (BAOs) – fluctuations in the matter density of the universe that were created shortly after the Big Bang in the hot plasma of baryons and electrons that prevailed then. BAOs expand with the growth of the universe and represent a sort of “standard ruler” that allows cosmologists to map the universe’s expansion by statistically analysing the distance that separates pairs of galaxies and quasars.
+DESI has produced the largest such 3D map of the universe ever and it recently published the first set of BAO measurements determined from observations of over 14 million extragalactic targets going back 11 billion years in time.
+In the new study, the DESI researchers combined measurements from these new data with cosmic microwave background (CMB) datasets (which measure the density of dark matter and baryons from a time when the universe was less than 400,000 years old) to search for evidence of matter converting into dark energy. They did this by focusing on a new hypothesis known as the cosmologically coupled black hole (CCBH), which was put forward five years ago by DESI team member Kevin Croker, who works at Arizona State University (ASU), and his colleague Duncan Farrah at the University of Hawaii. This physical model builds on a mathematical description of black holes as bubbles of dark energy in space that was introduced over 50 years ago. CCBH describes a scenario in which massive stars exhaust their nuclear fuel and collapse to produce black holes filled with dark energy that then grows as the universe expands. The rate of dark energy production is therefore determined by the rate at which stars form.
+Previous analyses by DESI scientists suggested that there is less matter in the universe today compared to when it was much younger. When they then added the additional, known, matter source from neutrinos, there appeared to be no “room” and the masses of these particles therefore appeared negative in their calculations. Not only is this unphysical, explains team member Rogier Windhorst of the ASU’s School of Earth and Space Exploration, it also goes against experimental measurements made so far on neutrinos that give them a greater-than-zero mass.
+When the researchers re-interpreted the new set of data with the CCBH model, they were able to resolve this issue. Since stars are made of baryons and black holes convert exhausted matter from stars into dark energy, the number of baryons today has decreased in comparison to the CMB measurements. This means that neutrinos can indeed contribute to the universe’s mass, slowing down the expansion of the universe as the dark energy produced sped it up.
+“The new data are the most precise measurements of the rate of expansion of the universe going back more than 10 billion years,” says team member Gregory Tarlé at the University of Michigan, “and it results from the hard work of the entire DESI collaboration over more than a decade. We undertook this new study to confront the CCBH hypothesis with these data.”
+“We found that the standard assumptions currently employed for cosmological analyses simply did not work and we had to carefully revisit and rewrite massive amounts of cosmological computer code,” adds Croker.
+ +“If dark energy is being sourced by black holes, these structures may be used as a laboratory to study the birth and infancy of our own universe,” he tells Physics World. “The formation of black holes may represent little Big Bangs played in reverse, and to make a biological analogy, they may be the ‘offspring’ of our universe.”
+The researchers say they studied the CCBH scenario in its simplest form in this work, and found that it performs very well. “The next big observational test will involve a new layer of complexity, where consistency with the large-scale features of the Big Bang relic radiation, or CMB, and the statistical properties of the distribution of galaxies in space will make or break the model,” says Tarlé.
+The research is described in Physical Review Letters.
+The post Unconventional approach to dark energy problem gives observed neutrino masses appeared first on Physics World.
+]]>The post Physicists extend the wave nature of large objects appeared first on Physics World.
+]]>Now, a team of researchers at Switzerland’s ETH Zürich and Spain’s Institute of Photonic Sciences in Barcelona has taken an important step towards bridging the two regimes by extending the quantum wave nature of nanoparticles — objects a thousand times larger than atoms.
+Quantum mechanics posits that even large objects behave as waves. However, the spatial extent of this wave-like behaviour, known as the “coherence length”, is far smaller than the size of large objects. This renders quantum phenomena effectively unobservable for such systems. “To push quantum physics into the macroscopic domain, we need to increase both [mass and coherence length] simultaneously”, explains lead researcher Massimiliano Rossi. This pursuit motivated the team’s recent study, which is described in Physical Review Letters.
+The researchers studied large objects called silica nanoparticles, which are 100 nm in diameter. The nanoparticles were held and levitated in vacuum using a tightly-focused laser beam.
+Nanoparticles naturally scatter the laser light, and the phase of the scattered photons encodes information about the nanoparticle’s centre-of-mass position. The researchers used this information in a feedback loop, applying electric fields to cool the nanoparticles close to their quantum ground state. The colder sample is in a more “pure” quantum state, such that the quantum wave-like behaviour extends farther in space and the coherence length is longer than in a hot sample. The team measured an initial coherence length of 21 pm (21 × 10⁻¹² m).
+Further extending the coherence length required careful manipulation of the laser light. The researchers started with high-power light, which provided a tight harmonic potential for the nanoparticles – like a marble trapped at the bottom of a steep bowl. An advantage of using a light-induced potential is that the curvature of the bowl is easily tuned over a large range by adjusting the laser power.
+The researchers lowered the laser power in two pulses, each of which caused the bowl to become shallower, thereby allowing the marble to roll around and explore more of the bowl. In the experiment, this translated to an expansion of the nanoparticle’s coherence length to 73 pm, more than three times its initial value.
+Rossi notes that the main experimental challenge was limiting decoherence, a process that destroys quantum information. He explains that when a nanoparticle interacts with its surroundings, it becomes correlated with a noisy and unmeasurably complex environment. This interaction causes the nanoparticle’s motion to become increasingly random when measured. As a result, the nanoparticle’s quantum mechanical behaviour is washed out and the particle is well described as a classical ball.
+It was therefore critical that the researchers expand the coherence length faster than the rate of any decoherence. To achieve this, they meticulously measured, identified, and suppressed all sources of decoherence, with the dominant source being laser light scattering. Scattering was reduced during the expansion pulses because of the lower laser power.
+ +The achieved 73 pm remains orders of magnitude smaller than the size of the nanoparticle, which was 100 nm in diameter. However, Rossi remarks that “we do not know of any fundamental reason why achieving nanometre coherence lengths should be impossible.” One next step could be to use more expansion pulses to increase the coherence length further.
+With a longer expansion time, the main challenge would be to outpace decoherence. Researchers propose using hybrid traps that employ both light and electric fields to confine the nanoparticles, since an electric trap would reduce the decoherence from light scattering. Rossi is now pursuing this direction in his new research group at the Delft University of Technology in the Netherlands.
+The post Physicists extend the wave nature of large objects appeared first on Physics World.
+]]>The post Quantum gas keeps its cool appeared first on Physics World.
+]]>Our everyday world is chaotic and chaos plays a crucial and often useful role in many areas of science – from nonlinear complex systems in mathematics, physics and biology to ecology, meteorology and economics. How a system evolves depends on its initial conditions, but this evolution is, by nature, inherently unpredictable.
+ +While we know how chaos emerges in classical systems, how it does so in quantum materials is still little understood. When chaos does emerge, the quantum system reverts to being a classical one.
+Researchers have traditionally studied chaotic behaviour in driven systems – that is, rotating objects periodically kicked by an external force. The quantum version of these is the quantum kicked rotor (QKR). Here, quantum coherence effects can prevent the system from absorbing external energy, meaning that, in contrast to its classical counterpart, it doesn’t heat up – even if a lot of energy is applied. This “dynamical localization” effect has already been seen in dilute ultracold atomic gases.
+The QKR is a highly idealized single-particle model system, explains study lead Hanns-Christoph Nägerl. However, real-world systems contain many particles that interact with each other – something that can destroy dynamical localization. Recent theoretical work has suggested that this localization may persist in some types of interacting, even strongly interacting, many-body quantum systems – for example, in 1D bosonic gases.
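+For readers who want the model written out, the textbook single-particle kicked-rotor Hamiltonian – given here in standard notation, not necessarily that used in the new study – is
+$$H(t) = \frac{p^2}{2} + K\cos\theta \sum_{n} \delta(t - nT),$$
+where p is the momentum, θ the rotor angle, K the kick strength and T the period between kicks. Classically this system keeps absorbing energy when kicked hard enough; in the quantum version, interference between successive kicks can halt that growth, which is the dynamical localization described above.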
+In the new work, Nägerl and colleagues made a QKR by subjecting samples of ultracold caesium (Cs) atoms to periodic kicks by means of a “flashed-on lattice potential”. They did this by loading a Bose–Einstein condensate of these atoms into an array of narrow 1D tubes created by a 2D optical lattice formed by laser beams propagating in the x–y plane at right angles to each other. They then increased the power of the beams to heat up the Cs atoms.
+The researchers expected the atoms to collectively absorb energy over the course of the experiment. Instead, when they recorded how their momentum distribution evolved, they found that it actually stopped spreading and that the system’s energy reached a plateau. “Despite being continually kicked and strongly interacting, it no longer absorbed energy,” says Nägerl. “We say that it had localized in momentum space – a phenomenon known as many-body dynamical localization (MBDL).”
+In this state, quantum coherence and many-body interactions prevent the system from heating up, he adds. “The momentum distribution essentially freezes and retains whatever structure it has.”
+Nägerl and colleagues repeated the experiment by varying the interaction between the atoms – from zero (non-interacting) to strongly interacting. They found that the system always localizes.
+“We had already found localization for our interacting QKR in earlier work and set out to reproduce these results in this new study,” Nägerl tells Physics World. “We had not previously realised the significance of our findings and thought that perhaps we were doing something wrong, which turned out not to be the case.”
+The MBDL is fragile, however – something the researchers proved by introducing randomness into the laser pulses. A small amount of disorder is enough to destroy the localization effect and restore diffusion, explains Nägerl: the momentum distribution smears out and the kinetic energy of the system rises sharply, meaning that it is absorbing energy.
+“This test highlights that quantum coherence is crucial for preventing thermalization in such driven many-body systems,” he says.
+Simulating such a system on classical computers is only possible for two or three particles, but the one studied in this work, reported in Science, contains 20 or more. “Our new experiments now provide precious data to which we can compare the QKR model system, which is a paradigmatic one in quantum physics,” adds Nägerl.
+ +Looking ahead, the researchers say they would now like to find out how stable MBDL is to various external perturbations. “In our present work, we report on MBDL in 1D, but would it happen in a 2D or a 3D system?” asks Nägerl. “I would like to do an experiment in which we have a 1D + 1D situation, that is, where the 1D is allowed to communicate with just one neighbouring 1D system (via tunnelling; by lowering the barrier to this system in a controlled way).”
+Another way of perturbing the system would be to add a local defect – for example a bump in the potential of a different atom, he says. “Generally speaking, we would like to measure the ‘phase diagram’ for MBDL, where the axes of the graph would quantify the strength of the various perturbations we apply.”
+The post Quantum gas keeps its cool appeared first on Physics World.
+]]>The post Andromeda image bags Royal Observatory Greenwich prize appeared first on Physics World.
+]]>The image – The Andromeda Core – showcases the core of the Andromeda Galaxy (M31) in exceptional detail, revealing the intricate structure of the galaxy’s central region and its surrounding stellar population.
+ +The image was taken with a long focal-length telescope from the AstroCamp Observatory, Nerpio, Spain.
+“Not to show it all − this is one of the greatest virtues of this photo. The Andromeda Galaxy has been photographed in so many different ways and so many times with telescopes that it is hard to imagine a new photo would ever add to what we’ve already seen,” notes astrophotographer László Francsics who was a judge for this year’s competition. “But this does just that, an unusual dynamic composition with unprecedented detail that doesn’t obscure the overall scene.”
+As well as winning the £10,000 top prize, the image has gone on display along with other selected pictures from the competition at an exhibition at the National Maritime Museum observatory that opened on 12 September.
+The award – now in its 17th year – is run by the Royal Observatory Greenwich in association with the astrophotography firm ZWO and BBC Sky at Night Magazine.
+The post Andromeda image bags Royal Observatory Greenwich prize appeared first on Physics World.
+]]>The post Protein qubit can be used as a quantum biosensor appeared first on Physics World.
+]]>Quantum technologies use qubits to store and process information. Unlike classical bits, which can exist in only two states, qubits can exist in a superposition of both these states. This means that computers employing these qubits can simultaneously process multiple streams of information, allowing them to solve problems that would take classical computers years to process.
+Qubits can be manipulated and measured with high precision, and in quantum sensing applications they act as nanoscale probes whose quantum state can be initialized, coherently controlled and read out. This allows them to detect minute changes in their environment with exquisite sensitivity.
+ +Optically addressable qubit sensors – that is, those that are read out using light pulses from a laser or other light source – are able to measure nanoscale magnetic fields, electric fields and temperature. Such devices are now routinely employed by researchers working in the physical sciences. However, their use in the life sciences is lagging behind, with most applications still at the proof-of-concept stage.
+Many of today’s quantum sensors are based on nitrogen-vacancy (NV) centres, which are crystallographic defects in diamond. These centres occur when two neighbouring carbon atoms in diamond are replaced by a nitrogen atom and an empty lattice site and they act like tiny quantum magnets with different spins. When excited with laser pulses, the fluorescent signal that they emit can be used to monitor slight changes in the magnetic properties of a nearby sample of material. This is because the intensity of the emitted NV centre signal changes with the local magnetic field.
+“The problem is that such sensors are difficult to position at well-defined sites inside living cells,” explains Peter Maurer, who co-led this new study together with David Awschalom. “And the fact that they are typically ten times larger than most proteins further restricts their applicability,” he adds.
+“So, rather than taking a conventional quantum sensor and trying to camouflage it to enter a biological system, we therefore wanted to explore the idea of using a biological system itself and developing it into a qubit,” says Awschalom.
+Fluorescent proteins, which are just 3 nm in diameter, could come into their own here as they can be genetically encoded, allowing cells to produce these sensors directly at the desired location with atomic precision. Indeed, fluorescent proteins have become the “gold standard” in cell biology thanks to this unique ability, says Maurer. And decades of biochemistry research has allowed researchers to generate a vast library of such fluorescent proteins that can be tagged to thousands of different types of biological targets.
+“We recognized that these proteins possess optical and spin properties that are strikingly similar to those of qubits formed by crystallographic defects in diamond – namely that they have a metastable triplet state,” explain Awschalom and Maurer. “Building on this insight, we combined techniques from fluorescence microscopy with methods of quantum control to encode and manipulate protein-based qubits.”
+In their work, which is detailed in Nature, the researchers used a near-infrared laser pulse to optically address a yellow fluorescent protein known as EYFP and read out its triplet spin state with up to 20% “spin contrast” – measured using optically detected magnetic resonance (ODMR) spectroscopy.
+To test the technique, the team genetically modified the protein so that it was expressed in human embryonic kidney cells and Escherichia coli (E. coli) cells. The measured ODMR signals exhibited a contrast of up to 8%. While this performance is not as good as that of NV quantum sensors, the fluorescent proteins open the door to magnetic resonance measurements directly inside living cells – something that NV centres cannot do, says Maurer. “They could thus transform medical and biochemical studies by probing protein folding, monitoring redox states or detecting drug binding at the molecular scale,” he tells Physics World.
+Beyond sensing, the unique quantum resonance “signatures” offer a new dimension for fluorescence microscopy, paving the way for highly multiplexed imaging far beyond today’s colour palette, Awschalom adds. Looking further ahead, using arrays of such protein qubits could even allow researchers to explore many-body quantum effects within biologically assembled structures.
+ +Maurer, Awschalom and colleagues say they are now busy trying to improve the stability and sensitivity of their protein-based qubits through protein engineering via “directed evolution” – similar to the way that fluorescent proteins were optimized for microscopy.
+“Another goal is to achieve single-molecule detection, enabling readout of the quantum state of individual protein qubits inside cells,” they reveal. “We also aim to expand the palette of available qubits by exploring new fluorescent proteins with improved spin properties and to develop sensing protocols capable of detecting nuclear magnetic resonance signals from nearby biomolecules, potentially revealing structural changes and biochemical modifications at the nanoscale.”
+The post Protein qubit can be used as a quantum biosensor appeared first on Physics World.
+]]>The post Peer review in the age of artificial intelligence appeared first on Physics World.
+]]>A recent survey done by IOP Publishing – the scientific publisher that brings you Physics World – reveals that physicists who do peer review are polarized regarding whether AI should be used in the process.
+IOPP’s Laura Feetham-Walker is lead author of AI and Peer Review 2025, which describes the survey and analyses its results. She joins me in this episode of the Physics World Weekly podcast in a conversation that explores reviewers’ perceptions of AI and their views of how it should, or shouldn’t, be used in peer review.
+The post Peer review in the age of artificial intelligence appeared first on Physics World.
+]]>The post If you met an alien, what would you say to it? appeared first on Physics World.
+]]>But is this a reasonable assumption? Would we really be able to communicate with aliens? Even if we could, would their way of doing science have any meaning to us? What if an advanced alien civilization had no science at all? These are some of the questions tackled by Whiteson and Warner in their entertaining and thought-provoking book.
+While Do Aliens Speak Physics? focuses on the possible differences between human and alien science, it made me think about what science means to humans – and the role of science in our civilization. Indeed, when I spoke to Whiteson for a future episode of the Physics World Weekly podcast, he told me that his original plan for the book was to examine if physics is universal or shaped by human perspective.
+But when he pitched the idea to his teenage son, Whiteson realized that approach was a bit boring and decided to spice things up using an alien landing. At the heart of the book is a new equation for estimating the number of alien civilizations that scientists could potentially communicate with – ideally, when the aliens arrive on Earth.
+The authors aren’t the first people to do such a calculation. In 1961 the US astrophysicist Frank Drake famously did so by estimating how many habitable planets might exist and whether they could harbour life that’s evolved so far that it could communicate with us. Whiteson and Warner’s “extended Drake equation” adds four extra terms related to alien science.
+The first is the probability that a civilization has developed science. The second is the likelihood that we would be able to communicate with the civilization, with the third being the probability that an alien civilization would ask scientific questions that are meaningful to us. The final term is whether human science would benefit from the answers to those questions.
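+Written out schematically – with placeholder symbols of our own rather than the notation used in the book – the extended estimate multiplies the original Drake terms by these four new factors:
+$$N \;=\; N_{\mathrm{Drake}} \times f_{\mathrm{science}} \times f_{\mathrm{communicate}} \times f_{\mathrm{questions}} \times f_{\mathrm{useful}},$$
+where f_science is the probability that a civilization has developed science, f_communicate the likelihood that we could talk to it, f_questions the probability that its scientific questions are meaningful to us, and f_useful the chance that the answers would benefit human science.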
+One of Whiteson and Warner’s more interesting ideas is that aliens could perceive science and technology in very different ways to us. After all, an alien civilization could be completely focused on developing technology and not be at all interested in the underlying science. Technology without science might seem deeply foreign to us today, but for most of history humans have focused on how things work – not why.
+Blacksmiths of the past, for example, developed impressive swords and other metal implements without any understanding of how the materials they worked with behaved at a microscopic level. So perhaps our alien visitors will come from a planet of blacksmiths rather than materials scientists.
+ +Mind you, communicating with alien scientists could be a massive challenge given that we do so mainly using sound and visual symbols, whereas an alien might use smells or subatomic particles to get their point across. As the authors point out, it’s difficult even translating the Danish/Norwegian word hygge into English, despite the concept’s apparent popularity in the English-speaking world. Imagine how much harder things would be if we used a different form of communication altogether.
+But could physics function as a kind of Rosetta Stone, offering a universal way of translating one language into another? We could then get the aliens to explain various physical processes – such as how a mass falls under the influence of gravity – and compare their reasoning to our understanding of the same phenomena.
+Of course, an alien scientist’s questions might depend on how they perceive the universe. In a chapter titled “Can aliens taste electrons?”, the authors explore what might happen if aliens were so small that they experience quantum effects such as entanglement in their daily lives. What if an organism were so big that it feels the gravitational tug of dark matter? Or what if an intelligent alien could exist in an ultracold environment where everything moves so slowly that their perception of physics is completely different to ours?
+The final term in the authors’ extended Drake equation looks at whether the answers to the questions of alien physics would be meaningful to humans. We naturally assume there are deep truths about nature that can be explored using experimental and mathematical tools. But what if there are no deep truths out there – and what if our alien friends are already aware of that fact?
+ +When Drake proposed his equation, humans did not know of any planets beyond the solar system. Today, however, we have discovered nearly 6000 such exoplanets, and it is possible that there are billions of habitable, Earth-like exoplanets in the Milky Way. So it does not seem at all fanciful that we could soon be communicating with an alien civilization.
+But when I asked Whiteson if he’s worried that visiting aliens could be hostile towards humans, he said he hoped for a “peaceful” visit. In fact, Whiteson is unable to think of a good reason why an advanced civilization would be hostile to Earth – pointing out that there is probably nothing of material value here for them. Fingers crossed, any visit will be driven by curiosity, peace and goodwill.
+The post If you met an alien, what would you say to it? appeared first on Physics World.
+]]>The post How the STFC Hartree Centre is helping UK industry de-risk quantum computing investment appeared first on Physics World.
+]]>The Hartree Centre gives industry fast-track access to next-generation supercomputing, AI and digital capabilities. We are a “connector” when it comes to quantum computing, helping UK businesses and public-sector organizations to de-risk the early-stage adoption of a technology that is not yet ready to buy off-the-shelf. Our remit spans quantum software, theoretical studies and, ultimately, the integration of quantum computing into existing high-performance computing (HPC) infrastructure and workflows.
+It’s evident that industry wants to understand the commercial upsides of quantum computing, but doesn’t yet have the necessary domain knowledge and skill sets to take full advantage of the opportunities. By working with the STFC Hartree Centre, businesses can help their computing and R&D teams to bridge that quantum knowledge gap.
+ +The Hartree Centre’s quantum computing effort is built around a cross-disciplinary team of scientists and a mix of expertise spanning physics, chemistry, mathematics, computer science and quantum information science. We offer specialist quantum consultancy to clients across industries as diverse as energy, pharmaceuticals and food manufacturing.
+We begin by doing the due diligence on the client’s computing challenge, understanding the computational bottlenecks and, where appropriate, translating the research problem so that it can be executed, in whole or in part, on a quantum computer or a mixture of hybrid and quantum computing resources.
+Integrating classical HPC and quantum computing is a complex challenge along three main pathways: infrastructure – bridging fundamentally different hardware architectures; software – workflow management, resource scheduling and organization; and finally applications – adapting and optimizing computing workflows across quantum and classical domains. All of these competencies are mandatory for successful exploitation of quantum computing systems.
+Correct. Ultimately, the task is working out how to distribute a workload so that parts of it run on an HPC platform and parts on a quantum computer, when many of the algorithms and data streams must loop back and forth between the two systems.
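+As a rough illustration of that back-and-forth pattern, the sketch below mimics a variational-style hybrid workload: a classical optimization step proposes parameters and a stand-in function plays the role of the quantum evaluation. The function names are hypothetical and no real submission interface is implied.
```python
import random

# Toy mock-up of the loop between classical and quantum resources.
# `evaluate_on_qpu` is a stand-in for submitting a circuit and reading back a
# result; it is NOT a real submission API, just a noisy cost function.

def evaluate_on_qpu(params):
    # Pretend quantum evaluation: a noisy quadratic cost.
    return sum(p**2 for p in params) + random.gauss(0, 0.01)

def classical_update(params, step=0.1):
    # Stand-in for the HPC-side optimization step (finite-difference descent).
    new_params = []
    for i, p in enumerate(params):
        shifted = list(params)
        shifted[i] = p + step
        grad = (evaluate_on_qpu(shifted) - evaluate_on_qpu(params)) / step
        new_params.append(p - step * grad)
    return new_params

params = [1.0, -0.8]
for iteration in range(20):
    cost = evaluate_on_qpu(params)     # "quantum" side of the loop
    params = classical_update(params)  # "classical" side of the loop
    if iteration % 5 == 0:
        print(f"iteration {iteration}: cost = {cost:.3f}")
```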
+We have been addressing this problem with our quantum technology partners – IBM and Pasqal – and a team at Rensselaer Polytechnic Institute in New York. Together, we have introduced a Quantum Resource Management Interface – an open-source tool that supports unified job submission for quantum and classical computing tasks and that’s scalable to cloud computing environments. It’s the “black-box” solution industry has been looking for to bridge the established HPC and emerging quantum domains.
+
The Hartree National Centre for Digital Innovation (HNCDI) is a £210m public–private partnership with IBM to create innovative digital technologies spanning HPC, AI, data analytics and quantum computing. HNCDI is the cornerstone of IBM’s quantum technology strategy in the UK and, over the past four years, the collaboration has clocked up more than 30 joint projects with industry. In each of these projects, HNCDI is using quantum computers to tackle problems that are out of reach for classical computers.
+One is streamlining drug discovery and development. As part of a joint effort with the pharmaceutical firm AstraZeneca and quantum-software developer Algorithmiq, we have improved the accuracy of molecular modelling with the help of quantum computing and, by extension, developed a better understanding of the molecular interactions and processes involved in drug synthesis. Another eye-catching development is Qiskit Machine Learning (ML), an open-source library for quantum machine-learning tasks on quantum hardware and classical simulators. While Qiskit ML started as a proof-of-concept library from IBM, our team at the Hartree Centre has, over the past couple of years, developed it into a modular tool for non-specialist users as well as quantum computational scientists and developers.
+Healthcare has yielded productive lines of enquiry, including a proof-of-concept study to demonstrate the potential of quantum machine-learning in cancer diagnostics. Working with Royal Brompton and Harefield Hospitals and Imperial College London, we have evaluated histopathology datasets to categorize different types of breast-cancer cells through AI workflows. It’s research that could eventually lead to better predictions regarding the onset and progression of disease.
+We have been collaborating with the German power utility E.ON to study the complex challenges that quantum computing may be able to address in the energy sector – such as strategic infrastructure development, effective energy demand management and streamlined integration of renewable energy sources.
+Longer term, the goal is to enable our industry partners to become at-scale end-users of quantum computing, delivering economic and societal impact along the way. As for our own development roadmap at the Hartree Centre, we are evaluating options for the implementation of a large-scale quantum computing platform to further diversify our existing portfolio of HPC, AI, data science and visual computing technologies.
+
The Hartree Centre is part of the Science and Technology Facilities Council (STFC), one of the main UK research councils supporting fundamental and applied initiatives in astronomy, physics, computational science and space science.
+Based at the Daresbury Laboratory, part of the Sci-Tech Daresbury research and innovation campus in north-west England, the Hartree Centre has more than 160 scientists and technologists specializing in supercomputing, applied scientific computing, data science, AI, cloud and quantum computing.
+“Our goal is to help UK industry generate economic growth and societal impact by exploiting advanced HPC capabilities and digital technologies,” explains Vassil Alexandrov, chief science officer at STFC Hartree Centre.
+One of the core priorities for Alexandrov and his team is the interface between “exascale” computing and scalable AI. It’s a combination of technologies that’s being lined up to tackle “grand challenges” like the climate crisis and the transition from fossil fuels to clean energy.
+A case in point is the Climate Resilience Demonstrator, which uses “digital twins” to simulate how essential infrastructure like electricity grids and telecoms networks might respond to extreme weather events. “These kinds of insights are critical to protect communities, maintain service delivery and build more resilient public infrastructure,” says Alexandrov.
+Elsewhere, as part of the Fusion Computing Lab, the Hartree Centre is collaborating with the UK Atomic Energy Authority on sustainable energy generation from nuclear fusion. “We have a joint team of around 60 scientists and engineers working on this initiative to iterate and optimize the building blocks for a fusion power plant,” notes Alexandrov. “The end-game is to deliver net power safely and affordably to the grid from magnetically confined fusion.”
+Exascale computing and AI also underpin the Research Computing and Innovation Centre, a collaboration with AWE, the organization that runs research, development and support for the UK’s nuclear-weapons stockpile.
++
The post How the STFC Hartree Centre is helping UK industry de-risk quantum computing investment appeared first on Physics World.
+]]>The post The pros and cons of reinforcement learning in physical science appeared first on Physics World.
+]]>“I think if you provide the knowledge that humans already have, it doesn’t really answer the deepest question for AI, which is how it can learn for itself to solve problems,” Silver told an audience at the 12th Heidelberg Laureate Forum (HLF) in Heidelberg, Germany, on Monday.
+ +Silver’s proposed solution is to move from the “era of human data”, in which AI passively ingests information like a student cramming for an exam, into what he calls the “era of experience” in which it learns like a baby exploring its world. In his HLF talk on Monday, Silver played a sped-up video of a baby repeatedly picking up toys, manipulating them and putting them down while crawling and rolling around a room. To murmurs of appreciation from the audience, he declared, “I think that provides a different perspective of how a system might learn.”
+Silver, a computer scientist at University College London, UK, has been instrumental in making this experiential learning happen in the virtual worlds of computer science and mathematics. As head of reinforcement learning at Google DeepMind, he helped develop AlphaZero, an AI system that taught itself to play the ancient stones-and-grid game of Go. It did this via a so-called “reward function” that pushed it to improve over many iterations, without ever being trained on human games or taught strategy.
+More recently, Silver coordinated a follow-up project called AlphaProof that treats formal mathematics as a game. In this case, AlphaZero’s reward is based on getting correct proofs. While it isn’t yet outperforming the best human mathematicians, in 2024 it achieved silver-medal standard on problems at the International Mathematical Olympiad.
+Could a similar experiential learning approach work in the physical sciences? At an HLF panel discussion on Tuesday afternoon, particle physicist Thea Klaeboe Åarrestad began by outlining one possible application. Whenever CERN’s Large Hadron Collider (LHC) is running, Åarrestad explained, she and her colleagues in the CMS experiment must control the magnets that keep protons on the right path as they zoom around the collider. Currently, this task is performed by a person, working in real time.
+
In principle, Åarrestad continued, a reinforcement-learning AI could take over that job after learning by experience what works and what doesn’t. There’s just one problem: if it got anything wrong, the protons would smash into a wall and melt the beam pipe. “You don’t really want to do that mistake twice,” Åarrestad deadpanned.
+For Åarrestad’s fellow panellist Kyle Cranmer, a particle physicist who works on data science and machine learning at the University of Wisconsin-Madison, US, this nightmare scenario symbolizes the challenge with using reinforcement learning in physical sciences. In situations where you’re able to do many experiments very quickly and essentially for free – as is the case with AlphaGo and its descendants – you can expect reinforcement learning to work well, Cranmer explained. But once you’re interacting with a real, physical system, even non-destructive experiments require finite amounts of time and money.
+Another challenge, Cranmer continued, is that particle physics already has good theories that predict some quantities to multiple decimal places. “It’s not low-hanging fruit for getting an AI to come up with a replacement framework de novo,” Cranmer said. A better option, he suggested, might be to put AI to work on modelling atmospheric fluid dynamics, which are emergent phenomena without first-principles descriptions. “Those are super-exciting places to use ideas from machine learning,” he said.
+Silver, who was also on Tuesday’s panel, agreed that reinforcement learning isn’t always the right solution. “We should do this in areas where mistakes are small and it can learn from those small mistakes to avoid making big mistakes,” he said. To general laughter, he added that he would not recommend “letting an AI loose on nuclear arsenals”, either.
+ +Reinforcement learning aside, both Åarrestad and Cranmer are highly enthusiastic about AI. For Cranmer, one of the most exciting aspects of the technology is the way it gets scientists from different disciplines talking to each other. The HLF, which aims to connect early-career researchers with senior figures in mathematics and computer science, is itself a good example, with many talks in the weeklong schedule devoted to AI in one form or another.
+For Åarrestad, though, AI’s most exciting possibility relates to physics itself. Because the LHC produces far more data than humans and present-day algorithms can handle, Åarrestad explained, much of it is currently discarded. The idea that, as a result, she and her colleagues could be throwing away major discoveries sometimes keeps her up at night. “Is there new physics below 1 TeV?” Åarrestad wondered.
+Someday, maybe, an AI might be able to tell us.
+The post The pros and cons of reinforcement learning in physical science appeared first on Physics World.
+]]>The post MRID<sup>3D</sup> phantom eases the introduction of MRI into the radiotherapy clinic appeared first on Physics World.
+]]>One site that has transitioned to this approach is the Institut Jules Bordet in Belgium, which in 2021 acquired both an Elekta Unity MR-Linac and a Siemens MAGNETOM Aera MR-Simulator. “It was a long-term objective for our clinic to have an MR-only workflow,” says Akos Gulyban, a medical physicist at Institut Jules Bordet. “When we moved to a new campus, we decided to purchase the MR-Linac. Then we thought that if we are getting into the MR world for treatment adaptation, we also need to step up in terms of simulation.”
+The move to MR simulation delivers many clinical benefits, with MR images providing the detailed anatomical information required to delineate targets and organs-at-risk with the highest precision. But it also creates new challenges for the physicists, particularly when it comes to quality assurance (QA) of MR-based systems. “The biggest concern is geometric distortion,” Gulyban explains. “If there is no distortion correction, then the usability of the machine or the sequence is very limited.”
+While the magnetic field gradient is theoretically linear, and MRI is indeed extremely accurate at the imaging isocentre, moving away from the isocentre increases distortion. Images of regions 30 or 40 cm away from the isocentre – a reasonable distance for a classical linac – can differ from reality by 15 to 20 mm, says Gulyban. Thankfully, 3D correction algorithms can reduce this discrepancy down to just a couple of millimetres. But such corrections first require an accurate way to measure the distortion.
+
+To address this task, the team at Institut Jules Bordet employ a geometric distortion phantom – the QUASAR MRID3D Geometric Distortion Analysis System from IBA Dosimetry. Gulyban explains that the MRID3D was chosen following discussions with experienced users, and that key selling points included the phantom’s automated software and its ability to efficiently store results for long-term traceability.
+“My concern was how much time we spend cross-processing, generating reports or evaluating results,” he says. “This software is fully automated, making it much easier to perform the evaluation and less dependent on the operator.”
+Gulyban adds that the team was looking for a vendor-independent solution. “I think it is a good approach to use the tools provided [by the vendor] but now we have a way to measure the same thing using a different approach. Since our new campus has a mixture of Siemens MRs and the MR-Linac, this phantom provides a vendor-independent bridge between the two worlds.”
+For quality control of the MR-Simulator, the team perform distortion measurements every three months, as well as after system interventions such as shimming and following any problems arising during other routine QA procedures. “We should not consider tests as individual islands in the QA process,” says Gulyban. “For instance, the ACR image quality phantom, which is used for more frequent evaluation, also partly assesses distortion. If we see that failing, I would directly trigger measurements with the more appropriate geometric distortion phantom.”
+To perform MR simulation, the images used for treatment planning must encompass both the target volume and the surrounding region, to ensure accurate delineation of the tumour and nearby organs-at-risk. This requires a large field-of-view (FOV) scan – plus geometric distortion QA that covers the same large FOV.
+
“You’re using this image to delineate the target and also to spare the organs-at-risk, so the image must reflect reality,” explains Kawtar Lakrad, medical physicist and clinical application specialist at IBA Dosimetry. “You don’t want that image to be twisted or the target volume to appear smaller or bigger than it actually is. You want to make sure that all geometric qualities of the image align with what’s real.”
+Typically, geometric distortion phantoms are grid-like, with control points spaced every 0.5 or 1 cm. The entire volume is imaged in the MR scanner and the locations of control points seen in the image are compared with their actual positions. “If we apply this to a large FOV phantom, which for MRI will be filled with either water or oil, it’s going to be a very large grid and it’s going to be heavy, 40 or 50 kg,” says Lakrad.
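+The underlying comparison is simple to express. The following sketch, using invented coordinates rather than data from any real phantom, computes the displacement between imaged and true control-point positions and reports it alongside the distance from the isocentre.
```python
import numpy as np

# Illustrative comparison of imaged versus true control-point positions.
# All coordinates below are invented; they are not data from any real phantom.

# Known (true) control-point positions in mm, e.g. from the phantom design
true_points = np.array([
    [0.0,   0.0,   0.0],
    [100.0, 0.0,   0.0],
    [0.0,   150.0, 0.0],
    [250.0, 150.0, 50.0],
])

# Positions of the same points as localized in the (uncorrected) MR image, in mm
imaged_points = np.array([
    [0.1,   -0.2,  0.0],
    [101.1, 0.3,   -0.4],
    [0.2,   151.6, 0.5],
    [260.0, 158.0, 46.0],
])

displacement = imaged_points - true_points
distortion_mm = np.linalg.norm(displacement, axis=1)  # per-point distortion
radius_mm = np.linalg.norm(true_points, axis=1)       # distance from isocentre

for r, d in zip(radius_mm, distortion_mm):
    print(f"{r:6.1f} mm from isocentre: distortion {d:5.2f} mm")
```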
+To overcome this obstacle, IBA researchers used innovative harmonic analysis algorithms to design a lightweight geometric distortion phantom with submillimetre accuracy and a large (35 × 30 cm) FOV: the MRID3D. The phantom comprises two concentric hollow acrylic cylinders, the only liquid being a prefilled mineral oil layer between the two shells, reducing its weight to just 21 kg.
+
“The idea behind the phantom was very smart because it relies on a mathematical tool,” explains Lakrad. “There is a Fourier transform for the linear signal, which is used for standard grids. But there are also spherical harmonics – and this is what’s used in the MRID3D. The control points are all on the cylinder surface, plus one in the isocentre, creating a virtual grid that measures 3D geometric distortion.” She adds that the MRID3D can also differentiate distortion due to the main magnetic field from gradient non-linearity distortion.
+Gulyban and his team at Institut Jules Bordet first used MR simulation for pelvic treatments, particularly prostate cancer, he tells Physics World. This was followed by abdominal tumours, such as pancreatic and liver cancers (where many patients were being treated on the MR-Linac) and more recently, cranial and head-and-neck irradiations.
+Gulyban points out that the introduction of the MR-Simulator was eased by the team’s experience with the MR-Linac, which helped them “step into the MR world”. Here also, the MRID3D phantom is used to quantify geometric distortion, both for initial commissioning and continuous QA of the MR-Linac.
+
“It’s like a consistency check,” he explains. “We have certain manufacturer-defined conditions that we need to meet for the MR-Linac – for instance, that distortion within a 40 mm diameter should be less than 1 mm. To ensure that these are met in a consistent fashion, we repeat the measurements with the manufacturer’s phantom and with the MRID3D. This gives us extra peace of mind that the machine is performing under the correct conditions.”
+For other cancer centres looking to integrate MR into their radiotherapy clinics, Gulyban has some key points of advice. These include starting with MR-guided radiotherapy and then adding MR simulation, identifying a suitable pathology to treat first in order to gain familiarity, and attending relevant courses or congresses for inspiration.
+“The biggest change is actually a change in culture because you have an active MRI in the radiotherapy department,” he notes. “We are used to the radioprotection aspects of radiotherapy, wearing a dosimeter and observing radiation protection principles. MRI is even less forgiving – every possible thing that could go wrong you have to eliminate. Closing all the doors and emptying your pockets must become a reflex habit. You have to prepare mentally for that.”
+“When you’re used to CT-based machines, moving to an MR workflow can be a little bit new,” adds Lakrad. “Most physicists are already familiar with the MR concept, but when it comes to the QA process, that’s the most challenging part. Some people would just repeat what’s done in radiology – but the use case is different. In radiotherapy, you have to delineate the target and surrounding volumes exactly. You’re going to be delivering dose, which means the tolerance between diagnostic and radiation therapy is different. That’s the biggest challenge.”
+The post MRID<sup>3D</sup> phantom eases the introduction of MRI into the radiotherapy clinic appeared first on Physics World.
+]]>The post Artificial intelligence could help detect ‘predatory’ journals appeared first on Physics World.
+Open access removes the requirement for traditional subscriptions. Articles are instead made immediately and freely available for anyone to read, with publication costs covered by authors through an article-processing charge.
+But as the popularity of open-access journals has risen, there has been a growth in “predatory” journals that exploit the open-access model by making scientists pay publication fees without a proper peer-review process in place.
+ +To build an AI-based method for distinguishing legitimate from questionable journals, Daniel Acuña, a computer scientist at the University of Colorado Boulder, and colleagues used the Directory of Open Access Journals (DOAJ) – an online, community-curated index of open-access journals.
+The researchers trained their machine-learning model on 12,869 journals indexed on the DOAJ and 2536 journals that have been removed from the DOAJ due to questionable practices that violate the community’s listing criteria. The team then tested the tool on 15,191 journals listed by Unpaywall, an online directory of free research articles.
+To identify questionable journals, the AI system analyses journals’ bibliometric information and the content and design of their websites, scrutinising details such as the affiliations of editorial board members and the average author h-index – a metric that quantifies a researcher’s productivity and impact.
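+For readers unfamiliar with the metric, a short sketch of how an h-index is computed is shown below; the citation counts are invented for illustration.
```python
# How an author h-index is computed: the largest h such that the author has
# h papers with at least h citations each. Citation counts below are invented.

def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Prints 3: at least 3 papers have 3 or more citations, but not 4 with 4 or more.
print(h_index([25, 8, 5, 3, 3, 1, 0]))
```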
+The AI model flagged 1437 journals as questionable, with the researchers concluding that 1092 were genuinely questionable while 345 were false positives.
+They also identified around 1780 problematic journals that the AI screening failed to flag. According to the study authors, their analysis shows that problematic publishing practices leave detectable patterns in citation behaviour such as the last authors having a low h-index together with a high rate of self-citation.
+Acuña says the tool could help to pre-screen large numbers of journals, adding, however, that “human professionals should do the final analysis”. The researchers’ novel AI screening system isn’t publicly accessible, but they hope to make it available to universities and publishing companies soon.
+The post Artificial intelligence could help detect ‘predatory’ journals appeared first on Physics World.
+]]>The post Are longer quantum algorithms actually good? appeared first on Physics World.
+]]>Research in this field can be broadly divided into two areas: a) designing quantum algorithms with potential practical advantages over classical algorithms (the software) and b) physically building a quantum computer (the hardware).
+One of the main approaches to algorithm design is to minimise the number of operations or runtime in an algorithm. One intuitively expects that reducing the number of operations would decrease the chance of errors – the key to constructing a reliable quantum computer.
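+That intuition can be made concrete with a back-of-the-envelope calculation: if each operation fails independently with some small probability, the chance of an error-free run shrinks as the circuit grows. The sketch below uses an assumed per-operation error rate purely for illustration.
```python
# Back-of-the-envelope check of the intuition above: with an assumed error
# probability p per operation, the chance of an error-free circuit of n
# operations is (1 - p)**n, which falls off as n grows.

p = 1e-3  # assumed per-operation error probability (illustrative only)

for n in (10, 100, 1_000, 10_000):
    p_no_error = (1 - p) ** n
    print(f"{n:6d} operations: P(no error) = {p_no_error:.3g}")
```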
+However, this is not always the case. In a recent paper, the research team found that minimising the number of operations in a quantum algorithm can sometimes be counterproductive, leading to an increased sensitivity to noise. Essentially, running a faster algorithm in non-ideal conditions can result in more errors than if a slower algorithm had been used.
+The authors proved that there’s a trade-off between an algorithm’s number of operations and its resilience to noise. This means that, for certain types of errors, slower algorithms might actually be better in some real-world conditions.
+These results bring together research on quantum hardware and software. The mathematical framework developed will enable quantum algorithms to be designed with the limitations of current real quantum computers in mind.
+Resilience–runtime tradeoff relations for quantum algorithms
+García-Pintos et al. 2025 Rep. Prog. Phys. 88 037601
++
The post Are longer quantum algorithms actually good? appeared first on Physics World.
+]]>The post The hunt for long-lived particles at the LHC appeared first on Physics World.
+]]>However, it’s been notoriously difficult to perform an experiment that actually disagrees with the model’s predictions.
+Many proposed extensions of the Standard Model, such as the fraternal twin Higgs or folded supersymmetry models, include so-called long-lived particles (LLPs).
+Unlike most particles produced in high-energy collisions, which decay almost instantaneously, LLPs have relatively long lifetimes, meaning they travel a measurable distance before decaying.
+A new paper from the CMS collaboration at CERN searched for evidence of these particles by re-examining previous data from proton-proton collision events.
+The new analysis used techniques such as machine learning to enhance the sensitivity to LLPs.
+So, did they find any new particles? The short answer, unfortunately, is no.
+However, the new study achieves up to a tenfold improvement over previous limits for LLP masses. It also places the first constraints on many proposed models that predict these particles.
+Although this study found no new physics, we’re still confident that something must be out there. And by narrowing down the possible spaces where we might find new particles, we’re one step closer to finding them.
+The search continues.
+The CMS Collaboration, 2025 Rep. Prog. Phys. 88 037801
++
The post The hunt for long-lived particles at the LHC appeared first on Physics World.
+]]>The post Relive the two decades when physicists basked in the afterglow of the Standard Model appeared first on Physics World.
+]]>
Call it millennial, generation Y or fin de siècle, high-energy physics during the last two decades of the 20th century had a special flavour. The principal pieces of the Standard Model of particle physics had come together remarkably tightly – so tightly, in fact, that physicists had to rethink what instruments to build, what experiments to plan, and what theories to develop to move forward. But it was also an era when the hub of particle physics moved from the US to Europe.
+The momentous events of the 1980s and 1990s will be the focus of the 4th International Symposium on the History of Particle Physics, which is being held on 10–13 November at CERN. The meeting will take place more than four decades after the first symposium in the series was held at Fermilab near Chicago in 1980. Entitled The Birth of Particle Physics, that initial meeting covered the years 1930 to 1950.
+Speakers back then included trailblazers such as Paul Dirac, Julian Schwinger and Victor Weisskopf. They reviewed discoveries such as the neutron and the positron and the development of relativistic quantum field theory. Those two decades before 1950 were a time when particle physicists “constructed the room”, so to speak, in which the discipline would be based.
+ +The second symposium – Pions to Quarks – was also held at Fermilab and covered the 1950s. Accelerators could now create particles seen in cosmic-ray collisions, populating what Robert Oppenheimer called the “particle zoo”. Certain discoveries of this era, such as parity violation in the weak interaction, were so shocking that C N Yang likened it to having a blackout and not knowing if the room would look the same when the lights came back on. Speakers at that 1985 event included Luis Alvarez, Val Fitch, Abdus Salam, Robert Wilson and Yang himself.
+The third symposium, The Rise of the Standard Model, was held in Stanford, California, in 1992 and covered the 1960s and 1970s. It was a time not of blackouts but of disruptions that dimmed the lights. Charge-parity violation and the existence of two types of neutrino were found in the 1960s, followed in the 1970s by deep inelastic electron scattering and quarks, neutral currents, a fourth quark and gluon jets.
+These discoveries decimated alternative approaches to quantum field theory, which was duly established for good as the skeleton of high-energy physics. The era culminated with Sheldon Glashow, Abdus Salam and Steven Weinberg winning the 1979 Nobel Prize for Physics for their part in establishing the Standard Model. Speakers at that third symposium included Murray Gell-Mann, Leon Lederman and Weinberg himself.
+The upcoming CERN event, on whose programme committee I serve, will start exactly where the previous symposium ended. “1980 is a natural historical break,” says conference co-organizer Michael Riordan, who won the 2025 Abraham Pais Prize for History of Physics. “It begins a period of the consolidation of the Standard Model. Colliders became the main instruments, and were built with specific standard-model targets in mind. And the centre of gravity of the discipline moved across the Atlantic to Europe.”
+The conference will address physics that took place at CERN’s Super Proton Synchrotron (SPS), where the W and Z particles were discovered in 1983. It will also examine the SPS’s successor – the Large Electron-Positron (LEP) collider. Opened in 1989, it was used to make precise measurements of these particles and test other predictions of the Standard Model until being controversially shut down in 2000 to make way for the Large Hadron Collider (LHC).
+Speakers at the meeting will also discuss Fermilab’s Tevatron, where the top quark – another Standard Model component – was found in 1995. Work at the Stanford Linear Accelerator Center, DESY in Germany, and Tsukuba, Japan, will be tackled too. There will be coverage as well of failed accelerator projects, which – perhaps perversely – can be equally interesting and revealing as successful facilities.
+In particular, I will speak about ISABELLE, a planned and partially built proton–proton collider at Brookhaven National Laboratory, which was terminated in 1983 to make way for the far more ambitious Superconducting Super Collider (SSC). ISABELLE was then transformed into the Relativistic Heavy Ion Collider (RHIC), which was completed in 1999 and took nuclear physics into the high-energy regime.
+Riordan will talk about the fate of the SSC, which was supposed to discover the Higgs boson or whatever else plays its mass-generating role. But in 1993 the US Congress terminated that project, a traumatic episode for US physics, about which Riordan co-authored the book Tunnel Visions. Its cancellation signalled the end of the glory years for US particle physics and the realization of the need for international collaborations in ever-costlier accelerator projects.
+ +The CERN meeting will also explore more positive developments such as the growing convergence of particle physics and cosmology during the 1980s and 1990s. During that time, researchers stepped up their studies of dark matter, neutrino oscillations and supernovas. It was a period that saw the construction of underground detectors at Gran Sasso in Italy and Kamiokande in Japan.
+Other themes to be explored include the development of the Web – which transformed the world – as well as the impact of globalization, the end of the Cold War, the rise of high-energy physics in China, and physics in Russia, the former Soviet republics and the former Eastern Bloc countries. While particle physics became more global, it also grew more dependent on, and vulnerable to, changing political ambitions, economic realities and international collaborations. The growing importance of diversity, communication and knowledge transfer will be looked at too.
+The years between 1980 and 2000 were a distinct period in the history of particle physics, one that took place in the afterglow of the triumph of the Standard Model. The lights in high-energy physics did not go out or even dim, to use Yang’s metaphor. Instead, the Standard Model shed so much light on high-energy physics that the effort and excitement focused on consolidating the model.
+Particle physics, during those years, was all about finding the deeply hidden outstanding pieces, developing the theory, and connecting with other areas of physics. The triumph was so complete that physicists began to wonder what bigger and more comprehensive structure the Standard Model’s “room” might be embedded in – what was “beyond the Standard Model”. A quarter of a century on, our attempt to make out that structure is still an ongoing task.
+The post Relive the two decades when physicists basked in the afterglow of the Standard Model appeared first on Physics World.
+]]>The post Are we heading for a future of superintelligent AI mathematicians? appeared first on Physics World.
+]]>One of those expressing disquiet is Yang-Hui He, a mathematical physicist at the London Institute for Mathematical Sciences. In general, He is extremely keen on AI. He’s written a textbook about the use of AI in mathematics, and he told the audience at an HLF panel discussion that he’s been peddling machine-learning techniques to his mathematical physics colleagues since 2017.
+ +More recently, though, He has developed concerns about gen AI specifically. “It is doing mathematics so well without any understanding of mathematics,” he said, a note of wonder creeping into his voice. Then, more plaintively, he added, “Where is our place?”
+Some of the things that make today’s gen AI so good at mathematics are the same as the ones that made Google’s DeepMind so good at the game of Go. As the theoretical computer scientist Sanjeev Arora pointed out in his HLF talk, “The reason it’s better than humans is that it’s basically tireless.” Put another way, if the 20th-century mathematician Alfréd Rényi once described his colleagues as “machines for turning coffee into theorems”, one advantage of 21st-century AI is that it does away with the coffee.
+Arora, however, sees even greater benefits. In his view, AI’s ability to use feedback to improve its own performance – a technique known as reinforcement learning – is particularly well-suited to mathematics.
+In the standard version of reinforcement learning, Arora explains, the AI model is given a large bank of questions, asked to generate many solutions and told to use the most correct ones (as labelled by humans) to refine its model. But because mathematics is so formalized, with answers that are verifiably true or false, Arora thinks it will soon be possible to replace human correctness checkers with AI “proof assistants”. Indeed, he’s developing one such assistant himself with his colleagues at Princeton University in the US, building on the Lean theorem-proving language.
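+The loop Arora describes can be caricatured in a few lines of code. The toy below, in which the “model” is just an adjustable error rate on simple sums and the “verifier” checks the answers exactly, is only a sketch of the generate-verify-refine pattern; nothing in it reflects a real proof assistant or training system.
```python
import random

# Toy caricature of the generate-verify-refine loop described above.
# None of this reflects a real proof assistant or training system.

def make_questions(n=200):
    return [(random.randint(1, 50), random.randint(1, 50)) for _ in range(n)]

def toy_model(question, error_rate):
    a, b = question
    # The model answers correctly, except with probability `error_rate`.
    return a + b if random.random() > error_rate else a + b + random.choice([-1, 1])

def verify(question, answer):
    # Stand-in for an automatic verifier: the answer is checkably right or wrong.
    return answer == sum(question)

def refine(error_rate, n_correct, n_total, lr=0.5):
    # Crude "reward" update: the more verified answers, the lower the error rate.
    return max(0.0, error_rate - lr * error_rate * (n_correct / max(n_total, 1)))

error_rate = 0.5
for round_number in range(5):
    answers = [(q, toy_model(q, error_rate)) for q in make_questions()]
    n_correct = sum(verify(q, a) for q, a in answers)
    error_rate = refine(error_rate, n_correct, len(answers))
    print(f"round {round_number}: {n_correct}/{len(answers)} verified, "
          f"error rate -> {error_rate:.3f}")
```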
+But why stop there? Why not use AI to generate mathematical questions as well as producing and checking their solutions? Indeed, why not get it to write a paper, peer review it and publish it for its fellow AI mathematicians – which are, presumably, busy combing the literature for information to help them define new questions?
+Arora clearly thinks that’s where things are heading, and many of his colleagues seem to agree, at least in part. His fellow HLF panellist Javier Gómez-Serrano, a mathematician at Brown University in the US, noted that AI is already generating results in a day or two that would previously have taken a human mathematician months. “Progress has been quite quick,” he said.
+The panel’s final member, Maia Fraser of the University of Ottawa, Canada, likewise paid tribute to the “incredible things that are possible with AI now”. But Fraser, who works on mathematical problems related to neuroscience, also sounded a note of caution. “My concern is the speed of the changes,” she told the HLF audience.
+ +The risk, Fraser continued, is that some of these changes may end up happening by default, without first considering whether humans want or need them. While we can’t un-invent AI, “we do have agency” over what we want, she said.
+So, do we want a world in which AI mathematicians take humans “out of the loop” entirely? For He, the benefits may outweigh the disadvantages. “I really want to see a proof of the Riemann hypothesis,” he said, to ripples of laughter. If that means that human mathematicians “become priests to oracles”, He added, so be it.
+The post Are we heading for a future of superintelligent AI mathematicians? appeared first on Physics World.
+]]>The post Space–time crystal emerges in a liquid crystal appeared first on Physics World.
+]]>In an ordinary crystal atomic or molecular structures repeat at periodic intervals in space. In 2012, however, Frank Wilczek suggested that systems might also exist with quantum states that repeat at perfectly periodic intervals in time – even as they remain in their lowest-energy state.
+First observed experimentally in 2017, these time crystals are puzzling to physicists because they spontaneously break time–translation symmetry, which states that the laws of physics are the same no matter when you observe them. In contrast, a time crystal continuously oscillates over time, without consuming energy.
+A space–time crystal is even more bizarre. In addition to breaking time–translation symmetry, such a system would also break spatial symmetry, just like the repeating molecular patterns of an ordinary crystal. Until now, however, a space–time crystal had not been observed directly.
+In their study, Zhao and Smalyukh created a space–time crystal in the nematic phase of a liquid crystal. In this phase the crystal’s rod-like molecules align parallel to each other and also flow like a liquid. Building on computer simulations, they confined the liquid crystal between two glass plates coated with a light-sensitive dye.
+“We exploited strong light–matter interactions between dye-coated, light-reconfigurable surfaces, and the optical properties of the liquid crystal,” Smalyukh explains.
+When the researchers illuminate the top plate with linearly polarized light at constant intensity, the dye molecules rotate to align perpendicular to the direction of polarization. This reorients nearby liquid crystal molecules, and the effect propagates deeper into the bulk. However, the influence weakens with depth, so that molecules farther from the top plate are progressively less aligned.
+As light travels through this gradually twisting structure, its linear polarization is transformed, becoming elliptically polarized by the time it reaches the bottom plate. The dye molecules there become aligned with this new polarization, altering the liquid crystal alignment near the bottom plate. These changes propagate back upward, influencing molecules near the top plate again.
+This is a feedback loop, with the top and bottom plates continuously influencing each other via the polarized light passing through the liquid crystal.
+“These light-powered dynamics in confined liquid crystals leads to the emergence of particle-like topological solitons and the space–time crystallinity,” Smalyukh says.
+In this environment, particle-like topological solitons emerge as stable, localized twists in the liquid crystal’s orientation that do not decay over time. Like particles, the solitons move and interact with each other while remaining intact.
+Once the feedback loop is established, these solitons emerge in a repeating lattice-like pattern. This arrangement not only persists as the feedback loop continues, but is sustained by it. This is a clear sign that the system exhibits crystalline order in time and space simultaneously.
+Having confirmed their conclusions with simulations, Zhao and Smalyukh are confident this is the first experimental demonstration of a space–time crystal. The discovery that such an exotic state can exist in a classical, room-temperature system may have important implications.
+ +“This is the first time that such a phenomenon is observed emerging in a liquid crystalline soft matter system,” says Smalyukh. “Our study calls for a re-examining of various time-periodic phenomena to check if they meet the criteria of time-crystalline behaviour.”
+Building on these results, the duo hope to broaden the scope of time crystal research beyond a purely theoretical and experimental curiosity. “This may help expand technological utility of liquid crystals, as well as expand the currently mostly fundamental focus of studies of time crystals to more applied aspects,” Smalyukh adds.
+The research is described in Nature Materials.
+The post Space–time crystal emerges in a liquid crystal appeared first on Physics World.
+]]>The post Quantum fluid instability produces eccentric skyrmions appeared first on Physics World.
+]]>Topological defects occur when a system rapidly transitions from a disordered to an ordered phase. These defects, which can occur in a wide range of condensed matter systems, from liquid crystals and atomic gases to the rapidly cooling early universe, can produce excitations such as solitons, vortices and skyrmions.
+Skyrmions, first discovered in magnetic materials, are swirling vortex-like spin structures that extend across a few nanometres in a material. They can be likened to 2D knots in which the magnetic moments rotate through 360° within a plane.
+Skyrmions are topologically stable, which makes them robust to external perturbations, and are much smaller than the magnetic domains used to encode data in today’s disk drives. That makes them ideal building blocks for future data storage technologies such as “racetrack” memories. Eccentric fractional skyrmions (EFSs), which had only been predicted in theory until now, have a crescent-like shape and contain singularities – points at which the usual spin structure breaks down, creating sharp distortions as the structure loses its symmetry.
+ +“To me, the large crescent moon in the upper right corner of Van Gogh’s ‘The Starry Night’ also looks exactly like an EFS,” says Hiromitsu Takeuchi of Osaka Metropolitan University, who co-led this new study with Jae-Yoon Choi of KAIST. “EFSs carry half the elementary charge, which means they do not fit into traditional classifications of topological defects.”
+The KHI is a classic phenomenon in fluids in which waves and vortices form at the interface between two fluids moving at different speeds. “To observe the KHI in quantum systems, we need a structure containing a thin superfluid interface (a magnetic domain wall), such as in a quantum gas of 7Li atoms,” says Takeuchi. “We also need experimental techniques that can skilfully control the behaviour of this interface. Both of these criteria have recently been met by Choi’s group.”
+The researchers began by cooling a gas of 7Li atoms to temperatures near absolute zero to create a multi-component Bose–Einstein condensate – a quantum superfluid containing two streams flowing at different speeds. At the interface of these streams, they observed vortices, which corresponded to the predicted EFSs.
+“We have shown that the behaviour of the KHI is universal and exists in both the classical and quantum regimes,” says Takeuchi. This finding could not only lead to a better understanding of quantum turbulence and the unification of quantum and classic hydrodynamics, it could also help in the development of technologies such as next-generation storage and memory devices and spintronics, an emerging technology in which magnetic spin is used to store and transfer information using much less energy than existing electronic devices.
+ +“By further refining the experiment, we might be able to verify certain predictions (some of which were made as long ago as the 19th century) about the wavelength and frequency of KHI-driven interface waves in non-viscous quantum fluids, like the one studied in this work,” he adds.
+“In addition to the universal finger pattern we observed, we expect structures like zipper and sealskin patterns, which are unique to such multi-component quantum fluids,” Takeuchi tells Physics World. “As well as experiments, it is necessary to develop a theory that more precisely describes the motion of EFSs, the interaction between these skyrmions and their internal structure in the context of quantum hydrodynamics and spontaneous symmetry breaking.”
+The study is detailed in Nature Physics.
+The post Quantum fluid instability produces eccentric skyrmions appeared first on Physics World.
+]]>The post Top quarks embrace in quasi-bound toponium appeared first on Physics World.
+]]>Gautier Hamel de Monchenault, spokesperson for CMS, explains, “Many physicists long believed this was impossible. That’s why this result is so significant — it challenges assumptions that have been around for decades, and particle physics textbooks will likely need to be updated because of it.”
+Protons and neutrons are formed from quarks, which are fundamental particles that cannot be broken down into smaller constituents.
+“There are six types of quark,” explains the German physicist Christian Schwanenberger, who is at DESY and the University of Hamburg and was involved in the study. “Five of them form bound states thanks to the strong force, one of the four fundamental forces of nature. The top quark, however, is somehow different. It is the heaviest fundamental particle we know, but so far we have not observed it forming bound states in the same way the others do.”
+The top quark’s extreme mass makes it decay almost immediately after it is produced. “The top and antitop quarks just have time to exchange a few gluons, the carriers of the strong force, before one of them decays, hence the appellation ‘quasi-bound state’,” Hamel de Monchenault explains.
+By detecting these ephemeral interactions, physicists can observe the strong force in a new regime – and the CMS team developed a clever new method to do so. The breakthrough came when the team examined how the spins of the top quark and antitop quark influence each other to create a subtle signature in the particles produced when the quarks decay.
+Top quarks are produced in proton–proton collisions at the LHC, where they decay almost immediately into a bottom quark and a W boson. The bottom quarks form jets of particles, which can be detected, while the W bosons themselves decay into lighter particles (leptons) such as electrons or muons, accompanied by neutrinos.
+“We can detect the charged leptons directly and measure their energy very precisely, but we have to infer the presence of the neutrinos indirectly, through an imbalance of the total energy measured,” says Hamel de Monchenault. By studying the pattern and energy of the leptons and jets, the CMS team deduced the existence of top–antitop pairs and spotted the subtle signature of the fleeting quasi-bound state.
+The CMS researchers observed an excess of events in which the top and antitop quarks were produced almost at rest relative to each other – the precise condition needed for a quasi-bound state to form. “The signal has a statistical significance above 5σ, which means the chance it’s just a statistical fluctuation is less than one in a few million,” Hamel de Monchenault says.
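+For context, the 5σ threshold translates into a probability with a one-line calculation, shown below using SciPy’s normal distribution: the one-sided chance of a background fluctuation at least five standard deviations above the mean is roughly 3 × 10⁻⁷, or about one in 3.5 million.
```python
from scipy.stats import norm

# One-line translation of "5 sigma" into a probability: the one-sided chance of
# a background fluctuation at least five standard deviations above the mean.

sigma = 5
p_value = norm.sf(sigma)  # survival function, 1 - CDF
print(f"p-value for {sigma} sigma: {p_value:.2e}")  # ~2.9e-07
print(f"roughly 1 in {1 / p_value:,.0f}")           # about 1 in 3.5 million
```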
+While this excess accounts for only about 1% of top quark pair production, it aligns with predictions for toponium formation and offers insights into the strong force.
+“Within the achieved precision, the result matches the predictions of advanced calculations involving the strong force,” explains Hamel de Monchenault. “An effect once thought too subtle to detect with current technology has now been observed. It’s comforting in a way: even the heaviest known quarks are not always alone – they can briefly embrace their opposites.”
+The discovery has energized the particle physics community. “Scientists are excited to explore the strong force in a completely new regime,” says Schwanenberger. Researchers will refine theoretical models, simulate toponium more precisely, and study its decay patterns and excited states. Much of this work will rely on the High-Luminosity LHC, expected to start operations around 2030, and potentially on future electron–positron colliders capable of studying top quarks with unprecedented precision.
+ +“The present results are based on LHC data recorded between 2015 and 2018 [Run 2]. Since 2022, ATLAS and CMS are recording data at a slightly higher energy, which is favourable for top quark production. The amount of data already surpasses that of Run 2, and we expect that with such huge amounts of data, the properties of this new signal can be studied in detail,” Hamel de Monchenault says.
+This research could ultimately answer a fundamental question: is the top quark simply another quark like its lighter siblings, or could it hold the key to physics beyond the Standard Model? “Investigating different toponium states will be a key part of the top quark research programme,” Schwanenberger says. “It could reshape our understanding of matter itself and reveal whatever holds the world together in its inmost folds.”
+The results are published in Reports on Progress in Physics.
+The post Top quarks embrace in quasi-bound toponium appeared first on Physics World.
+]]>The post Celebrating 10 years of gravitational waves appeared first on Physics World.
+]]>It was early in the morning of Monday 14 September 2015, exactly 10 years ago, when gravitational waves created from the collision of two black holes 1.3 billion light-years away hit the LIGO detectors in the US. The detections took place just as the two giant interferometers – one in Washington and the other in Louisiana – were being calibrated before the first observational run was due to begin four days later.
+In one of those curious accidents of history, staff on duty at the Louisiana detector had gone to bed a few hours before the waves rolled in. Had they carried on calibrating through the night, LIGO might have been prevented from making its historic measurement, dubbed GW150914. Of course, it would surely only have been a matter of time until LIGO spotted its first signal, with more than 200 gravitational-wave events so far detected.
+ +Observing these “ripples in space–time”, which had long been on many physicists’ bucket lists, has over the last decade become almost routine. Most gravitational-wave detections have been binary black-hole mergers, though there have also been a few neutron-star/black-hole collisions and some binary neutron-star mergers too. Gravitational-wave astronomy is now a well-established field not just thanks to LIGO but also Virgo in Italy and KAGRA in Japan.
+In fact, physicists are already planning what would be a third-generation gravitational-wave detector. The Einstein Telescope, which could do in a day what took LIGO a decade, could be open by 2035, with three locations vying to host the facility. The Italian island of Sardinia is one option. Saxony in Germany is another, with the third being a site near where Germany, Belgium and the Netherlands meet.
+ +A decision is expected to be made in two years’ time, but whichever site is picked – and assuming the €2bn construction costs can be found – Europe would be installed firmly at the forefront of gravitational-wave research. That’s because the European Space Agency is also planning a space-based gravitational-wave detector called LISA. It is set to start in 2035 – the same year as the Einstein Telescope.
+The US has its own third-generation design, dubbed the Cosmic Explorer. But given the turmoil in US science under Donald Trump, it’s far from certain if it’ll ever be built. However, if other nations step in and build a network of such facilities around the world, as researchers hope, we could well be in for a new golden age for gravitational-wave astronomy. That bucket list just got longer.
+The post Celebrating 10 years of gravitational waves appeared first on Physics World.
+]]>The post Researchers map the unrest in the Vulcano volcano appeared first on Physics World.
+During unrest, the volcanic risk increases significantly – and the summer months on the island currently attract a lot of tourists who might be at risk, even from minor eruptive events or episodes of increased degassing. To examine why this unrest has occurred, researchers from the University of Geneva have collaborated with the National Institute of Geophysics and Volcanology (INGV) in Italy to create a 3D model of the interior of the volcano on Vulcano, using a combination of nodal seismic networks and artificial intelligence (AI).
+Until now, few studies have examined the deep underground details of volcanoes, relying instead on outlines of their internal structure. This is because the geological domains where eruptions nucleate are often inaccessible to airborne geophysical techniques, and onshore studies don’t penetrate far enough into the volcanic plumbing system to reveal how the magma and hydrothermal fluids mix. Recent studies have shown the outline of the plumbing systems, but they’ve not had sufficient resolution to distinguish the magma from the hydrothermal system.
+To better understand what could have caused the 2021 Vulcano unrest, the researchers deployed a nodal network of 196 seismic sensors across Vulcano and Lipari (another island in the archipelago) to measure secondary seismic waves (S-waves) using a technique called seismic ambient noise tomography. S-waves propagate more slowly as they pass through fluid-rich zones, which allows magma to be identified.
+ +The researchers captured the S-wave data using the nodal sensor network and processed it with AI – using a deep neural network. This allowed the extensive seismic dispersion data to be recovered quickly and automatically, enabling generation of a 3D S-wave velocity model. The data were captured during the volcano’s early unrest phase, and the sensors recorded the natural ground vibrations over a period of one month. The model revealed the high-resolution tomography of the shallow part of a volcanic system in unrest, with the approach compared to taking an “X-ray” of the volcano.
+“Our study shows that our end-to-end ambient noise tomography method works with an unprecedented resolution due to using dense nodal seismic networks,” says lead author Douglas Stumpp from the University of Geneva. “The use of deep neural networks allowed us to quickly and accurately measure enormous seismic dispersion data to provide near-real time monitoring.”
+The model showed that there was no new magma body between Lipari and Vulcano within the first 2 km of the Earth’s crust, but it did reveal regions that could host cooling melts at the base of the hydrothermal system. These melts were proposed to be degassing melts that could easily release gas and brines if disturbed by an earthquake – suggesting that tectonic fault dynamics may trigger volcanic unrest. It’s thought that the volcano might have released trapped fluids at depth after being perturbed by fault activity during the 2021 unrest.
+While this method doesn’t enable the researchers to predict when an eruption will happen, it provides significant insight into how the internal dynamics of volcanoes work during periods of unrest. The use of AI enables rapid processing of large amounts of data, so in the future the approach could be used as an early warning system by analysing the volcano’s behaviour as it unfolds.
+ +In theory, this could help to design dynamic evacuation plans based on the direct real-time behaviour of the volcano, which would potentially save lives. The researchers state that this could take some time to develop due to the technical challenge of processing such massive volumes of data in real time – but they note that this is now more feasible thanks to machine learning and deep learning.
+When asked about how the researchers plan to further develop the research, Stumpp concludes that “our study paves the ground for 4D ambient noise tomography monitoring – three dimensions of space and one dimension of time. However, I believe permanent and maintained seismic nodal networks with telemetric access to the data need to be implemented to achieve this goal”.
+The research is published in Nature Communications.
+The post Researchers map the unrest in the Vulcano volcano appeared first on Physics World.
+]]>The post High-speed 3D microscope improves live imaging of fast biological processes appeared first on Physics World.
+]]>The pictures from many 3D microscopes are obtained sequentially by scanning through different depths, making them too slow for accurate live imaging of fast-moving natural functions in individual cells and microscopic animals. Even current multifocus microscopes that capture 3D images simultaneously have either relatively poor image resolution or can only image to shallow depths.
+In contrast, the new 25-camera “M25” microscope – developed during his doctorate by Eduardo Hirata-Miyasaki and his supervisor Sara Abrahamsson, both then at the University of California Santa Cruz, together with collaborators at the Marine Biological Laboratory in Massachusetts and the New Jersey Institute of Technology – enables high-resolution 3D imaging over a large field-of-view, with each camera capturing 180 × 180 × 50 µm volumes at a rate of 100 per second.
+“Because the M25 microscope is geared towards advancing biomedical imaging we wanted to push the boundaries for speed, high resolution and looking at large volumes with a high signal-to-noise ratio,” says Hirata-Miyasaki, who is now based in the Chan Zuckerberg Biohub in San Francisco.
+ +The M25, detailed in Optica, builds on previous diffractive-based multifocus microscopy work by Abrahamsson, explains Hirata-Miyasaki. In order to capture multiple focal planes simultaneously, the researchers devised a multifocus grating (MFG) for the M25. This diffraction grating splits the image beam coming from the microscope into a 5 × 5 grid of evenly illuminated 2D focal planes, each of which is recorded on one of the 25 synchronized machine vision cameras, such that every camera in the array captures a 3D volume focused on a different depth. To avoid blurred images, a custom-designed blazed grating in front of each camera lens corrects for the chromatic dispersion (which spreads out light of different wavelengths) introduced by the MFG.
+The team used computer simulations to reveal the optimal designs for the diffractive optics, before creating them at the University of California Santa Barbara nanofabrication facility by etching nanometre-scale patterns into glass. To encourage widespread use of the M25, the researchers have published the fabrication recipes for their diffraction gratings and made the bespoke software for acquiring the microscope images open source. In addition, the M25 mounts to the side port of a standard microscope, and uses off-the-shelf cameras and camera lenses.
+The M25 can image a range of biological systems, since it can be used for fluorescence microscopy – in which fluorescent dyes or proteins are used to tag structures or processes within cells – and can also work in transmission mode, in which light is shone through transparent samples. The latter allows small organisms like C. elegans larvae, which are commonly used for biological research, to be studied without disrupting them.
+The researchers performed various imaging tests using the prototype M25, including observations of the natural swimming motion of entire C. elegans larvae. This ability to study cellular-level behaviour in microscopic organisms over their whole volume may pave the way for more detailed investigations into how the nervous system of C. elegans controls its movement, and how genetic mutations, diseases or medicinal drugs affect that behaviour, Hirata-Miyasaki tells Physics World. He adds that such studies could further our understanding of human neurodegenerative and neuromuscular diseases.
+“We live in a 3D world that is also very dynamic. So with this microscope I really hope that we can keep pushing the boundaries of acquiring live volumetric information from small biological organisms, so that we can capture interactions between them and also [see] what is happening inside cells to help us understand the biology,” he continues.
+ +As part of his work at the Chan Zuckerberg Biohub, Hirata-Miyasaki is now developing deep-learning models for analysing multichannel dynamic live datasets of cells and organisms, like those acquired by the M25, “so that we can extract as much information as possible and learn from their dynamics”.
+Meanwhile Abrahamsson, who is currently working in industry, hopes that other microscopy development labs will make their own M25 systems. She is also considering commercializing the instrument to help ensure its widespread use.
+The post High-speed 3D microscope improves live imaging of fast biological processes appeared first on Physics World.
+]]>The post Juno: the spacecraft that is revolutionizing our understanding of Jupiter appeared first on Physics World.
+]]>Bolton and Harris chat about the mission’s JunoCam, which has produced some gorgeous images of Jupiter and its moons.
+Although the Juno mission was expected to last only a few years, the spacecraft is still going strong despite operating in Jupiter’s intense radiation belts. Bolton explains how the Juno team has rejuvenated radiation-damaged components, which has provided important insights for those designing future missions to space.
+However, Juno’s future is uncertain. Despite its great success, the mission is currently scheduled to finish at the end of September – something that Bolton also addresses in the conversation.
+The post Juno: the spacecraft that is revolutionizing our understanding of Jupiter appeared first on Physics World.
+]]>The post Optimizing upright proton therapy: hybrid delivery provides faster, sharper treatments appeared first on Physics World.
+]]>Proton arc therapy (PAT) is an emerging rotational delivery technique with potential to improve plan quality – reducing dose to organs-at-risk while maintaining target dose. The first clinical PAT treatments employed static arcs, in which multiple energy layers are delivered from many (typically 10 to 30) discrete angles. Importantly, static arc PAT can be delivered on conventional proton therapy machines. It also offers simpler beam arrangements than intensity-modulated proton therapy (IMPT).
+“In IMPT of head-and-neck cancers, the beam directions are normally set up in a complicated pattern in different planes, with range shifters needed to treat the shallow part of the tumour,” explains Erik Engwall, chief physicist at RaySearch Laboratories. “In PAT, the many beam directions are arranged in the same plane and no range shifters are typically needed. With all beams in the same plane, it is easier to move to upright treatments.”
+Upright proton therapy involves rotating the patient (in an upright position) in front of a static horizontal treatment beam. The approach could reduce costs by using compact proton delivery systems. This compactness, however, places energy selection close to the patient, increasing scattering in the proton beam. To combat this, the team propose adding a layer of shoot-through protons to each direction of the proton arc.
+The idea is that while most protons are delivered with Bragg peaks placed in the target, the sharp penumbra of the high-energy protons shooting through the target will combat beam broadening. The rotational delivery in the proton arc spreads the exit dose from these shoot-through beams over many angles, minimizing dose to surrounding tissues. And as the beamline is fixed, shoot-through protons exit in the same direction (behind the patient) for all angles, simplifying shielding to a single beam dump opposite the fixed beam.
+To test this approach, Engwall and colleagues simulated treatment plans for a virtual phantom containing three targets and an organ-at-risk, reporting their findings in Medical Physics. They used a development version of RayStation v2025 with a beam model of the Mevion s250-FIT system (which combines a compact cyclotron, an upright positioner and an in-room CT scanner).
+ +For each target, the team created static arc plans with (Arc+ST) and without shoot-through beams and with/without collimation, as well as 3-beam IMPT plans with and without shoot-through beams (all with collimation). Arc plans used 20 uniformly spaced beam directions, and the shoot-through plans included an additional layer of the highest system energy (230 MeV) for each direction.
+For all targets, Arc+ST plans showed superior conformity, homogeneity and target robustness to arc plans without shoot-through protons. Adding collimation slightly improved the arc plans without shoot-through protons but had little impact on Arc+ST plans.
+The IMPT plans achieved similar homogeneity and robustness to the best arc plans, but with far lower conformity due to the shoot-through protons delivering a concentrated exit dose behind the target (while static arcs distribute this dose over many directions). Adding shoot-through protons improved IMPT plan quality, but to a lesser degree than for PAT plans.
+The researchers repeated their analysis for a clinical head-and-neck cancer case, comparing static arcs with 5-beam IMPT. Again, Arc+ST plans performed better than any others for almost all metrics. “The Arc+ST plans have the best quality due to the sharpening of the penumbra of the shoot-through part, even better than when using a collimator,” says Engwall.
+
Notably, the findings suggest that collimation is not needed when combining arcs with shoot-through beams, enabling rapid treatments. With fast energy switching and the patient rotation at 1 rpm, Arc+ST achieved an estimated delivery time of less than 5.4 min – faster than all other plans for this case, including 5-beam IMPT.
+“Treatment time is reduced when the leaves of the dynamic collimator do not need to move,” Engwall explains. “There is also no risk of mechanical failures of the collimator and the secondary neutron production will be lower when there are fewer objects in the beamline.”
+Another benefit of upright delivery is that the shoot-through protons can be used for range verification during treatments, using a detector integrated into the beam dump behind the patient. The team investigated this concept with three simulated error scenarios: 5% systematic shift in stopping power ratio; 5 mm setup shift; and 2 cm shoulder movement. The technique successfully detected all errors.
+As the range detector is permanently installed in the treatment room and the shoot-through protons are part of the treatment plan, this method does not add time to the patient setup and can be used in every treatment fraction to detect both intra- and inter-fraction uncertainties.
+ +Although this is a proof-of-concept study, the researchers conclude that it highlights the combined advantages of the new treatment technique, which could “leverage the potential of compact upright proton treatments and make proton treatments more affordable and accessible to a larger patient group”.
+Engwall tells Physics World that the team is now collaborating with several clinical research partners to investigate the technique’s potential across larger patient data sets, for other treatment sites and multiple treatment machines.
+The post Optimizing upright proton therapy: hybrid delivery provides faster, sharper treatments appeared first on Physics World.
+]]>The post LIGO could observe intermediate-mass black holes using artificial intelligence appeared first on Physics World.
+]]>In 2015, the two LIGO interferometers made the very first observation of a gravitational wave: attributing its origin to a merger of two black holes that were roughly 1.3 billion light–years from Earth.
+Since then numerous gravitational waves have been observed with frequencies ranging from 30–2000 Hz. These are believed to be from the mergers of small black holes and neutron stars.
+So far, however, the lower reaches of the gravitational wave frequency spectrum (corresponding to much larger black holes) have gone largely unexplored. Being able to detect gravitational waves at 10–30 Hz would allow us to observe the mergers of intermediate-mass black holes at 100–100,000 solar masses. We could also measure the eccentricities of binary black hole orbits. However, these detections are not currently possible because of vibrational noise in the mirrors at the end of each interferometer arm.
+“As gravitational waves pass through LIGO’s two 4-km arms, they warp the space between them, changing the distance between the mirrors at either end,” explains Rana Adhikari at Caltech, who is part of the team that has developed the machine-learning technique. “These tiny differences in length need to be measured to an accuracy of 10⁻¹⁹ m, which is 1/10,000th the size of a proton. [Vibrational] noise has limited LIGO for decades.”
+To minimize noise today, these mirrors are suspended by a multi-stage pendulum system to suppress seismic disturbances. The mirrors are also polished and coated to eliminate surface imperfections almost entirely. On top of this, a feedback control system corrects for many of the remaining vibrations and imperfections in the mirrors.
+Yet for lower-frequency gravitational waves, even this subatomic level of precision and correction is not enough. As a laser beam impacts a mirror, the mirror can absorb minute amounts of energy – creating tiny thermal distortions that complicate mirror alignment. In addition, radiation pressure from the laser, combined with seismic motions that are not fully eliminated by the pendulum system, can introduce unwanted vibrations in the mirror.
+The team proposed that this problem could finally be addressed with the help of artificial intelligence (AI). “Deep loop shaping is a new AI method that helps us to design and improve control systems, with less need for deep expertise in control engineering,” describes Jonas Buchli at Google DeepMind, who led the research. “While this is helping us to improve control over high precision devices, it can also be applied to many different control problems.”
+The team’s approach is based on deep reinforcement learning, whereby a system tests small adjustments to its controls and adapts its strategy over time through a feedback system of rewards and penalties.
+With deep loop shaping, the team introduced smarter feedback controls for the pendulum system suspending the interferometer’s mirrors. This system can adapt in real time to keep the mirrors aligned with minimal control noise – counteracting thermal distortions, seismic vibrations, and forces induced by radiation pressure.
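As a toy illustration of the reward-and-penalty idea – and emphatically not DeepMind’s deep loop shaping method itself – a single controller parameter can be improved by keeping only those random adjustments that reduce a simulated noise metric. The “plant” model and all numbers below are assumptions made purely for illustration.

```python
# Caricature of reinforcement-learning-style control tuning: propose a small
# adjustment, score it with a reward (negative residual noise), keep it only
# if the reward improves. This random-search sketch is NOT deep loop shaping;
# the toy plant model and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def residual_noise(gain: float) -> float:
    """Pretend plant: residual mirror motion for a given feedback gain.
    Minimised near gain = 2.0, an arbitrary choice for this toy model."""
    return (gain - 2.0) ** 2 + 0.01 * abs(rng.standard_normal())

gain = 0.0
best_reward = -residual_noise(gain)   # reward = negative residual noise

for step in range(200):
    trial_gain = gain + 0.1 * rng.standard_normal()   # small adjustment to the controls
    reward = -residual_noise(trial_gain)
    if reward > best_reward:                          # keep changes that are "rewarded"
        gain, best_reward = trial_gain, reward

print(f"learned gain = {gain:.2f}, residual noise = {-best_reward:.4f}")
```

The actual deep loop shaping controllers are neural networks whose performance was checked both in simulation and on the LIGO Livingston hardware, as described below.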
+“We tested our controllers repeatedly on the LIGO system in Livingston, Louisiana,” Buchli continues. “We found that they worked as well on hardware as in simulation, confirming that our controller keeps the observatory’s system stable over prolonged periods.”
+Based on these promising results, the team is now hopeful that deep loop shaping could help to boost the cosmological reach of LIGO and other existing detectors, along with future generations of gravitational-wave interferometers.
+“We are opening a new frequency band, and we might see a different universe much like the different electromagnetic bands like radio, light, and X-rays tell complementary stories about the universe,” says team member Jan Harms at the Gran Sasso Science Institute in Italy. “We would gain the ability to observe larger black holes, and to provide early warnings for neutron star mergers. This would allow us to tell other astronomers where to point their telescopes before the explosion occurs.”
+The research is described in Science.
+The post LIGO could observe intermediate-mass black holes using artificial intelligence appeared first on Physics World.
+]]>The post Physicists set to decide location for next-generation Einstein Telescope appeared first on Physics World.
+]]>Since that historic detection, which led to the 2017 Nobel Prize for Physics, the LIGO detectors, together with Virgo in Italy, have measured several hundred gravitational waves – from mergers of black holes to neutron-star collisions. More recently, they have been joined by the KAGRA detector in Japan, which is located some 200 m underground, shielding it from vibrations and environmental noise.
+ +Yet the current number of gravitational waves could be dwarfed by what the planned Einstein Telescope (ET) would measure. This European-led, third-generation gravitational-wave detector would be built several hundred metres underground and be at least 10 times more sensitive than its second-generation counterparts, including KAGRA. Capable of “listening” to a volume of the universe a thousand times larger, the new detector would be able to spot many more sources of gravitational waves. In fact, the ET will be able to gather in a day what it took LIGO and Virgo a decade to collect.
+The ET is designed to operate in two frequency domains. The low-frequency regime – 2–40 Hz – is below current detectors’ capabilities and would let the ET pick up waves from more massive black holes. The high-frequency domain, on the other hand, would operate from 40 Hz to 10 kHz and detect a wide variety of astrophysical sources, including merging black holes and other high-energy events. The signals detected by the ET would also be much longer, lasting for hours. This would allow physicists to “tune in” much earlier as black holes or neutron stars approach each other.
+But all that is still a pipe dream, because the ET, which has a price tag of €2bn, is not yet fully funded and is unlikely to be ready until 2035 at the earliest. The precise costs will depend on the final location of the experiment, which is still up for grabs.
+Three regions are vying to host the facility: the Italian island of Sardinia, the Belgian-German-Dutch border region and the German state of Saxony. Each candidate is currently investigating the suitability of its preferred site (see box below), the results of which will be published in a “bid book” by the end of 2026. The winning site will be picked in 2027 with construction beginning shortly after.
+Other factors that will dictate where the ET is built include logistics in the host region, the presence of companies and research institutes (to build and exploit the facility) and government support. With the ET offering high-quality jobs, economic return, scientific appeal and prestige, that could give the German-Belgian-Dutch candidacy the edge given the three nations could share the cost.
+Another major factor is the design of the ET. One proposal is to build it as an equilateral triangle with 10 km sides. The other is a twin L-shaped design in which each detector has 15 km arms and the two detectors are located far apart from each other. The latter design is similar to the two LIGO over-ground detectors, which are 3000 km apart. If the “2L design” is chosen, the detector would then be built at two of the three competing sites.
+The 2L design is being investigated by all three sites, but those behind the Sardinia proposal strongly favour this approach. “With the detectors properly oriented relative to each other, this design could outperform the triangular design across all key scientific objectives,” claims Domenico D’Urso, scientific director of the Italian candidacy. He points to a study by the ET collaboration in 2023 that investigated the impact of the ET design on its scientific goals. “The 2L design enables, for example, more precise localization of gravitational wave sources, enhancing sky-position reconstruction,” he says. “And it provides superior overall sensitivity.”
+Three sites are vying to host the Einstein Telescope (ET), with each offering various geological advantages. Lausitz in Saxony benefits from being a former coal-mining area. “Because of this mining past, the subsurface was mapped in great detail decades ago,” says Günther Hasinger, founding director of the German Center for Astrophysics, which is currently being built in Lausitz and would house the ET if picked. The granite formation in Lausitz is also suitable for a tunnel complex because the rock is relatively dry. Not much water would need to be pumped away, causing less vibration.
+Thanks to the former lead, zinc and silver mine of Sos Enattos, meanwhile, the subsurface near Nuoro in Sardinia – another potential location for the ET – is also well known. The island is on a very stable, tectonic microplate, making it seismically quiet. Above ground, the area is undeveloped and sparsely populated, further shielding the experiment from noise.
+The third ET candidate, lying near the point where Belgium, Germany and the Netherlands meet, also has a hard subsurface, which is needed for the tunnels. It is topped by a softer, clay-like layer that would dampen vibrations from traffic and industry. “We are busy investigating the suitability of the subsurface and the damping capacity of the top layer,” says Wim Walk of the Dutch Center for Subatomic Physics (Nikhef), which is co-ordinating the candidacy for this location. “That research requires a lot of work, because the subsurface here has not yet been properly mapped.”
++
Localization is important for multimessenger astronomy. In other words, if a gravitational-wave source can be located quickly and precisely in the sky, other telescopes can be pointed towards it to observe any eventual light or other electromagnetic (EM) signals. This is what happened after LIGO detected a gravitational wave on 17 August 2017, originating from a neutron star collision. Dozens of ground- and space-based observatories were able to pick up a gamma-ray burst and the subsequent EM afterglow.
+The triangle design, however, is favoured by the Belgian-German-Dutch consortium. It would be the Earth-based equivalent of the European Space Agency’s planned LISA space-based gravitational-wave detector, which will consist of three spacecraft in a triangle configuration and is set for launch in 2035, the same year that the ET could open. LISA would detect gravitational waves at much lower frequencies, coming, for example, from mergers of supermassive black holes.
+While the Earth-based triangle design would not be able to locate the source as precisely, it would – unlike the 2L design – be able to do “null stream” measurements. These would yield a clearer picture of the noise from the environment and the detector itself, including “glitches”, which are bursts of noise that overlap with gravitational-wave signals. “With a non-stop influx of gravitational waves but also of noise and glitches, we need some form of automatic clean-up of the data,” says Jan Harms, a physicist at the Gran Sasso Science Institute in Italy and member of the scientific ET collaboration. “The null stream could provide that.”
+However, it is not clear if that null stream would be a fundamental advantage for data analysis, with Harms and colleagues thinking more work is needed. “For example, different forms of noise could be connected to each other, which would compromise the null stream,” he says. Another problem is that a detector with a null stream has not yet been realized – and that applies to the triangle design in general – “while the 2L design is well established in the scientific community,” adds D’Urso.
+Backers of the triangle design see the ET as being part of a wider, global network of third-generation detectors, where the localization argument no longer matters. Indeed, the US already has plans for an above-ground successor to LIGO. Known as the Cosmic Explorer, it would feature two L-shaped detectors with arm lengths of up to 40 km. But with US politics in turmoil, it is questionable how realistic these plans are.
+Matthew Evans, a physicist at the Massachusetts Institute of Technology and member of the LIGO collaboration, recognizes the “network argument”. “I think that the global gravitational waves community are double counting in some sense,” he says. Yet for Evans it is all about the exciting discoveries that could be made with a next-generation gravitational-wave detector. “The best science will be done with ET as 2Ls,” he says.
+The post Physicists set to decide location for next-generation Einstein Telescope appeared first on Physics World.
+]]>The post The destructive effects of ionising radiation appeared first on Physics World.
+]]>The first of these processes is typically photoionisation, where an atom or molecule absorbs a photon and loses one or more electrons as a result.
+What comes next can vary depending on the strength of the radiation and the nature of the material, but some of the most important effects are secondary ionisation processes. These often go on to dominate the dynamics of the whole system.
+One of these is interatomic Coulombic decay (ICD), in which energy is transferred from one excited atom or molecule to a neighbouring one, which in turn is ionised.
+ICD has captured considerable interest since its discovery, not least because it often produces low energy electrons which cause radiation damage in biological matter.
+Understanding this phenomenon better was the team’s goal in this latest work. Using the ASTRID2 synchrotron in Aarhus, Denmark, they studied what happened when extreme UV photons interacted with small clusters of helium atoms.
+To capture what was going on they used an electron velocity-map imaging spectrometer. This is a powerful diagnostic that measures any emitted electrons’ energy in addition to their angular distribution.
+Using this technique, they were able to show that the ICD process is even more efficient than previously thought. The researchers expect it to play a crucial role in other condensed phase systems exposed to ionising radiation as well.
+Knowledge gained from studies such as this one is crucial for fields like radiation therapy, where the effects of ionising radiation on human cells must be tightly controlled.
+Ben Ltaeif et al. 2025 Rep. Prog. Phys. 88 037901
++
The post The destructive effects of ionising radiation appeared first on Physics World.
+]]>The post A route to more efficient wireless charging? appeared first on Physics World.
+]]>The key to successful WPT technologies is ensuring that a high percentage of the input power is transferred to its intended destination – i.e. it must be efficient.
+The idea of parity-time (PT) symmetry helps achieve this goal by offering a way to balance energy gain and loss in a system, which helps maintain stable and efficient power flow – even in the presence of disturbances.
+Here parity means spatial reflection (like flipping left and right), and time refers to reversing the direction of time. A PT-symmetric system behaves the same when both of these transformations are applied together.
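A common way to write down such a balanced gain-and-loss system is the textbook coupled-mode model of two resonators shown below – one with gain (the transmitter) and one with equal loss (the receiver). This is a standard schematic form, given purely for illustration; it is not the specific model analysed in the new paper.

```latex
% Coupled-mode sketch of a PT-symmetric wireless-power link (one common sign
% convention): two resonators of frequency omega_0, one with gain +gamma and
% one with loss -gamma, coupled at rate kappa. The balanced gain and loss is
% what makes the system PT-symmetric.
\[
i\,\frac{\mathrm{d}}{\mathrm{d}t}
\begin{pmatrix} a_{1} \\ a_{2} \end{pmatrix}
=
\begin{pmatrix} \omega_{0} + i\gamma & \kappa \\ \kappa & \omega_{0} - i\gamma \end{pmatrix}
\begin{pmatrix} a_{1} \\ a_{2} \end{pmatrix}
\]
```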
+However, this method has its limitations. It often requires very precise fine-tuning, and it can struggle when devices do not behave as idealised resistors. Most importantly, it falls short of achieving the theoretical maximum efficiency of WPT.
+This is where the new paper comes in. Its authors have performed a comprehensive experimental and theoretical study demonstrating that dispersive gain can greatly enhance the efficiency of WPT beyond the limits of previous methods.
+Dispersive gain describes a type of energy amplification in a system where the gain depends on the frequency of the signal. This means the system amplifies energy differently at different frequencies, rather than uniformly across all frequencies.
+This allows the system to naturally shift energy into the most efficient frequency modes for transfer.
+Their work could be used to enable new technologies or make existing ones more affordable. It could also open up new possibilities for harnessing dispersion effects across electronics and optics.
+Dispersive gains enhance wireless power transfer with asymmetric resonance
+Hao et al. 2025 Rep. Prog. Phys. 88 020501
++
The post A route to more efficient wireless charging? appeared first on Physics World.
+]]>The post ‘Breathing’ crystal reversibly releases oxygen appeared first on Physics World.
+]]>Transition-metal oxides boast a huge range of electrical properties that can be tuned all the way from insulating to superconducting. This means they can find applications in areas as diverse as energy storage, catalysis and electronic devices.
+ +Among the different material parameters that can be tuned are the oxygen vacancies. Indeed, ordering these vacancies can produce new structural phases that show much promise for oxygen-driven programmable devices.
+In the new work, a team of researchers led by physicist Hyoungjeen Jeen of Pusan and materials scientist Hiromichi Ohta in Hokkaido studied SrFe0.5Co0.5Ox. The researchers focused on this material, they say, since it belongs to the family of topotactic oxides, which are the main oxides being studied today in solid-state ionics. “However, previous work had not discussed which ion in this compound was catalytically active,” explains Jeen. “What is more, the cobalt-containing topotactic oxides studied so far were fragile and easily fractured during chemical reactions.”
+The team succeeded in creating a unique platform from a solid solution of epitaxial SrFe0.5Co0.5O2.5 in which the cobalt and iron ions are bathed in the same chemical environment. “In this way, we were able to test which ion was better for reduction reactions and whether or not it sustained its structural integrity,” Jeen tells Physics World. “We found that our material showed element-specific reduction behaviours and reversible redox reactions.”
+The researchers made their material using pulsed laser deposition, a technique ideal for the epitaxial synthesis of multi-element oxides, which allowed them to grow SrFe0.5Co0.5O2.5 crystals in which the iron and cobalt ions were randomly located. This random arrangement was key to the material’s ability to repeatedly release and absorb oxygen, they say.
+“It’s like giving the crystal ‘lungs’ so that it can inhale and exhale oxygen on command,” says Jeen.
+This simple breathing picture comes from the difference in the catalytic activity of cobalt and iron in the compound, he explains. Cobalt ions prefer to lose and gain oxygen and these ions are the main sites for the redox activity. However, since iron ions prefer not to lose oxygen during the reduction reaction, they serve as pillars in this architecture. This allows for stable and repeatable oxygen release and uptake.
+Until now, most materials that absorb and release oxygen in such a controlled fashion were either too fragile or only functioned at extremely high temperatures. The new material works under more ambient conditions and is stable. “This finding is striking in two ways: only cobalt ions are reduced, and the process leads to the formation of an entirely new and stable crystal structure,” explains Jeen.
+ +The researchers also showed that the material could return to its original form when oxygen was reintroduced, so proving that the process is fully reversible. “This is a major step towards the realization of smart materials that can adjust themselves in real time,” says Ohta. “The potential applications include developing a cathode for intermediate solid oxide fuel cells, an active medium for thermal transistors (devices that can direct heat like electrical switches), smart windows that adjust their heat flow depending on the weather and even new types of batteries.”
+Looking ahead, Jeen, Ohta and colleagues aim to investigate the material’s potential for practical applications.
+They report their present work in Nature Communications.
+The post ‘Breathing’ crystal reversibly releases oxygen appeared first on Physics World.
+]]>The post New hollow-core fibres break a 40-year limit on light transmission appeared first on Physics World.
+]]>Physicists at the University of Southampton, UK have now developed an alternative that could call time on that decades-long lull. Writing in Nature Photonics, they report hollow-core fibres that exhibit 35% less attenuation while transmitting signals 45% faster than standard glass fibres.
+The core of conventional fibres is made of pure glass and is surrounded by a cladding of slightly different glass. Because the core has a higher refractive index than the cladding, light entering the fibre reflects internally, bouncing back and forth in a process known as total internal reflection. This effect traps the light and guides it along the fibre’s length.
+ +The Southampton team led by Francesco Poletti swapped the standard glass core for air. Because air is more transparent than glass, channelling light through it cuts down on scattering and speeds up signals. The problem is that air’s refractive index is lower, so the new fibre can’t use total internal reflection. Instead, Poletti and colleagues guided the light using a mechanism called anti-resonance, which requires the walls of the hollow core to be made from ultra-thin glass membranes.
+“It’s a bit like a soap bubble,” Poletti says, explaining that such bubbles appear iridescent because their thin films reflect some wavelengths and let others through. “We designed our fibre the same way, with glass membranes that reflect light at certain frequencies back into the core.” That anti-resonant reflection, he adds, keeps the light trapped and moving through the fibre’s hollow centre.
+To make the new air-core fibre, the researchers stacked thin glass capillaries in a precise pattern, forming a hollow channel in the middle. Heating and drawing the stack into a hair-thin filament preserved this pattern on a microscopic scale. The finished fibre has a nested design: an air core surrounded by ultra-thin layers that provide anti-resonant guidance and cut down on leakage.
+To test their design, the team measured transmission through a full spool of fibre, then cut the fibre shorter and compared the results. They also fired in light pulses and tracked the echoes. Their results show that the hollow fibres reduce attenuation to just 0.091 decibels per kilometre. This lower loss implies that fewer amplifiers would be needed in long cables, lowering costs and energy use. “There’s big potential for greener telecommunications when using our fibres,” says Poletti.
+Poletti adds that reduced attenuation (and thus lower energy use) is only one of the new fibre’s advantages. At the 0.14 dB/km attenuation benchmark, the new hollow fibre supports a bandwidth of 54 THz compared to 10 THz for a normal fibre. At the reduced 0.1 dB/km attenuation, the bandwidth is still 18 THz, which is close to twice that of a normal cable. This means that a single strand can carry far more channels at once.
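To get a feel for what these attenuation figures mean over a long link, the fraction of optical power surviving a span follows directly from the definition of the decibel. The sketch below assumes a 100 km span purely for illustration; only the two attenuation coefficients are taken from the article.

```python
# Fraction of optical power surviving a fibre span: 10**(-alpha * L / 10),
# with alpha in dB/km and L in km. The 100 km span length is an assumed
# example; the attenuation coefficients are the figures quoted in the article.
def surviving_fraction(alpha_db_per_km: float, length_km: float) -> float:
    return 10 ** (-alpha_db_per_km * length_km / 10)

span_km = 100.0
for label, alpha in [("0.14 dB/km benchmark fibre", 0.14),
                     ("0.091 dB/km hollow-core fibre", 0.091)]:
    frac = surviving_fraction(alpha, span_km)
    print(f"{label}: {frac:.1%} of the launched power left after {span_km:.0f} km")
```

More surviving power per span is what translates into fewer amplifiers along a long-haul cable.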
+Perhaps the most impressive advantage is that because the speed of light is faster in air than in glass, data could travel the same distance up to 45% faster. “It’s almost the same speed light takes when we look at a distant star,” Poletti says. The resulting drop in latency, he adds, could be crucial for real-time services like online gaming or remote surgery, and could also speed up computing tasks such as training large language models.
+As well as the team’s laboratory tests, Microsoft has begun testing the fibres in real systems, installing segments in its network and sending live traffic through them. These trials prove the hollow-core design works with existing telecom equipment, opening the door to gradual rollout. In the longer run, adapting amplifiers and other gear that are currently tuned for solid glass fibres could unlock even better performance.
+ +Poletti believes the team’s new fibres could one day replace existing undersea cables. “I’ve been working on this technology for more than 20 years,” he says, adding that over that time, scepticism has given way to momentum, especially now with Microsoft as an industry partner. But scaling up remains a real hurdle. Making short, flawless samples is one thing; mass-producing thousands of kilometres at low cost is another. The Southampton team is now refining the design and pushing toward large-scale manufacturing. They’re hopeful that improvements could slash losses by another order of magnitude and that the anti-resonant design can be tuned to different frequency bands, including those suited to new, more efficient amplifiers.
+Other experts agree the advance marks a turning point. “The work builds on decades of effort to understand and perfect hollow-core fibres,” says John Ballato, whose group at Clemson University in the US develops fibres with specialty cores for high-energy laser and biomedical applications. While Ballato notes that such fibres have been used commercially in shorter-distance communications “for some years now”, he believes this work will open them up to long-haul networks.
+The post New hollow-core fibres break a 40-year limit on light transmission appeared first on Physics World.
+]]>The post Indefinite causal order: how quantum physics is challenging our understanding of cause and effect appeared first on Physics World.
+]]>But does definite causal order also reign supreme in the quantum world, where concepts like position and time can be fuzzy? Most physicists are happy to accept the paradox of Schrödinger’s cat – a thought experiment in which a cat hidden in a box is simultaneously dead and alive, until you open the box to check. Schrödinger’s cat illustrates the quantum concept of “superposition”, whereby a system can be in two or more states at the same time. It is only when a measurement is made (by opening the box) that the system collapses into one of its possible states.
+But could two (or more) causally distinct processes occur at the same time in the quantum world? The answer, perhaps shockingly, is yes and this paradoxical phenomenon is called indefinite causal order (ICO).
+It turns out that different causal processes can also exist in a superposition. One example is a thought experiment called the “gravitational quantum switch”, which was proposed in 2019 by Magdalena Zych of the University of Queensland and colleagues (Nat. Comms 10 3772). This features our favourite quantum observers Alice and Bob, who are in the vicinity of a very large mass, such as a star. Alice and Bob both have initially synchronized clocks which, in quantum mechanics alone – where time is treated as a universal parameter – would continue to run at identical rates. However, Einstein’s general theory of relativity dictates that the flow of time is influenced by the distribution of matter in the vicinity of Alice and Bob. This means that if Alice is closer to the star than Bob, then her clock will run slower than Bob’s, and vice versa.
+Like with Schrödinger’s cat, quantum mechanics allows the star to be in a superposition of spatial states; meaning that in one state Alice is closer to the star than Bob, and in the other Bob is closer to the star than Alice. In other words, this is a superposition of a state in which Alice’s clock runs slower than Bob’s, and a state in which Bob’s clock runs slower than Alice’s.
+ +Alice and Bob are both told they will receive a message at a specific time (say noon) and that they should then pass that message on to their counterpart. If Alice’s clock is running faster than Bob’s then she will receive the message first, and then pass it on to Bob, and vice versa. This superposition of Alice-to-Bob and Bob-to-Alice orderings is an example of indefinite causal order.
+Now, you might be thinking “so what” because this seems to be a trivial example. But it becomes more interesting if you replace the message with a quantum particle like a photon; and have Alice and Bob perform different operations on that photon. If the two operations do not commute – such as rotations of the photon polarization in the X and Z planes – then the order in which the operations are done will affect the outcome.
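To make this concrete, the short numpy sketch below shows that rotations of a qubit (here standing in for a photon’s polarization) about the X and Z axes do not commute, so Alice-then-Bob and Bob-then-Alice produce different output states. The rotation angles and the input state are arbitrary choices, not parameters from any of the experiments described here.

```python
# Order matters for non-commuting operations: applying an X rotation then a
# Z rotation to a qubit gives a different state from the reverse order.
# Angles and the input state are arbitrary illustrative choices.
import numpy as np

def rx(theta):
    """Rotation about the X axis by angle theta."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

def rz(theta):
    """Rotation about the Z axis by angle theta."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

alice, bob = rx(np.pi / 2), rz(np.pi / 2)
psi = np.array([1, 0], dtype=complex)        # e.g. a horizontally polarized photon

alice_then_bob = bob @ (alice @ psi)         # Alice acts first, then Bob
bob_then_alice = alice @ (bob @ psi)         # Bob acts first, then Alice

print("Alice then Bob:", np.round(alice_then_bob, 3))
print("Bob then Alice:", np.round(bob_then_alice, 3))
print("Same state?", np.allclose(alice_then_bob, bob_then_alice))
```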
+As a result, this “gravitational quantum switch” is a superposition of two different causal processes with two different outcomes. This means that Alice and Bob could do more exotic operations on the photon, such as “measure-and-reprepare” operations (where a quantum system is first measured, and then, based on the measurement outcome, a new quantum state is prepared). In this case Alice measures the quantum state of the received photon and prepares a photon that she sends to Bob (or vice versa).
+Much like Schrödinger’s cat, a gravitational quantum switch cannot currently be realized in the lab. But, never say never. Physicists have been able to create experimental analogues of some thought experiments, so who knows what the future will bring. Indeed, a gravitational quantum switch could provide important information regarding a quantum description of gravity – something that has eluded physicists ever since quantum mechanics and general relativity were being developed in the early 20th century.
+Moving on to more practical ICO experiments, physicists have already built and tested light-based quantum switches in the lab. Instead of having the position of the star determining whether Alice or Bob go first, the causal order is determined by a two-level quantum state – which can have a value of 0 or 1. If this control state is 0, then Alice goes first and if the control state is 1, then Bob goes first. Crucially, when the control state is in a superposition of 0 and 1 the system shows indefinite causal order (see figure 1).
+
In this illustration of a quantum switch, a photon (driving a car) can follow two different paths, each with a different causal order. One path (top) leads to Alice’s garage followed by a visit to Bob’s drive thru. The second path (middle) visits Bob first, and then Alice. The path taken by the photon is determined by a control qubit that is represented by a traffic light. If the value of the qubit is “0” then the photon visits Alice first; if the qubit is “1” then the photon visits Bob first. Both of these scenarios have definite causal order.
+However, the control qubit can exist in a quantum superposition of “0” and “1” (bottom). In this superposition, the path followed by the photon – and therefore the temporal order in which it visits Alice and Bob – is not defined. This is an example of indefinite causal order. Of course, any attempt to identify exactly which path the photon goes through initially will destroy the superposition (and therefore the ICO) and the photon will take only one definite path.
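Written out schematically – assuming an equal superposition of the control qubit, and with A and B denoting Alice’s and Bob’s operations on the photon state |ψ⟩ – the state produced by the switch is:

```latex
% Schematic quantum-switch state: |0>_c and |1>_c label the control qubit;
% operators act right-to-left, so BA|psi> means Alice first, then Bob.
\[
|\Psi\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl(
|0\rangle_{\mathrm{c}} \otimes B\,A\,|\psi\rangle
\;+\;
|1\rangle_{\mathrm{c}} \otimes A\,B\,|\psi\rangle
\Bigr)
\]
```

Trying to determine which path the photon takes amounts to measuring the control qubit, which collapses this superposition and destroys the indefinite causal order, as described above.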
++
The first such quantum switch was created in 2015 by Lorenzo Procopio (now at Germany’s University of Paderborn) and colleagues at the Vienna Center for Quantum Science and Technology (Nat. Comms 6 7913). Their quantum switch involves firing a photon at a beam splitter, which puts the photon into a superposition of a photon that has travelled straight through the splitter (state 0) and a photon that has been deflected by 90 degrees (state 1). This spatial superposition is the control state of the quantum switch, playing the role of the star in the gravitational quantum switch.
+State 0 photons first travel to an Alice apparatus where a polarization rotation is done in a specific direction (say X). Then the photons are sent to a Bob apparatus where a non-commuting rotation (say Z) is done. Conversely, the photons that travel along the state 1 path encounter Bob before Alice.
+Finally, the state 0 and state 1 paths are recombined at a second beam splitter, which is monitored by two photon detectors. Because Alice-then-Bob has a different effect on a photon than does Bob-then-Alice, interference can occur between recombined photons. This interference is studied by systematically changing certain aspects of the experiment – for example, Alice’s direction of rotation or the polarization of the incoming photons.
+In 2017 quantum-information researcher Giulia Rubino, then at the Vienna Center for Quantum Science and Technology, teamed up with Procopio and colleagues to verify ICO in their quantum switch using a “causal witness” (Sci. Adv. 3 e1602589). This involves doing a specific set of experiments on the quantum switch and calculating a mathematical entity (the causal witness) that reveals whether a system has definite or indefinite causal order. Sure enough, this test revealed that their system does indeed have ICO. Since then, physicists working in several independent labs have successfully created their own quantum switches.
+While this effect might still seem somewhat obscure, in 2019, an international team led by the renowned Chinese physicist Jian-Wei Pan showed that a quantum switch can be very useful for doing computations that are distributed between two parties (Phys. Rev. Lett. 122 120504). In such a scenario a string of data is received and then processed by Alice, who then passes the results on to Bob for further processing. In an experiment using photons, they showed that ICO delivers an exponential speed-up of the rate at which longer strings are processed – compared to a system with no ICO.
+Physicists are also exploring if ICO could be used to enhance quantum metrology. Indeed, recent calculations by Oxford University’s Giulio Chiribella and colleagues suggest that it could lead to a significant increase in precision when compared to techniques that involve states with definite causal order (Phys. Rev. Lett. 124 190503).
+ +While other applications could be possible, it is often difficult to work out whether ICO offers the best solution to a specific problem. For example, physicists had thought a quantum switch offered an advantage when it comes to communicating along a noisy channel, but it turns out that some configurations of Alice and Bob with definite causal order were just as good as an ICO configuration.
+Beyond the quantum switch, there are other types of circuits that would display ICO. These include “quantum circuits with quantum control of causal order”, which have yet to be implemented in the lab because of their complexity.
+But despite the challenges in creating ICO systems and proving that they outperform other solutions, it looks like ICO is set to join the ranks of other weird phenomena such as superposition and entanglement that have found practical applications in quantum technologies.
+This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
+Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.
+Find out more on our quantum channel.
++
The post Indefinite causal order: how quantum physics is challenging our understanding of cause and effect appeared first on Physics World.
+]]>The post Reformulation of general relativity brings it closer to Newtonian physics appeared first on Physics World.
+]]>Now Jiaxi Wu, Siddharth Boyeneni and Elias Most at the California Institute of Technology (Caltech) have addressed this challenge by developing a new formulation of general relativity that is inspired by the equations that describe electromagnetic interactions. They show that general relativity behaves in the same way as the gravitational inverse square law described by Isaac Newton more than 300 years ago. “This is a very non-trivial insight,” says Most.
+One of the fascinations of black holes is the extreme physics they invoke. These astronomical objects pack so much mass into so little space that not even light can escape their gravitational pull. Black holes (and neutron stars) can exist in binary systems in which the objects orbit each other. These pairs eventually merge to create single black holes in events that create detectable gravitational waves. The study of these waves provides an important testbed for gravitational physics. However, the mathematics of general relativity that describe these mergers is very complicated.
+According to Newtonian physics, the gravitational attraction between two masses is proportional to the inverse of the square of the distance between them – the inverse square law. However, as Most points out, “Unless in special cases, general relativity was not thought to act in the same way.”
+Over the past decade, gravitational-wave researchers have taken various approaches including post-Newtonian theory and effective one-body approaches to better understand the physics of black-hole mergers. One important challenge is how to model parameters such as orbital eccentricity and precession in black hole systems and how best to understand “ringdown”. The latter is the process whereby a black hole formed by a merger emits gravitational waves as it relaxes into a stable state.
+The trio’s recasting of the equations of general relativity was inspired by the Maxwell equations that describe how electric and magnetic fields leapfrog each other through space. According to these equations, the forces between electric charges diminish according to the same inverse square law as Newton’s gravitational attraction.
+The original reformulations of “gravitoelectromagnetism” date back to the 1990s. Most explains that among those who did this early work was his Caltech colleague and LIGO Nobel laureate Kip Thorne, who exploited a special mathematical structure of the curvature of space–time.
+“This structure mathematically looks like the equations governing light and the attraction of electric charges, but the physics is quite different,” Most tells Physics World. The gravito-electric field thus derived describes how an object might squish under the forces of gravity. “Mathematically this means that the previous gravito-electric field falls off with inverse distance cubed, which is unlike the inverse distance square law of Newtonian gravity or electrostatic attraction,” adds Most.
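Schematically, the distinction being drawn is between Newton’s inverse-square force law and a curvature-type (tidal) field that falls off as the inverse cube of distance. The expressions below are the standard textbook scalings, shown only as a reminder; they are not taken from the new paper.

```latex
% Newton's inverse-square attraction between two masses, versus the
% inverse-cube fall-off of a tidal (curvature-type) field such as the
% conventional gravito-electric field. Schematic scalings only.
\[
F_{\mathrm{Newton}} = \frac{G M m}{r^{2}},
\qquad
E_{\mathrm{tidal}} \sim \frac{G M}{r^{3}}
\]
```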
+Most’s own work follows on from previous studies of the potential radio emission from the interaction of magnetic fields during collisions of neutron stars and black holes, from which it seemed reasonable to “think about whether some of these insights naturally carry over to Einstein’s theory of gravity”. The trio began with different formulations of general relativity and electromagnetism, with the aim of deriving gravitational analogues of the electric and magnetic fields that behave more like their classical electromagnetic counterparts. They then demonstrated how their formulation might describe the behaviour of a non-rotating Schwarzschild black hole, as well as a black hole binary.
+“Our work says that actually general relativity is not so different from Newtonian gravity (or better, electric forces) when expressed in the right way,” explains Most. The actual behaviour predicted is the same in both formulations but the trio’s reformulation reveals how general relativity and Newtonian physics are more similar than they are generally considered to be. “The main new thing is then what does it mean to ‘observe’ gravity, and what does it mean to measure distances relative to how you ‘observe’.”
+Alexander Phillipov is a black-hole expert at the University of Maryland in the US and was not directly involved with Most’s research. He describes the research as “very nice”, adding that while the analogy between gravity and electromagnetism has been extensively explored in the past, there is novelty in the interpretation of results from fully nonlinear general relativistic simulations in terms of effective electromagnetic fields. “It promises to provide valuable intuition for a broad class of problems involving compact object mergers.”
+The research is described in Physical Review Letters.
+The post Reformulation of general relativity brings it closer to Newtonian physics appeared first on Physics World.
+]]>The post Researchers create glow-in-the-dark succulents that recharge with sunlight appeared first on Physics World.
+]]>Well, that vision is now a step closer thanks to researchers in China who have created glow-in-the-dark succulents that recharge in sunlight.
+ +Instead of coaxing cells to glow through genetic modification, the team instead used afterglow phosphor particles – materials similar to those found in glow-in-the-dark toys – that can absorb light and release it slowly over time.
+The researchers then injected the particles into succulents, finding that they produced a strong glow, thanks to the narrow, uniform and evenly distributed channels within the leaf that helped to disperse the particles.
+After a couple of minutes of exposure to sunlight or indoor LED light, the modified plants glowed for up to two hours. By using different types of phosphors, the researchers created plants that shine in various colours, including green, red and blue.
+The team even built a glowing plant wall with 56 succulents, which was bright enough to illuminate nearby objects.
+“I just find it incredible that an entirely human-made, micro-scale material can come together so seamlessly with the natural structure of a plant,” notes Liu. “The way they integrate is almost magical. It creates a special kind of functionality.”
+The post Researchers create glow-in-the-dark succulents that recharge with sunlight appeared first on Physics World.
+]]>The post Big data helps Gaelic football club achieve promotion following 135-year wait appeared first on Physics World.
+]]>Eamon McGleenan plays for his local team – O’Connell’s GAC Tullysaran – and is a PhD student at Queen’s University Belfast, where he is a member of the Predictive Sports Analytics (PSA) research team, which was established in 2023.
+ +McGleenan and his PhD supervisor David Jess teamed up with GAC Tullysaran to investigate whether data analysis and statistical techniques could improve their training and results.
+Over five months, the Queen’s University researchers took over 550 million individual measurements from the squad, which included information such as player running speed, accelerations and heart rates.
+“We applied mathematical models to the big data we obtained from the athletes,” notes McGleenan. “This allowed us to examine how the athletes evolved over time and we then provided key insights for the coaching staff, who then generated bespoke training routines and match tactics.”
+The efforts immediately paid off as in July GAC Tullysaran won their league by two points and were promoted for the first time in 135 years to the top-flight Senior Football League, which they will start in March.
+“The statistical insight provided by PSA is of great use and I like how it lets me get the balance of training right, especially in the run-up to match day,” notes Tullysaran manager Pauric McGlone, who adds that it also provided a bit of competition in the squad that ensured the players were “conditioned in a way that allows them to perform at their best”.
+For more about the PSA’s activities, see here.
+The post Big data helps Gaelic football club achieve promotion following 135-year wait appeared first on Physics World.
+]]>The post Zero-point motion of atoms measured directly for the first time appeared first on Physics World.
+]]>According to classical physics, molecules with no thermal energy – for example, those held at absolute zero – should not move. However, according to quantum theory, the atoms making up these molecules are never completely “frozen”, so they should exhibit some motion even at this chilly temperature. This motion comes from the atoms’ zero-point energy, which is the minimum energy allowed by quantum mechanics for atoms in their ground state at absolute zero. It is therefore known as zero-point motion.
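+To make the idea concrete, consider the textbook case of a quantum harmonic oscillator of frequency ω – a rough stand-in for a single vibrational mode of a molecule. Its lowest allowed energy and the resulting spread in position are
+$$E_0 = \tfrac{1}{2}\hbar\omega, \qquad \langle \hat{x}^2 \rangle_0 = \frac{\hbar}{2m\omega} \neq 0,$$
+so even in the ground state the atoms retain a finite positional jitter. (This standard single-mode expression is quoted here only for orientation; the molecule studied below has many coupled vibrational modes.)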
+To study this motion, a team led by Till Jahnke from the Institute for Nuclear Physics at Goethe University Frankfurt and the Max Planck Institute for Nuclear Physics in Heidelberg used the European XFEL in Hamburg to bombard their sample – an iodopyridine molecule consisting of 11 atoms – with ultrashort, high-intensity X-ray pulses. These high-intensity pulses violently eject electrons out of the iodopyridine, causing its constituent atoms to become positively charged (and thus to repel each other) so rapidly that the molecule essentially explodes.
+ +To image the molecular fragments generated by the explosion, the researchers used a customized version of a COLTRIMS reaction microscope. This approach allowed them to reconstruct the molecule’s original structure.
+From this reconstruction, the researchers were able to show that the atoms do not simply vibrate individually, but that they do so in correlated, coordinated patterns. “This is known, of course, from quantum chemistry, but it had so far not been measured in a molecule consisting of so many atoms,” Jahnke explains.
+One of the biggest challenges Jahnke and colleagues faced was interpreting what the microscope data was telling them. “The dataset we obtained is super-rich in information and we had already recorded it in 2019 when we began our project,” he says. “It took us more than two years to understand that we were seeing something as subtle (and fundamental) as ground-state fluctuations.”
+Since the technique provides detailed information that is hidden to other imaging approaches, such as crystallography, the researchers are now using it to perform further time-resolved studies – for example, of photochemical reactions. Indeed, they performed and published the first measurements of this type at the beginning of 2025, while the current study (which is published in Science) was undergoing peer review.
+ +“We have pushed the boundaries of the current state-of-the-art of this measurement approach,” Jahnke tells Physics World, “and it is nice to have seen a fundamental process directly at work.”
+For theoretical condensed matter physicist Asaad Sakhel at Balqa Applied University, Jordan, who was not involved in this study, the new work is “an outstanding achievement”. “Being able to actually ‘see’ zero-point motion allows us to delve deeper into the mysteries of quantum mechanics in our quest to a further understanding of its foundations,” he says.
+The post Zero-point motion of atoms measured directly for the first time appeared first on Physics World.
+]]>The post Artificial intelligence predicts future directions in quantum science appeared first on Physics World.
+]]>My guests are Mario Krenn – who heads the Artificial Scientist Lab at Germany’s Max Planck Institute for the Science of Light – and Felix Frohnert, who is doing a PhD on the intersection of quantum physics and machine learning at Leiden University in the Netherlands.
+Frohnert, Krenn and colleagues published a paper earlier this year called “Discovering emergent connections in quantum physics research via dynamic word embeddings” in which they analysed more than 66,000 abstracts from the quantum-research literature to see if they could predict future trends in the field. They were particularly interested in the emergence of connections between previously isolated subfields of quantum science.
+We chat about what motivated the duo to use machine learning to study quantum science; how their prediction system works; and I ask them whether they have been able to predict current trends in quantum science using historical data.
+Their paper appears in the journal Machine Learning: Science and Technology. It is published by IOP Publishing – which also brings you Physics World. Krenn is on the editorial board of the journal and in the podcast he explains why it is important to have a platform to publish research at the intersection of physics and machine learning.
+This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
+Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.
+Find out more on our quantum channel.
++
+
The post Artificial intelligence predicts future directions in quantum science appeared first on Physics World.
+]]>The post Errors in A-level physics papers could jeopardize student university admissions, Institute of Physics warns appeared first on Physics World.
+]]>The mistakes in question appeared in the physics (A) exam papers 1 and 2 set by the OCR exam board. Erratum notices had been issued to students at the start of the exam in June, but a further error in paper 2 was only spotted after the exam had taken place, causing some students to get stuck. Physics paper 2 from the rival AQA exam board was also seen to contain complex phrasing that hindered students’ ability to answer the question and led to time pressures.
+A small survey of physics teachers carried out after the exam by the IOP, which publishes Physics World, reveals that 41% were dissatisfied with the OCR physics exam papers and more than half (58%) felt that students had a negative experience. Two-thirds of teachers, meanwhile, reported that students had a negative experience during the AQA exam. A-levels are mostly taken by 18-year-olds in England, Wales and Northern Ireland, with the grades being used by universities to decide admission.
+Grinyer says that the IOP is engaging in “regular, open dialogue with exam boards” to ensure that the assessment process supports and encourages students, while maintaining the rigour and integrity of the qualification. “Our immediate concern,” Grinyer warns, “is that the usual standardization processes and adjustments to grade boundaries – particularly for the OCR paper with errors – may fail to compensate fully for the negative effect on exam performance for some individuals.”
+An OCR spokesperson told Physics World that the exam board is “sorry to the physics students and teachers affected by errors in A-level physics this year”. The board says that it “evaluated student performance across all physics papers, and took all necessary steps to mitigate the impact of these errors”. The OCR claims that the 13,000 students who sat OCR A-level physics A this year “can be confident” in their A-level physics results.
+“We have taken immediate steps to review and strengthen our quality assurance processes to prevent such issues from occurring in the future,” the OCR adds. “We appreciated the opportunity to meet with the Institute of Physics to discuss these issues, and also to discuss our shared interest in encouraging the growth of this vital subject.”
+Almost 23,500 students sat AQA A-level physics this year and an AQA spokesperson told Physics World that the exam board “listened to feedback and took steps to make A-level physics more accessible” to students and that there “is no need for universities to make an exception for AQA physics outcomes when it comes to admissions criteria”.
+“These exam papers were error-free, as teachers and students would expect, and we know that students found the papers this year to be more accessible than last year,” they say. “We’ll continue to engage with any feedback that we receive, including feedback from the Institute of Physics, to explore how we can enhance our A-level physics assessments and give students the best possible experience when they sit exams.”
+The IOP now wants A-level physics students to be given a “fair opportunity” when it comes to university admissions. “These issues are particularly concerning for students on widening participation pathways, many of whom already face structural barriers to high-stakes assessment,” the IOP letter states. “The added challenge of inaccessible or error-prone exam papers risks compounding disadvantage and may not reflect the true potential of these students.”
+The IOP also contacted AQA last year over inaccessible contexts and language used in previous physics exams. But despite AQA’s assurances that the problems would be addressed, some of the same issues have now recurred. Helen Sinclair, head of physics at the all-girls Wimbledon High School, believes that the “variable quality” of recent A-level papers has had “far-reaching consequences” for young people thinking of going into physics at university.
+“Our students have exceptionally high standards for themselves and the opaque nature of many questions affects them deeply, no matter what grades they ultimately achieve. This has even led some to choose to apply for other subjects at university,” she told Physics World. “This is not to say that papers should not be challenging; however, better scaffolding within some questions would help students anchor themselves in what is an already stressful environment, and would ultimately enable them to better demonstrate their full potential within an exam.”
++Students come out of the exams feeling disheartened, and those students share their perceptions with younger students
+Abbie Hope, Stokesley School
Those concerns are echoed by Abbie Hope, head of physics at Stokesley School near Middlesbrough. She says the errors in this year’s exam papers are “not acceptable” and believes that OCR has “failed their students”. Hope says that AQA physics papers in recent years have been “very challenging” and have resulted in students feeling like they cannot do physics. She also says some have emerged from exam halls in tears.
+“Students come out of the exams feeling disheartened and share their perceptions with younger students,” she says. “I would rather students sat a more accessible paper, with higher grade boundaries so they feel more successful when leaving the exam hall, rather than convinced they have underachieved and then getting a surprise on results day.” Hope fears the mistakes will undermine efforts to encourage uptake and participation in physics and that exam boards need to serve students and teachers better.
+Rachael Houchin, head of physics at Royal Grammar School Newcastle, says this year’s errors have added to her “growing unease” about the state of physics education in the UK. “Such incidents – particularly when they are public and recurring – do little to improve the perception of the subject or encourage its uptake,” she says. “Everyone involved in physics education – at any level – has a duty to get it right. If we fail, we risk physics drifting into the category of subjects taught predominantly in selective or independent schools, and increasingly absent from the mainstream.”
+Hari Rentala, associate director of education and workforce at the IOP, is concerned that the errors unfairly “perpetuate the myth” that physics is a difficult subject. “OCR appear to have managed the situation as best they can, but this is not much consolation for how students will have felt during the exam and over the ensuing weeks,” says Rentala. “Once again AQA set some questions that were overly challenging. We can only hope that the majority of students who had a negative experience as a result of these issues at least receive a fair grade – as grade boundaries have been adjusted down.”
+Despite the problems with some specific papers, almost 45,000 students took A-level physics in the UK – a rise of 4.3% on last year – to reach the highest level for 25 years. Physics is now the sixth most popular subject at A-level, up from ninth last year, with girls representing a quarter of all candidates. Meanwhile, in Scotland the number of entries in both National 5 and Higher physics was 13,680 and 8560, respectively, up from 13,355 and 8065 last year.
+“We are delighted so many young people, and increasing numbers of girls, are hearing the message that physics can open up a lifetime of opportunities,” says Grinyer. “If we can build on this momentum there is a real opportunity to finally close the gap between boys and girls in physics at A-level. To do that we need to continue to challenge the stereotypes that still put too many young people off physics and ensure every young person knows that physics – and a career in science and innovation – could be for them.”
+However, there is less good news for younger pupils, with a new IOP report finding that more than half a million GCSE students are expected to start the new school year with no physics teacher. It reveals that a quarter of English state schools have no specialist physics teachers at all and warns that more than 12,000 students could miss out on taking A-level physics because of this. The IOP wants the UK government to invest £120m over the next 10 years to address the shortage by retaining, recruiting and retraining a new generation of physics teachers.
++
The post Errors in A-level physics papers could jeopardize student university admissions, Institute of Physics warns appeared first on Physics World.
+]]>The post Quantum sensors reveal ‘smoking gun’ of superconductivity in pressurized bilayer nickelates appeared first on Physics World.
+]]>Superconductors are materials that conduct electricity without resistance when cooled to below a certain critical transition temperature Tc. Apart from a sharp drop in electrical resistance, another important sign that a material has crossed this threshold is the appearance of the Meissner effect, in which the material expels a magnetic field from its interior (diamagnetism). This expulsion creates such a strong repulsive force that a magnet placed atop the superconducting material will levitate above it.
+In “conventional” superconductors such as solid mercury, the Tc is so low that the materials must be cooled with liquid helium to keep them in the superconducting state. In the late 1980s, however, physicists discovered a new class of superconductors that have a Tc above the boiling point of liquid nitrogen (77 K). These “unconventional” or high-temperature superconductors are derived not from metals but from insulators containing copper oxides (cuprates).
+ +Since then, the search has been on for materials that superconduct at still higher temperatures, and perhaps even at room temperature. Discovering such materials would have massive implications for technologies ranging from magnetic resonance imaging machines to electricity transmission lines.
+In 2019 researchers at Stanford University in the US identified nickel oxides (nickelates) as additional high-temperature superconductors. This created a flurry of interest in the superconductivity community because these materials appear to superconduct in a way that differs from their copper-oxide cousins.
+Among the nickelates studied, La3Ni2O7-δ (where δ can range from 0 to 0.04) is considered particularly promising because in 2023, researchers led by Meng Wang of China’s Sun Yat-Sen University spotted certain signatures of superconductivity at a temperature of around 80 K. However, these signatures only appeared when crystals of the material were placed in a device called a diamond anvil cell (DAC). This device subjects samples of material to extreme pressures of more than 400 GPa (or 4 × 10⁶ atmospheres) as it squeezes them between the flattened tips of two tiny, gem-grade diamond crystals.
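+That conversion is easy to check: taking 1 atm ≈ 1.013 × 10⁵ Pa gives
+$$\frac{400\ \mathrm{GPa}}{1.013\times10^{5}\ \mathrm{Pa\,atm^{-1}}} \approx 3.9\times10^{6}\ \mathrm{atm},$$
+i.e. roughly four million times atmospheric pressure at the diamond tips.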
+The problem, explains Xiaohui Yu of the CAS’ Institute of Physics, is that it is not easy to spot the Meissner effect under such high pressures. This is because the structure of the DAC limits the available sample volume and hinders the use of highly sensitive magnetic measurement techniques such as SQUID. Another problem is that the sample used in the 2023 study contains several competing phases that could mix and degrade the signal of the La3Ni2O7-δ.
+In the new work, Yu and colleagues used nitrogen-vacancy (NV) centres embedded in the DAC as in-situ quantum sensors to track and image the Meissner effect in pressurized bilayer La3Ni2O7-δ. This newly developed magnetic sensing technique boasts both high sensitivity and high spatial resolution, Yu says. What is more, it fits perfectly into the DAC high-pressure chamber.
+Next, they applied a small external magnetic field of around 120 G. Under these conditions, they measured the optically detected magnetic resonance (ODMR) spectra of the NV centres point by point. They could then extract the local magnetic field from the resonance frequencies of these spectra. “We directly mapped the Meissner effect of the bilayer nickelate samples,” Yu says, noting that the team’s image of the magnetic field clearly shows both a diamagnetic region and a region where magnetic flux is concentrated.
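+In the simplest single-axis picture – an illustrative sketch rather than the team’s full analysis, which must also handle stress and the four possible NV orientations – the two ODMR resonances sit symmetrically about the NV zero-field splitting D ≈ 2.87 GHz, and the field component along the NV axis follows from their separation:
+$$f_{\pm} \simeq D \pm \gamma_e B_{\parallel}, \qquad B_{\parallel} \simeq \frac{f_+ - f_-}{2\gamma_e}, \qquad \gamma_e \approx 2.8\ \mathrm{MHz\,G^{-1}}.$$
+A change of one gauss in the local field therefore shifts each resonance by about 2.8 MHz, which is the scale on which the sample’s diamagnetic response is read out on top of the roughly 120 G bias field.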
+The researchers began their project in late 2023, shortly after receiving single-crystal samples of La3Ni2O7-δ from Wang. “However, after two months of collecting data, we still had no meaningful results,” Yu recalls. “From these experiments, we learnt that the demagnetization signal in La3Ni2O7-δ crystals was quite weak and that we needed to improve either the nickelate sample or the sensitivity of the quantum sensor.”
+To overcome these problems, they switched to using polycrystalline samples, enhancing the quality of the nickelate samples by doping them with praseodymium to make La2PrNi2O7. This produced a sample with an almost pure bilayer structure and thus a much stronger demagnetization signal. They also used shallow NV centres implanted on the DAC culet (the small flat face at the tip of each diamond anvil).
+“Unlike the NV centres in the original experiments, which were randomly distributed in the pressure-transmitting medium and have relatively large ODMR widths, leading to only moderate sensitivity in the measurements, these shallow centres are evenly distributed and well aligned, making it easier for us to perform magnetic imaging with increased sensitivity,” Yu explains.
+These improvements enabled the team to obtain a demagnetization signal from the La2PrNi2O7 and La3Ni2O7-δ samples, he tells Physics World. “We found that the diamagnetic signal from the La2PrNi2O7 samples is about five times stronger than that from the La3Ni2O7-δ ones prepared under similar conditions – a result that is consistent with the fact that the Pr-doped samples are of a better quality.”
+ +Physicist Jun Zhao of Fudan University, China, who was not involved in this work, says that Yu and colleagues’ measurement represents “an important step forward” in nickelate research. “Such measurements are technically very challenging, and their success demonstrates both experimental ingenuity and scientific significance,” he says. “More broadly, their result strengthens the case for pressurized nickelates as a new platform to study high-temperature superconductivity beyond the cuprates. It will certainly stimulate further efforts to unravel the microscopic pairing mechanism.”
+As well as allowing for the precise sensing of magnetic fields, NV centres can also be used to accurately measure many other physical quantities that are difficult to measure under high pressure, such as strain and temperature distribution. Yu and colleagues say they are therefore looking to further expand the application of these structures for use as quantum sensors in high-pressure sensing.
+They report their current work in National Science Review.
+The post Quantum sensors reveal ‘smoking gun’ of superconductivity in pressurized bilayer nickelates appeared first on Physics World.
+]]>The post Quantum foundations: towards a coherent view of physical reality appeared first on Physics World.
+]]>Yet as we celebrate these achievements, we should still reflect on what quantum mechanics reveals about the world itself. What, for example, does this formalism actually tell us about the nature of reality? Do quantum systems have definite properties before we measure them? Do our observations create reality, or merely reveal it?
+These are not just abstract, philosophical questions. Having a clear understanding of what quantum theory is all about is essential to its long-term coherence and its capacity to integrate with the rest of physics. Unfortunately, there is no scientific consensus on these issues, which continue to provoke debate in the research community.
+That uncertainty was underlined by a recent global survey of physicists about quantum foundational issues, conducted by Nature (643 1157). It revealed a persistent tension between “realist” views, which seek an objective, visualizable account of quantum phenomena, and “epistemic” views that regard the formalism as merely a tool for organizing our knowledge and predicting measurement outcomes.
+Only 5% of the 1100 people who responded to the Nature survey expressed full confidence in the Copenhagen interpretation, which is still prevalent in textbooks and laboratories. Further divisions were revealed over whether the wavefunction is a physical entity, a mere calculation device, or a subjective reflection of belief. The lack of agreement on such a central feature underscores the theoretical fragility underlying quantum mechanics.
++The willingness to explore alternatives reflects the intellectual vitality of the field but also underscores the inadequacy of current approaches
+
More broadly, 75% of respondents believe that quantum theory will eventually be replaced, at least partially, by a more complete framework. Encouragingly, 85% agree that attempts to interpret the theory in intuitive or physical terms are valuable. This willingness to explore alternatives reflects the intellectual vitality of the field but also underscores the inadequacy of current approaches.
+We believe that this interpretative proliferation stems from a deeper problem, which is that quantum mechanics lacks a well-defined physical foundation. It describes the statistical outcomes of measurements, but it does not explain the mechanisms behind them. The concept of causality has been largely abandoned in favour of operational prescriptions, with the result that quantum theory works impressively in practice but remains conceptually opaque.
+In our view, the way forward is not to multiply interpretations or continue debating them, but to pursue a deeper physical understanding of quantum phenomena. One promising path is stochastic electrodynamics (SED), a classical theory augmented by a random electromagnetic background field, the real vacuum or zero-point field discovered by Max Planck as early as 1911. This framework restores causality and locality by explaining quantum behaviour as the statistical response of particles to this omnipresent background field.
+ +Over the years, several researchers from different lines of thought have contributed to SED. Since our early days with Trevor Marshall, Timothy Boyer and others, we have refined the theory to the point that it can now account for the emergence of features that are considered building blocks of quantum formalism, such as the basic commutator and Heisenberg inequalities.
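+For readers wanting those relations spelled out, they are the standard ones of quantum mechanics, which any candidate underlying theory must recover:
+$$[\hat{x},\hat{p}] = i\hbar, \qquad \Delta x\,\Delta p \ge \frac{\hbar}{2}.$$
+In the SED picture, the claim is that such relations emerge statistically from a particle’s stationary response to the zero-point field, rather than being postulated at the outset.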
+Particles acquire wave-like properties not by intrinsic duality, but as a consequence of their interaction with the vacuum field. Quantum fluctuations, interference patterns and entanglement emerge from this interaction, without the need to resort to non-local influences or observer-dependent realities. The SED approach is not merely mechanical, but rather electrodynamic.
+We’re not claiming that SED is the final word. But it does offer a coherent picture of microphysical processes based on physical fields and forces. Importantly, it doesn’t abandon the quantum formalism but rather reframes it as an effective theory – a statistical summary of deeper dynamics. Such a perspective enables us to maintain the successes of quantum mechanics while seeking to explain its origins.
+ +For us, SED highlights that quantum phenomena can be reconciled with concepts central to the rest of physics, such as realism, causality and locality. It also shows that alternative approaches can yield testable predictions and provide new insights into long-standing puzzles. One phenomenon lying beyond current quantum formalism that we could now test, thanks to progress in experimental physics, is the predicted violation of Heisenberg’s inequalities over very short time periods.
+As quantum science continues to advance, we must not lose sight of its conceptual foundations. Indeed, a coherent, causally grounded understanding of quantum mechanics is not a distraction from technological progress but a prerequisite for its full realization. By turning our attention once again to the foundations of the theory, we may finally complete the edifice that began to rise a century ago.
+The centenary of quantum mechanics should be a time not just for celebration but critical reflection too.
+This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
+Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.
+Find out more on our quantum channel.
++
The post Quantum foundations: towards a coherent view of physical reality appeared first on Physics World.
+]]>The post Twisted graphene reveals a new type of chirality appeared first on Physics World.
+]]>Traditionally, topological phenomena have been studied in spinful systems, where the presence of spin allows for chiral interactions and symmetry-breaking effects. This new study challenges that paradigm by demonstrating that topological chirality can arise even in spinless systems, purely from the three-dimensional structural arrangement of otherwise featureless units.
+The researchers mathematically investigated two types of twisted 3D graphite systems composed of stacked 2D graphene layers, importantly using a large twist angle of 21.8°. In one configuration, the layers are twisted into a helical screw-like structure, while in the other, the twist angles alternate between layers, forming a periodic chiral pattern. These structural designs give rise to novel topological phases.
+A key mechanism underlying these effects is intervalley Umklapp scattering. This scattering captures the chirality of the twisted interfaces and induces a sign-flipped interlayer hopping, by introducing a π-flux lattice gauge field. This field alters the symmetry algebra of the system, enabling the emergence of spinless topological chirality.
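+The connection between a π flux and sign-flipped hopping can be stated compactly. In a tight-binding description, a lattice gauge field enters through phase factors attached to the hopping amplitudes, and a flux of π threading a plaquette means those phases multiply to −1 around it – equivalent to reversing the sign of one bond:
+$$t_{ij} \rightarrow t_{ij}\,e^{iA_{ij}}, \qquad \prod_{\square} e^{iA_{ij}} = e^{i\pi} = -1.$$
+(This is a generic statement about π-flux lattice models, included for orientation rather than drawn from the specific model in the paper.)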
+This research opens up a new design principle for topological materials. By engineering the spatial patterning of structureless units, researchers can induce topological chirality without relying on spin. This has significant implications for the development of topological photonic and acoustic devices, potentially leading to simpler, more tunable materials for applications in quantum computing, sensing, and waveguiding technologies.
+Spinless topological chirality from Umklapp scattering in twisted 3D structures
+Cong Chen et al 2025 Rep. Prog. Phys. 88 018001
++
Interacting topological insulators: a review by Stephan Rachel (2018)
+The post Twisted graphene reveals a new type of chirality appeared first on Physics World.
+]]>The post Unveiling topological edge states with attosecond precision appeared first on Physics World.
+A Chern insulator is a unique material that acts as an insulator in its bulk but conducts electricity along its edges. These edge states arise from the topology of the material’s bulk crystal structure. Unlike quantum Hall systems, Chern insulators do not require external magnetic fields. Their edge conduction is topologically protected, meaning it is highly resistant to defects and noise. This makes them promising candidates for quantum technologies, spintronics, and energy-efficient electronics.
+In this study, researchers developed a new method to detect phase changes in Chern insulators. Using numerical simulations, they demonstrated that attosecond x-ray absorption spectroscopy, combined with polarization-dependent dichroism, can effectively reveal these transitions. Their semi-classical approach isolates the intra-band Berry connection, providing deeper insight into how topological edge states form and how electrons behave in these systems.
+This work represents a significant advance in topological materials research. It offers a new way to observe changes in quantum materials in real time, expands the use of attosecond spectroscopy from simple atoms and molecules to complex solids, and opens the door to studying dynamic systems like Floquet topological insulators.
+Topological phase transitions via attosecond x-ray absorption spectroscopy
+Juan F P Mosquera et al 2024 Rep. Prog. Phys. 87 117901
++
+Strong-laser-field physics, non-classical light states and quantum information science by U Bhattacharya, Th Lamprou, A S Maxwell, A Ordóñez, E Pisanty, J Rivera-Dean, P Stammer, M F Ciappina, M Lewenstein and P Tzallas (2023)
+The post Unveiling topological edge states with attosecond precision appeared first on Physics World.
+]]>The post Broadband wireless gets even broader thanks to integrated transmitter appeared first on Physics World.
+Modern complementary metal oxide semiconductor (CMOS) electronic devices generally produce signals at frequencies of a few GHz. These signals are then often shifted into other frequency bands for processing and transmission. For example, sending electronic signals long distances down silica optical fibres generally means using a frequency of around 200 THz, as silica is highly transparent at the corresponding “telecoms” wavelength of 1550 nm.
+ +One of the most popular materials for performing this conversion is lithium niobate. This material has been called “the silicon of photonics” because it is highly nonlinear, allowing optical signals to be generated efficiently at a wide range of frequencies.
+In integrated devices, bulk lithium niobate modulators are undesirable. However, in 2018 Cheng Wang and colleagues led by Marko Lončar of Harvard University in Massachusetts, US, developed a miniaturized, thin-film version that used an interferometric design to create a much stronger electro-optic effect in a shorter distance. “Usually, the bandwidth limit is set by the radiofrequency loss,” explains Wang, who is now at the City University of Hong Kong, China. “Being shorter means you can go to much higher frequencies.”
+In the new work, Wang, together with researchers at Peking University in China and the University of California, Santa Barbara in the US, used an optimized version of this setup to make a broadband data transmission system. They divided the output of a telecom-wavelength oscillator into two arms. In one of these arms, optical signal modulation software imprinted a complex amplitude-phase pattern on the wave. The other arm was exposed to the data signal and a lithium niobate microring resonator. The two arms were then recombined at a photodetector, and the frequency difference between the two arms (in the GHz range) was transmitted using an antenna to a detector, where the process was reversed.
+Crucially, the offset between the centre frequencies of the two arms (the frequency of the beat note at the photodetector when the two arms are recombined) is determined solely by the frequency shift imposed by the lithium niobate resonator. This can be tuned anywhere between 0.5 GHz and 115 GHz via the thermo-optic effect – essentially, incorporating a small electronic heater and using it to tune the refractive index. The signal is then encoded in modulations of the beat frequency, with additional information imprinted into the phase of the waves.
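+In outline this is a heterodyne scheme: if the two arms reach the photodetector with optical frequencies f₁ and f₂, the detected photocurrent oscillates at the difference frequency,
+$$I(t) \propto \cos\!\big[2\pi(f_1 - f_2)\,t + \Delta\phi(t)\big],$$
+so tuning the microring sets the microwave carrier anywhere in the quoted 0.5–115 GHz range, while the data ride on the amplitude and phase Δφ imprinted in the modulated arm. (A schematic description of the principle rather than the authors’ exact signal model.)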
+The researchers say this system is an improvement on standard electronic amplifiers because such devices usually operate in relatively narrow bands. Using them to make large jumps in frequency therefore means that signals need to be shifted multiple times. This introduces cumulative noise into the signal and is also problematic for applications such as robotic surgery, where the immediate arrival of a signal can literally be a matter of life and death.
+The researchers demonstrated wireless data transfer across a distance of 1.3 m, achieving speeds of up to 100 gigabits per second. In the present setup, they used three different horn antennas to transmit microwaves of different frequencies through free space, but they hope to improve this: “That is our next goal – to get a fully frequency-tuneable link,” says Peking University’s Haowen Shu.
+The researchers believe such a wideband setup could be crucial to the development of the “Internet of things” in which all sorts of different electronic devices are networked together without unwanted interference. Atmospheric transparency windows below 6 GHz, where loss is lower and propagation lengths are longer, are likely to be key for providing wireless Internet access to rural areas. Meanwhile, higher frequencies – with higher data rates – will probably be needed for augmented reality and remote surgery applications.
+ +Alan Willner, an electrical engineer and optical scientist at the University of Southern California, US, who was not involved in the research, thinks the team is on the right track. “You have lots of spectrum in various radio bands for wireless communications,” he says. “But how are you going to take advantage of these bands to transmit high data rates in a cost-effective and flexible way? Are you going to use multiple different systems – one each for microwave, millimetre wave, and terahertz? Using one tuneable and reconfigurable integrated platform to cover these bands is significantly better. This research is a great step in that direction.”
+The research is published in Nature.
+The post Broadband wireless gets even broader thanks to integrated transmitter appeared first on Physics World.
+]]>The post From ‘rewarding and exciting’ to ‘challenging and overwhelming’: what it means to have a career in intelligence and cyber security appeared first on Physics World.
+]]>
As a physics graduate or an early career researcher looking for a job, you might not think of the UK’s primary intelligence and security agency – Government Communications Headquarters (GCHQ) – as somewhere you might consider. But GCHQ, which covers counter-terrorism, cybersecurity, organized crime and defence support for the UK, hires a vast number of physicists. Indeed, to celebrate the 2025 International Year of Quantum Science and Technology, the agency has hosted many internal talks, informational campaigns and more.
+GCHQ works with the Secret Intelligence Service (MI6), MI5, as well as the armed forces, a number of international partners, and firms in the private sector and academia. To find out more about a career at GCHQ – working with cutting-edge technology to identify, analyse and disrupt threats to the UK – Physics World speaks to two people with academic backgrounds who have a long career at the organization. They tell us about the benefits, the difficulties and the complexity of working at an intelligence agency.
+ +Nia is the deputy director for science at GCHQ, where she has worked for the past 15 years. After studying physics at university, she joined GCHQ as a graduate and has since contributed to a wide range of scientific and technological initiatives in support of national security. She is a Fellow of both the Institute of Physics (IOP), which publishes Physics World, and the Institution of Engineering and Technology (IET).
+Cheryl leads GCHQ’s adoption of quantum technologies. Following a degree in engineering, her career began as an apprentice at an avionics company. Since then, she has had many roles across research and development at GCHQ and across broader UK government departments, with a focus on understanding and implementing emerging technology. Cheryl is a Fellow of the IET and a Member of the IOP.
+Nia My fascination with science was nurtured from a young age, largely inspired by my parents. My mum was a physics teacher, and my dad is a passionate historian with an insatiable curiosity about the world. Growing up in an environment rich with books, experiments, and discussions about how things work – whether exploring astrophysics, geology or ancient Egypt – instilled in me a lifelong desire to understand our universe. My mum’s electronics, mechanics and physics lessons meant there were always breadboards, crocodile clips and even a Van de Graaff generator in the house, transforming learning into an exciting tangible experience.
+Cheryl As a child I was always interested in nature and in how things work. I used to build bug farms in the garden and still have my old Observer’s books with the butterflies, etc, ticked off when spotted. Leaning towards my practical side of constantly making things (and foolishly believing my careers teacher that a physics degree would only lead to teaching), I took physics, chemistry and maths A-levels and a degree in engineering.
+Nia I was born and grew up in South Wales and attended a Welsh-language school where I studied physics, maths and chemistry at A-level. I then studied physics at Durham University for four years, before I started working at GCHQ as a graduate. My first role was in an area that is now the National Cyber Security Centre (NCSC). As the cyber security arm of GCHQ, it researches the reliability of semiconductors in national security applications and uses that research to shape policy and security standards. This was great for me as my final year in university was focused on material science and condensed matter physics which came in very useful.
+Cheryl My engineering degree apprenticeship was through an aerospace company in Cheltenham, and I worked there afterwards designing test kits for the RAF. It was almost natural that I should at least try a few years at GCHQ as a local employer and I had plans to then move to other R&D labs.
+Nia Working at GCHQ is rewarding and exciting especially as we look at the most exciting developments in emerging technologies. It can also be challenging especially when navigating the complexities of global security challenges amid an unpredictable geopolitical landscape. There are days when media reports or international events feel overwhelming, but knowing that my work contributes towards safeguarding the UK’s interests today and into the future offers a strong sense of purpose.
+The most rewarding aspect, by far, is the people. We have some of the brightest, most dedicated experts – mentors, colleagues, friends – whose commitment inspires me daily. Their support and collaboration make even the most demanding days manageable.
+Cheryl At GCHQ I found that I have been able to enjoy several very different “careers” within the organization, including opportunities to travel and to develop diverse skills. This, together with a flexibility to change working patterns to suit stages of family life, has meant I have stayed for most of my career.
++I’ve had some amazing and unique opportunities and experiences
+Cheryl, GCHQ
I’ve had some amazing and unique opportunities and experiences. In the Cheltenham area it’s accepted that so many people work here, and it is widely respected that we cannot talk about the detail of what we do.
+
Nia As deputy director of science at GCHQ, my role involves collaborating with experts to understand how emerging technologies, including quantum science, impact national security. Quantum offers extraordinary potential for secure communication and advanced sensing – but it equally threatens to upend existing security protocols if adversaries harness it maliciously. A deep understanding of physics is crucial – not only to spot opportunities but also to anticipate and counter threats.
+ +Quantum science is just one example of how a fundamental understanding of physics and maths gives you the foundations to understand the broad waterfront of emerging technologies coming our way. We work closely with government departments, academia, industry and start-ups to ensure the UK remains at the forefront of this field, shaping a resilient and innovative security ecosystem.
+Cheryl I first came across quantum science, technologies and quantum computing around 15 years ago through an emerging technology analysis role in R&D; and I watched and learned keenly as I could see that these would be game changing. Little did I know at the time that I would later be leading our adoption of quantum and just how significant these emerging technologies for sensing, timing and computing would grow to be.
+The UK national ecosystem developing around quantum technologies is a great mix of minds from academia, industry and government departments and is one of the most collegiate, inspiring and well-motivated communities that I have interacted with.
+Nia Many people will have heard of historic tales of the tap on the shoulder for people to work in intelligence agencies, but as with all other jobs the reality is that people can find out about careers at GCHQ in much the same way they would with any other kind of job.
++Maintaining a hunger to learn and adapt is what will set you apart
+Nia, GCHQ
I would emphasize qualities like curiosity, problem-solving and resilience as being key. The willingness to roll up your sleeves, a genuine care for collaborative work, and empathy are equally important – particularly because much of what we do is sensitive and demands trust and discretion. Maintaining a hunger to learn and adapt is what will set you apart.
+Cheryl We have roles where you will be helping to solve complex problems – doing work you simply won’t find anywhere else. It’s key to have curiosity, an open mind and don’t be put off by the fact you can’t ask too many questions in advance!
+Nia Diversity and inclusion are mission-critical for us at GCHQ, gathering the right mix of minds to find innovative solutions to the toughest of problems. We’re committed to building on our work to better represent the communities we serve, including increasing the number of people from ethnic minority backgrounds and the number of women in senior roles.
+Cheryl We are committed to having a workforce that reflects the communities we serve. Our locations in the north-west, in both Manchester and now Lancashire, are part of the mission to find the right mix of minds.
+Nia One key lesson is that career paths are rarely linear. When starting out, uncertainty can feel daunting, but it’s an opportunity for growth. Embrace challenges and seize opportunities that excite you – whether they seem narrowly related to your studies or not. Every experience contributes to your development. Additionally, don’t underestimate the importance of work–life balance. GCHQ offers a supportive environment – remember, careers are marathons, not sprints. Patience and curiosity will serve you well.
+Cheryl It takes multidisciplinary teams to deliver game-changers and new ecosystems. Your initial “career choices” are just a stepping stone from which you can forge your own path and follow your instincts.
+The post From ‘rewarding and exciting’ to ‘challenging and overwhelming’: what it means to have a career in intelligence and cyber security appeared first on Physics World.
+]]>The post Desert dust helps freeze clouds in the northern hemisphere appeared first on Physics World.
+In the study, which was led by environmental scientist Diego Villanueva of ETH Zürich, the researchers focused on clouds in the so-called mixed-phase regime, which form at temperatures of between −39 °C and 0 °C and are commonly found in mid- and high-latitudes, particularly over the North Atlantic, Siberia and Canada. These mixed-phase regime clouds (MPRCs) are often topped by a liquid or ice layer, and their makeup affects how much sunlight they reflect back into space and how much water they can release as rain or snow. Understanding them is therefore important for forecasting weather and making projections of future climate.
+ +Researchers have known for a while that MPRCs are extremely sensitive to the presence of ice-nucleating particles in their environment. Such particles mainly come from mineral dust aerosols (such as K-feldspar, quartz, albite and plagioclase) that get swept up into the upper atmosphere from deserts. The Sahara Desert in northern Africa, for example, is a prime source of such dust in the Northern Hemisphere.
+Using 35 years of satellite data collected as part of the Cloud_cci project and MERRA-2 aerosol reanalyses, Villanueva and colleagues looked for correlations between dust levels and the formation of ice-topped clouds. They found that at temperatures of between -15°C and -30°C, the more dust there was, the more frequent the ice clouds were. What is more, their calculated increase in ice-topped clouds with increasing dust loading agrees well with previous laboratory experiments that predicted how dust triggers droplet freezing.
+The new study, which is detailed in Science, shows that there is a connection between aerosols in the micrometre-size range and cloud ice observed over distances of several kilometres, Villanueva says. “We found that it is the nanoscale defects on the surface of dust aerosols that trigger ice clouds, so the process of ice glaciation spans more than 15 orders of magnitude in length,” he explains.
+Thanks to this finding, Villanueva tells Physics World that climate modellers can use the team’s dataset to better constrain aerosol-cloud processes, potentially helping them to construct better estimates of cloud feedback and global temperature projections.
+ +The result also shows how sensitive clouds are to varying aerosol concentrations, he adds. “This could help bring forward the field of cloud seeding and include this in climate geoengineering efforts.”
+The researchers say they have successfully replicated their results using a climate model and are now drafting a new manuscript to further explore the implications of dust-driven cloud glaciation for climate, especially for the Arctic.
+The post Desert dust helps freeze clouds in the northern hemisphere appeared first on Physics World.
+]]>The post Radioactive ion beams enable simultaneous treatment and imaging in particle therapy appeared first on Physics World.
+]]>Particle therapy using beams of protons or heavy ions is a highly effective cancer treatment, with the favourable depth–dose deposition – the Bragg peak – providing extremely conformal tumour targeting. This conformality, however, makes particle therapy particularly sensitive to range uncertainties, which can impact the Bragg peak position.
+One way to reduce such uncertainties is to use positron emission tomography (PET) to map the isotopes generated as the treatment beam interacts with tissues in the patient. For therapy with carbon (12C) ions, currently performed at 17 centres worldwide, this involves detecting the beta decay of 10C and 11C projectile fragments. Unfortunately, such fragments generate a small PET signal, while their lower mass shifts the measured activity peak away from the Bragg peak.
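+The detection chain behind this is the standard PET one: a proton-rich fragment undergoes β⁺ decay and the emitted positron annihilates with an electron into a pair of back-to-back photons, for example
+$$^{11}\mathrm{C} \;\rightarrow\; ^{11}\mathrm{B} + e^{+} + \nu_e, \qquad e^{+} + e^{-} \rightarrow 2\gamma\ (511\ \mathrm{keV\ each}),$$
+and it is these coincident 511 keV photons that the scanner reconstructs into an activity map.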
+ +The researchers – working within the ERC-funded BARB (Biomedical Applications of Radioactive ion Beams) project – propose that treatment with positron-emitting ions such as 11C could overcome these obstacles. Radioactive ion beams have the same biological effectiveness as their corresponding stable ion beams, but generate an order of magnitude larger PET signal. They also reduce the shift between the activity and dose peaks, enabling precise localization of the ion beam in vivo.
+“Range uncertainty remains the main problem of particle therapy, as we do not know exactly where the Bragg peak is,” explains Marco Durante, head of biophysics at the GSI Helmholtz Centre for Heavy Ion Research and principal investigator of the BARB project. “If we ‘aim-and-shoot’ using a radioactive beam and PET imaging, we can see where the beam is and can then correct it. By doing this, we can reduce the margins around the target that spoil the precision of particle therapy.”
+To test this premise, Durante and colleagues performed in vivo experiments at the GSI/FAIR accelerator facility in Darmstadt. For online range verification, they used a portable small-animal in-beam PET scanner built by Katia Parodi and her team at LMU Munich. The scanner, initially designed for the ERC project SIRMIO (Small-animal proton irradiator for research in molecular image-guided radiation-oncology), contains 56 depth-of-interaction detectors – based on scintillator blocks of pixelated LYSO crystals – arranged spherically with an inner diameter of 72 mm.
+
“Not only does our spherical in-beam PET scanner offer unprecedented sensitivity and spatial resolution, but it also enables on-the-fly monitoring of the activity implantation for direct feedback during irradiation,” says Parodi, co-principal investigator of the BARB project.
+The researchers used a radioactive 11C-ion beam – produced at the GSI fragment separator – to treat 32 mice with an osteosarcoma tumour implanted in the neck near the spinal cord. To encompass the full target volume, they employed a range modulator to produce a spread-out Bragg peak (SOBP) and a plastic compensator collar, which also served to position and immobilize the mice. The anaesthetized animals were placed vertically inside the PET scanner and treated with either 20 or 5 Gy at a dose rate of around 1 Gy/min.
+For each irradiation, the team compared the measured activity with Monte Carlo-simulated activity based on pre-treatment microCT scans. The activity distributions were shifted by about 1 mm, attributed to anatomical changes between the scans (with mice positioned horizontally) and irradiation (vertical positioning). After accounting for this anatomical shift, the simulation accurately matched the measured activity. “Our findings reinforce the necessity of vertical CT planning and highlight the potential of online PET as a valuable tool for upright particle therapy,” the researchers write.
+With the tumour so close to the spine, even small range uncertainties risk damage to the spinal cord, so the team used the online PET images generated during the irradiation to check that the SOBP did not cover the spine. While this was not seen in any of the animals, Durante notes that if it had, the beam could be moved to enable “truly adaptive” particle therapy. Assessing the mice for signs of radiation-induced myelopathy (which can lead to motor deficits and paralysis) revealed that no mice exhibited severe toxicity, further demonstrating that the spine was not exposed to high doses.
+
Following treatment, tumour measurements revealed complete tumour control after 20 Gy irradiation and prolonged tumour growth delay after 5 Gy, suggesting complete target coverage in all animals.
+The researchers also assessed the washout of the signal from the tumour, which includes a slow activity decrease due to the decay of 11C (which has a half-life of 20.34 min), plus a faster decrease as blood flow removes the radioactive isotopes from the tumour. The results showed that the biological washout was dose-dependent, with the fast component visible at 5 Gy but disappearing at 20 Gy.
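+A common way to describe such curves – offered here as a sketch of what “fast” and “slow” mean, not necessarily the exact parameterization used by the team – is a physical-decay factor multiplied by a two-component biological washout:
+$$A(t) = A_0\,e^{-\ln 2\,t/T_{1/2}}\Big[f\,e^{-t/\tau_{\mathrm{fast}}} + (1-f)\,e^{-t/\tau_{\mathrm{slow}}}\Big], \qquad T_{1/2}(^{11}\mathrm{C}) \approx 20.3\ \mathrm{min}.$$
+In this language, the observation is that the fast fraction f is appreciable after the 5 Gy irradiation but essentially vanishes at 20 Gy.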
+“We propose that this finding is due to damage to the blood vessel feeding the tumour,” says Durante. “If this is true, high-dose radiotherapy may work in a completely different way from conventional radiotherapy: rather than killing all the cancer stem cells, we just starve the tumour by damaging the blood vessels.”
+Next, the team intends to investigate the use of 10C or 15O treatment beams, which should provide stronger signals and increased temporal resolution. A new Super-FRS fragment separator at the FAIR accelerator facility will provide the high-intensity beams required for studies with 10C.
+ +Looking further ahead, clinical translation will require a realistic and relatively cheap design, says Durante. “CERN has proposed a design [the MEDICIS-Promed project] based on ISOL [isotope separation online] that can be used as a source of radioactive beams in current accelerators,” he tells Physics World. “At GSI we are also working on a possible in-flight device for medical accelerators.”
+The findings are reported in Nature Physics.
+The post Radioactive ion beams enable simultaneous treatment and imaging in particle therapy appeared first on Physics World.
+]]>The post Garbage in, garbage out: why the success of AI depends on good data appeared first on Physics World.
+]]>In many respects, AI is very similar to other data-analytics solutions in that how it works depends on two things. One is the quality of the input data. The other is the integrity of the user to ensure that the outputs are fit for purpose.
+Previously a niche tool for specialists, AI is now widely available for general-purpose use, in particular through Generative AI (GenAI) tools. Also known as Large Language Models (LLMs), these are now accessible through, for example, OpenAI’s ChatGPT, Microsoft Co-pilot, Anthropic’s Claude, Adobe Firefly or Google Gemini.
+GenAI has become possible thanks to the availability of vast quantities of digitized data and significant advances in computing power. These models are based on neural networks, and models of this size would in fact have been impossible without those two fundamental ingredients.
+GenAI is incredibly powerful when it comes to searching and summarizing large volumes of unstructured text. It exploits unfathomable amounts of data and is getting better all the time, offering users significant benefits in terms of efficiency and labour saving.
+Many people now use it routinely for writing meeting minutes, composing letters and e-mails, and summarizing the content of multiple documents. AI can also tackle complex problems that would be difficult for humans to solve, such as climate modelling, drug discovery and protein-structure prediction.
+I’d also like to give a shout out to tools such as Microsoft Live Captions and Google Translate, which help people from different locations and cultures to communicate. But like all shiny new things, AI comes with caveats, which we should bear in mind when using such tools.
+LLMs, by their very nature, have been trained on historical data. They can’t therefore tell you exactly what may happen in the future, or indeed what may have happened since the model was originally trained. Models can also be constrained in their answers.
+Take the Chinese AI app DeepSeek. When the BBC asked it what had happened at Tiananmen Square in Beijing on 4 June 1989 – when Chinese troops cracked down on protestors – the chatbot’s answer was suppressed. Now, this is a very obvious piece of information control, but subtler instances of censorship will be harder to spot.
++Trouble is, we can’t know all the nuances of the data that models have been trained on
+
We also need to be conscious of model bias. At least some of the training data will probably come from social media and public chat forums such as X, Facebook and Reddit. Trouble is, we can’t know all the nuances of the data that models have been trained on – or the inherent biases that may arise from this.
+One example of unfair gender bias was when Amazon developed an AI recruiting tool. Based on 10 years’ worth of CVs – mostly from men – the tool was found to favour men. Thankfully, Amazon ditched it. But then there was Apple’s gender-biased credit-card algorithm that led to men being given higher credit limits than women of similar ratings.
+Another problem with AI is that it sometimes acts as a black box, making it hard for us to understand how, why or on what grounds it arrived at a certain decision. Think about those online Captcha tests we have to take when accessing online accounts. They often present us with a street scene and ask us to select those parts of the image containing a traffic light.
+The tests are designed to distinguish between humans and computers or bots – the expectation being that AI can’t consistently recognize traffic lights. However, AI-based advanced driver assist systems (ADAS) presumably perform this function seamlessly on our roads. If not, surely drivers are being put at risk?
+A colleague of mine, who drives an electric car that happens to share its name with a well-known physicist, confided that the ADAS in his car becomes unresponsive, especially when at traffic lights with filter arrows or multiple sets of traffic lights. So what exactly is going on with ADAS? Does anyone know?
+My message when it comes to AI is simple: be careful what you ask for. Many GenAI applications will store user prompts and conversation histories and will likely use this data for training future models. Once you enter your data, there’s no guarantee it’ll ever be deleted. So think carefully before sharing any personal data, such as medical or financial information. It also pays to keep prompts non-specific (avoiding using your name or date of birth) so that they cannot be traced directly to you.
+Democratization of AI is a great enabler and it’s easy for people to apply it without an in-depth understanding of what’s going on under the hood. But we should be checking AI-generated output before we use it to make important decisions and we should be careful of the personal information we divulge.
+It’s easy to become complacent when we are not doing all the legwork. We are reminded under the terms of use that “AI can make mistakes”, but I wonder what will happen if models start consuming AI-generated erroneous data. Just as with other data-analytics problems, AI suffers from the old adage of “garbage in, garbage out”.
+But sometimes I fear it’s even worse than that. We’ll need a collective vigilance to avoid AI being turned into “garbage in, garbage squared”.
+The post Garbage in, garbage out: why the success of AI depends on good data appeared first on Physics World.
+]]>The post Why foamy heads on Belgian beers last so long appeared first on Physics World.
+]]>When it comes to beer, a clear sign of a good brew is a big head of foam at the top of a poured glass.
+Beer foam is made of many small bubbles of air, separated from each other by thin films of liquid. These thin films must remain stable, or the bubbles will pop, and the foam will collapse.
+What holds these thin films together is not completely understood, but it is likely to involve conglomerates of proteins, surface viscosity or the presence of surfactants – molecules that reduce surface tension and are found in soaps and detergents.
+To find out more, researchers from ETH Zurich and Eindhoven University of Technology (EUT) investigated beer-foam stability for different types of beers at varying stages of the fermentation process.
+They found that for single-fermentation beers, the foams are mostly held together by the surface viscosity of the beer. This is influenced by proteins in the beer – the more proteins it contains, the more viscous the film and the more stable the foam will be.
+“We can directly visualize what’s happening when two bubbles come into close proximity,” notes EUT material scientist Emmanouil Chatzigiannakis. “We can directly see the bubble’s protein aggregates, their interface, and their structure.”
+When it comes to double-fermented beers, however, the proteins in the beer are altered slightly by yeast cells and come together to form a two-dimensional membrane that keeps foam intact longer.
+The head was found to be even more stable for triple-fermented beers, which include Belgian Trappist beers. The proteins change further and behave like a surfactant that stabilizes the bubbles.
+The team says that understanding how the fermentation process alters the stability of bubbles could lead to more efficient ways of creating foams – or to ways of controlling the amount of froth so that everyone can pour a perfect glass of beer every time. Cheers!
+The post Why foamy heads on Belgian beers last so long appeared first on Physics World.
+]]>The post Making molecules with superheavy elements could shake up the periodic table appeared first on Physics World.
+]]>“We compared the chemical properties of nobelium side-by-side to simultaneously produced molecules containing actinium (element number 89),” says Pore, a research scientist at LBNL. “The success of these measurements demonstrates the possibility to further improve our understanding of heavy and superheavy-element chemistry and so ensure that these elements are placed correctly on the periodic table.”
+The periodic table currently lists 118 elements. As well as vertical “groups” containing elements with similar properties and horizontal “periods” in which the number of protons (atomic number Z) in the nucleus increases from left to right, these elements are arranged in three blocks. The block that contains actinides such as actinium (Ac) and nobelium (No), as well as the slightly lighter lanthanide series, is often shown offset, below the bottom of the main table.
+Arranging the elements this way is helpful because it gives scientists an intuitive feel for the chemical properties of different elements. It has even made it possible to predict the properties of new elements as they are discovered in nature or, more recently, created in the laboratory.
+The problem is that the traditional patterns we’ve come to know and love may start to break down for elements at the bottom of the table, putting an end to the predictive periodic table as we know it. The reason, Pore explains, is that these heavy nuclei have a very large number of protons. In the actinides (Z > 88), for example, the intense charge of these “extra” protons exerts such a strong pull on the inner electrons that relativistic effects come into play, potentially changing the elements’ chemical properties.
+“As some of the electrons are sucked towards the centre of the atom, they shield some of the outer electrons from the pull,” Pore explains. “The effect is expected to be even stronger in the superheavy elements, and this is why they might potentially not be in the right place on the periodic table.”
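+A quick hydrogen-like estimate (a textbook back-of-the-envelope calculation, not a figure from the new paper) shows why relativity matters for an element as heavy as nobelium: the innermost electron moves at a sizeable fraction of the speed of light.
```latex
% Hydrogen-like (Bohr-model) estimate for the 1s electron of nobelium (Z = 102):
% the electron speed is roughly Z*alpha*c, giving a relativistic mass increase
% gamma of about 1.5; since orbital radii scale as 1/(gamma*m_e), the inner
% orbitals contract and shield the outer electrons more strongly.
\[
  \frac{v_{1s}}{c} \approx Z\alpha \approx \frac{102}{137} \approx 0.74,
  \qquad
  \gamma = \frac{1}{\sqrt{1-(v_{1s}/c)^{2}}} \approx 1.5 .
\]
```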
+Understanding the full impact of these relativistic effects is difficult because elements heavier than fermium (Z = 100) need to be produced and studied atom by atom. This means resorting to complex equipment such as accelerated ion beams and the FIONA (For the Identification Of Nuclide A) device at LBNL’s 88-Inch Cyclotron Facility.
+The team chose to study Ac and No in part because they represent the extremes of the actinide series. As the first in the series, Ac has no electrons in its 5f shell and is so rare that the crystal structure of an actinium-containing molecule was only determined recently. The chemistry of No, which contains a full complement of 14 electrons in its 5f shell and is the heaviest of the actinides, is even less well known.
+In the new work, which is described in Nature, Pore and colleagues produced and directly identified molecular species containing Ac and No ions. To do this, they first had to produce Ac and No. They achieved this by accelerating beams of 48Ca with the 88-Inch Cyclotron and directing them onto targets of 169Tm and 208Pb, respectively. They then used the Berkeley Gas-filled Separator to separate the resulting actinide ions from unreacted beam material and reaction by-products.
+The next step was to inject the ions into a chamber in the FIONA spectrometer known as a gas catcher. This chamber was filled with high-purity helium, as well as trace amounts of H2O and N2, at a pressure of approximately 150 torr. After interactions with the helium gas reduced the actinide ions to their 2+ charge state, so-called “coordination compounds” were able to form between the 2+ actinide ions and the H2O and N2 impurities. This compound-formation step took place either in the gas buffer cell itself or as the gas-ion mixture exited the chamber via a 1.3-mm opening and entered a low-pressure (several torr) environment. This transition caused the gas to expand at supersonic speeds, cooling it rapidly and allowing the molecular species to stabilize.
+Once the actinide molecules formed, the researchers transferred them to a radio-frequency quadrupole cooler-buncher ion trap. This trap confined the ions for up to 50 ms, during which time they continued to collide with the helium buffer gas, eventually reaching thermal equilibrium. After they had cooled, the molecules were reaccelerated using FIONA’s mass spectrometer and identified according to their mass-to-charge ratio.
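+The final identification step amounts to comparing each measured mass-to-charge ratio with those predicted for candidate molecular species. A minimal Python sketch of that comparison is shown below; the ligand masses, tolerance and example metal mass are illustrative assumptions rather than values from the study.
```python
# Illustrative sketch only: matching a measured mass-to-charge ratio (m/z) to
# candidate 2+ coordination compounds of the form [M(H2O)x(N2)y]2+.
# The masses below are approximate values in unified atomic mass units (u),
# used for illustration rather than taken from the paper.

from itertools import product

H2O = 18.011   # approximate mass of a water ligand (u)
N2 = 28.006    # approximate mass of a dinitrogen ligand (u)
CHARGE = 2     # the actinide ions are reduced to the 2+ charge state

def candidate_mz(metal_mass, max_ligands=4):
    """Map (x, y) ligand counts to the predicted m/z of [M(H2O)x(N2)y]2+."""
    table = {}
    for x, y in product(range(max_ligands + 1), repeat=2):
        mass = metal_mass + x * H2O + y * N2
        table[(x, y)] = mass / CHARGE
    return table

def identify(measured_mz, metal_mass, tolerance=0.25):
    """List ligand combinations whose predicted m/z falls within the tolerance."""
    return [(xy, mz) for xy, mz in candidate_mz(metal_mass).items()
            if abs(mz - measured_mz) < tolerance]

# Hypothetical example: a metal ion of mass ~254 u carrying one water ligand
print(identify(measured_mz=(254.0 + H2O) / CHARGE, metal_mass=254.0))
```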
+FIONA is much faster than previous such instruments and more sensitive. Both properties are important when studying the chemistry of heavy and superheavy elements, which Pore notes are difficult to make, and which decay quickly. “Previous experiments measured the secondary particles made when a molecule with a superheavy element decayed, but they couldn’t identify the exact original chemical species,” she explains. “Most measurements reported a range of possible molecules and were based on assumptions from better-known elements. Our new approach is the first to directly identify the molecules by measuring their masses, removing the need for such assumptions.”
+As well as improving our understanding of heavy and superheavy elements, Pore says the new work might also benefit the production of radioactive isotopes used in medical treatment. For example, the 225Ac isotope shows promise for treating certain metastatic cancers, but it is difficult to make and only available in small quantities, which limits access for clinical trials and treatment. “This means that researchers have had to forgo fundamental chemistry experiments to figure out how to get it into patients,” Pore notes. “But if we could understand such radioactive elements better, we might have an easier time producing the specific molecules needed.”
+The post Making molecules with superheavy elements could shake up the periodic table appeared first on Physics World.
+]]>The post Super sticky underwater hydrogels designed using data mining and AI appeared first on Physics World.
+]]>The way in which new materials are designed is changing, with data becoming ever more important in the discovery and design process. Designing soft materials is a particularly tricky task that requires selection of different “building blocks” (monomers in polymeric materials, for example) and optimization of their arrangement in molecular space.
+Soft materials also exhibit many complex behaviours that need to be balanced, and their molecular and structural complexities make it difficult for computational methods to help in the design process – often requiring costly trial and error experimental approaches instead. Now, researchers at Hokkaido University in Japan have combined artificial intelligence (AI) with data mining methods to develop an ultra-sticky hydrogel material suitable for very wet environments – a difficult design challenge because the properties that make materials soft don’t usually promote adhesion. They report their findings in Nature.
+Hydrogels are permeable soft materials composed of interlinked polymer networks that hold water within them. They are highly versatile, with properties controlled by altering the chemical makeup and structure of the material.
+Designing hydrogels computationally to perform a specific function is difficult, however, because the polymers used to build the hydrogel network can contain a plethora of chemical functional groups, complicating the discovery of suitable polymers and the structural makeup of the hydrogel. The properties of hydrogels are also influenced by factors including the molecular arrangement and intermolecular interactions between molecules (such as van der Waals forces and hydrogen bonds). There are further challenges for adhesive hydrogels in wet environments, as hydrogels will swell in the presence of water, which needs to be factored into the material design.
+To develop a hydrogel with a strong and lasting underwater adhesion, the researchers mined data from the National Center for Biotechnology Information (NCBI) Protein database. This database contains the amino acid sequences responsible for adhesion in underwater biological systems – such as those found in bacteria, viruses, archaea and eukaryotes. The protein sequences were synthetically mimicked and adapted for the polymer strands in hydrogels.
+“We were inspired by nature’s adhesive proteins, but we wanted to go beyond mimicking a few examples. By mining the entire protein database, we aimed to systematically explore new design rules and see how far AI could push the boundaries of underwater adhesion,” says co-lead author Hailong Fan.
+The researchers used information from the database to initially design and synthesize 180 bioinspired hydrogels, each with a unique polymer network and all of which showed adhesive properties beyond other hydrogels. To improve them further, the team employed machine learning to create hydrogels demonstrating the strongest underwater adhesive properties to date, with instant and repeatable adhesive strengths exceeding 1 MPa – an order-of-magnitude improvement over previous underwater adhesives. In addition, the AI-designed hydrogels were found to be functional across many different surfaces in both fresh and saline water.
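+The paper does not spell out the team’s code, but the broad data-driven loop – describe each candidate gel numerically, train a regression model on measured adhesion strengths, then rank untested compositions – can be sketched roughly as below. The features, the random-forest model and the stand-in data are assumptions for illustration, not the Hokkaido group’s actual pipeline.
```python
# Minimal sketch of a data-driven design loop, assuming:
#  - each candidate hydrogel is described by the fractions of a few monomer types
#  - measured underwater adhesion strengths (MPa) exist for a first batch of gels
# Illustrative only; the measured values below are random stand-ins, not real data.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Features: fractions of (cationic, aromatic, hydrophobic, hydrogen-bonding) monomers
X_measured = rng.dirichlet(np.ones(4), size=180)   # the 180 first-round gels
y_measured = rng.uniform(0.1, 1.0, size=180)       # stand-in adhesion strengths (MPa)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_measured, y_measured)

# Score a large pool of untested compositions and pick the most promising ones
X_pool = rng.dirichlet(np.ones(4), size=10_000)
predicted = model.predict(X_pool)
top = np.argsort(predicted)[::-1][:3]
for i in top:
    print(f"composition {np.round(X_pool[i], 2)} -> predicted adhesion {predicted[i]:.2f} MPa")
```
+In practice the top-ranked candidates would be synthesized and measured, and the new results fed back into the model for the next round of predictions.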
+“The key achievement is not just creating a record-breaking underwater adhesive hydrogel but demonstrating a new pathway – moving from biomimetic experience to data-driven, AI-guided material design,” says Fan.
+The researchers took the three best performing hydrogels and tested them in different wet environments to show that they could maintain their adhesive properties for long time periods. One hydrogel was used to stick a rubber duck to a rock by the sea, which remained in place despite continuous wave impacts over many tide cycles. A second hydrogel was used to patch up a 20 mm hole on a pipe filled with water and instantly stopped a high-pressure leak. This hydrogel remained in place for five months without issue. The third hydrogel was placed under the skin of mice to demonstrate biocompatibility.
+The super-strong adhesion in wet environments could have wide-ranging applications, from biomedical engineering (prosthetic coatings or wearable biosensors) to deep-sea exploration and marine farming. The researchers also note that this data-driven approach could be adapted for designing other functional soft materials.
+When asked about what’s next for this research, Fan says that “our next step is to study the molecular mechanisms behind these adhesives in more depth, and to expand this data-driven design strategy to other soft materials, such as self-healing and biomedical hydrogels”.
+The post Super sticky underwater hydrogels designed using data mining and AI appeared first on Physics World.
+]]>The post From a laser lab to <em>The Economist</em>: physicist Jason Palmer on his move to journalism appeared first on Physics World.
+]]>Palmer did a PhD in chemical physics at Imperial College London before turning his hand to science writing with stints at the BBC and New Scientist.
+He explains how he made the transition from the laboratory to the newsroom and offers tips for scientists planning to make the same career journey. We also chat about how artificial intelligence is changing how journalists work.
+The post From a laser lab to <em>The Economist</em>: physicist Jason Palmer on his move to journalism appeared first on Physics World.
+]]>The post Crainio’s Panicos Kyriacou explains how their light-based instrument can help diagnose brain injury appeared first on Physics World.
+]]>Every three minutes in the UK, someone is admitted to hospital with a head injury – it’s a very common problem. But when someone has a blow to the head, nobody knows how bad it is until they actually reach the hospital. TBI is something that, at the moment, cannot be assessed at the point of injury.
+The period from the time of impact to the time the patient is assessed by a neurosurgical expert is known as the golden hour. And nobody knows what’s happening to the brain during this time – you don’t know how best to manage the patient, whether they have a severe TBI with intracranial pressure rising in the head, or just a concussion or a medium TBI.
+Once at the hospital, the neurosurgeons have to assess the patient’s intracranial pressure, to determine whether it is above the threshold that classifies the injury as severe. And to do that, they have to drill a hole in the head – literally – and place an electrical probe into the brain. This really is one of the most invasive non-therapeutic procedures, and you obviously can’t do this to every patient who comes in with a blow to the head. It has its risks: there is a risk of haemorrhage or of infection.
+Therefore, there’s a need to develop technologies that can measure intracranial pressure more effectively, earlier and in a non-invasive manner. For many years, this was almost like a dream: “How can you access the brain and see if the pressure is rising in the brain, just by placing an optical sensor on the forehead?”
+The research goes back to 2016, at the Research Centre for Biomedical Engineering at City, University of London (now City St George’s, University of London), when the National Institute for Health Research (NIHR) gave us our first grant to investigate the feasibility of a non-invasive intracranial sensor based on light technologies. We developed a prototype, secured the intellectual property and conducted a feasibility study on TBI patients at the Royal London Hospital, the biggest trauma hospital in the UK.
+It was back in 2021, before Crainio was established, that we first discovered that after we shone certain frequencies of light, like near-infrared, into the brain through the forehead, the optical signals coming back – known as the photoplethysmogram, or PPG – contained information about the physiology or the haemodynamics of the brain.
+When the pressure in the brain rises, the brain swells up, but it cannot go anywhere because the skull is like concrete. Therefore, the arteries and vessels in the brain are compressed by that pressure. PPG measures changes in blood volume as it pulses through the arteries during the cardiac cycle. If you have a viscoelastic artery that is opening and closing, the volume of blood changes and this is captured by the PPG. Now, if you have an artery that is compromised, pushed down because of pressure in the brain, that viscoelastic property is impacted and that will impact the PPG.
+Changes in the PPG signal arising from compression of the vessels in the brain can give us information about the intracranial pressure. And we developed algorithms to interrogate this optical signal and machine learning models to estimate intracranial pressure.
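+Crainio has not published its algorithms, but the general recipe Kyriacou describes – extract morphological features from the PPG waveform and regress them against invasive intracranial-pressure readings – can be sketched roughly as follows. The feature set, peak-detection settings, model choice and stand-in data are all illustrative assumptions, not the company’s actual pipeline.
```python
# Illustrative sketch, not Crainio's actual algorithm: extract simple
# morphological features from a PPG trace and fit a regression model
# against reference intracranial pressure (ICP) values.

import numpy as np
from scipy.signal import find_peaks
from sklearn.linear_model import Ridge

FS = 250  # assumed sampling rate in Hz

def ppg_features(ppg):
    """Per-recording features: mean pulse amplitude, pulse rate and mean pulse interval."""
    peaks, _ = find_peaks(ppg, distance=FS // 3)       # systolic peaks
    troughs, _ = find_peaks(-ppg, distance=FS // 3)    # pulse onsets
    amp = np.mean(ppg[peaks]) - np.mean(ppg[troughs])  # mean pulse amplitude
    rate = len(peaks) / (len(ppg) / FS)                # pulses per second
    interval = np.mean(np.diff(peaks)) / FS if len(peaks) > 1 else 0.0  # mean pulse interval (s)
    return np.array([amp, rate, interval])

# Stand-in training data: one synthetic PPG segment per monitored patient epoch,
# paired with a stand-in invasive ICP reading (mmHg) taken at the same time.
rng = np.random.default_rng(1)
segments = [np.sin(np.linspace(0, 40 * np.pi, 10 * FS)) + 0.05 * rng.standard_normal(10 * FS)
            for _ in range(50)]
X = np.vstack([ppg_features(s) for s in segments])
icp = rng.uniform(5, 30, size=50)                      # stand-in reference ICP values

model = Ridge(alpha=1.0).fit(X, icp)
print("estimated ICP for a new segment:", model.predict(X[:1]))
```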
+Following our research within the university, Crainio was set up in 2022. It brought together a team of experts in medical devices and optical sensors to lead the further development and commercialization of this device. And this small team worked tirelessly over the last few years to generate funding to progress the development of the optical sensor technology and bring it to a level that is ready for further clinical trials.
+In 2023, Crainio secured an Innovate UK biomedical catalyst grant, which will enable the company to engage in a clinical feasibility study, optimize the probe technology and further develop the algorithms. The company was later awarded another NIHR grant to move into a validation study.
+The interest in this project has been overwhelming. We’ve had very positive feedback from the neurocritical care community. But we also see a lot of interest from communities where injury to the brain is significant, such as rugby associations.
+While Crainio’s primary focus is to deliver a technology for use in critical care, the system could also be used in ambulances, in helicopters, during patient transfers and beyond. The device is non-invasive, the sensor is just like a sticking plaster on the forehead and the backend is a small box containing all the electronics. In the past few years, working in a research environment, the technology was connected to a laptop computer. But we are now transferring everything into a graphical interface, with a monitor to be able to see the signals and the intracranial pressure values in a portable device.
+The first study, a feasibility study on the sensor technology, was done during the time when the project was within the university. The second round is led by Crainio using a more optimized probe. Learning from the technical challenges we had in the first study, we tried to mitigate them with a new probe design. We’ve also learned more about the challenges associated with the acquisition of signals, the type of patients, how long we should monitor.
+We are now at the stage where Crainio has redeveloped the sensor and it looks amazing. The technology has received approval from the MHRA, the UK regulator, for clinical studies, and ethical approvals have been secured. This will be an opportunity to work with the new probe, which has more advanced electronics that enable more detailed acquisition of signals from TBI patients.
+We are again partnering with the Royal London Hospital, as well as collaborators from the traumatic brain injury team at Cambridge and we’re expecting to enter clinical trials soon. These are patients admitted into neurocritical trauma units and they all have an invasive intracranial pressure bolt. This will allow us to compare the physiological signal coming from our intracranial pressure sensor with the gold standard.
+The signals will be analysed by Crainio’s data science team, with machine learning algorithms used to look at changes in the PPG signal, extract morphological features and build models to develop the technology further. So we’re enriching the study with a more advanced technology, and this should lead to more accurate machine learning models for correctly capturing dynamic changes in intracranial pressure.
+This time around, we will also record more information from the patients. We will look at CT scans to see whether scalp density and thickness have an impact. We will also collect data from commercial medical monitors within neurocritical care to see the relation between intracranial pressure and other physiological data acquired in the patients. We aim to expand our knowledge of what happens when a patient’s intracranial pressure rises – what happens to their blood pressure? What happens to other physiological measurements?
+Crainio is very ambitious. We’re hoping that within the next couple of years we will progress far enough to achieve CE marking and meet all the standards that are necessary to launch a medical device.
+The primary motivation of Crainio is to create solutions for healthcare, developing a technology that can help clinicians to diagnose TBI effectively, faster, accurately and earlier. This can only yield better outcomes and improve patients’ quality-of-life.
+Of course, as a company we’re interested in being successful commercially. But the ambition here is, first of all, to keep the cost affordable. We live in a world where medical technologies need to be affordable, not only for Western nations, but for nations that cannot afford state-of-the-art technologies. So this is another of Crainio’s primary aims, to create a technology that could be used widely, because there is a massive need, but also because it’s affordable.
+The post Crainio’s Panicos Kyriacou explains how their light-based instrument can help diagnose brain injury appeared first on Physics World.
+]]>The post Extremely stripped star reveals heavy elements as it explodes appeared first on Physics World.
+]]>For the first time, astronomers have observed clear evidence for a heavily stripped star that has shed many of its outer layers before its death in a supernova explosion. Led by Steve Schulze at Northwestern University, the team has spotted the spectral signatures of heavier elements that are usually hidden deep within stellar interiors.
+Inside a star, atomic nuclei fuse together to form heavier elements in a process called nucleosynthesis. This releases a vast amount of energy that offsets the crushing force of gravity.
+As stars age, different elements are consumed and produced. “Observations and models of stars tell us that stars are enormous balls of hydrogen when they are born,” Schulze explains. “The temperature and density at the core are so high that hydrogen is fused into helium. Subsequently, helium fuses into carbon, and this process continues until iron is produced.”
+Ageing stars are believed to have an onion-like structure, with a hydrogen outer shell enveloping deeper layers of successively heavier elements. Near the end of a star’s life, inner-shell elements including silicon, sulphur, and argon fuse to form a core of iron. Unlike lighter elements, iron does not release energy as it fuses, but instead consumes energy from its surroundings. As a result, the star can no longer withstand its own gravity: it rapidly collapses inward and then explodes in a dramatic supernova.
+Rarely, astronomers can observe an old star that has blown out its outer layers before exploding. When the explosion finally occurs, heavier elements that are usually hidden within deeper shells create absorption lines in the supernova’s light spectrum, allowing astronomers to determine the compositions of these inner layers. So far, inner-layer elements as heavy as carbon and oxygen have been observed, but there has been no direct evidence for elements in deeper layers.
+Yet in 2021, a mysterious new observation was made by a programme of the Zwicky Transient Facility headed by Avishay Gal-Yam at the Weizmann Institute of Science in Israel. The team was scanning the sky for signs of infant supernovae at the very earliest stages following their initial explosion.
+“On 7 September 2021 it was my duty to look for infant supernovae,” Schulze recounts. “We discovered SN 2021yfj due to its rapid increase in brightness. We immediately contacted Alex Filippenko’s group at the University of California Berkeley to ask whether they could obtain a spectrum of this supernova.”
+When the results arrived, the team realised that the absorption lines in the supernova’s spectrum were unlike anything they had encountered previously. “We initially had no idea that most of the features in the spectrum were produced by silicon, sulphur, and argon,” Schulze continues. Gal-Yam took up the challenge of identifying the mysterious features in the spectrum.
+In the meantime, the researchers examined simultaneous observations of SN 2021yfj, made by a variety of ground- and space-based telescopes. When Gal-Yam’s analysis was complete, all of the team’s data confirmed the same result. “We had detected a supernova embedded in a shell of material rich in silicon, sulphur, and argon,” Schulze describes. “These elements are formed only shortly before a star dies, and are often hidden beneath other materials – therefore, they are inaccessible under normal circumstances.”
+The result provided clear evidence that the star had been more heavily stripped back towards the end of its life than any other observed previously, shedding many of its outer layers before the final explosion.
+“SN 2021yfj demonstrates that stars can die in far more extreme ways than previously imagined,” says Schulze. “It reveals that our understanding of how stars evolve and die is still not complete, despite billions of them having already been studied.” By studying their results, the team now hopes that astronomers can better understand the later stages of stellar evolution, and the processes leading up to these dramatic ends.
+The research is described in Nature.
+The post Extremely stripped star reveals heavy elements as it explodes appeared first on Physics World.
+]]>The post Rainer Weiss: US gravitational-wave pioneer dies aged 92 appeared first on Physics World.
+]]>Weiss was born in Berlin, Germany, on 29 September 1932 shortly before the Nazis rose to power. With a father who was Jewish and an ardent communist, Weiss and his family were forced to flee the country – first to Czechoslovakia and then to the US in 1939. Weiss was raised in New York, finishing his school days at the private Columbia Grammar School thanks to a scholarship from a refugee relief organization.
+In 1950 Weiss began studying electrical engineering at the Massachusetts Institute of Technology (MIT) before switching to physics, eventually earning a PhD in 1962 for work on atomic clocks under the supervision of Jerrold Zacharias. He then worked at Tufts University before moving to Princeton University, where he was a research associate with the astronomer and physicist Robert Dicke.
+In 1964 Weiss returned to MIT, where he began developing his idea of using a large interferometer to measure gravitational waves. Teaming up with Kip Thorne at the California Institute of Technology (Caltech), Weiss drew up a feasibility study for a kilometre-scale laser interferometer. In 1979 the National Science Foundation funded Caltech and MIT to develop the proposal to build LIGO.
+Construction of two LIGO detectors – one in Hanford, Washington and the other at Livingston, Louisiana, each of which featured arms 4 km long – began in 1990, with the facilities opening in 2002. After almost a decade of operation, however, no waves had been detected so in 2011 the two observatories were upgraded to make them 10 times more sensitive than before.
+On 14 September 2015 – during the first observation run of what was known as Advanced LIGO, or aLIGO – the interferometer detected gravitational waves from two merging black holes some 1.3 billion light-years from Earth. The discovery was announced by those working on aLIGO in February 2016.
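+To get a feel for the sensitivity involved (an order-of-magnitude estimate using textbook numbers, not figures quoted in this article): a passing gravitational wave of strain h changes the difference in the two arm lengths by roughly h times the arm length L.
```latex
% For a peak strain of about 1e-21, comparable to the first detection, and 4 km arms:
\[
  \Delta L \approx h\,L \approx 10^{-21} \times 4\,\mathrm{km} \approx 4\times10^{-18}\,\mathrm{m}.
\]
```
+That is a few thousandths of the diameter of a proton, which gives a sense of why decades of instrument development were needed before the 2015 detection.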
+The following year, Weiss was awarded one half of the 2017 Nobel Prize for Physics “for decisive contributions to the LIGO detector and the observation of gravitational waves”. The other half was shared by Thorne and fellow Caltech physicist Barry Barish, who was LIGO project director.
+As well as pioneering the detection of gravitational waves, Weiss also developed atomic clocks and led efforts to measure the spectrum of the cosmic microwave background via weather balloons. He co-founded NASA’s Cosmic Background Explorer project, measurements from which have helped support the Big Bang theory describing the expansion of the universe.
+In addition to the Nobel prize, Weiss was awarded the Gruber Prize in Cosmology in 2006, the Einstein Prize from the American Physical Society in 2007 as well as the Shaw Prize and the Kavli Prize in Astrophysics, both in 2016.
+MIT’s dean of science Nergis Mavalvala, who worked with Weiss to build an early prototype of a gravitational-wave detector as part of her PhD in the 1990s, says that every gravitational-wave event that is observed “will be a reminder of his legacy”.
+“[Weiss] leaves an indelible mark on science and a gaping hole in our lives,” says Mavalvala. “I am heartbroken, but also so grateful for having him in my life, and for the incredible gifts he has given us – of passion for science and discovery, but most of all to always put people first.”
+The post Rainer Weiss: US gravitational-wave pioneer dies aged 92 appeared first on Physics World.
+]]>The post Famous double-slit experiment gets its cleanest test yet appeared first on Physics World.
+]]>First performed in the 1800s by Thomas Young, the double-slit experiment has been revisited many times. Its setup is simple: send light toward a pair of slits in a screen and watch what happens. Its outcome, however, is anything but. If the light passes through the slits unobserved, as it did in Young’s original experiment, an interference pattern of bright and dark fringes appears, like ripples overlapping in a pond. But if you observe which slit the light goes through, as Albert Einstein proposed in a 1920s “thought experiment” and as other physicists have since demonstrated in the laboratory, the fringes vanish in favour of two bright spots. Hence, whether light acts as a wave (fringes) or a particle (spots) depends on whether anyone observes it. Reality itself seems to shift with the act of looking.
+Einstein disliked the implications of this, and he and Niels Bohr debated them extensively. According to Einstein, observation only has an effect because it introduces noise. If the slits were mounted on springs, he suggested, their recoil would reveal the photon’s path without destroying the fringes.
+Bohr countered that measuring the photon’s recoil precisely enough to reveal its path would blur the slits’ positions and erase interference. For him, this was not a flaw of technology but a law of nature – namely, his own principle of complementarity, which states that quantum systems can show wave-like or particle-like behaviour, but never both at once.
+Physicists have performed numerous versions of the experiment since, and each time the results have sided with Bohr. Yet the unavoidable noise in real set-ups left room for doubt that this counterintuitive rule was truly fundamental.
+To celebrate the International Year of Quantum Science and Technology, physicists in Wolfgang Ketterle’s group at MIT performed Einstein’s thought experiment directly. They began by cooling more than 10,000 rubidium atoms to near absolute zero and trapping them in a laser-made lattice such that each one acted as an individual scatterer of light. If a faint beam of light was sent through this lattice, a single photon could scatter off an atom.
+Since the beam was so faint, the team could collect very little information per experimental cycle. “This was the most difficult part,” says team member Hanzhen Lin, a PhD student at MIT. “We had to repeat the experiment thousands of times to collect enough data.”
+In every such experiment, the key was to control how much photon path information the atoms provided. The team did this by adjusting the laser traps to tune the “fuzziness” of the atoms’ position. Tightly trapped atoms had well-defined positions and so, according to Heisenberg’s uncertainty principle, they could not reveal much about the photon’s path. In these experiments, fringes appeared. Loosely trapped atoms, in contrast, had more position uncertainty and were able to move, meaning an atom struck by a photon could carry a trace of that interaction. This faint record was enough to collapse the interference fringes, leaving only spots. Once again, Bohr was right.
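+Bohr’s complementarity can be made quantitative. A standard way to express the trade-off the MIT team probed – a textbook relation rather than one quoted from the paper itself – links the fringe visibility V to the which-path distinguishability D:
```latex
% Wave-particle duality relation (Englert): full which-path information (D = 1)
% forces the visibility to zero, while perfectly sharp fringes (V = 1) force D = 0.
\[
  D^{2} + V^{2} \le 1 ,
  \qquad
  V = \frac{I_{\mathrm{max}} - I_{\mathrm{min}}}{I_{\mathrm{max}} + I_{\mathrm{min}}} .
\]
```
+A tightly trapped atom records almost no path information (D close to 0), so the visibility can approach 1 and fringes appear; a loosely trapped atom keeps a trace of the photon’s path (D approaching 1), and the fringes must wash out.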
+While Lin acknowledges that theirs is not the first experiment to measure scattered light from trapped atoms, he says it is the first to repeat the measurements after the traps were removed, while the atoms floated freely. This went further than Einstein’s spring-mounted slit idea, and (since the results did not change) eliminated the possibility that the traps were interfering with the observation.
+“I think this is a beautiful experiment and a testament to how far our experimental control has come,” says Thomas Hird, a physicist who studies atom-light interactions at the University of Birmingham, UK, and was not involved in the research. “This probably far surpasses what Einstein could have imagined possible.”
+The MIT team now wants to observe what happens when there are two atoms per site in the lattice instead of one. “The interactions between the atoms at each site may give us interesting results,” Lin says.
+The team describes the experiment in Physical Review Letters.
+This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
+Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.
+Find out more on our quantum channel.
+The post Famous double-slit experiment gets its cleanest test yet appeared first on Physics World.
+]]>