column       type             range
id           stringlengths    64 to 64
published    stringlengths    19 to 25
title        stringlengths    7 to 262
description  stringlengths    6 to 54.4k
link         stringlengths    31 to 227
category     stringclasses    6 values
image        stringlengths    3 to 247
995700f985ce3c92c586e8d984b79faed66141d37a766027796dc83bbe8c0a1e
2025-12-17T08:16:58+00:00
Improving precision in muon g-2 calculations
The gyromagnetic ratio is the ratio of a particle’s magnetic moment to its angular momentum. This value determines how a particle responds to a magnetic field. According to the Dirac equation, muons should have a gyromagnetic ratio of exactly 2. However, owing to quantum corrections, there is a small difference between the expected gyromagnetic ratio and the observed value. This discrepancy is known as the anomalous magnetic moment. The anomalous magnetic moment is incredibly sensitive to quantum fluctuations. It can be used to test the Standard Model of physics, and previous consistent experimental discrepancies have hinted at new physics beyond the Standard Model. The measurement of the anomalous magnetic moment is one of the most precise tests in modern physics.

To measure the anomalous magnetic moment, experiments such as Fermilab’s Muon g-2 experiment have been set up, in which researchers measure the muon’s wobble frequency, which is caused by its magnetic moment. But effects such as hadronic vacuum polarization and hadronic light-by-light scattering dominate the uncertainty in the Standard Model prediction against which the measurement is compared. Unlike hadronic vacuum polarization, the hadronic light-by-light contribution cannot be extracted directly from experimental cross-section data, making it model-dependent and a significant computational challenge.

In this work, the researchers took a major step towards resolving the anomalous magnetic moment of the muon. Their method calculated how the neutral pion contributes to hadronic light-by-light scattering, used domain wall fermions to preserve symmetry, employed eight different lattice ensembles with varying pion masses, and introduced a pion structure function to find the key contributions in a model-independent way. The pion transition form factor was computed directly at arbitrary space-like photon momenta, and a Gegenbauer expansion was used to confirm that about 98% of the π⁰-pole contribution was determined in a model-independent way. The analysis also included finite-volume corrections and chiral and continuum extrapolations, and yielded a value for the π⁰ decay width. The development of a more accurate and model-independent determination of the muon’s anomalous magnetic moment has reduced major theoretical uncertainties and can make Standard Model precision tests more robust.

Lattice QCD calculation of the π0-pole contribution to the hadronic light-by-light scattering in the anomalous magnetic moment of the muon, Tian Lin et al 2025 Rep. Prog. Phys. 88 080501. Do you want to learn more about this topic? The muon Smasher’s guide, Hind Al Ali et al (2022). The post Improving precision in muon g-2 calculations appeared first on Physics World.
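For reference, the two standard relations behind this story (implicit in, though not spelled out by, the article) connect the anomaly to the gyromagnetic ratio and to the measured wobble frequency; the second is the leading-order expression, neglecting beam-dynamics corrections:

```latex
% Anomalous magnetic moment: the fractional deviation of g from 2
a_\mu \equiv \frac{g_\mu - 2}{2}

% Leading-order anomalous precession ("wobble") frequency of a muon
% of mass m_mu and charge e circulating in a storage-ring field B
\omega_a = a_\mu \, \frac{e B}{m_\mu}
```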
https://physicsworld.com/a/improving-precision-in-muon-g-2-calculations/
Space & Physics
svg
8245fa642b95c593f7e37e74b212797add2c7c94efd71d57bc97e2c36dbe849c
2025-12-17T08:16:11+00:00
How does quantum entanglement move between different particles?
Entanglement is a phenomenon where two or more particles become linked in such a way that a measurement on one of the particles instantly influences the state of the other, no matter how far apart they are. It is a defining property of quantum mechanics, which is key to all quantum technologies and remains a serious challenge to realize in large systems. However, a team of researchers from Sweden and Spain has recently taken a major step forward in the field of ultrafast entanglement. Here, pairs of extreme ultraviolet pulses are used to exert quantum control on the attosecond timescale (a few quintillionths of a second).

Specifically, they studied ultrafast photoionisation. In this process, a high-energy light pulse hits an atom, ejecting an electron and leaving behind an ion. This process can create entanglement between the electron and the ion in a controlled way. However, the entanglement is fragile and can be disrupted or transferred as the system evolves. For instance, as the newly-created ion emits a photon to release energy, the entanglement shifts from the electron–ion pair to the electron–photon pair. This transfer process takes a considerable amount of time, on the scale of tens of nanoseconds, by which point the electron and ion are macroscopically separated, on the centimetre scale. The team found that during this transition, all three particles – electron, ion and photon – are entangled together in a multipartite state. They showed this by using a mathematical tool called von Neumann entropy to track how much information is shared between all three particles.

Although this work was purely theoretical, the researchers also proposed an experimental method to study entanglement transfer. The setup would use two synchronised free-electron laser pulses, with attosecond precision, to measure the electron’s energy and to detect whether a photon was emitted. By measuring both particles in coincidence, the entanglement could be detected. The results could be generalised to other scenarios and will help us understand how quantum information can move between different particles. This brings us one small step closer to future technologies like quantum communication and computing.

Entanglement transfer in a composite electron–ion–photon system, A. Stenquist et al 2025 Rep. Prog. Phys. 88 080502. The post How does quantum entanglement move between different particles? appeared first on Physics World.
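The article mentions the von Neumann entropy as the tool used to track how much information the particles share. As a generic textbook illustration (not the authors’ code), here is how that entropy is computed for the reduced state of one qubit in a maximally entangled pair:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # discard numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# Two-qubit Bell state |Phi+> = (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)  # full two-qubit density matrix (pure state)

# Reduced state of one qubit: trace out its partner
rho_A = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

print(von_neumann_entropy(rho))    # ~0: the pair as a whole is pure
print(von_neumann_entropy(rho_A))  # 1.0: one bit of shared information
```

A nonzero entropy of a reduced state is the signature of entanglement; tracking such entropies across the electron, ion and photon subsystems is, in spirit, what the authors did.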
https://physicsworld.com/a/how-does-quantum-entanglement-move-between-different-particles/
Space & Physics
svg
57f8e136da74ab56e9a2078957da435cff32237edb383e58196d1b900efe1353
2025-12-16T16:19:23+00:00
Motion through quantum space–time is traced by ‘q-desics’
Physicists searching for signs of quantum gravity have long faced a frustrating problem. Even if gravity does have a quantum nature, its effects are expected to show up only at extremely small distances, far beyond the reach of experiments. A new theoretical study by Benjamin Koch and colleagues at the Technical University of Vienna in Austria suggests a different strategy. Instead of looking for quantum gravity where space–time is tiny, the researchers argue that subtle quantum effects could influence how particles and light move across huge cosmological distances. Their work introduces a new concept called q-desics – shorthand for quantum-corrected paths through space–time. These paths generalize the familiar trajectories predicted by Einstein’s general theory of relativity and could, in principle, leave observable fingerprints in cosmology and astrophysics.

General relativity and quantum mechanics are two of the most successful theories in physics, yet they describe nature in radically different ways. General relativity treats gravity as the smooth curvature of space–time, while quantum mechanics governs the probabilistic behavior of particles and fields. Reconciling the two has been one of the central challenges of theoretical physics for decades. “One side of the problem is that one has to come up with a mathematical framework that unifies quantum mechanics and general relativity in a single consistent theory,” Koch explains. “Over many decades, numerous attempts have been made by some of the most brilliant minds humanity has to offer.” Despite this effort, no approach has yet gained universal acceptance.

There is another, perhaps deeper difficulty. “We have little to no guidance, neither from experiments nor from observations that could tell us whether we actually are heading in the right direction or not,” Koch says. Without experimental clues, many ideas about quantum gravity remain largely speculative. That does not mean the quest lacks value. Fundamental research often pays off in unexpected ways. “We rarely know what to expect behind the next tree in the jungle of knowledge,” Koch says. “We only can look back and realize that some of the previously explored trees provided treasures of great use and others just helped us to understand things a little better.”

Almost every test of general relativity relies on a simple assumption. Light rays and freely falling particles follow specific paths, known as geodesics, determined entirely by the geometry of space–time. From gravitational lensing to planetary motion, this idea underpins how physicists interpret astronomical data. Koch and his collaborators asked what happens to this assumption when space–time itself is treated as a quantum object. “Almost all interpretations of observational astrophysical and astronomical data rest on the assumption that in empty space light and particles travel on a path which is described by the geodesic equation,” Koch says. “We have shown that in the context of quantum gravity this equation has to be generalized.” The result is the q-desic equation. Instead of relying only on an averaged, classical picture of space–time, q-desics account for the underlying quantum structure more directly. In practical terms, this means that particles may follow paths that deviate slightly from those predicted by classical general relativity, even when space–time looks smooth on average. Crucially, the team found that these deviations are not confined to tiny distances.
“What makes our first results on the q-desics so interesting is that apart from these short distance effects, there are also long range effects possible, if one takes into account the existence of the cosmological constant,” Koch says. This opens the door to possible tests using existing astronomical data. According to the study, q-desics could differ from ordinary geodesics over cosmological distances, affecting how matter and light propagate across the universe. “The q-desics might be distinguished from geodesics at cosmologically large distances,” Koch says, “which would be an observable manifestation of quantum gravity effects.” The researchers propose revisiting cosmological observations. “Currently, there are many tensions popping up between the Standard Model of cosmology and observed data,” Koch notes. “All these tensions are linked, one way or another, to the use of geodesics at vastly different distance scales.” The q-desic framework offers a new lens through which to examine such discrepancies.

So far, the team has explored simplified scenarios and idealized models of quantum space–time. Extending the framework to more realistic situations will require substantial effort. “The initial work was done with one PhD student (Ali Riahina) and one colleague (Ángel Rincón),” Koch says. “There are so many things to be revisited and explored that our to-do list is growing far too long for just a few people.” One immediate goal is to encourage other researchers to engage with the idea and test it in different theoretical settings. Whether q-desics will provide an observational window into quantum gravity remains to be seen. But by shifting attention from the smallest scales to the largest structures in the cosmos, the work offers a fresh perspective on an enduring problem. The research is described in Physical Review D. The post Motion through quantum space–time is traced by ‘q-desics’ appeared first on Physics World.
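For context, the classical equation that the q-desics generalize is the geodesic equation of general relativity, shown here in its standard form (the article does not give the explicit q-desic equation):

```latex
% Geodesic equation: freely falling particles follow paths x^mu(tau)
% fixed entirely by the space-time geometry through the Christoffel
% symbols Gamma^mu_{alpha beta}.
\frac{\mathrm{d}^2 x^\mu}{\mathrm{d}\tau^2}
  + \Gamma^\mu_{\alpha\beta}\,
    \frac{\mathrm{d}x^\alpha}{\mathrm{d}\tau}\,
    \frac{\mathrm{d}x^\beta}{\mathrm{d}\tau} = 0
```

The q-desic equation modifies this relation with quantum-correction terms; according to the study, the resulting deviations can survive over cosmological distances once the cosmological constant is taken into account.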
https://physicsworld.com/a/motion-through-quantum-space-time-is-traced-by-q-desics/
Space & Physics
svg
cea8c7dbb6b280646ec36b2627485b86599b5a527645314fb7f87ee6e3837c19
2025-12-16T11:15:08+00:00
From building a workforce to boosting research and education – future quantum leaders have their say
The International Year of Quantum Science and Technology has celebrated all the great developments in the sector – but what challenges and opportunities lie in store? That was the question deliberated by four future leaders in the field at the Royal Institution in central London in November. The discussion took place during the two-day conference “Quantum science and technology: the first 100 years; our quantum future”, which was part of a week-long series of quantum-related events in the UK organized by the Institute of Physics.

Deep thinkers: the challenges and opportunities for quantum science and technology were discussed during a conference organized by the Institute of Physics at the Royal Institution on 5 November 2025 by (left to right, seated) Muhammad Hamza Waseem; Sarah Alam Malik; Mehul Malik; and Nicole Gillett. The discussion was chaired by Physics World editor-in-chief Matin Durrani (standing, far right). (Courtesy: Tushna Commissariat)

Nicole Gillett is a senior software engineer at Riverlane in Cambridge, UK. The company is a leader in quantum error correction, which is a critical part of a fully functioning, fault-tolerant quantum computer. Errors arise because quantum bits, or qubits, are so fragile, and correcting them is far trickier than with classical devices. Riverlane is therefore trying to find ways to correct for errors without disturbing a device’s quantum states. Gillett is part of a team trying to understand how best to implement error-correcting algorithms on real quantum-computing chips.

Mehul Malik, who studied physics at a liberal arts college in New York, was attracted to quantum physics because of what he calls a “weird middle ground between artistic creative thought and the rigour of physics”. After doing a PhD at the University of Rochester, he spent five years as a postdoc with Anton Zeilinger at the University of Vienna in Austria before moving to Heriot-Watt University in the UK. As head of its Beyond Binary Quantum Information research group, Malik works on quantum information processing and communication and fundamental studies of entanglement.

Sarah Alam Malik is a particle physicist at University College London, using particle colliders to detect and study potential candidates for dark matter. She is also trying to use quantum computers to speed up the discovery of new physics, given that what she calls “our most cherished and compelling theories” for physics beyond the Standard Model, such as supersymmetry, have yet to show up in experiments. In particular, Malik is trying to find new physics in a way that’s “model agnostic” – in other words, using quantum computers to search particle-collision data for anomalous events that have not been seen before.

Muhammad Hamza Waseem studied electrical engineering in Pakistan, but got hooked on quantum physics after getting involved in recreating experiments to test Bell’s inequalities in what he claims was the first quantum optics lab in the country. Waseem then moved to the University of Oxford in the UK to do a PhD studying spin waves to make classical and quantum logic circuits. Unable to work when his lab shut during the COVID-19 pandemic, Waseem approached Quantinuum to see if he could help them in their quest to build quantum computers using ion traps. Now based at the company, he studies how quantum computers can do natural-language processing. “Think ChatGPT, but powered with quantum computers,” he says.
What will be the biggest or most important application of quantum technology in your field over the next 10 years?

Nicole Gillett: If you look at roadmaps of quantum-computing companies, you’ll find that IBM, for example, intends to build the world’s first utility-scale and fault-tolerant quantum computer by the end of the decade. Beyond 2033, they’re committing to have a system that could support 2000 “logical qubits”, which are essentially error-corrected qubits, in which the data of one qubit has been encoded into many qubits.

Mehul Malik: In my field, quantum networks that can distribute individual quantum particles or entangled states over short and long distances will have a significant impact within the next 10 years. Quantum networks will connect smaller, powerful quantum processors to make a larger quantum device, whether for computing or communication. The technology is quite mature – in fact, we’ve already got a quantum network connecting banks in London. I will also add something slightly controversial. We often try to distinguish between quantum and non-quantum technologies, but what we’re heading towards is combining classical state-of-the-art devices with technology based on inherently quantum effects – what you might call “quantum-adjacent technology”. Single-photon detectors, for example, are going to revolutionize healthcare, medical imaging and even long-distance communication.

Sarah Alam Malik: For me, the biggest impact of quantum technology will be applying quantum-computing algorithms in physics. Can we quantum-simulate the dynamics of, say, proton–proton collisions in a more efficient and accurate manner? Can we combine quantum computing with machine learning to sift through data and identify anomalous collisions that are beyond those expected from the Standard Model? Quantum technology, in other words, is letting us ask very fundamental questions about nature. Emerging in theoretical physics, for example, is the idea that the fundamental layer of reality may not be particles and fields, but units of quantum information. We’re looking at the world through this new quantum-theoretic lens and asking questions such as whether it’s possible to measure entanglement in top quarks and even explore Bell-type inequalities at particle colliders.

Muhammad Hamza Waseem: Technologically speaking, the biggest impact will be simulating quantum systems using a quantum computer. In fact, researchers from Google already claim to have simulated a wormhole in a quantum computer, albeit a very simple version that could have been tackled with a classical device (Nature 612 55). But the most significant impact has to do with education. I believe quantum theory teaches us that reality is not about particles and individuals – but relations. I’m not saying that particles don’t exist, but that they emerge from relations. In fact, with colleagues at the University of Oxford, we’ve used this idea to develop a new way of teaching quantum theory, called Quantum in Pictures. We’ve already tried our diagrammatic approach with a group of 16–18-year-olds, teaching them the entire quantum-information course that’s normally given to postgraduates at Oxford. At the end of our two-month course, which had one lecture and tutorial per week, students took an exam with questions from past Oxford papers. An amazing 80% of students passed and half got distinctions.
I’ve also tried the same approach on pupils in Pakistan: the youngest, who was just 13, can now explain quantum teleportation and quantum entanglement. My point is that for quantum theory to have a big impact, we have to make quantum physics more accessible to everyone.

Nicole Gillett: The challenge will be building up a big enough quantum workforce. Sometimes people hear the words “quantum computer” and get scared, worrying they’re going to have to solve Hamiltonians all the time. But is it possible to teach students at high-school level about these concepts? Can we get the ideas across in a way that is easy to understand so people are interested and excited about quantum computing? At Riverlane, we’ve run week-long summer workshops for the last two years, where we try to teach undergraduate students enough about quantum error correction so they can do “decoding”. That’s when you take the results of error correction and try to figure out what errors occurred on your qubits (a toy example of decoding is sketched at the end of this article). By combining lectures and hands-on tutorials we found we could teach students about error correction – and get them really excited too. We had students from physics, philosophy, maths and computer science take the course – the only pre-requisite, apart from being curious about quantum computers, is some kind of coding ability. My point is that these kinds of boot camps are going to be so important to inspire future generations. We need to make the information accessible to people because otherwise our biggest challenge will be not having a workforce ready for quantum computing.

Mehul Malik: One of the big challenges is international cooperation and collaboration. Imagine if, in the early days of the Internet, the US military had decided they’d keep it to themselves for national-security reasons or if CERN hadn’t made the World Wide Web open source. We face the same challenge today because we live in a world that’s becoming polarized and protectionist – and we don’t want that to hamper international collaboration. Over the last few decades, quantum science has developed in a very international way and we have come so far because of that. I have lived in four different continents, but when I try to recruit internationally, I face significant hurdles from the UK government, from visa fees and so on. To really progress in quantum tech, we need to collaborate and develop science in a way that’s best for humanity, not just for each nation.

Sarah Alam Malik: One of the most important challenges will be managing the hype that inevitably surrounds the field right now. We’ve already seen this with artificial intelligence (AI), which has gone through the whole hype cycle. Lots of people were initially interested, then the funding dried up when reality didn’t match expectations. But now AI has come back with such resounding force that we’re almost unprepared for all the implications of it. Quantum can learn from the AI hype cycle, finding ways to manage expectations of what could be a very transformative technology. In the near- and mid-term, we need to not overplay things and to be cautious about this potentially transformative technology – yet be braced for the impact it could have. It’s a case of balancing hype with reality.
Muhammad Hamza Waseem: Another important challenge is how to distribute funding between research on applications and research on foundations. A lot of the good technology we use today emerged from foundational ideas in ways that were not foreseen by the people originally working on them. So we must ensure that foundational research gets the funding it deserves or we’ll hit a dead end at some point.

Mehul Malik: AI is already changing how I do research, speeding up the way I discover knowledge. Using Google Gemini, for example, I now ask my browser questions instead of searching for specific things. But you still have to verify all the information you gather, for example, by checking the links it cites. I recently asked AI a complex physics question to which I knew the answer and the solution it gave was terrible. As for how quantum is changing research, I’m less sure, but better detectors through quantum-enabled research will certainly be good.

Muhammad Hamza Waseem: AI is already being deployed in foundational research, for example, to discover materials for more efficient batteries. A lot of these applications could be integrated with quantum computing in some way to speed work up. What’s more, a better understanding of quantum tech will let us develop AI that is safer, more reliable, more interpretable – and if something goes wrong, you know how to fix it. It’s an exciting time to be a researcher, especially in physics.

Sarah Alam Malik: I’ve often wondered if AI, with the breadth of knowledge that it has across all different fields, already has answers to questions that we couldn’t answer – or haven’t been able to answer – just because of the boundaries between disciplines. I’m a physicist and so can’t easily solve problems in biology. But could AI help us to do breakthrough research at the interface between disciplines?

Nicole Gillett: As a software engineer, I once worked at an Internet security company called Cloudflare, which taught me that it’s never too early to be thinking about how any new technology – both AI and quantum – might be abused. What’s also really interesting is whether AI and machine learning can be used to build quantum computers by developing the coding algorithms they need. Companies like Google are active in this area, and so is Riverlane.

Mehul Malik: I recently discussed this question with a friend who works in AI, who said that the huge AI boom in industry, with all the money flowing into it, has effectively killed academic research in the field. A lot of AI research is now industry-led and goal-orientated – and there’s a risk that the economic advantages of AI will kill curiosity-driven research. The remedy, according to my friend, is to pay academics in AI more as they are currently being offered much larger salaries to work in the private sector. Another issue is that a lot of power is in the hands of just a few companies, such as Nvidia and ASML. The lesson for the quantum sector is that we need to diversify early on so that the power to control or chart the course of quantum technologies is not in the hands of a few privileged monopolies.

Sarah Alam Malik: Quantum technology has a lot to learn from AI, which has shown that we need to break down the barriers between disciplines.
After all, some of the most interesting and impactful research in AI has happened because companies can hire whoever they need to work on a particular problem, whether it’s a computer scientist, a biologist, a chemist, a physicist or a mathematician. Nature doesn’t differentiate between biology and physics. In academia we need not only people who are hyper-specialized but also a crop of generalists who are knee-deep in one field but have experience in other areas too.

Muhammad Hamza Waseem: AI research is in a weird situation where there are lots of excellent applications but so little is understood about how AI machines work. We have no good scientific theory of intelligence or of consciousness. We need to make sure that quantum-computing research does not become like that, and that academic research scientists are well-funded and not distracted by all the hype that industry always creates. At the start of the previous century, the mathematician David Hilbert said something like “physics is becoming too difficult for the physicists”. I think quantum computing is also becoming somewhat too challenging for the quantum physicists. We need everyone to get involved for the field to reach its true potential.

Today’s AI systems use vast amounts of energy, but should we also be concerned about the environmental impact of quantum computers? Google, for example, has already carried out quantum error-correction experiments in which data from the company’s quantum computers had to be processed every microsecond, once per round of error correction (Nature 638 920). “Finding ways to process it to keep up with the rate at which it’s being generated is a very interesting area of research,” says Nicole Gillett. However, quantum computers could cut our energy consumption by allowing calculations to be performed far more quickly and efficiently than is possible with classical machines. For Mehul Malik, another important step towards “green” quantum technology will be to lower the energy that quantum devices require and to build detectors that work at room temperature and are robust against noise. Quantum computers themselves can also help, he thinks, by discovering energy-efficient technologies, materials and batteries.

A quantum laptop? (Courtesy: iStock/inkoly) Will we ever see portable quantum computers or will they always be like today’s cloud-computing devices in distant data centres? Muhammad Hamza Waseem certainly does not envisage a word processor that uses a quantum computer. But he points to companies like SPINQ, which has built a two-qubit computer for educational purposes. “In a sense, we already have a portable quantum computer,” he says. For Mehul Malik, though, it’s all about the market. “If there’s a need for it,” he joked, “then somebody will make it.”

If I were science minister… (Courtesy: Shutterstock/jenny on the moon) When asked by Peter Knight – one of the driving forces behind the UK’s quantum-technology programme – what the panel would do if they were science minister, Nicole Gillett said she would seek to make the UK the leader in quantum computing by investing heavily in education. Mehul Malik would cut the costs of scientists moving across borders, pointing out that many big firms have been founded by immigrants. Sarah Alam Malik called for long-term funding – and urged funders not to give up if short-term gains don’t transpire. Muhammad Hamza Waseem, meanwhile, said we should invest more in education, research and the international mobility of scientists.
This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications. Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ. Find out more on our quantum channel. The post From building a workforce to boosting research and education – future quantum leaders have their say appeared first on Physics World.
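As flagged in Nicole Gillett’s answer above, “decoding” means inferring which physical errors occurred from parity-check (syndrome) measurements. Below is a minimal toy sketch for the three-qubit bit-flip repetition code; this is a classroom example of the idea, not Riverlane’s decoder, and all names in it are our own:

```python
# Toy decoder for the three-qubit bit-flip repetition code.
# Two parity checks (qubits 0-1 and 1-2) yield a two-bit syndrome;
# the decoder infers the single most likely bit flip and undoes it.

SYNDROME_TO_ERROR = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip on qubit 0
    (1, 1): 1,     # flip on qubit 1
    (0, 1): 2,     # flip on qubit 2
}

def measure_syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(bits):
    """Return corrected bits, assuming at most one bit flip."""
    error = SYNDROME_TO_ERROR[measure_syndrome(bits)]
    if error is not None:
        bits = bits.copy()
        bits[error] ^= 1
    return bits

print(decode([0, 1, 0]))  # -> [0, 0, 0]: the middle-qubit flip is fixed
```

Real decoders face the same task at vastly larger scale, with noisy syndrome measurements and hard real-time deadlines.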
https://physicsworld.com/a/from-building-a-workforce-to-boosting-research-and-education-future-quantum-leaders-have-their-say/
Space & Physics
svg
3c8d1fa9d42e6216f122004ba02044678a251bf250b5cfc310e759f3be96b1d5
2025-12-15T16:00:53+00:00
Will this volcano explode, or just ooze? A new mechanism could hold some answers
An international team of researchers has discovered a new mechanism that can trigger the formation of bubbles in magma – a major driver of volcanic eruptions. The finding could improve our understanding of volcanic hazards by refining models of magma flow through conduits beneath Earth’s surface. Volcanic eruptions are thought to occur when magma deep within the Earth’s crust decompresses. This decompression allows volatile chemicals dissolved in the magma to escape in gaseous form, producing bubbles. The more bubbles there are in the viscous magma, the faster it will rise, until eventually it tears itself apart. “This process can be likened to a bottle of sparkling water containing dissolved volatiles that exsolve when the bottle is opened and the pressure is released,” explains Olivier Roche, a member of the volcanology team at the Magmas and Volcanoes Laboratory (LMV) at the Université Clermont Auvergne (UCA) in France and lead author of the study.

The new work, however, suggests that this explanation is incomplete. In their study, Roche and colleagues at UCA, the French National Research Institute for Sustainable Development (IRD), Brown University in the US and ETH Zurich in Switzerland began with the assumption that the mechanical energy in magma comes from the pressure gradient between the nucleus of a gas bubble and the ambient liquid (the classical energy barrier is sketched after this article). “However, mechanical energy may also be provided by shear stress in the magma when it is in motion,” Roche notes. “We therefore hypothesized that magma shearing forces could induce bubble nucleation too.” To test their theory, the researchers reproduced the internal movements of magma in liquid polyethylene oxide saturated with carbon dioxide at 80 °C. They then set up a device to observe bubble nucleation in situ while the material was experiencing shear stress. They found that the energy provided by viscous shear is large enough to trigger bubble formation – even if decompression isn’t present. The effect, which the team calls shear-induced bubble nucleation, depends on the magma’s viscosity and on the amount of gas it contains.

According to Roche, the presence of this effect could help researchers determine whether an eruption is likely to be explosive or effusive. “Understanding which mechanism is at play is fundamental for hazard assessment,” he says. “If many gas bubbles grow deep in the volcano conduit in a volatile-rich magma, for example, they can combine with each other and form larger bubbles that then open up degassing conduits connected to the surface. “This process will lead to effusive eruptions, which is counterintuitive (but supported by some earlier observations),” he tells Physics World. “It calls for the development of new conduit flow models to predict eruptive style for given initial conditions (essentially volatile content) in the magma chamber.”

By integrating this mechanism into future predictive models, the researchers aim to develop tools that better anticipate the intensity of eruptions, allowing scientists and local authorities to improve the way they manage volcanic hazards. Looking ahead, they are planning new shear experiments on liquids that contain solid particles, mimicking crystals that form in magma and are believed to facilitate bubble nucleation. In the longer term, they plan to study combinations of shear and compression, though Roche acknowledges that this “will be challenging technically”. They report their present work in Science. The post Will this volcano explode, or just ooze?
A new mechanism could hold some answers appeared first on Physics World.
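For the quantitative backdrop: in classical nucleation theory (the standard framework, not derived in the article), the work needed to form a critical bubble nucleus is set by the surface tension and the pressure difference, which is why decompression was long regarded as the trigger:

```latex
% Classical homogeneous nucleation barrier, where sigma is the
% melt-gas surface tension and Delta P the pressure difference
% (supersaturation) driving exsolution.
\Delta G^{*} = \frac{16 \pi \sigma^{3}}{3\, \Delta P^{2}}
```

The new result implies that viscous shear stresses in a moving magma can supply comparable mechanical energy, so bubbles can nucleate even when the decompression-driven pressure difference is absent.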
https://physicsworld.com/a/will-this-volcano-explode-or-just-ooze-a-new-mechanism-could-hold-some-answers/
Space & Physics
svg
b69e005636d86d6b9d3e865cb0b180a28515dfe131c2a4753450f011eef1ba45
2025-12-15T12:41:01+00:00
Remote work expands collaboration networks but reduces research impact, study suggests
Academics who switch to hybrid working and remote collaboration do less impactful research. That’s according to an analysis of how scientists’ collaboration networks and academic outputs evolved before, during and after the COVID-19 pandemic (arXiv:2511.18481). It involved studying author data from the arXiv preprint repository and the online bibliographic catalogue OpenAlex.

To explore the geographic spread of collaboration networks, Sara Venturini from the Massachusetts Institute of Technology and colleagues looked at the average distance between the institutions of co-authors. They found that while the average distance between team members on publications increased from 2000 to 2021, there was a particularly sharp rise after 2022. This pattern, the researchers claim, suggests that the pandemic led to scientists collaborating more often with geographically distant colleagues. They found consistent patterns when they separated papers related to COVID-19 from those in unrelated areas, suggesting the trend was not solely driven by research on COVID-19.

The researchers also examined how the number of citations a paper received within a year of publication changed with the distance between the co-authors’ institutions. In general, as the average distance between collaborators increases, citations fall, the authors found. They suggest that remote and hybrid working hampers research quality by reducing the spontaneous, serendipitous in-person interactions that can lead to deep discussions and idea exchange. Despite what the authors say is a “concerning decline” in citation impact, there are benefits to increasing remote interactions. In particular, as the geographic spread of collaboration networks increases, so too do international partnerships and authorship diversity.

Lingfei Wu, a computational social scientist at the University of Pittsburgh, who was not involved in the study, told Physics World that he was surprised by the finding that remote teams produce less impactful work. “In our earlier research, we found that historically, remote collaborations tended to produce more impactful but less innovative work,” notes Wu. “For example, the Human Genome Project published in 2001 shows how large, geographically distributed teams can also deliver highly impactful science. One would expect the pandemic-era shift toward remote collaboration to increase impact rather than diminish it.” Wu says his work suggests that remote work is effective for implementing ideas but less effective for generating them, indicating that scientists need a balance between remote and in-person interactions. “Use remote tools for efficient execution, but reserve in-person time for discussion, brainstorming, and informal exchange,” he adds. The post Remote work expands collaboration networks but reduces research impact, study suggests appeared first on Physics World.
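The study’s headline metric is the average distance between co-authors’ institutions. As a sketch of how such distances are typically computed from institutional coordinates (an assumption about the methodology, using the standard haversine formula; the coordinates below are approximate):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Example: MIT (Cambridge, MA) to the University of Pittsburgh
print(round(haversine_km(42.360, -71.094, 40.444, -79.953)))  # ~769 km
```

Averaging such pairwise distances over all co-author pairs on a paper gives the kind of team-spread measure the analysis tracks over time.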
https://physicsworld.com/a/remote-work-expands-collaboration-networks-but-reduces-research-impact-study-suggests/
Space & Physics
svg
e65fcf8a6385085720d6e418149443169624d79c4ffe5cbde8b1779b8114e3e7
2025-12-15T12:00:02+00:00
How well do you know AI? Try our interactive quiz to find out
There are 12 questions in total: blue is your current question and white means unanswered, with green and red being right and wrong. Check your scores at the end – and why not test your colleagues too?

How did you do?
10–12 Top shot – congratulations, you’re the next John Hopfield
7–9 Strong skills – good, but not quite Nobel standard
4–6 Weak performance – should have asked ChatGPT
0–3 Worse than random – are you a bot?

Physics World’s coverage of this interactive quiz is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research. The post How well do you know AI? Try our interactive quiz to find out appeared first on Physics World.
https://physicsworld.com/a/how-well-do-you-know-ai-try-our-interactive-quiz-to-find-out/
Space & Physics
svg
447da6985de005b459033f8e223cd6c599f0efeff1b6900b9a1a4b40664d9e8c
2025-12-15T10:00:08+00:00
International Year of Quantum Science and Technology quiz
This quiz was first published in February 2025. Now you can enjoy it in our new interactive quiz format and check your final score. There are 18 questions in total: blue is your current question and white means unanswered, with green and red being right and wrong. The post International Year of Quantum Science and Technology quiz appeared first on Physics World.
https://physicsworld.com/a/international-year-of-quantum-science-and-technology-quiz/
Space & Physics
svg
702745a8e0c807979e25126a931fbd98d8b4be85a7133e171e2d97167a54bb77
2025-12-12T11:30:01+00:00
Components of RNA among life’s building blocks found in NASA asteroid sample
More molecules and compounds vital to the origin of life have been detected in asteroid samples delivered to Earth by NASA’s OSIRIS-REx mission. The discovery strengthens the case that not only did life’s building blocks originate in space, but that the ingredients of RNA, and perhaps RNA itself, were brought to our planet by asteroids. Two new papers in Nature Geoscience and Nature Astronomy describe the discovery of the sugars ribose and glucose in the 120 g of samples returned from the near-Earth asteroid 101955 Bennu, as well as an unusual carbonaceous “gum” that holds important compounds for life. The findings complement the earlier discovery of amino acids and the nucleobases of RNA and DNA in the Bennu samples.

A third new paper, in Nature Astronomy, addresses the abundance of pre-solar grains – dust that originated before the birth of our Solar System, such as dust from supernovae. Scientists led by Ann Nguyen of NASA’s Johnson Space Center found six times more dust directly from supernova explosions than is found, on average, in meteorites and other sampled asteroids. This could suggest variations in the concentration of pre-solar dust grains across the disc of gas and dust that formed the Solar System.

It’s the discovery of organic materials useful for life that steals the headlines, though. For example, the discovery of the space gum, which is essentially a hodgepodge chain of polymers, represents something never found in space before. Scott Sandford of NASA’s Ames Research Center, co-lead author of the Nature Astronomy paper describing the gum discovery, tells Physics World: “The material we see in our samples is a bit of a molecular jumble. It’s carbonaceous, but much richer in nitrogen and, to a lesser extent, oxygen, than most of the organic compounds found in extraterrestrial materials.” Sandford refers to the material as gum because of its pliability, bending and dimpling when pressure is applied, rather like chewing gum. And while much of its chemical functionality is replicated in similar materials on our planet, “I doubt it matches exactly with anything seen on Earth,” he says.

Initially, Sandford found the gum using an infrared microscope, nicknaming the dust grains containing the gum “Lasagna” and “Neapolitan” because the grains are layered. To extract them from the rock in the sample, Sandford went to Zack Gainsforth of the University of California, Berkeley, who specializes in analysing and extracting materials from samples like this. Having welded a tungsten needle to the Neapolitan sample in order to lift it, the pair quickly realised that the grain was very delicate. “When we tried to lift the sample it began to deform,” Gainsforth says. “Scott and I practically jumped out of our chairs and brainstormed what to do. After some discussion, we decided that we should add straps to give it enough mechanical rigidity to survive the lift.” By straps, Gainsforth is referring to micro-scale platinum scaffolding applied to the grain to reinforce its structure while they cut it away with an ion beam. Platinum is often used as a radiation shield to protect samples from an ion beam, “but how we used it was anything but standard,” says Gainsforth.
“Scott and I made an on-the-fly decision to reinforce the samples based on how they were reacting to our machinations.” With the sample extracted and reinforced, they used the ion beam cutter to shave it down until it was a thousand times thinner than a human hair, at which point it could be studied by electron microscopy and X-ray spectrometry. “It was a joy to watch Zack ‘micro-manipulate’ [the sample],” says Sandford. The nitrogen in the gum was found to be in nitrogen heterocycles, which are the building blocks of nucleobases in DNA and RNA.

This brings us to the other new discovery, reported in Nature Geoscience, of the sugars ribose and glucose in the Bennu samples, by a team led by Yoshihiro Furukawa of Tohoku University in Japan. Glucose is the primary source of energy for life, while ribose is a key component of the sugar-phosphate backbone that connects the information-carrying nucleobases in RNA molecules. Furthermore, the discovery of ribose now means that everything required to assemble RNA molecules is present in the Bennu sample. Notable by its absence, however, was deoxyribose, which is ribose minus one oxygen atom. Deoxyribose in DNA performs the same job as ribose in RNA, and Furukawa believes that its absence supports a popular hypothesis about the origin of life on Earth called the “RNA world”. This describes how the first life could have used RNA instead of DNA to carry genetic information, catalyse biochemical reactions and self-replicate.

Intriguingly, the presence of all RNA’s ingredients on Bennu raises the possibility that RNA could have formed in space before being brought to Earth. “Formation of RNA from its building blocks requires a dehydration reaction, which we can expect to have occurred both in ancient Bennu and on primordial Earth,” Furukawa tells Physics World. However, RNA would be very hard to detect because of its expected low abundance in the samples. So until there’s information to the contrary, “the present finding means that the ingredients of RNA were delivered from space to the Earth,” says Furukawa. Nevertheless, these discoveries are major milestones in the quest of astrobiologists and space chemists to understand the origin of life on Earth. Thanks to Bennu and the asteroid 162173 Ryugu, from which a sample was returned by the Japanese Aerospace Exploration Agency (JAXA) mission Hayabusa2, scientists are increasingly confident that the building blocks of life on Earth came from space. The post Components of RNA among life’s building blocks found in NASA asteroid sample appeared first on Physics World.
https://physicsworld.com/a/components-of-rna-among-lifes-building-blocks-found-in-nasa-asteroid-sample/
Space & Physics
svg
9ca67c1d9a87210c24530f82da0e252524af38ba309c8f7fd9f5589ae62a2ed7
2025-12-12T11:00:26+00:00
Institute of Physics celebrates 2025 Business Award winners at parliamentary event
A total of 14 physics-based firms in sectors from quantum and energy to healthcare and aerospace have won 2025 Business Awards from the Institute of Physics (IOP), which publishes Physics World. The awards were presented at a reception in the Palace of Westminster yesterday attended by senior parliamentarians and policymakers as well as investors, funders and industry leaders. The IOP Business Awards, which have been running since 2012, recognize the role that physics and physicists play in the economy, creating jobs and growth “by powering innovation to meet the challenges facing us today, ranging from climate change to better healthcare and food production”. More than 100 firms have now won Business Awards, with around 90% of those companies still commercially active.

The parliamentary event honouring the 2025 winners was hosted by Dave Robertson, the Labour MP for Lichfield, who spent 10 years as a physics teacher in Birmingham before working for teaching unions. There was also a speech from Baron Sharma, who studied applied physics before moving into finance and later becoming a Conservative MP, Cabinet minister and president of the COP26 climate summit.

Seven firms were awarded 2025 IOP Business Innovation Awards, which recognize companies that have “delivered significant economic and/or societal impact through the application of physics”. They include Oxford-based Tokamak Energy, which has developed “compact, powerful, robust, quench-resilient” high-temperature superconducting magnets for commercial fusion energy and for propulsion systems, accelerators and scientific instruments. Oxford Instruments was honoured for developing a novel analytical technique for scanning electron microscopes, enabling new capabilities and accelerating time to results by at least an order of magnitude. Ionoptika, meanwhile, was recognized for developing Q-One, a new generation of focused ion-beam instrumentation providing single-atom through to high-dose nanoscale advanced-materials engineering for photonic and quantum technologies. The other four winners were: electronics firm FlexEnable for their organic transistor materials; Lynkeos Technology for the development of muonography in the nuclear industry; the renewable energy company Sunamp for their thermal storage system; and the defence and security giant Thales UK for the development of a solid-state laser for laser rangefinders.

Six other companies have won an IOP Start-up Award, which celebrates young companies “with a great business idea founded on a physics invention, with the potential for business growth and significant societal impact”. They include Astron Systems for developing “long-lifetime turbomachinery to enable multi-reuse small rocket engines and bring about fully reusable small launch vehicles”, along with MirZyme Therapeutics for “pioneering diagnostics and therapeutics to eliminate preeclampsia and transform maternal health”. The other four winners were: Celtic Terahertz Technology for a metamaterial filter technology; Nellie Technologies for an algae-based carbon-removal technology; Quantum Science for their development of short-wave infrared quantum dot technology; and Wayland Additive for the development and commercialisation of charge-neutralised electron beam metal additive manufacturing. James McKenzie, a former vice-president for business at the IOP, who was involved in judging the awards, says that all awardees are “worthy winners”.
“It’s the passion, skill and enthusiasm that always impresses me,” McKenzie told Physics World. iFAST Diagnostics were also awarded the IOP Lee Lucas Award, which recognizes early-stage companies taking innovative products into the medical and healthcare sector. The firm, which was spun out of the University of Southampton, develops blood tests that can identify the right treatment for bacterial infections in a matter of hours rather than days. It expects to have approval for testing next year. “Especially inspiring was the team behind iFAST,” adds McKenzie, “who developed a method for very rapid tests, cutting the time from 48 hours to three hours, so patients can be given the right antibiotics.”

“The award-winning businesses are all outstanding examples of what can be achieved when we build upon the strengths we have, and drive innovation off the back of our world-leading discovery science,” noted Tom Grinyer, IOP chief executive officer. “In the coming years, physics will continue to shape our lives, and we have some great strengths to build upon here in the UK, not only in specific sectors such as quantum, semiconductors and the green economy, but in our strong academic research and innovation base, our growing pipeline of spin-out and early-stage companies, our international collaborations and our growing venture capital community.” For the full list of winners, see here. The post Institute of Physics celebrates 2025 Business Award winners at parliamentary event appeared first on Physics World.
https://physicsworld.com/a/institute-of-physics-celebrates-2025-business-award-winners-at-parliamentary-event/
Space & Physics
svg
aa5fec2398c36460b6376c7860dab9dbb1bfbd9e3227a1b34702144241ebe66f
2025-12-12T09:00:46+00:00
Leftover gamma rays produce medically important radioisotopes
The “leftover” gamma radiation produced when the beam of an electron accelerator strikes its target is usually discarded. Now, however, physicists have found a new use for it: generating radioactive isotopes for diagnosing and treating cancer. The technique, which piggybacks on an already-running experiment, uses bremsstrahlung from an accelerator facility to trigger nuclear reactions in a layer of zinc foil. The products of these reactions include copper isotopes that are hard to make using conventional techniques, meaning that the technique could reduce their costs and expand access to treatments.

Radioactive nuclides are commonly used to treat cancer, and so-called theranostic pairs are especially promising. These pairs occur when one isotope of an element provides diagnostic imaging while another delivers therapeutic radiation – a combination that enables precision tumour targeting to improve treatment outcomes. One such pair is 64Cu and 67Cu: the former emits positrons that can identify tumours in PET scans while the latter produces beta particles that can destroy cancerous cells. They have a further clinical advantage in that copper binds to antibodies and other biomolecules, allowing the isotopes to be delivered directly into cells. Indeed, these isotopes have already been used to treat cancer in mice, and early clinical studies in humans are underway.

Researchers led by Mamad Eslami of the University of York, UK, have now put forward a new way to make both isotopes. Their method exploits the fact that gamma rays generated by the intense electron beams in particle-accelerator experiments interact only weakly with matter (relative to electrons or neutrons, at least). This means that many of them pass right through their primary target and into a beam dump. These “wasted” photons still carry enough energy to drive further nuclear reactions, though, and Eslami and colleagues realized that they could be harnessed to produce 64Cu and 67Cu.

Eslami and colleagues tested their idea at the Mainz Microtron, an electron accelerator at Johannes Gutenberg University Mainz in Germany. “We wanted to see whether GeV-scale bremsstrahlung, already available at the electron accelerator, could be used in a truly parasitic configuration,” Eslami says. The real test, he adds, was whether they could produce 67Cu alongside the primary experiment, which was using the same electron beam and photon field to study hadron physics, without disturbing it or degrading the beam conditions. The answer turned out to be “yes”. What’s more, the researchers found that their approach could produce enough 67Cu for medical applications in about five days – roughly equal to the time required for a nuclear reactor to produce the equivalent amount of another important medical radionuclide, lutetium-177.

“Our results indicate that, under suitable conditions, high-energy electron and photon facilities that were originally built for nuclear or particle physics experiments could also be used to produce 67Cu and other useful radionuclides,” Eslami tells Physics World. In practice, however, Eslami adds that this will only be realistic at sites with strong, well-characterized bremsstrahlung fields. High-power multi-GeV electron facilities such as the planned Electron-Ion Collider at Brookhaven National Laboratory in the US, or a high-repetition laser-plasma electron source, are two possibilities. Even with this restriction, team member Mikhail Bashkanov is excited about the advantages.
“If we could do away with the necessity of using nuclear reactors to produce medical isotopes and solely generate them with high-energy photon beams from laser-plasma accelerators, we could significantly improve nuclear medicine treatments and reduce their costs,” Bashkanov says. The researchers, who detail their work in Physical Review C, now plan to test their method at other electron accelerators, especially those with higher beam power and GeV-scale beams, to quantify the 67Cu yields they can expect to achieve in realistic target and beam-dump configurations. In parallel, Eslami adds, they want to explore parasitic operation at emerging laser-plasma-driven electron sources that are being developed for muon tomography. They would also like to link their irradiation studies to target design, radiochemistry and timing constraints to see whether the method can deliver clinically useful activities of 67Cu and other useful isotopes in a reliable and cost-effective way. The post Leftover gamma rays produce medically important radioisotopes appeared first on Physics World.
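For the nuclear-physics backdrop: the production route is photonuclear, with bremsstrahlung photons knocking nucleons out of zinc. A commonly cited reaction for making 67Cu from zinc (consistent with the zinc-foil setup described above, though the paper may use several channels) and the standard activation expression for the resulting activity are:

```latex
% Photoproton route from zinc to copper-67
^{68}\mathrm{Zn}(\gamma, p)\,^{67}\mathrm{Cu}

% Activation build-up: activity after irradiation time t, for N_t
% target nuclei, effective cross-section sigma, photon flux phi and
% product decay constant lambda
A(t) = N_t \, \sigma \, \phi \left( 1 - e^{-\lambda t} \right)
```

With 67Cu’s half-life of about 2.6 days, the activity saturates after a few half-lives, which squares with the roughly five-day irradiation quoted above.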
https://physicsworld.com/a/leftover-gamma-rays-produce-medically-important-radioisotopes/
Space & Physics
svg
b4525f4f29b6db303c21f25794c201ce90db33e7017d8f6383a07b534d607409
2025-12-11T14:27:27+00:00
Top 10 Breakthroughs of the Year in physics for 2025 revealed
Physics World is delighted to announce its Top 10 Breakthroughs of the Year for 2025, which includes research in astronomy, antimatter, atomic and molecular physics and more. The Top Ten is the shortlist for the Physics World Breakthrough of the Year, which will be revealed on Thursday 18 December. Our editorial team has looked back at all the scientific discoveries we have reported on since 1 January and has picked 10 that we think are the most important. In addition to being reported in Physics World in 2025, the breakthroughs must meet a number of further criteria. Here, then, are the Physics World Top 10 Breakthroughs for 2025, listed in no particular order. You can listen to Physics World editors make the case for each of our nominees in the Physics World Weekly podcast. And, come back next week to discover who has bagged the 2025 Breakthrough of the Year.

To Tim McCoy, Sara Russell, Danny Glavin, Jason Dworkin, Yoshihiro Furukawa, Ann Nguyen, Scott Sandford, Zack Gainsforth and an international team of collaborators for identifying salt, ammonia, sugar, nitrogen- and oxygen-rich organic materials, and traces of metal-rich supernova dust, in samples returned from the near-Earth asteroid 101955 Bennu. The incredible chemical richness of this asteroid, which NASA’s OSIRIS-REx spacecraft sampled in 2020, lends support to the longstanding hypothesis that asteroid impacts could have “seeded” the early Earth with the raw ingredients needed for life to form. The discoveries also enhance our understanding of how Bennu and other objects in the solar system formed out of the disc of material that coalesced around the young Sun.

To Takamasa Momose of the University of British Columbia, Canada, and Susumu Kuma of the RIKEN Atomic, Molecular and Optical Physics Laboratory, Japan for observing superfluidity in a molecule for the first time. Molecular hydrogen is the simplest and lightest of all molecules, and theorists predicted that it would enter a superfluid state at a temperature between 1 and 2 K. But this is well below the molecule’s freezing point of 13.8 K, so Momose, Kuma and colleagues first had to develop a way to keep the hydrogen in a liquid state. Once they had done that, they had to work out how to detect the onset of superfluidity. It took them nearly 20 years, but by confining clusters of hydrogen molecules inside helium nanodroplets, embedding a methane molecule within the clusters, and monitoring the methane’s rotation, they were finally able to do it. They now plan to study larger clusters of hydrogen, with the aim of exploring the boundary between classical and quantum behaviour in this system.

To researchers at the University of Southampton and Microsoft Azure Fiber in the UK, for developing a new type of optical fibre that reduces signal loss, boosts bandwidth and promises faster, greener communications. The team, led by Francesco Poletti, achieved this feat by replacing the glass core of a conventional fibre with air and using glass membranes that reflect light at certain frequencies back into the core to trap the light and keep it moving through the fibre’s hollow centre. Their results show that the hollow-core fibres exhibit 35% less attenuation than standard glass fibres – implying that fewer amplifiers would be needed in long cables – and increase transmission speeds by 45%. Microsoft has begun testing the new fibres in real systems, installing segments in its network and sending live traffic through them.
These trials open the door to gradual rollout and Poletti suggests that the hollow-core fibres could one day replace existing undersea cables. To Francesco Fracchiolla and colleagues at the Trento Proton Therapy Centre in Italy for delivering the first clinical treatments using proton arc therapy (PAT). Proton therapy – a precision cancer treatment – is usually performed using pencil-beam scanning to precisely paint the dose onto the tumour. But this approach can be limited by the small number of beam directions deliverable in an acceptable treatment time. PAT overcomes this by moving to an arc trajectory with protons delivered over a large number of beam angles and the potential to optimize the number of energies used for each beam direction. Working with researchers at RaySearch Laboratories in Sweden, the team performed successful dosimetric comparisons with clinical proton therapy plans. Following a feasibility test that confirmed the viability of clinical PAT delivery, the researchers used PAT to treat nine cancer patients. Importantly, all treatments were performed using the centre’s existing proton therapy system and clinical workflow. To Peter Maurer and David Awschalom at the University of Chicago Pritzker School of Molecular Engineering and colleagues for designing a protein quantum bit (qubit) that can be produced directly inside living cells and used as a magnetic field sensor. While many of today’s quantum sensors are based on nitrogen–vacancy (NV) centres in diamond, they are large and hard to position inside living cells. Instead, the team used fluorescent proteins, which are just 3 nm in diameter and can be produced by cells at a desired location with atomic precision. These proteins possess similar optical and spin properties to those of NV centre-based qubits – namely that they have a metastable triplet state. The researchers used a near-infrared laser pulse to optically address a yellow fluorescent protein and read out its triplet spin state with up to 20% spin contrast. They then genetically modified the protein to be expressed in bacterial cells and measured signals with a contrast of up to 8%. They note that although this performance does not match that of NV quantum sensors, it could enable magnetic resonance measurements directly inside living cells, which NV centres cannot do. To Guangyu Zhang, Luojun Du and colleagues at the Institute of Physics of the Chinese Academy of Sciences for producing the first 2D sheets of metal. Since the discovery of graphene – a sheet of carbon just one atom thick – in 2004, hundreds of other 2D materials have been fabricated and studied. In most of these, layers of covalently bonded atoms are separated by gaps where neighbouring layers are held together only by weak van der Waals (vdW) interactions, making it relatively easy to “shave off” single layers to make 2D sheets. Many thought that making atomically thin metals, however, would be impossible given that each atom in a metal is strongly bonded to surrounding atoms in all directions. The technique developed by Zhang and Du and colleagues involves heating powders of pure metals between two monolayer-MoS2/sapphire vdW anvils. Once the metal powders are melted into a droplet, the researchers applied a pressure of 200 MPa and continued this “vdW squeezing” until the opposite sides of the anvils cooled to room temperature and 2D sheets of metal were formed. The team produced five atomically thin 2D metals – bismuth, tin, lead, indium and gallium – with the thinnest being around 6.3 Å. 
The researchers say their work is just the “tip of the iceberg” and they now aim to study fundamental physics with the new materials. To CERN’s BASE collaboration for being the first to perform coherent spin spectroscopy on a single antiproton – the antimatter counterpart of the proton. Their breakthrough is the most precise measurement yet of the antiproton’s magnetic properties, and could be used to test the Standard Model of particle physics. The experiment begins with the creation of high-energy antiprotons in an accelerator. These must be cooled (slowed down) to cryogenic temperatures without being lost to annihilation. Then, a single antiproton is held in an ultracold electromagnetic trap, where microwave pulses manipulate its spin state. The resulting resonance peak was 16 times narrower than previous measurements, enabling a significant leap in precision. This level of quantum control opens the door to highly sensitive comparisons of the properties of matter (protons) and antimatter (antiprotons). Unexpected differences could point to new physics beyond the Standard Model and may also reveal why there is much more matter than antimatter in the visible universe. To Richard Allen, director of the Berkeley Seismological Laboratory at the University of California, Berkeley, and Google’s Marc Stogaitis and colleagues for creating a global network of Android smartphones that acts as an earthquake early warning system. Traditional early warning systems use networks of seismic sensors that rapidly detect earthquakes in areas close to the epicentre and issue warnings across the affected region. Building such seismic networks, however, is expensive, and many earthquake-prone regions do not have them. The researchers utilized the accelerometers in millions of phones in 98 countries to create the Android Earthquake Alert (AEA) system. Testing the app between 2021 and 2024 led to the detection of an average of 312 earthquakes a month, with magnitudes ranging from 1.9 to 7.8. For earthquakes of magnitude 4.5 or higher, the system sent “TakeAction” alerts to users – on average 60 times per month, amounting to an average of 18 million individual alerts per month. The system also delivered lesser “BeAware” alerts to regions expected to experience a shaking intensity of 3 or 4. The team now aims to produce maps of ground shaking, which could assist the emergency response services following an earthquake. To Lisa Nortmann at Germany’s University of Göttingen and colleagues for creating the first detailed “weather map” of an exoplanet. The forecast for exoplanet WASP-127b is brutal, with winds reaching 33,000 km/hr – much faster than winds found anywhere in the Solar System. WASP-127b is a gas giant located about 520 light-years from Earth, and the team used the CRIRES+ instrument on the European Southern Observatory’s Very Large Telescope to observe the exoplanet as it transited across its star in less than 7 h. Spectral analysis of the starlight that filtered through WASP-127b’s atmosphere revealed Doppler shifts caused by supersonic equatorial winds. By analysing the range of Doppler shifts, the team created a rough weather map of WASP-127b, even though they could not resolve light coming from specific locations on the exoplanet. Nortmann and colleagues concluded that the exoplanet’s poles are cooler than the rest of WASP-127b, where temperatures can exceed 1000 °C. Water vapour was detected in the atmosphere, raising the possibility of exotic forms of rain. 
To the team led by Yichao Zhang at the University of Maryland and Pinshane Huang of the University of Illinois at Urbana-Champaign for capturing the highest-resolution images ever taken of individual atoms in a material. The team used an electron-microscopy technique called electron ptychography to achieve a resolution of 15 pm, which is about 10 times smaller than the size of an atom. They studied a stack of two atomically-thin layers of tungsten diselenide, which were rotated relative to each other to create a moiré superlattice. These twisted 2D materials are of great interest to physicists because their electronic properties can change dramatically with small changes in rotation angle. The extraordinary resolution of their microscope allowed them to visualize collective vibrations in the material called moiré phasons. These are similar to phonons, but had never been observed directly until now. The team’s observations align with theoretical predictions for moiré phasons. Their microscopy technique should boost our understanding of the role that moiré phasons and other lattice vibrations play in the physics of solids. This could lead to the engineering of new and useful materials. Physics World‘s coverage of the Breakthrough of the Year is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research. The post Top 10 Breakthroughs of the Year in physics for 2025 revealed appeared first on Physics World.
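For a sense of what the hollow-core fibre nomination means in practice, here is a minimal back-of-envelope sketch in Python. The 35% attenuation reduction is the figure quoted above; the absolute attenuation value for a conventional fibre and the 30 dB amplifier loss budget are illustrative assumptions, not numbers from the study.

```python
# Illustrative link-budget comparison for the hollow-core fibre result.
# Assumed: ~0.15 dB/km for a good conventional fibre and a 30 dB loss
# budget between amplifiers; only the 35% reduction is from the article.

conventional_loss = 0.15                            # dB/km (assumed)
hollow_core_loss = conventional_loss * (1 - 0.35)   # 35% less, per the article
loss_budget = 30.0                                  # dB between amplifiers (assumed)

for name, loss in [("conventional", conventional_loss),
                   ("hollow-core", hollow_core_loss)]:
    span = loss_budget / loss   # km a signal travels before needing amplification
    print(f"{name}: {loss:.3f} dB/km -> amplifier roughly every {span:.0f} km")
```

Because amplifier spacing scales inversely with attenuation, a 35% loss reduction stretches each span by about 50% under these assumptions, which is why fewer amplifiers would be needed in long cables.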
https://physicsworld.com/a/top-10-breakthroughs-of-the-year-in-physics-for-2025-revealed/
Space & Physics
svg
649896b401059c0a022de3f7293c2b47f01743346a82d956bff8f13053df7143
2025-12-11T14:27:25+00:00
Exploring this year’s best physics research in our Top 10 Breakthroughs of 2025
This episode of the Physics World Weekly podcast features a lively discussion about our Top 10 Breakthroughs of 2025, which include important research in quantum sensing, planetary science, medical physics, 2D materials and more. Physics World editors explain why we have made our selections and look at the broader implications of this impressive body of research. The top 10 serves as the shortlist for the Physics World Breakthrough of the Year award, the winner of which will be announced on 18 December. Links to all the nominees, more about their research and the selection criteria can be found here. Physics World‘s coverage of the Breakthrough of the Year is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research. The post Exploring this year’s best physics research in our Top 10 Breakthroughs of 2025 appeared first on Physics World.
https://physicsworld.com/a/exploring-this-years-best-physics-research-in-our-top-10-breakthroughs-of-2025/
Space & Physics
svg
8122321e58451fc1678c64dc7bdc764ffc3a2039f1a5f652b3ee54daf26834b7
2025-12-11T09:00:30+00:00
Astronomers observe a coronal mass ejection from a distant star
The Sun regularly produces energetic outbursts of electromagnetic radiation called solar flares. When these flares are accompanied by flows of plasma, they are known as coronal mass ejections (CMEs). Now, astronomers at the Netherlands Institute for Radio Astronomy (ASTRON) have spotted a similar event occurring on a star other than our Sun – the first unambiguous detection of a CME outside our solar system. Astronomers have long predicted that the radio emissions associated with CMEs from other stars should be detectable. However, Joseph Callingham, who led the ASTRON study, says that he and his colleagues needed the highly sensitive low-frequency radio telescope LOFAR – plus ESA’s XMM-Newton space observatory and “some smart software” developed by Cyril Tasse and Philippe Zarka at the Observatoire de Paris-PSL, France – to find one. Using these tools, the team detected short, intense radio signals from a star located around 40 light-years away from Earth. This star, called StKM 1-1262, is very different from our Sun. At only around half of the Sun’s mass, it is classed as an M-dwarf star. It also rotates 20 times faster and boasts a magnetic field 300 times stronger. Nevertheless, the burst it produced had the same frequency, timing and polarization properties as the plasma emission of a solar type II burst – an event that astronomers identify as a fast CME when it comes from the Sun. “This work opens up a new observational frontier for studying and understanding eruptions and space weather around other stars,” says Henrik Eklund, an ESA research fellow working at the European Space Research and Technology Centre (ESTEC) in Noordwijk, the Netherlands, who was not involved in the study. “We’re no longer limited to extrapolating our understanding of the Sun’s CMEs to other stars.” The high speed of this burst – around 2400 km/s – would be atypical for our own Sun, with only around 1 in every 20 solar CMEs reaching that level. However, the ASTRON team says that M-dwarfs like StKM 1-1262 could emit CMEs of this type as often as once a day. According to Eklund, this has implications for extraterrestrial life, as most of the known planets in the Milky Way are thought to orbit stars of this type, and such bursts could be powerful enough to strip their atmospheres. “It seems that intense space weather may be even more extreme around smaller stars – the primary hosts of potentially habitable exoplanets,” he says. “This has important implications for how these planets keep hold of their atmospheres and possibly remain habitable over time.” Erik Kuulkers, a project scientist at XMM-Newton who was also not directly involved in the study, suggests that this atmosphere-stripping ability could modify the way we hunt for life in stellar systems akin to our Solar System. “A planet’s habitability for life as we know it is defined by its distance from its parent star – whether or not it sits within the star’s ‘habitable zone’, a region where liquid water can exist on the surface of planets with suitable atmospheres,” Kuulkers says. “What if that star was especially active, regularly producing CMEs, however? A planet regularly bombarded by these ejections might lose its atmosphere entirely, leaving behind a barren uninhabitable world, despite its orbit being ‘just right’.” Kuulkers adds that the study’s results also contain lessons for our own Solar System. “Why is there still life on Earth despite the violent material being thrown at us?” he asks. 
“It is because we are safeguarded by our atmosphere.” The ASTRON team’s next step will be to look for more stars like StKM 1-1262, which Kuulkers agrees is a good idea. “The more events we can find, the more we learn about CMEs and their impact on a star’s environment,” he says. Additional observations at other wavelengths “would help”, he adds, “but we have to admit that events like the strong one reported on in this work don’t happen too often, so we also need to be lucky enough to be looking at the right star at the right time.” For now, the ASTRON researchers, who report their work in Nature, say they have reached the limit of what they can detect with LOFAR. “The next step is to use the next generation Square Kilometre Array, which will let us find many more such stars since it is so much more sensitive,” Callingham tells Physics World. The post Astronomers observe a coronal mass ejection from a distant star appeared first on Physics World.
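To get a feel for the timescales involved, the sketch below estimates how long a CME at the reported 2400 km/s would take to reach a close-in planet. The 0.1 au orbital distance is an assumption chosen to represent a temperate orbit around a small M-dwarf like StKM 1-1262, not a figure from the study.

```python
# Rough, constant-speed travel-time estimate for the detected CME.
# Only the 2400 km/s speed is from the article; the orbital distance
# is an illustrative assumption (real CMEs also decelerate en route).

AU_M = 1.496e11          # metres in one astronomical unit
v = 2400e3               # CME speed in m/s (reported value)
d = 0.1 * AU_M           # assumed orbital distance of a close-in planet

t = d / v
print(f"Travel time: {t/3600:.1f} hours")   # roughly 1.7 hours
```

A planet this close would have only a couple of hours between eruption and impact, which underlines why daily CMEs from such stars could be so punishing for planetary atmospheres.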
https://physicsworld.com/a/astronomers-observe-a-coronal-mass-ejection-from-a-distant-star/
Space & Physics
svg
3f313d100c8526adc2fb4e6915855aa91a15a2343fb46198e40f582d88643b79
2025-12-10T16:49:58+00:00
Sterile neutrinos: KATRIN and MicroBooNE come up empty handed
Two major experiments have found no evidence for sterile neutrinos – hypothetical particles that could help explain some puzzling observations in particle physics. The KATRIN experiment searched for sterile neutrinos that could be produced during the radioactive decay of tritium, whereas the MicroBooNE experiment looked for the effect of sterile neutrinos on the transformation of muon neutrinos into electron neutrinos. Neutrinos are low-mass subatomic particles with zero electric charge that interact with matter only via the weak nuclear force and gravity. This makes neutrinos difficult to detect, despite the fact that the particles are produced in copious numbers by the Sun, nuclear reactors and collisions in particle accelerators. Neutrinos were first proposed in 1930 to explain the apparent missing momentum, spin and energy in the radioactive beta decay of nuclei. They were first observed in 1956, and by 1975 physicists were confident that three types (flavours) of neutrino existed – electron, muon and tau – along with their respective antiparticles. At the same time, however, it was becoming apparent that something was amiss with the Standard Model description of neutrinos because the observed neutrino flux from sources like the Sun did not tally with theoretical predictions. Then, in the late 1990s, experiments in Canada and Japan revealed that neutrinos of one flavour transform into other flavours as they propagate through space. This quantum phenomenon is called neutrino oscillation and requires that neutrinos have both flavour and mass. Takaaki Kajita and Art McDonald shared the 2015 Nobel Prize for Physics for this discovery – but that is not the end of the story. One gaping hole in our knowledge is that physicists do not know the neutrino masses – having only measured upper limits for the three flavours. Furthermore, there is some experimental evidence that the current Standard Model description of neutrino oscillation is not quite right. This includes lower-than-expected neutrino fluxes from some beta-decaying nuclei and some anomalous oscillations in neutrino beams. One possible explanation for these oscillation anomalies is the existence of a fourth type of neutrino. Because we have yet to detect this particle, the assumption is that it does not interact via the weak interaction – which is why these hypothetical particles are called sterile neutrinos. Now, two very different neutrino experiments have both reported no evidence of sterile neutrinos. One is KATRIN, which is located at the Karlsruhe Institute of Technology (KIT) in Germany. It has the prime mission of making a very precise measurement of the mass of the electron antineutrino. The idea is to measure the energy spectrum of electrons emitted in the beta decay of tritium and infer an upper limit on the mass of the electron antineutrino from the shape of the curve. If sterile neutrinos exist, then they could sometimes be emitted in place of electron antineutrinos during beta decay. This would change the electron energy spectrum – but this was not observed at KATRIN. “In the measurement campaigns underlying this analysis, we recorded over 36 million electrons and compared the measured spectrum with theoretical models. We found no indication of sterile neutrinos,” says Kathrin Valerius of the Institute for Astroparticle Physics at KIT and co-spokesperson of the KATRIN collaboration. 
Meanwhile, physicists on the MicroBooNE experiment at Fermilab in the US have looked for evidence of sterile neutrinos in how muon neutrinos oscillate into electron neutrinos. Beams of muon neutrinos are created by firing a proton beam at a solid target. At Fermilab, the neutrinos then travel several hundred metres (in part through solid ground) to MicroBooNE’s liquid-argon time projection chamber. This detects electron neutrinos with high spatial and energy resolution, allowing detailed studies of neutrino oscillations. If sterile neutrinos exist, they would be involved in the oscillation process and would therefore affect the number of electron neutrinos detected by MicroBooNE. Neutrino beams from two different sources were used in the experiments, but no evidence for sterile neutrinos was found. Together, these two experiments rule out sterile neutrinos as an explanation for some – but not all – previously observed oscillation anomalies. So more work is needed to fully understand neutrino physics. Indeed, current and future neutrino experiments are well placed to discover physics beyond the Standard Model, which could lead to solutions to some of the greatest mysteries of physics. “Any time you rule out one place where physics beyond the Standard Model could be, that makes you look in other places,” says Justin Evans at the UK’s University of Manchester, who is co-spokesperson for MicroBooNE. “This is a result that is going to really spur a creative push in the neutrino physics community to come up with yet more exciting ways of looking for new physics.” Both groups report their results in papers in Nature: KATRIN paper; MicroBooNE paper. The post Sterile neutrinos: KATRIN and MicroBooNE come up empty handed appeared first on Physics World.
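The oscillation physics being tested can be summarized by the standard two-flavour appearance probability, sketched below in Python. The formula is textbook material; the baseline, energy and sterile-sector parameters in the example are illustrative placeholders, not values from either experiment.

```python
import math

# Two-flavour appearance probability, the approximation commonly used to
# interpret short-baseline anomalies. A sterile neutrino would add an
# extra oscillation with its own mass splitting and mixing angle.

def p_mue(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """P(nu_mu -> nu_e): sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative short-baseline example: ~0.5 km baseline, ~0.8 GeV neutrinos,
# with sterile-sector parameters of the order suggested by past anomalies.
print(p_mue(L_km=0.47, E_GeV=0.8, sin2_2theta=0.003, dm2_eV2=1.0))  # ~0.0014
```

The tiny probability is the crux of the experimental challenge: any sterile-driven excess of electron neutrinos would appear at the per-mille level, which is why high-resolution detectors and large event samples are needed.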
https://physicsworld.com/a/sterile-neutrinos-katrin-and-microboone-come-up-empty-handed/
Space & Physics
svg
11f33c49d55270a6efe1b50fdcef5fdc54c8cc52a86639aace62ec098537e7b0
2025-12-10T14:00:03+00:00
Bridging borders in medical physics: guidance, challenges and opportunities
As the world population ages and the incidence of cancer and cardiac disease grows with it, there’s an ever-increasing need for reliable and effective diagnostics and treatments. Medical physics plays a central role in both of these areas – from the development of a suite of advanced diagnostic imaging modalities to the ongoing evolution of high-precision radiotherapy techniques. But access to medical physics resources – whether equipment and infrastructure, education and training programmes, or the medical physicists themselves – is massively imbalanced around the world. In low- and middle-income countries (LMICs), fewer than 50% of patients have access to radiotherapy, with similar shortfalls in the availability of medical imaging equipment. Lower-income countries also have the fewest medical physicists per capita. This disparity has led to an increasing interest in global health initiatives, with professional organizations looking to provide support to medical physicists in lower-income regions. At the same time, medical physicists and other healthcare professionals seek to collaborate internationally in clinical, educational and research settings. Successful multicultural collaborations, however, can be hindered by cultural, language and ethical barriers, as well as issues such as poor access to the internet and the latest technology advances. And medical physicists trained in high-income contexts may not always understand the circumstances and limitations of those working within lower-income environments. Aiming to overcome these obstacles, a new book entitled Global Medical Physics: A Guide for International Collaboration provides essential guidance for those looking to participate in such initiatives. The text addresses the various complexities of partnering with colleagues in different countries and working within diverse healthcare environments, encompassing clinical and educational medical physics circles, as well as research and academic environments. “I have been involved in providing support to medical physicists in lower income contexts for a number of years, especially through the International Atomic Energy Agency (IAEA), but also through professional organizations like the American Association of Physicists in Medicine (AAPM),” explains the book’s editor Jacob Van Dyk, emeritus professor at Western University in Canada. “It is out of these experiences that I felt it might be appropriate and helpful to provide some educational materials that address these issues. The outcome was this book, with input from those with these collaborative experiences.” The book brings together contributions from 34 authors across 21 countries, including both high- and low-resource settings. The authors – selected for their expertise and experience in global health and medical physics activities – provide guidelines for success, as well as noting potential barriers and concerns, on a wide range of themes targeted at multiple levels of expertise. This guidance includes, for example: advice on how medical physicists can contribute to educational, clinical and research-based global collaborations and the associated challenges; recommendations on building global inter-institutional collaborations, covering administrative, clinical and technical challenges and ethical issues; and a case study on the Radiation Planning Assistant project, which aims to use automated contouring and treatment planning to assist radiation oncologists in LMICs. 
In another chapter, the author describes the various career paths available to medical physicists, highlighting how they can help address the disparity in healthcare resources through their careers. There’s also a chapter focusing on CERN as an example of a successful collaboration engaging a worldwide community, including a discussion of CERN’s involvement in collaborative medical physics projects. With the rapid emergence of artificial intelligence (AI) in healthcare, the book takes a look at the role of information and communication technologies and AI within global collaborations. Elsewhere, authors highlight the need for data sharing in medical physics, describing example data sharing applications and technologies. Other chapters consider the benefits of cross-sector collaborations with industry, sustainability within global collaborations, the development of effective mentoring programmes – including a look at challenges faced by LMICs in providing effective medical physics education and training – and equity, diversity and inclusion and ethical considerations in the context of global medical physics. The book rounds off by summarizing the key topics discussed in the earlier chapters. This information is divided into six categories: personal factors, collaboration details, project preparation, planning and execution, and post-project considerations. “Hopefully, the book will provide an awareness of factors to consider when involved in global international collaborations, not only from a high-income perspective but also from a resource-constrained perspective,” says Van Dyk. “It was for this reason that when I invited authors to develop chapters on specific topics, they were encouraged to invite a co-author from another part of the world, so that it would broaden the depth of experience.” The post Bridging borders in medical physics: guidance, challenges and opportunities appeared first on Physics World.
https://physicsworld.com/a/bridging-borders-in-medical-physics-guidance-challenges-and-opportunities/
Space & Physics
svg
b01d5f7b7c03e7585aac64924b238730974f15d017dd9262387ab2b20744e874
2025-12-10T11:00:15+00:00
Can we compare Donald Trump’s health chief to Soviet science boss Trofim Lysenko?
The US has turned Trofim Lysenko into a hero. Born in 1898, Lysenko was a Ukrainian plant breeder who in 1927 found he could make pea and grain plants develop at different rates by applying the right temperatures to their seeds. The Soviet news organ Pravda was enthusiastic, saying his discovery could make crops grow in winter, turn barren fields green, feed starving cattle and end famine. Despite having trained as a horticulturist, Lysenko rejected the then-emerging science of genetics in favour of Lamarckism, according to which organisms can pass on acquired traits to offspring. This meshed well with the Soviet philosophy of “dialectical materialism”, which sees both the natural and human worlds as shaped not by innate mechanisms but by environment. Stalin took note of Lysenko’s activities and had him installed as head of key Soviet science agencies. Once in power, Lysenko dismissed scientists who opposed his views, cancelled their meetings, funded studies of discredited theories, and stocked committees with loyalists. Although Lysenko had lost his influence by the time Stalin died in 1953 – with even Pravda having turned against him – Soviet agricultural science had been destroyed. Lysenko’s views and actions have a resonance today when considering the activities of Robert F Kennedy Jr, who was appointed by Donald Trump as secretary of the US Department of Health and Human Services in February 2025. Of course, Trump has repeatedly sought to impose his own agenda on US science, with his destructive impact outlined in a detailed report published by the Union of Concerned Scientists in July 2025. Last May Trump signed executive order 14303, “Restoring Gold Standard Science”, which blasts scientists for not acting “in the best interests of the public”. He has withdrawn the US from the World Health Organization (WHO), ordered that federally sponsored research fund his own priorities, redefined the hazards of global warming, and cancelled the US National Climate Assessment (NCA), which had been running since 2000. But after Trump appointed Kennedy, the assault on science continued into US medicine, health and human services. In what might be called a philosophy of “political materialism”, Kennedy fired all 17 members of the Advisory Committee on Immunization Practices of the US Centers for Disease Control and Prevention (CDC), cancelled nearly $500m in mRNA vaccine contracts, hired a vaccine sceptic to study the supposed connection between vaccines and autism despite numerous studies showing no such link, and ordered the CDC to revise its website to reflect his own views on the cause of autism. In his 2021 book The Real Anthony Fauci: Bill Gates, Big Pharma, and the Global War on Democracy and Public Health, Kennedy promotes not germ theory but what he calls “miasma theory”, according to which diseases are prevented by nutrition and lifestyle. Of course, there are fundamental differences between the 1930s Soviet Union and the 2020s United States. Stalin murdered and imprisoned his opponents, while the US administration only defunds and fires them. Stalin and Lysenko were not voted in, while Trump came democratically to power, with elected representatives confirming Kennedy. Kennedy has also apologized for his most inflammatory remarks, though Stalin and Lysenko never did (nor does Trump for that matter). What’s more, Stalin’s and Lysenko’s actions were more grounded in apparent scientific realities and social vision than Trump’s or Kennedy’s. 
Stalin substantially built up much of the Soviet science and technology infrastructure, whose dramatic successes include launching the first Earth satellite, Sputnik, in 1957. Though it strains credulity to praise Stalin, his vision to expand Soviet agricultural production during a famine was at least plausible and its intention could be portrayed as humanitarian. Lysenko was a scientist; Kennedy is not. As for Lysenko, his findings seemed to carry on those of his scientific predecessors. Experimentally, he expanded the work of Russian botanist Ivan Michurin, who bred new kinds of plants able to grow in different regions. Theoretically, his work connected not only with dialectical materialism but also with that of the French naturalist Jean-Baptiste Lamarck, who claimed that acquired traits can be inherited. Trump and Kennedy are off-the-wall by comparison. Trump has called climate change a con job and a hoax and seeks to stop research that says otherwise. In 2019 he falsely stated that Hurricane Dorian was predicted to hit Alabama, then ordered the National Oceanic and Atmospheric Administration to issue a statement supporting him. Trump has said he wants the US birth rate to rise and that he will be the “fertilization president”, but later fired fertility and IVF researchers at the CDC. As for Kennedy, he has said that COVID-19 “is targeted to attack Caucasians and Black people” and that Ashkenazi Jews and Chinese are the most immune (he disputed the remark, but it’s on video). He has also sought to retract a 2025 vaccine study from the Annals of Internal Medicine (178 1369) that directly refuted his views on autism. US Presidents often have pet scientific projects. Harry Truman created the National Science Foundation, Dwight D Eisenhower set up NASA, John F Kennedy started the Apollo programme, while Richard Nixon launched the Environmental Protection Agency (EPA) and the War on Cancer. But it’s one thing to support science that might promote a political agenda and another to quash science that will not. One ought to be able to take comfort in the fact that if you fight nature, you lose – except that the rest of us lose as well. Thanks to Lysenko’s actions, the Soviet Union lost millions of tons of grain and hundreds of herds of cattle. The promise of his work evaporated and Stalin’s dreams vanished. Lysenko, at least, was motivated by seeming scientific promise and social vision; today’s US assault on science has neither. Trump has damaged the most important US scientific agencies, destroyed databases and eliminated the EPA’s research arm, while Kennedy has replaced health advisory committees with party loyalists. While Kennedy may not last his term – most Trump Cabinet officials don’t – the paths on which he has set science policy surely will. For Trump and Kennedy, the policy seems to consist only of supporting pet projects. Meanwhile, cases of measles in the US have reached their highest level in three decades, the seas continue to rise and the climate is changing. It is hard to imagine how enemy agents could damage US science more effectively. The post Can we compare Donald Trump’s health chief to Soviet science boss Trofim Lysenko? appeared first on Physics World.
https://physicsworld.com/a/can-we-compare-donald-trumps-health-chief-to-soviet-science-boss-trofim-lysenko/
Space & Physics
svg
1a823f04356b820813bdc72bfcb8d2ba2cbefa08ddc1a10fbf16df83b9f700d6
2025-12-10T09:19:15+00:00
Diagnosing brain cancer without a biopsy
Early diagnosis of primary central nervous system lymphoma (PCNSL) remains challenging because brain biopsies are invasive and imaging often lacks molecular specificity. A team led by researchers at Shenzhen University has now developed a minimally invasive fibre-optic plasmonic sensor capable of detecting PCNSL-associated microRNAs in the eye’s aqueous humor with attomolar sensitivity. At the heart of the approach is a black phosphorus (BP)–engineered surface plasmon resonance (SPR) interface. An ultrathin BP layer is deposited on a gold-coated fibre tip. Because of the work-function difference between BP and gold, electrons transfer from BP into the Au film, creating a strongly enhanced local electric field at the metal–semiconductor interface. This BP–Au charge-transfer nano-interface amplifies refractive-index changes at the surface far more efficiently than conventional metal-only SPR chips, enabling the detection of molecular interactions that would otherwise be too subtle to resolve and pushing the limit of detection down to 21 attomolar without nucleic-acid amplification. The BP layer also provides a high-surface-area, biocompatible surface for immobilizing RNA reporters. To achieve sequence specificity, the researchers integrated CRISPR-Cas13a, an RNA-guided nuclease that becomes catalytically active only when its target sequence is perfectly matched to a designed CRISPR RNA (crRNA). When the target microRNA (miR-21) is present, activated Cas13a cleaves RNA reporters attached to the BP-modified fibre surface, releasing gold nanoparticles and reducing the local refractive index. The resulting optical shift is read out in real time through the SPR response of the BP-enhanced fibre probe, providing single-nucleotide-resolved detection directly on the plasmonic interface. With this combined strategy, the sensor achieved a limit of detection of 21 attomolar in buffer and successfully distinguished single-base-mismatched microRNAs. In tests on aqueous-humor samples from patients with PCNSL, the CRISPR-BP-FOSPR assay produced results that closely matched clinical qPCR data, despite operating without any amplification steps. Because aqueous-humor aspiration is a minimally invasive ophthalmic procedure, this BP-driven plasmonic platform may offer a practical route for early PCNSL screening, longitudinal monitoring, and potentially the diagnosis of other neurological diseases reflected in eye-fluid biomarkers. More broadly, the work showcases how black-phosphorus-based charge-transfer interfaces can be used to engineer next-generation, fibre-integrated biosensors that combine extreme sensitivity with molecular precision. Ultra-sensitive detection of microRNA in intraocular fluid using optical fiber sensing technology for central nervous system lymphoma diagnosis Yanqi Ge et al 2025 Rep. Prog. Phys. 88 070502 Do you want to learn more about this topic? Theoretical and computational tools to model multistable gene regulatory networks by Federico Bocci, Dongya Jia, Qing Nie, Mohit Kumar Jolly and José Onuchic (2023) The post Diagnosing brain cancer without a biopsy appeared first on Physics World.
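To put the 21-attomolar limit of detection into perspective, the short sketch below converts it into an absolute number of molecules. The 50 µl sample volume is an assumption chosen for illustration, not a figure from the study.

```python
# How few molecules does a 21-attomolar limit of detection correspond to?
# Only the 21 aM figure is from the article; the sample volume is assumed.

N_A = 6.022e23           # Avogadro's number, per mole
lod = 21e-18             # 21 attomolar, in mol/L
volume_L = 50e-6         # assumed aqueous-humor sample volume, 50 microlitres

molecules = lod * volume_L * N_A
print(f"~{molecules:.0f} target microRNA molecules in the sample")  # ~632
```

Detecting a few hundred molecules in an entire sample, without amplification, illustrates why the field enhancement at the BP–Au interface is central to the result.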
https://physicsworld.com/a/diagnosing-brain-cancer-without-a-biopsy/
Space & Physics
svg
3b5663b5abd7d6cf3a685775c6298ad445e49befd620bc315289f63452ece706
2025-12-10T09:18:58+00:00
5f electrons and the mystery of δ-plutonium
Plutonium is a fascinating element. It was first chemically isolated in 1941 at the University of California, but its discovery was hidden until after the Second World War. There are six distinct allotropic phases of plutonium with very different properties. At ambient pressure, continuously increasing the temperature converts the room-temperature, simple monoclinic α phase through five phase transitions, the final one occurring at approximately 450 °C. The delta (δ) phase is perhaps the most interesting allotrope of plutonium. δ-plutonium is technologically important and has a very simple crystal structure, but its electronic structure has been debated for decades. Researchers have attempted to understand its anomalous behaviour and how the properties of δ-plutonium are connected to the 5f electrons. The 5f electrons are found in the actinide group of elements, which includes plutonium. Their behaviour is counterintuitive. They are sensitive to temperature, pressure and composition, and behave both in a localised manner, staying close to the nucleus, and in a delocalised (itinerant) manner, spreading out and contributing to bonding. Both these states can support magnetism depending on the actinide element. The 5f electrons contribute to δ-phase stability, anomalies in the material’s volume and bulk modulus, and to a negative thermal expansion, whereby the δ phase shrinks when heated. In this work, the researchers present a comprehensive model to predict the thermodynamic behaviour of δ-plutonium, which has a face-centred cubic structure. They use density functional theory, a computational technique that explores the overall electron density of the system, and incorporate relativistic effects to capture the behaviour of fast-moving electrons and complex magnetic interactions. The model includes a parameter-free orbital polarization mechanism to account for orbital-orbital interactions, and incorporates anharmonic lattice vibrations and magnetic fluctuations, both transverse and longitudinal modes, driven by temperature-induced excitations. Importantly, it is shown that negative thermal expansion results from magnetic fluctuations. This is the first model to integrate electronic effects, magnetic fluctuations, and lattice vibrations into a cohesive framework that aligns with experimental observations and semi-empirical models such as CALPHAD. It also accounts for fluctuating states beyond the ground state and explains how gallium composition influences thermal expansion. Additionally, the model captures the positive thermal expansion behaviour of the high-temperature epsilon phase, offering new insight into plutonium’s complex thermodynamics. First principles free energy model with dynamic magnetism for δ-plutonium Per Söderlind et al 2025 Rep. Prog. Phys. 88 078001 Do you want to learn more about this topic? Pu 5f population: the case for n = 5.0 J G Tobin and M F Beaux II (2025) The post 5f electrons and the mystery of δ-plutonium appeared first on Physics World.
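Schematically, free-energy models of this kind decompose the Helmholtz free energy into additive contributions and obtain the thermal expansion from its minimum. The expression below is a generic sketch of that structure, written in LaTeX, and is not the authors' exact formulation.

```latex
% Generic structure of such a free-energy model (a sketch, not the paper's form):
F(V,T) = E_{0}(V) + F_{\mathrm{el}}(V,T) + F_{\mathrm{vib}}(V,T) + F_{\mathrm{mag}}(V,T)
% The equilibrium volume minimizes F at each temperature,
V(T) = \arg\min_{V} F(V,T),
% and the linear thermal expansion coefficient follows as
\alpha(T) = \frac{1}{3V}\frac{\mathrm{d}V(T)}{\mathrm{d}T}.
% Negative thermal expansion arises when one contribution (here F_mag)
% shifts the minimum of F towards smaller volumes as T increases.
```

In this picture, the article's key claim is that the magnetic-fluctuation term, not the vibrational one, is what drags the free-energy minimum to smaller volumes with rising temperature in the δ phase.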
https://physicsworld.com/a/5f-electrons-and-the-mystery-of-%ce%b4-plutonium/
Space & Physics
svg
7805e03c49b6f88d0a4c30577dab82f7ccf4c19d7d999743b943f65bf3f9e0a6
2025-12-10T08:58:07+00:00
Scientists explain why ‘seeding’ clouds with silver iodide is so efficient
Silver iodide crystals have long been used to “seed” clouds and trigger precipitation, but scientists have never been entirely sure why the material works so well for that purpose. Researchers at TU Wien in Austria are now a step closer to solving the mystery thanks to a new study that characterized surfaces of the material in atomic-scale detail. “Silver iodide has been used in atmospheric weather modification programs around the world for several decades,” explains Jan Balajka from TU Wien’s Institute of Applied Physics, who led this research. “In fact, it was chosen for this purpose as far back as the 1940s because of its atomic crystal structure, which is nearly identical to that of ice – it has the same hexagonal symmetry and very similar distances between atoms in its lattice structure.” The basic idea, Balajka continues, originated with the 20th-century American atmospheric scientist Bernard Vonnegut, who suggested in 1947 that introducing small silver iodide (AgI) crystals into a cloud could provide nuclei for ice to grow on. But while Vonnegut’s proposal worked (and helped to inspire his brother Kurt’s novel Cat’s Cradle), this simple picture is not entirely accurate. The stumbling block is that nucleation occurs at the surface of a crystal, not inside it, and the atomic structure of an AgI surface differs significantly from its interior. To investigate further, Balajka and colleagues used high-resolution atomic force microscopy (AFM) and advanced computer simulations to study the atomic structure of 2‒3 nm diameter AgI crystals when they are broken into two pieces. The team’s measurements revealed that the surfaces of both freshly cleaved structures differed from those found inside the crystal. More specifically, team member Johanna Hütner, who performed the experiments, explains that when an AgI crystal is cleaved, the silver atoms end up on one side while the iodine atoms appear on the other. This has implications for ice growth, because while the silver side maintains a hexagonal arrangement that provides an ideal template for the growth of ice layers, the iodine side reconstructs into a rectangular pattern that no longer lattice-matches the hexagonal symmetry of ice crystals. The iodine side is therefore incompatible with the epitaxial growth of hexagonal ice. “Our work solves this decades-long controversy of the surface vs bulk structure of AgI, and shows that structural compatibility does matter,” Balajka says. According to Balajka, the team’s experiments were far from easy. Many experimental methods for studying the structure and properties of material surfaces are based on interactions with charged particles such as electrons or ions, but AgI is an electrical insulator, which “excludes most of the tools available,” he explains. Using AFM enabled them to overcome this problem, he adds, because this technique detects interatomic forces between a sharp tip and the surface and does not require a conductive sample. Another problem is that AgI is photosensitive and decomposes when exposed to visible light. While this property is useful in other contexts – AgI was a common ingredient in early photographic plates – it created complications for the TU Wien team. “Conventional AFM setups make use of optical laser detection to map the topography of a sample,” Balajka notes. To avoid destroying their sample while studying it, the researchers therefore had to use a non-contact AFM based on a piezoelectric sensor that detects electrical signals and does not require optical readout. 
They also adapted their setup to operate in near-darkness, using only red light while manipulating the AgI to ensure that stray light did not degrade the samples. The computational modelling part of the work introduced yet another hurdle to overcome. “Both Ag and I are atoms with a high number of electrons in their electron shells and are thus highly polarizable,” Balajka explains. “The interaction between such atoms cannot be accurately described by standard computational modelling methods such as density functional theory (DFT), so we had to employ highly accurate random-phase approximation (RPA) calculations to obtain reliable results.” The researchers acknowledge that their study, which is detailed in Science Advances, was conducted under highly controlled conditions – ultrahigh vacuum, low temperatures and a dark environment – that are very different from those that prevail inside real clouds. “The next logical step for us is therefore to confirm whether our findings hold under more representative conditions,” Balajka says. “We would like to find out whether the structure of AgI surfaces is the same in air and water, and if not, why.” The researchers would also like to better understand the atomic arrangement of the rectangular reconstruction of the iodine surface. “This would complete the picture for the use of AgI in ice nucleation, as well as our understanding of AgI as a material overall,” Balajka says. The post Scientists explain why ‘seeding’ clouds with silver iodide is so efficient appeared first on Physics World.
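For context on why the lattice match matters so much, the sketch below computes the basal-plane mismatch between hexagonal AgI and ice Ih. The two lattice constants are approximate literature values assumed for illustration, not measurements from this study.

```python
# Approximate basal-plane lattice mismatch between hexagonal (wurtzite)
# AgI and ice Ih. Both lattice constants are assumed, rounded literature
# values, quoted only to illustrate the ~1-2% match Vonnegut exploited.

a_agi = 4.59   # Angstrom, basal-plane lattice constant of beta-AgI (approx.)
a_ice = 4.52   # Angstrom, basal-plane lattice constant of ice Ih (approx.)

mismatch = (a_agi - a_ice) / a_ice * 100
print(f"Basal-plane mismatch: {mismatch:.1f}%")   # roughly 1.5%
```

A mismatch this small is what makes the hexagonal silver-terminated face such a good epitaxial template, and the TU Wien result shows that the reconstructed, rectangular iodine face forfeits exactly this advantage.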
https://physicsworld.com/a/scientists-explain-why-seeding-clouds-with-silver-iodide-is-so-efficient/
Space & Physics
svg
1fade7e3363ab7e99faedebf9fe0a4cb914172e968bd3979ea65800083309fa2
2025-12-09T17:22:00+00:00
Slow spectroscopy sheds light on photodegradation
Using a novel spectroscopy technique, physicists in Japan have revealed how organic materials accumulate electrical charge through long-term illumination by sunlight – leading to material degradation. Ryota Kabe and colleagues at the Okinawa Institute of Science and Technology have shown how charge separation occurs gradually via a rare multi-photon ionization process, offering new insights into how plastics and organic semiconductors degrade in sunlight. In a typical organic solar cell, an electron-donating material is interfaced with an electron acceptor. When the donor absorbs a photon, one of its electrons may jump across the interface, creating a bound electron–hole pair that may eventually dissociate into two free charges from which useful electrical work can be extracted. Although such an interface vastly boosts the efficiency of this process, it is not necessary for charge separation to occur when an electron donor is illuminated. “Even single-component materials can generate tiny amounts of charge via multiphoton ionization,” Kabe explains. “However, experimental evidence has been scarce due to the extremely low probability of this process.” To trigger charge separation in this way, an electron needs to absorb one or more additional photons while in its excited state. Since the vast majority of electrons fall back into their ground states before this can happen, the spectroscopic signature of this charge separation is very weak. This makes it incredibly difficult to detect using conventional spectroscopy techniques, which can generally only make observations over timescales of up to a few milliseconds. “While weak multiphoton pathways are easily buried under much stronger excited-state signals, we took the opposite approach in our work,” says Kabe. “We excited samples for long durations and searched for traces of accumulated charges in the slow emission decay.” Key to this approach was an electron donor called NPD. This organic material has a relatively long triplet lifetime: an excited electron can become trapped in a triplet state from which it is slow to return to the ground state. As a result, these molecules emit phosphorescence over relatively long timescales. In addition, Kabe’s team dispersed their NPD samples into different host materials with carefully selected energy levels. In one medium, the energies of both the highest-occupied and lowest-unoccupied molecular orbitals lay below NPD’s corresponding levels, so that the host material acted as an electron acceptor. As a result, charge transfer occurred in the same way as it would across a typical donor-acceptor interface. Yet in another medium, the host’s lowest-unoccupied orbital lay above NPD’s – blocking charge transfer and allowing triplet states to accumulate instead. In this case, the only way for charge separation to occur was through multi-photon ionization. Since NPD’s long triplet lifetime allowed its electrons to be excited gradually over an extended period of illumination, its weak charge accumulation became detectable through slow emission decay analysis. In contrast, more conventional methods involve multiple, ultra-fast laser pulses, severely restricting the timescale over which measurements can be made. Altogether, this approach enabled the team to clearly distinguish between the two charge generation pathways. “Using this method, we confirmed that charge generation occurred via resonance-enhanced multiphoton ionization mediated by long-lived triplet states, even in single-component organic materials,” Kabe adds. 
This result offers insights into how plastics and organic semiconductors are degraded by sunlight over years or decades. The conventional explanation is that sunlight generates free radicals. These are molecules that lose an electron through ionization, leaving behind an unpaired electron which readily reacts with other molecules in the surrounding environment. Since photodegradation unfolds over such a long timescale, researchers could not observe this charge generation in single-component organic materials – until now. “The method will be useful for analysing charge behaviour in organic semiconductor devices and for understanding long-term processes such as photodegradation that occur gradually under continuous light exposure,” Kabe says. The research is described in Science Advances. The post Slow spectroscopy sheds light on photodegradation appeared first on Physics World.
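The accumulation mechanism can be illustrated with a toy rate model: weak continuous excitation feeds a long-lived triplet reservoir, and a rare second step ionizes from it. All numbers below are invented placeholders chosen to show the shape of the kinetics, not the authors' fitted parameters.

```python
import numpy as np

# Toy kinetics for slow charge generation via a long-lived triplet state.
# G feeds the triplet population N_T, which decays with lifetime tau;
# a rare ionization step (rate k_ion per triplet) accumulates charges.
# All rates are assumed, order-of-magnitude placeholders.

G = 1e3        # triplet generation rate, per second (assumed)
tau = 1.0      # triplet lifetime, seconds (long, as for NPD phosphorescence)
k_ion = 1e-6   # ionization rate per triplet per second (assumed rare)

t = np.linspace(0, 10, 1000)             # seconds of continuous illumination
N_T = G * tau * (1 - np.exp(-t / tau))   # triplet reservoir builds up, saturates
charges = k_ion * np.cumsum(N_T) * (t[1] - t[0])  # integrated ionization events

print(f"Steady-state triplet population: {N_T[-1]:.0f}")
print(f"Accumulated charges after 10 s: {charges[-1]:.4f}")  # tiny, hence hard to see
```

The two printed numbers capture the experimental logic: the triplet reservoir saturates quickly, while the ionized-charge count grows only linearly and remains minuscule, which is why long illumination and slow-decay readout were needed to see it at all.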
https://physicsworld.com/a/slow-spectroscopy-sheds-light-on-photodegradation/
Space & Physics
svg
256cfeab3b95997a8bf945641d48ce09640fdd97af51e9e09bfa47787d4d98d4
2025-12-09T14:59:37+00:00
Fermilab opens new building dedicated to Tevatron pioneer Helen Edwards
Fermilab has officially opened a new building named after the particle physicist Helen Edwards. Officials from the lab and the US Department of Energy (DOE) opened the Helen Edwards Engineering Research Center at a ceremony held on 5 December. The new building is Fermilab’s largest purpose-built laboratory and office space since the iconic Wilson Hall, which was completed in 1974. Construction of the Helen Edwards Engineering Research Center began in 2019 and was completed three years later. The centre is a 7500 m² multi-storey laboratory and office building that is adjacent and connected to Wilson Hall. The new centre is designed as a collaborative lab where engineers, scientists and technicians design, build and test technologies across several areas of research such as neutrino science, particle detectors, quantum science and electronics. The centre also features cleanrooms, vibration-sensitive labs and cryogenic facilities in which the components of the near detector for the Deep Underground Neutrino Experiment will be assembled and tested. With a PhD in experimental particle physics from Cornell University, Edwards was heavily involved with commissioning the university’s 10 GeV electron synchrotron. In 1970 Fermilab’s director Robert Wilson appointed Edwards as associate head of the lab’s booster section and she later became head of the accelerator division. While at Fermilab, Edwards’ primary responsibility was designing, constructing, commissioning and operating the Tevatron, which led to the discoveries of the top quark in 1995 and the tau neutrino in 2000. Edwards retired in the early 1990s but continued to work as a guest scientist at Fermilab and officially switched the Tevatron off during a ceremony held on 30 September 2011. Edwards died in 2016. Darío Gil, the undersecretary for science at the DOE, says that Edwards’ scientific work “is a symbol of the pioneering spirit of US research”. “Her contributions to the Tevatron and the lab helped the US become a world leader in the study of elementary particles,” notes Gil. “We honour her legacy by naming this research centre after her as Fermilab continues shaping the next generation of research using [artificial intelligence], [machine learning] and quantum physics.” The post Fermilab opens new building dedicated to Tevatron pioneer Helen Edwards appeared first on Physics World.
https://physicsworld.com/a/fermilab-opens-new-building-dedicated-to-tevatron-pioneer-helen-edwards/
Space & Physics
svg
9fe2bccd859d6af6e02adbee66f09654d8e9639be70024558f43c29ffca3f458
2025-12-09T09:52:54+00:00
Memristors could measure a single quantum of resistance
A proposed new way of defining the standard unit of electrical resistance would do away with the need for strong magnetic fields when measuring it. The new technique is based on memristors, which are programmable resistors originally developed as building blocks for novel computing architectures, and its developers say it would considerably simplify the experimental apparatus required to measure a single quantum of resistance for some applications. Electrical resistance is a physical quantity that represents how much a material opposes the flow of electrical current. It is measured in ohms (Ω), and since 2019, when the base units of the International System of Units (SI) were most recently revised, the ohm has been defined in terms of the von Klitzing constant h/e², where h and e are the Planck constant and the charge on an electron, respectively. To measure this resistance with high precision, scientists use the fact that the von Klitzing constant is related to the quantized change in the Hall resistance of a two-dimensional electron system (such as the one that forms in a semiconductor heterostructure) in the presence of a strong magnetic field. This quantized change in resistance is known as the quantum Hall effect (QHE), and in a material like GaAs or AlGaAs, it shows up at fields of around 10 T. Generating such high fields typically requires a superconducting electromagnet, however. Researchers connected to a European project called MEMQuD are now advocating a completely different approach. Their idea is based on memristors, which are programmable resistors that “remember” their previous resistance state even after they have been switched off. This resistance state can be changed by applying a voltage or current. In the new work, a team led by Gianluca Milano of Italy’s Istituto Nazionale di Ricerca Metrologica (INRiM); Vitor Cabral of the Instituto Português da Qualidade; and Ilia Valov of the Institute of Electrochemistry and Energy Systems at the Bulgarian Academy of Sciences studied a device based on memristive nanoionic cells made from conducting filaments of silver. When an electric field is applied to these filaments, their conductance changes in distinct, quantized steps. The MEMQuD team reports that the quantum conductance levels achieved in this set-up are precise enough to be exploited as intrinsic standard values. Indeed, a large inter-laboratory comparison confirmed that the values deviated by just -3.8% and 0.6% from the agreed SI values for the fundamental quantum of conductance, G0, and 2G0, respectively. The researchers attribute this precision to tight, atomic-level control over the morphology of the nanochannels responsible for quantum conductance effects, which they achieved by electrochemically polishing the silver filaments into the desired configuration. The researchers say their results are building towards a concept known as an “NMI-in-a-chip” – that is, condensing the services of a national metrology institute into a microchip. 
“This could lead to measuring devices that have their resistance references built-in directly into the chip,” says Milano, “so doing away with complex measurements in laboratories and allowing for devices with zero-chain traceability – that is, those that do not require calibration since they have embedded intrinsic standards.” Yuma Okazaki of Japan’s National Institute of Advanced Industrial Science and Technology (AIST), who was not involved in this work, says that the new technique could indeed allow end users to directly access a quantum resistance standard. “Notably, this method can be demonstrated at room temperature and under ambient conditions, in contrast to conventional methods that require cryogenic and vacuum equipment, which is expensive and requires a lot of electrical power,” Okazaki says. “If such a user-friendly quantum standard becomes more stable and its uncertainty is improved, it could lead to a new calibration scheme for ensuring the accuracy of electronics used in extreme environments, such as space or the deep ocean, where traditional quantum standards that rely on cryogenic and vacuum conditions cannot be readily used.” The MEMQuD researchers, who report their work in Nature Nanotechnology, now plan to explore ways to further decrease deviations from the agreed SI values for G0 and 2G0. These include better material engineering, an improved measurement protocol, and strategies for topologically protecting the memristor’s resistance. The post Memristors could measure a single quantum of resistance appeared first on Physics World.
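Because the 2019 SI fixes h and e exactly, the reference values at stake can be computed exactly from the defined constants. The sketch below does this and restates the deviations reported in the inter-laboratory comparison; everything else is standard arithmetic.

```python
# The conductance quantum and the von Klitzing constant, computed from
# the exact 2019 SI values of the Planck constant and elementary charge.

h = 6.62607015e-34        # J s (exact by definition since 2019)
e = 1.602176634e-19       # C   (exact by definition since 2019)

G0 = 2 * e**2 / h         # conductance quantum, ~77.48 microsiemens
R_K = h / e**2            # von Klitzing constant, ~25812.807 ohms

print(f"G0  = {G0*1e6:.3f} uS  (1/G0 = {1/G0:.1f} ohm)")
print(f"R_K = {R_K:.3f} ohm")

# Deviations reported for the memristor-based inter-laboratory comparison:
for level, dev_pct in [("G0", -3.8), ("2*G0", 0.6)]:
    print(f"{level}: measured deviation {dev_pct:+.1f}% from the SI value")
```

Note that 1/G0 is about 12.9 kΩ, half the von Klitzing constant: the factor of two reflects spin degeneracy in a single ballistic conduction channel, which is exactly what an atomic-scale silver filament can provide.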
https://physicsworld.com/a/memristors-could-measure-a-single-quantum-of-resistance/
Space & Physics
svg
973fe84e3e77730a8332ad253b083c0f27dd380515a337be24dc7d91828a0ad2
2025-12-08T14:00:07+00:00
Oak Ridge Quantum Science Center prioritizes joined-up thinking, multidisciplinary impacts
Travis Humble is a research leader who’s thinking big, dreaming bold, yet laser-focused on operational delivery. The long-game? To translate advances in fundamental quantum science into a portfolio of enabling technologies that will fast-track the practical deployment of quantum computers for at-scale scientific, industrial and commercial applications. As director of the Quantum Science Center (QSC) at Oak Ridge National Laboratory (ORNL) in East Tennessee, Humble and his management team are well placed to transform that research vision into scientific, economic and societal upside. Funded to the tune of $115 million through its initial five-year programme (2020–25), QSC is one of five dedicated National Quantum Information Science Research Centers (NQISRC) within the US Department of Energy (DOE) National Laboratory system. Validation came in spades last month when, despite the current turbulence around US science funding, QSC was given follow-on DOE backing of $125 million over five years (2025–30) to create “a new scientific ecosystem” for fault-tolerant, quantum-accelerated high-performance computing (QHPC). In short, QSC will target the critical research needed to amplify the impact of quantum computing through its convergence with leadership-class exascale HPC systems. “Our priority in Phase II QSC is the creation of a common software ecosystem to host the compilers, programming libraries, simulators and debuggers needed to develop hybrid-aware algorithms and applications for QHPC,” explains Humble. Equally important, QSC researchers will develop and integrate new techniques in quantum error correction, fault-tolerant computing protocols and hybrid algorithms that combine leading-edge computing capabilities for pre- and post-processing of quantum programs. “These advances will optimize quantum circuit constructions and accelerate the most challenging computational tasks within scientific simulations,” Humble adds. At the heart of the QSC programme sits ORNL’s leading-edge research infrastructure for classical HPC, a capability that includes Frontier, the first supercomputer to break the exascale barrier and still one of the world’s most powerful. On that foundation, QSC is committed to building QHPC architectures that take advantage of both quantum computers and exascale supercomputing to tackle all manner of scientific and industrial problems beyond the reach of today’s HPC systems alone. “Hybrid classical-quantum computing systems are the future,” says Humble. “With quantum computers connecting both physically and logically to existing HPC systems, we can forge a scalable path to integrate quantum technologies into our scientific infrastructure.” Industry partnerships are especially important in this regard. Working in collaboration with the likes of IonQ, Infleqtion and QuEra, QSC scientists are translating a range of computationally intensive scientific problems – quantum simulations of exotic matter, for example – onto the vendors’ quantum computing platforms, generating excellent results out the other side. “With our broad representation of industry partners,” notes Humble, “we will establish a common framework by which scientific end-users, software developers and hardware architects can collaboratively advance these tightly coupled, scalable hybrid computing systems.” It’s a co-development model that industry values greatly. “Reciprocity is key,” Humble adds. 
“At QSC, we get to validate that QHPC can address real-world research problems, while our industry partners gather user feedback to inform the ongoing design and optimization of their quantum hardware and software.” Innovation being what it is, quantum computing systems will continue to advance along an accelerating trajectory, with more qubits, enhanced fidelity, error correction and fault tolerance as key reference points on the development roadmap. Phase II QSC, for its part, will integrate five parallel research thrusts to advance the viability and uptake of QHPC technologies. The collaborative software effort, led by ORNL’s Vicente Leyton, will develop openQSE, an adaptive, end-to-end software ecosystem for QHPC systems and applications. Yigit Subasi from Los Alamos National Laboratory (LANL) will lead the hybrid algorithms thrust, which will design algorithms that combine conventional and quantum methods to solve challenging problems in the simulation of model materials. Meanwhile, the QHPC architectures thrust, under the guidance of ORNL’s Chris Zimmer, will co-design hybrid computing systems that integrate quantum computers with leading-edge HPC systems. The scientific applications thrust, led by LANL’s Andrew Sornberger, will develop and validate applications of quantum simulation to be implemented on prototype QHPC systems. Finally, ORNL’s Michael McGuire will lead the thrust to establish experimental baselines for quantum materials that ultimately validate QHPC simulations against real-world measurements. Longer term, ORNL is well placed to scale up the QHPC model. After all, the laboratory is credited with pioneering the hybrid supercomputing model that uses graphics processing units in addition to conventional central processing units (including the launch in 2012 of Titan, the first supercomputer of this type operating at over 10 petaFLOPS). “The priority for all the QSC partners,” notes Humble, “is to transition from this still-speculative research phase in quantum computing, while orchestrating the inevitable convergence between quantum technology, existing HPC capabilities and evolving scientific workflows.” Much like its NQISRC counterparts (which have also been allocated further DOE funding through 2030), QSC provides the “operational umbrella” for a broad-scope collaboration of more than 300 scientists and engineers from 20 partner institutions. With its own distinct set of research priorities, that collective activity cuts across other National Laboratories (Los Alamos and Pacific Northwest), universities (among them Berkeley, Cornell and Purdue) and businesses (including IBM and IQM) to chart an ambitious R&D pathway addressing quantum-state (qubit) resilience, controllability and, ultimately, the scalability of quantum technologies. “QSC is a multidisciplinary melting pot,” explains Humble, “and I would say, alongside all our scientific and engineering talent, it’s the pooled user facilities that we are able to exploit here at Oak Ridge and across our network of partners that give us our ‘grand capability’ in quantum science [see box, “Unique user facilities unlock QSC opportunities”]. Certainly, when you have a common research infrastructure, orchestrated as part of a unified initiative like QSC, then you can deliver powerful science that translates into real-world impacts.” Neutron insights: ORNL director Stephen Streiffer tours the linear accelerator tunnel at the Spallation Neutron Source (SNS).
QSC scientists are using the SNS to investigate entirely new classes of strongly correlated materials that demonstrate topological order and quantum entanglement. (Courtesy: Alonda Hines/ORNL, US DOE) Deconstructed, QSC’s Phase I remit (2020–25) spanned three dovetailing and cross-disciplinary research pathways: discovery and development of advanced materials for topological quantum computing (in which quantum information is stored in a stable topological state – or phase – of a physical system rather than the properties of individual particles or atoms); development of next-generation quantum sensors (to characterize topological states and support the search for dark matter); as well as quantum algorithms and simulations (for studies in fundamental physics and quantum chemistry). Underpinning that collective effort: ORNL’s unique array of scientific user facilities. A case in point is the Spallation Neutron Source (SNS), an accelerator-based neutron-scattering facility that enables a diverse programme of pure and applied research in the physical sciences, life sciences and engineering. QSC scientists, for example, are using SNS to investigate entirely new classes of strongly correlated materials that demonstrate topological order and quantum entanglement – properties that show great promise for quantum computing and quantum metrology applications. “The high-brightness neutrons at SNS give us access to this remarkable capability for materials characterization,” says Humble. “Using the SNS neutron beams, we can probe exotic materials, recover the neutrons that scatter off them and, from the resultant signals, infer whether or not the materials exhibit quantum properties such as entanglement.” While SNS may be ORNL’s “big-ticket” user facility, the laboratory is also home to another high-end resource for quantum studies: the Center for Nanophase Materials Sciences (CNMS), one of the DOE’s five national Nanoscience Research Centers, which offers QSC scientists access to specialist expertise and equipment for nanomaterials synthesis; materials and device characterization; as well as theory, modelling and simulation in nanoscale science and technology. Thanks to these co-located capabilities, QSC scientists pioneered another intriguing line of enquiry – one that will now be taken forward elsewhere within ORNL – by harnessing so-called quantum spin liquids, in which electron spins can become entangled with each other to demonstrate correlations over very large distances (relative to the size of individual atoms). In this way, it is possible to take materials that have been certified as quantum-entangled and use them to design new types of quantum devices with unique geometries – as well as connections to electrodes and other types of control systems – to unlock novel physics and exotic quantum behaviours. The long-term goal? Translation of quantum spin liquids into a novel qubit technology to store and process quantum information. SNS, CNMS and Oak Ridge Leadership Computing Facility (OLCF) are DOE Office of Science user facilities. When he’s not overseeing the technical direction of QSC, Humble is acutely attuned to the need for sustained and accessible messaging. The priority? To connect researchers across the collaboration – physicists, chemists, material scientists, quantum information scientists and engineers – as well as key external stakeholders within the DOE, government and industry.
“In my experience,” he concludes, “the ability of the QSC teams to communicate efficiently – to understand each other’s concepts and reasoning and to translate back and forth across disciplinary boundaries – remains fundamental to the success of our scientific endeavours.” Listen to the Physics World podcast: Oak Ridge’s Quantum Science Center takes a multidisciplinary approach to developing quantum materials and technologies The next generation: Quantum science graduate students and postdoctoral researchers present and discuss their work during a poster session at the fifth annual QSC Summer School. Hosted at Purdue University in April this year, the school is one of several workforce development efforts supported by QSC. (Courtesy: Dave Mason/Purdue University) With an acknowledged shortage of skilled workers across the quantum supply chain, QSC is doing its bit to bolster the scientific and industrial workforce. Front-and-centre: the fifth annual QSC Summer School, held at Purdue University in April this year, which hosted 130 graduate students (the largest cohort to date) for an intensive four-day training programme. The Summer School sits as part of a long-term QSC initiative to equip ambitious individuals with the specialist domain knowledge and skills needed to thrive in a quantum sector brimming with opportunity – whether that’s in scientific research or out in industry with hardware companies, software companies or, ultimately, the end-users of quantum technologies in key verticals like pharmaceuticals, finance and healthcare. “While PhD students and postdocs are integral to the QSC research effort, the Summer School exposes them to the fundamental ideas of quantum science elaborated by leading experts in the field,” notes Vivien Zapf, a condensed-matter physicist at Los Alamos National Laboratory who heads up QSC’s advanced characterization efforts. “It’s all about encouraging the collective conversation,” she adds, “with lots of opportunities for questions and knowledge exchange. Overall, our emphasis is very much on training up scientists and engineers to work across the diversity of disciplines needed to translate quantum technologies out of the lab into practical applications.” The programme isn’t for the faint-hearted, though. Student delegates kicked off this year’s proceedings with a half-day of introductory presentations on quantum materials, devices and algorithms. Next up: three and a half days of intensive lectures, panel discussions and poster sessions covering everything from entangled quantum networks to quantum simulations of superconducting qubits. Many of the Summer School’s sessions were also made available virtually on Purdue’s Quantum Coffeehouse Live Stream on YouTube – the streamed content reaching quantum learners across the US and further afield. Lecturers were drawn from the US National Laboratories, leading universities (such as Harvard and Northwestern) and the quantum technology sector (including experts from IBM, PsiQuantum, NVIDIA and JPMorganChase). The post Oak Ridge Quantum Science Center prioritizes joined-up thinking, multidisciplinary impacts appeared first on Physics World.
https://physicsworld.com/a/oak-ridge-quantum-science-center-prioritizes-joined-up-thinking-multidisciplinary-impacts/
Space & Physics
svg
f3af4d896320d6b903c39ad29c9766e0c62169a909c5c2cd4e42ea69090fb33b
2025-12-08T11:00:42+00:00
So you want to install a wind turbine? Here’s what you need to know
As a physicist in industry, I spend my days developing new types of photovoltaic (PV) panels. But I’m also keen to do something for the transition to green energy outside work, which is why I recently installed two PV panels on the balcony of my flat in Munich. Fitting them was great fun – and I can now enjoy sunny days even more knowing that each panel is generating electricity. However, the panels, which each have a peak power of 440 W, don’t cover all my electricity needs, which prompted me to take an interest in a plan to build six wind turbines in a forest near me on the outskirts of Munich. Curious about the project, I particularly wanted to find out when the turbines will start generating electricity for the grid. So when I heard that a weekend cycle tour of the site was being organized to showcase it to local residents, I grabbed my bike and joined in. As we cycle, I discover that the project – located in Forstenrieder Park – is the joint effort of four local councils and two “citizen-energy” groups, who’ve worked together for the last five years to plan and start building the six turbines. Each tower will be 166 m high and the rotor blades will be 80 m long, with the plan being for them to start operating in 2027. I’ve never thought of Munich as a particularly windy city. But tour leader Dieter Maier, who’s a climate adviser to Neuried council, explains that at the height at which the blades operate, there’s always a steady, reliable flow of wind. In fact, each turbine has a designed power output of 6.5 MW and will deliver a total of 10 GWh of energy over the course of a year. Cycling around, I’m excited to think that a single turbine could end up providing the entire electricity demand for Neuried. But installing wind turbines involves much more than just the technicalities of generating electricity. How do you connect the turbines to the grid? How do you ensure planes don’t fly into the turbines? What about wildlife conservation and biodiversity? At one point on our tour, we cycle round a 90-degree bend in the forest and I wonder how a huge, 80 m-long blade will be transported round such a tight corner. Trees will almost certainly have to be felled to get the blade in place, which sounds questionable for a supposedly green project. Fortunately, project leaders have been working with the local forest manager and conservationists, finding ways to help improve the local biodiversity despite the loss of trees. As a representative of BUND (one of Germany’s biggest conservation charities) explains on the tour, a natural, or “unmanaged”, forest consists of a mix of areas with a higher or lower density of trees. But Forstenrieder Park has been a managed forest for well over a century and is mostly thick with trees. Clearing trees for the turbines will therefore allow conservationists to grow more of the bushes and plants that currently struggle to find space to flourish. To avoid endangering birds and bats native to this forest, meanwhile, the turbines will be turned off when the animals are most active, which coincidentally corresponds to low-wind periods in Munich. Insurance costs have to be factored in too. Thankfully, it’s quite unlikely that a turbine will burn down or get ice all over its blades, which means liability insurance costs are low. But vandalism is an ever-present worry.
In fact, at the end of our bike tour, we’re taken to a local wind turbine that is already up and running about 13 km further south of Forstenrieder Park. This turbine, I’m disappointed to discover, was vandalized back in 2024, which led to it being fenced off and video surveillance cameras being installed. But for all the difficulties, I’m excited by the prospect of the wind turbines supporting the local energy needs. I can’t wait for the day when I’m on my balcony, solar panels at my side, sipping a cup of tea made with water boiled by electricity generated by the rotor blades I can see turning round and round on the horizon. The post So you want to install a wind turbine? Here’s what you need to know appeared first on Physics World.
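As a quick sanity check on the figures quoted above – a 6.5 MW rated output delivering 10 GWh over a year – one can compute the implied capacity factor. This back-of-envelope Python snippet is our own and assumes the 10 GWh figure refers to a single turbine:

```python
# Capacity factor = actual annual energy / energy at continuous rated output.
rated_power_MW = 6.5
annual_energy_GWh = 10.0
hours_per_year = 8760

max_possible_GWh = rated_power_MW * hours_per_year / 1000  # ~56.9 GWh
capacity_factor = annual_energy_GWh / max_possible_GWh
print(f"capacity factor = {capacity_factor:.1%}")  # ~17.6%, plausible for onshore wind
```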
https://physicsworld.com/a/so-you-want-to-install-a-wind-turbine-heres-what-you-need-to-know/
Space & Physics
svg
eeb0714c0a658a4d37facdb0accd7c821d5905f690a1cb17fe4395483c4f36d9
2025-12-05T14:21:05+00:00
Galactic gamma rays could point to dark matter
Gamma rays emitted from the halo of the Milky Way could be produced by hypothetical dark-matter particles. That is the conclusion of an astronomer in Japan who has analysed data from NASA’s Fermi Gamma-ray Space Telescope. The energy spectrum of the emission is what would be expected from the annihilation of particles called WIMPs. If this can be verified, it would mark the first observation of dark matter via electromagnetic radiation. Since the 1930s astronomers have known that there is something odd about galaxies, galaxy clusters and larger structures in the universe. The problem is that there is not nearly enough visible matter in these objects to explain their dynamics and structure. A rotating galaxy, for example, should be flinging out its stars because it does not have enough self-gravitation to hold itself together. Today, the most popular solution to this conundrum is the existence of a hypothetical substance called dark matter. Dark-matter particles would have mass and interact with each other and normal matter via the gravitational force, gluing rotating galaxies together. However, the fact that we have never observed dark matter directly means that the particles must rarely, if ever, interact via the other three forces. The weakly interacting massive particle (WIMP) is a dark-matter candidate that interacts via the weak nuclear force (or a similarly weak force). As a result of this interaction, pairs of WIMPs are expected to occasionally annihilate to create high-energy gamma rays and other particles. If this is true, dense areas of the universe such as galaxies should be sources of these gamma rays. Now, Tomonori Totani of the University of Tokyo has analysed data from the Fermi telescope and identified an excess of gamma rays emanating from the halo of the Milky Way. What is more, Totani’s analysis suggests that the energy spectrum of the excess radiation (from about 10–100 GeV) is consistent with hypothetical WIMP annihilation processes. “If this is correct, to the extent of my knowledge, it would mark the first time humanity has ‘seen’ dark matter,” says Totani. “This signifies a major development in astronomy and physics,” he adds. While Totani is confident of his analysis, his conclusion must be verified independently. Furthermore, work will be needed to rule out conventional astrophysical sources of the excess radiation. Catherine Heymans, the Astronomer Royal for Scotland, describes Totani’s paper as “well written and thorough”. “I think it’s a really nice piece of work, and exactly what should be happening with the Fermi data,” she told Physics World. The research is described in the Journal of Cosmology and Astroparticle Physics. The post Galactic gamma rays could point to dark matter appeared first on Physics World.
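The article does not reproduce the underlying formula, but the standard textbook expression for the gamma-ray flux from the annihilation of self-conjugate WIMPs of mass $m_\chi$ is, in our notation and for orientation only (Totani's analysis is considerably more involved):

```latex
\frac{\mathrm{d}\Phi}{\mathrm{d}E} \;=\; \frac{\langle \sigma v \rangle}{8\pi\, m_{\chi}^{2}}\,
\frac{\mathrm{d}N_{\gamma}}{\mathrm{d}E}
\int_{\Delta\Omega} \mathrm{d}\Omega \int_{\mathrm{l.o.s.}} \rho_{\chi}^{2}(r)\, \mathrm{d}s
```

Here ⟨σv⟩ is the velocity-averaged annihilation cross-section, dNγ/dE the photon spectrum per annihilation, and the double integral (the so-called J-factor) runs the square of the dark-matter density ρχ along each line of sight. The ρ² dependence is why dense regions such as galactic halos are the natural places to look.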
https://physicsworld.com/a/galactic-gamma-rays-could-point-to-dark-matter/
Space & Physics
svg
14f50701af988826f24526c3037aa029540a44e15735795d49dc7b804c95b6bf
2025-12-05T09:00:59+00:00
Simple feedback mechanism keeps flapping flyers stable when hovering
Researchers in the US have shed new light on the puzzling and complex flight physics of creatures such as hummingbirds, bumblebees and dragonflies that flap their wings to hover in place. According to an interdisciplinary team at the University of Cincinnati, the mechanism these animals deploy can be described by a very simple, computationally basic, stable and natural feedback process that operates in real time. The work could aid the development of hovering robots, including those that could act as artificial pollinators for crops. If you’ve ever watched a flapping insect or hummingbird hover in place – often while engaged in other activities such as feeding or even mating – you’ll appreciate how remarkable they are. To stay aloft and stable, these animals must constantly sense their position and motion and make corresponding adjustments to their wing flaps. Biophysicists have previously put forward many highly complex explanations for how they do this, but according to the Cincinnati team of Sameh Eisa and Ahmed Elgohary, some of this complexity is not necessary. Earlier this year, the pair developed their own mathematical and control theory based on a mechanism they call “extremum seeking for vibrational stabilization”. Eisa describes this mechanism as “very natural” because it relies on just two main components. The first is the wing flapping motion itself, which he says is “naturally built in” for flapping creatures that use it to propel themselves. The second is a simple feedback mechanism involving sensations and measurements related to the altitude at which the creatures aim to stabilize their hovering. The general principle, he continues, is that a system (in this case an insect or hummingbird) can steer itself towards a stable position by continuously adjusting a high-amplitude, high-frequency control input or signal (in this case, a flapping wing action). “This adjustment is simply based on the feedback of measurement (the insects’ perceptions) and stabilization (hovering) occurs when the system optimizes what it is measuring,” he says. As well as being relatively easy to describe, Eisa tells Physics World that this mechanism is biologically plausible and computationally basic, dramatically simplifying the physics of hovering. “It is also categorically different from all available results and explanations in the literature for how stable hovering by insects and hummingbirds can be achieved,” he adds. In the latest study, which is detailed in Physical Review E, the researchers compared their simulation results to reported biological data on a hummingbird and five flapping insects (a bumblebee, a cranefly, a dragonfly, a hawkmoth and a hoverfly). They found that their simulations fitted the data very closely. They also ran an experiment on a flapping, light-sensing robot and observed that it behaved like a moth: it elevated itself to the level of the light source and then stabilized its hovering motion. Eisa says he has always been fascinated by such optimized biological behaviours. “This is especially true for flyers, where mistakes in execution could potentially mean death,” he says. “The physics behind the way they do it is intriguing and it probably needs elegant and sophisticated mathematics to be described.
However, the hovering creatures appear to be doing this very simply and I found discovering the secret of this puzzle very interesting and exciting.” Eisa adds that this element of the work ended up being very interdisciplinary, and both his own PhD in applied mathematics and the aerospace engineering background of Elgohary came in very useful. “We also benefited from lengthy discussions with a biologist colleague who was a reviewer of our paper,” Eisa says. “Luckily, they recognized the value of our proposed technique and ended up providing us with very valuable inputs.” Eisa thinks the work could open up new lines of research in several areas of science and engineering. “For example, it opens up new ideas in neuroscience and animal sensory mechanisms and could almost certainly be applied to the development of airborne robotics and perhaps even artificial pollinators,” he says. “The latter might come in useful in the future given the high rate of death many species of pollinating insects are encountering today.” The post Simple feedback mechanism keeps flapping flyers stable when hovering appeared first on Physics World.
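To make the idea concrete, here is a toy extremum-seeking loop in Python. A fast sinusoidal "flap" (the dither) is superimposed on a slowly adapting set-point; correlating the measurement with the dither yields a gradient estimate that steers the system to the optimum. The objective function and all parameter values are our own illustration, not the authors' flight model:

```python
import numpy as np

def J(h, h_star=1.0):
    """Toy sensed quantity (e.g. perceived light level), peaking at h = h_star."""
    return -(h - h_star) ** 2

dt = 1e-3              # integration step
a, omega = 0.05, 50.0  # dither amplitude and frequency (the fast "flapping")
k = 4.0                # adaptation gain
h_hat, t = 0.0, 0.0    # slowly adapting altitude set-point

for _ in range(200_000):
    dither = a * np.sin(omega * t)
    y = J(h_hat + dither)                    # measurement including fast oscillation
    h_hat += dt * k * np.sin(omega * t) * y  # demodulate and integrate
    t += dt

print(f"converged set-point: {h_hat:.3f} (optimum is 1.000)")
```

Averaged over a dither period, the update is proportional to the gradient of J, so the set-point climbs to the peak and then hovers there with only a small residual oscillation. No model of the dynamics is required, which is what makes the mechanism computationally basic and biologically plausible.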
https://physicsworld.com/a/simple-feedback-mechanism-keeps-flapping-flyers-stable-when-hovering/
Space & Physics
svg
129753964e644f9ae825d010f9aeba2bddc99e4befaa999287f905e508dc14ce
2025-12-04T14:55:09+00:00
Building a quantum future using topological phases of matter and error correction
This episode of the Physics World Weekly podcast features Tim Hsieh of Canada’s Perimeter Institute for Theoretical Physics. We explore some of today’s hottest topics in quantum science and technology – including topological phases of matter, quantum error correction and quantum simulation. Our conversation begins with an exploration of the quirky properties of quantum matter and how these can be exploited to create quantum technologies. We look at the challenges that must be overcome to create large-scale quantum computers, and Hsieh reveals which problem he would solve first if he had access to a powerful quantum processor. This interview was recorded earlier this autumn when I had the pleasure of visiting the Perimeter Institute and speaking to four physicists about their research. This is the third of those conversations to appear on the podcast. The first interview in this series from the Perimeter Institute was with Javier Toledo-Marín, “Quantum computing and AI join forces for particle physics”; and the second was with Bianca Dittrich, “Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge“. This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March, 2026, in Denver, Colorado, and online. The post Building a quantum future using topological phases of matter and error correction appeared first on Physics World.
https://physicsworld.com/a/building-a-quantum-future-using-topological-phases-of-matter-and-error-correction/
Space & Physics
svg
5389dad601a4ed9c3d2c2a1310a9f5500ce40574859648656f2081a668643b86
2025-12-04T13:00:09+00:00
Generative AI model detects blood cell abnormalities
The shape and structure of blood cells provide vital indicators for diagnosis and management of blood disease and disorders. Recognizing subtle differences in the appearance of cells under a microscope, however, requires the skills of experts with years of training, motivating researchers to investigate whether artificial intelligence (AI) could help automate this onerous task. A UK-led research team has now developed a generative AI-based model, known as CytoDiffusion, that characterizes blood cell morphology with greater accuracy and reliability than human experts. Conventional discriminative machine learning models can match human performance at classifying cells in blood samples into predefined classes. But discriminative models, which learn to recognise cell images based on expert labels, struggle with never-before-seen cell types and images from differing microscopes and staining techniques. To address these shortfalls, the team – headed up at the University of Cambridge, University College London and Queen Mary University of London – created CytoDiffusion around a diffusion-based generative AI classifier. Rather than just learning to separate cell categories, CytoDiffusion models the full range of blood cell morphologies to provide accurate classification with robust anomaly detection. “Our approach is motivated by the desire to achieve a model with superhuman fidelity, flexibility and metacognitive awareness that can capture the distribution of all possible morphological appearances,” the researchers write. For AI-based analysis to be adopted in the clinic, it’s essential that users trust a model’s learned representations. To assess whether CytoDiffusion could effectively capture the distribution of blood cell images, the team used it to generate synthetic blood cell images. Analysis by experienced haematologists revealed that these synthetic images were near-indistinguishable from genuine images, showing that CytoDiffusion genuinely learns the morphological distribution of blood cells rather than using artefactual shortcuts. The researchers used multiple datasets to develop and evaluate their diffusion classifier, including CytoData, a custom dataset containing more than half a million anonymized cell images from almost 3000 blood smear slides. In standard classification tasks across these datasets, CytoDiffusion achieved state-of-the-art performance, matching or exceeding the capabilities of traditional discriminative models. Effective diagnosis from blood smear samples also requires the ability to detect rare or previously unseen cell types. The researchers evaluated CytoDiffusion’s ability to detect blast cells (immature blood cells) in the test datasets. Blast cells are associated with blood malignancies such as leukaemia, and high detection sensitivity is essential to minimize false negatives. In one dataset, CytoDiffusion detected blast cells with sensitivity and specificity of 0.905 and 0.962, respectively. In contrast, a discriminative model exhibited a poor sensitivity of 0.281. In datasets with erythroblasts as the abnormal cells, CytoDiffusion again outperformed the discriminative model, demonstrating that it can detect abnormal cell types not present in its training data, with the high sensitivity required for clinical applications. It’s important that a classification model is robust to different imaging conditions and can function with sparse training data, as commonly found in clinical applications. 
When trained and tested on diverse image datasets (different hospitals, microscopes and staining procedures), CytoDiffusion achieved state-of-the-art accuracy in all cases. Likewise, after training on limited subsets of 10, 20 and 50 images per class, CytoDiffusion consistently outperformed discriminative models, particularly in the most data-scarce conditions. Another essential feature of clinical classification tasks, whether performed by a human or an algorithm, is knowing the uncertainty in the final decision. The researchers developed a framework for evaluating uncertainty and showed that CytoDiffusion produced superior uncertainty estimates to human experts. With uncertainty quantified, cases with high certainty could be processed automatically, with uncertain cases flagged for human review. “When we tested its accuracy, the system was slightly better than humans,” says first author Simon Deltadahl from the University of Cambridge in a press statement. “But where it really stood out was in knowing when it was uncertain. Our model would never say it was certain and then be wrong, but that is something that humans sometimes do.” Finally, the team demonstrated CytoDiffusion’s ability to create heat maps highlighting regions that would need to change for an image to be reclassified. This feature provides insight into the model’s decision-making process and shows that it understands subtle differences between similar cell types. Such transparency is essential for clinical deployment of AI, making models more trustworthy as practitioners can verify that classifications are based on legitimate morphological features. “The true value of healthcare AI lies not in approximating human expertise at lower cost, but in enabling greater diagnostic, prognostic and prescriptive power than either experts or simple statistical models can achieve,” adds co-senior author Parashkev Nachev from University College London. CytoDiffusion is described in Nature Machine Intelligence. The post Generative AI model detects blood cell abnormalities appeared first on Physics World.
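As a reminder of what the quoted performance figures mean, the snippet below defines sensitivity and specificity; the example counts are chosen by us to reproduce the 0.905/0.962 figures and are not the actual test-set numbers:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: fraction of truly abnormal cells that are flagged.
    Specificity: fraction of normal cells that are correctly passed."""
    return tp / (tp + fn), tn / (tn + fp)

# E.g. 200 blast cells of which 181 are flagged, and 500 normal cells
# of which 19 are wrongly flagged:
sens, spec = sensitivity_specificity(tp=181, fn=19, tn=481, fp=19)
print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")  # 0.905, 0.962
```

High sensitivity matters most here because a missed blast cell (a false negative) is clinically far costlier than an unnecessarily flagged normal cell.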
https://physicsworld.com/a/generative-ai-model-detects-blood-cell-abnormalities/
Space & Physics
svg
7df3af274cd70bc57c38caf729a73ffb977323eda09858e85a983eb6b7768198
2025-12-04T11:50:11+00:00
Light pollution from satellite mega-constellations threatens space-based observations
Almost every image that will be taken by future space observatories in low-Earth orbit could be tainted due to light contamination from satellites. That is according to a new analysis from researchers at NASA, which stresses that light pollution from satellites orbiting Earth must be reduced to guarantee astronomical research is not affected. The number of satellites orbiting Earth has increased from about 2000 in 2019 to 15 000 today. Many of these are part of so-called mega-constellations that provide services such as Internet coverage around the world, including in areas that were previously unable to access it. Examples of such constellations include SpaceX’s Starlink as well as Amazon’s Kuiper and Eutelsat’s OneWeb. Many of these mega-constellations share the same space as space-based observatories such as NASA’s Hubble Space Telescope. This means that the telescopes can capture streaks of reflected light from the satellites that render the images or data completely unusable for research purposes. That is despite the anti-reflective coatings applied to some newer satellites in SpaceX’s Starlink constellation, for example. Previous work has explored the impact of such satellite constellations on ground-based astronomy, both optical and radio. Yet their impact on telescopes in space has been overlooked. To find out more, Alejandro Borlaff from NASA’s Ames Research Center, and colleagues simulated the view of four space-based telescopes: Hubble and the near-infrared observatory SPHEREx, which launched in 2025, as well as the European Space Agency’s proposed near-infrared ARRAKIHS mission and China’s planned Xuntian telescope. These observatories orbit, or will orbit, between 400 and 800 km above the Earth’s surface. The authors found that if the population of mega-constellation satellites grows to the 56 000 projected by the end of the decade, it would contaminate about 39.6% of Hubble’s images and 96% of images from the other three telescopes. Borlaff and colleagues predict that the average number of satellites observed per exposure would be 2.14 for Hubble, 5.64 for SPHEREx, 69 for ARRAKIHS, and 92 for Xuntian. The authors note that one solution could be to deploy satellites in orbits lower than those of the telescopes, which would make them about four magnitudes dimmer. The downside is that emissions from these lower satellites could have implications for Earth’s ozone layer. Katherine Courtney, chair of the steering board for the Global Network on Sustainability in Space, says that without astronomy, the modern space economy “simply wouldn’t exist”. “The space industry owes its understanding of orbital mechanics, and much of the technology development that has unlocked commercial opportunities for satellite operators, to astronomy,” she says. “The burgeoning growth of the satellite population brings many benefits to life on Earth, but the consequences for the future of astronomy must be taken into consideration.” Courtney adds that there is now “an urgent need for greater dialogue and collaboration between astronomers and satellite operators to mitigate those impacts and find innovative ways for commercial and scientific operations to co-exist in space.” The post Light pollution from satellite mega-constellations threatens space-based observations appeared first on Physics World.
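To unpack the "four magnitudes dimmer" remark: astronomical magnitudes are logarithmic, with a difference of Δm corresponding to a brightness ratio of 10^(0.4Δm). A two-line check of our own:

```python
delta_m = 4.0
print(f"{10 ** (0.4 * delta_m):.0f}x fainter")  # ~40x less reflected light
```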
https://physicsworld.com/a/light-pollution-from-satellite-mega-constellations-threaten-space-based-observations/
Space & Physics
svg
c11a2ccd78688edada0907125187205662fb1b75995a3d7b294d3dde1085b1dd
2025-12-04T09:00:45+00:00
Physicists use a radioactive molecule’s own electrons to probe its internal structure
Physicists have obtained the first detailed picture of the internal structure of radium monofluoride (RaF) thanks to the molecule’s own electrons, which penetrated the radium nucleus and interacted with its protons and neutrons. This behaviour is known as the Bohr–Weisskopf effect, and study co-leader Shane Wilkins says that this marks the first time it has been observed in a molecule. The measurements themselves, he adds, are an important step towards testing for nuclear symmetry violation, which might explain why our universe contains much more matter than antimatter. RaF contains the radioactive isotope 225Ra, which is not easy to make, let alone measure. Producing it requires a large accelerator facility, where the molecules are created at high temperature and travel at high velocity, and it is only available in tiny quantities (less than a nanogram in total) for short periods (225Ra has a nuclear half-life of around 15 days). “This imposes significant challenges compared to the study of stable molecules, as we need extremely selective and sensitive techniques in order to elucidate the structure of molecules containing 225Ra,” says Wilkins, who performed the measurements as a member of Ronald Fernando Garcia Ruiz’s research group at the Massachusetts Institute of Technology (MIT), US. The team chose RaF despite these difficulties because theory predicts that it is particularly sensitive to small nuclear effects that break the symmetries of nature. “This is because, unlike most atomic nuclei, the radium atom’s nucleus is octupole deformed, which basically means it has a pear shape,” explains the study’s other co-leader, Silviu-Marian Udrescu. In their study, which is detailed in Science, the MIT team and colleagues at CERN, the University of Manchester in the UK and KU Leuven in Belgium focused on RaF’s hyperfine structure. This structure arises from interactions between nuclear and electron spins, and studying it can reveal valuable clues about the nucleus. For example, the nuclear magnetic dipole moment can provide information on how protons and neutrons are distributed inside the nucleus. In most experiments, physicists treat electron–nucleus interactions as taking place at (relatively) long ranges. With RaF, that’s not the case. Udrescu describes the radium atom’s electrons as being “squeezed” within the molecule, which increases the probability that they will interact with, and penetrate, the radium nucleus. This behaviour manifests itself as a slight shift in the energy levels of the radium atom’s electrons, and the team’s precision measurements – combined with state-of-the-art molecular structure calculations – confirm that this is indeed what happens. “We see a clear breakdown of this [long-range interactions] picture because the electrons spend a significant amount of time within the nucleus itself due to the special properties of this radium molecule,” Wilkins explains. “The electrons thus act as highly sensitive probes to study phenomena inside the nucleus.” According to Udrescu, the team’s work “lays the foundations for future experiments that use this molecule to investigate nuclear symmetry violation and test the validity of theories that go beyond the Standard Model of particle physics.” In this model, each of the matter particles we see around us – from baryons like protons to leptons such as electrons – should have a corresponding antiparticle that is identical in every way apart from its charge and magnetic properties (which are reversed).
The problem is that the Standard Model predicts that the Big Bang that formed our universe nearly 14 billion years ago should have generated equal amounts of antimatter and matter – yet measurements and observations made today reveal an almost entirely matter-based universe. Subtler differences between matter particles and their antimatter counterparts might explain why the former prevailed, so by searching for these differences, physicists hope to explain the matter–antimatter asymmetry. Wilkins says the team’s work will be important for future searches of this kind in species like RaF. Indeed, Wilkins, who is now at Michigan State University’s Facility for Rare Isotope Beams (FRIB), is building a new setup to cool and slow beams of radioactive molecules to enable higher-precision spectroscopy of species relevant to nuclear structure, fundamental symmetries and astrophysics. His long-term goal, together with other members of the RaX collaboration (which includes FRIB and the MIT team as well as researchers at Harvard University and the California Institute of Technology), is to implement advanced laser-based techniques using radium-containing molecules. The post Physicists use a radioactive molecule’s own electrons to probe its internal structure appeared first on Physics World.
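For orientation, the magnetic-dipole part of the hyperfine structure discussed above has a standard textbook form (our notation, not taken from the paper): a level in which nuclear spin I and electronic angular momentum J couple to total angular momentum F is shifted by

```latex
E_{\mathrm{hf}} \;=\; \frac{A}{2}\,\bigl[ F(F+1) - I(I+1) - J(J+1) \bigr]
```

where A is the hyperfine constant extracted from the measured splittings. Because the electrons in RaF sample the finite distribution of magnetization inside the radium nucleus, A is slightly shifted from the value expected for a point-like nucleus; that shift is the Bohr–Weisskopf effect the team observed.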
https://physicsworld.com/a/physicists-use-a-radioactive-molecules-own-electrons-to-probe-its-internal-structure/
Space & Physics
svg
487592a051affa5a406fefb2a985931bad31e5556947679898cc969164bebd5f
2025-12-03T16:18:34+00:00
Quantum-scale thermodynamics offers a tighter definition of entropy
A new, microscopic formulation of the second law of thermodynamics for coherently driven quantum systems has been proposed by researchers in Switzerland and Germany. The researchers applied their formulation to several canonical quantum systems, such as a three-level maser. They believe the result provides a tighter definition of entropy in such systems, and could form a basis for further exploration. In any physical process, the first law of thermodynamics says that the total energy must always be conserved, with some converted to useful work and the remainder dissipated as heat. The second law of thermodynamics says that, in any allowed process, the total entropy must always increase. “I like to think of work being mediated by degrees of freedom that we control and heat being mediated by degrees of freedom that we cannot control,” explains theoretical physicist Patrick Potts of the University of Basel in Switzerland. “In the macroscopic scenario, for example, work would be performed by some piston – we can move it.” The heat, meanwhile, goes into modes such as phonons generated by friction. This distinction, however, becomes murky at small scales: “Once you go microscopic everything’s microscopic, so it becomes much more difficult to say ‘what is it that you control – where is the work mediated – and what is it that you cannot control?’,” says Potts. Potts and colleagues in Basel and at RWTH Aachen University in Germany examined the case of optical cavities driven by laser light, systems that can do work: “If you think of a laser as being able to promote a system from a ground state to an excited state, that’s very important to what’s being done in quantum computers, for example,” says Potts. “If you rotate a qubit, you’re doing exactly that.” The light interacts with the cavity and makes an arbitrary number of bounces before leaking out. This emergent light is traditionally treated as heat in quantum simulations. However, it can still be partially coherent – if the cavity is empty, it can be just as coherent as the incoming light and can do just as much work. In 2020, quantum optics researcher Alexia Auffèves of Université Grenoble Alpes in France and colleagues noted that the coherent component of the light exiting a cavity could potentially do work. In the new study, the researchers embedded this insight in a consistent thermodynamic framework. They studied several examples and formulated physically consistent laws of thermodynamics. In particular, they looked at the three-level maser, which is a canonical example of a quantum heat engine. However, it has generally been modelled semi-classically by assuming that the cavity contains a macroscopic electromagnetic field. “The old description will tell you that you put energy into this macroscopic field and that is work,” says Potts, “But once you describe the cavity quantum mechanically using the old framework then – poof! – the work is gone… Putting energy into the light field is no longer considered work, and whatever leaves the cavity is considered heat.” The researchers’ new thermodynamic treatment allows them to treat the cavity quantum mechanically and to parametrize the minimum degree of entropy in the radiation that emerges – how much radiation must be converted to uncontrolled degrees of freedom that can do no useful work and how much can remain coherent. The researchers are now applying their formalism to study thermodynamic uncertainty relations as an extension of the traditional second law of thermodynamics.
“It’s actually a trade-off between three things – not just efficiency and power, but fluctuations also play a role,” says Potts. “So the more fluctuations you allow for, the higher you can get the efficiency and the power at the same time. These three things are very interesting to look at with this new formalism because these thermodynamic uncertainty relations hold for classical systems, but not for quantum systems.” “This [work] fits very well into a question that has been heavily discussed for a long time in the quantum thermodynamics community, which is how to properly define work and how to properly define useful resources,” says quantum theorist Federico Cerisola of the UK’s University of Exeter. “In particular, they very convincingly argue that, in the particular family of experiments they’re describing, there are resources that have been ignored in the past when using more standard approaches that can still be used for something useful.” Cerisola says that, in his view, the logical next step is to propose a system – ideally one that can be implemented experimentally – in which radiation that would traditionally have been considered waste actually does useful work. The research is described in Physical Review Letters. The post Quantum-scale thermodynamics offers a tighter definition of entropy appeared first on Physics World.
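For context, the classical thermodynamic uncertainty relation mentioned above is usually written in the following steady-state form (our notation, not the paper's):

```latex
\frac{\mathrm{Var}(J)}{\langle J \rangle^{2}} \;\geq\; \frac{2 k_{\mathrm{B}}}{\Sigma}
```

where J is a current such as the power output and Σ is the total entropy production: reducing relative fluctuations costs dissipation. Since quantum coherence can breach this classical bound, the new formalism offers a natural way to explore where, and by how much, it does so.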
https://physicsworld.com/a/quantum-scale-thermodynamics-offers-a-tighter-definition-of-entropy/
Space & Physics
svg
d8c2932205f0df905efd03194cae44aa504b92704ff13bcbd8c6210929f1ccb4
2025-12-03T13:00:22+00:00
Bring gravity back down to Earth: from giraffes and tree snakes to ‘squishy’ space–time
When I was five years old, my family moved into a 1930s semi-detached house with a long strip of garden. At the end of the garden was a miniature orchard of eight apple trees the previous owners had planted – and it was there that I, much like another significantly more famous physicist, learned an important lesson about gravity. As I read in the shade of the trees, an apple would sometimes fall with a satisfying thunk into the soft grass beside me. Less satisfyingly, they sometimes landed on my legs, or even my head – and the big cooking apples really hurt. I soon took to sitting on old wooden pallets crudely wedged among the higher branches. It was not comfortable, but at least I could return indoors without bruises. The effects of gravity become common sense so early in life that we rarely stop to think about them past childhood. In his new book Crush: Close Encounters with Gravity, James Riordon has decided to take us back to the basics of this most fundamental of forces. Indeed, he explores an impressively wide range of topics – from why we dream of falling and why giraffes should not exist (but do), to how black holes form and the existence of “Planet 9”. Riordon, a physicist turned science writer, makes for a deeply engaging author. He is not afraid to put himself into the story, introducing difficult concepts through personal experience and explaining them with the help of everything including the kitchen sink, which in his hands becomes an analogue for a black hole. Gravity as a subject can easily be both too familiar and too challenging. In Riordon’s words, “Things with mass attract each other. That’s really all there is to Newtonian gravity.” Albert Einstein’s theory of general relativity, by contrast, is so intricate that it takes years of university-level study to truly master. Riordon avoids both pitfalls: he manages to make the simple fascinating again, and the complex understandable. He provides captivating insights into how gravity has shaped the animal kingdom, a perspective I had never much considered. Did you know that tree snakes have their hearts positioned closer to their heads than their land-based cousins? I certainly didn’t. The higher placement ensures a steady blood flow to the brain, even when the snake is climbing vertically. It is one of many examples that make you look again at the natural world with fresh eyes. Riordon’s treatment of gravity in Einstein’s abstract space–time is equally impressive, perhaps unsurprisingly, as his previous books include Very Easy Relativity and Relatively Easy Relativity. Riordon takes a careful, patient approach – though I have never before heard general relativity reduced to “space–time is squishy”. But why not? The phrase sticks and gives us a handhold as we scale the complications of the theory. For those who want to extend the challenge, a mathematical background to the theory is provided in an appendix, and every chapter is well referenced and accompanied with suggestions for further reading. If anything, I found myself wanting more examples of gravity as experienced by humans and animals on Earth, as opposed to the astronomical realm. I found these down-to-earth chapters the most fascinating: they formed a bridge between the vast and the local, reminding us that the same force that governs the orbits of galaxies also brings an apple to the ground. This may be a reaction only felt by astronomers like me, who already spend their days looking upward.
I can easily see how the balance Riordon chose is necessary for someone without that background, and Einstein’s gravity does require galactic scales to appreciate, after all. Crush is a generally uncomplicated and pleasurable read. The anecdotes can sometimes be a little long-winded and there are parts of the book that are not without challenge. But it is pitched perfectly for the curious general reader and even for those dipping their toes into popular science for the first time. I can imagine an enthusiastic A-level student devouring it; it is exactly the kind of book I would have loved at that age. Even if some of it would have gone over my head, Riordon’s enthusiasm and gift for storytelling would have kept me more than interested, as I sat up on that pallet in my favourite apple tree. I left that house, and that tree, a long time ago, but just a few miles down the road from where I live now stands another, far more famous apple tree. In the garden of Woolsthorpe Manor near Grantham, Newton is said to have watched an apple fall. From that small event, he began to ask the questions that reshaped his and our understanding of the universe. Whether or not the story is true hardly matters – Newton was constantly inspired by the natural world, so it isn’t improbable, and that apple tree remains a potent symbol of curiosity and insight. “[Newton] could tell us that an apple falls, and how quickly it will do it. As for the question of why it falls, that took Einstein to answer,” writes Riordon. Crush is a crisp and fresh tour through a continuum from orchards to observatories, showing that every planetary orbit, pulse of starlight and even every apple fall is part of the same wondrous story. The post Bring gravity back down to Earth: from giraffes and tree snakes to ‘squishy’ space–time appeared first on Physics World.
https://physicsworld.com/a/bring-gravity-back-down-to-earth-from-giraffes-and-tree-snakes-to-squishy-space-time/
Space & Physics
svg
604a335fda10bddc9772ded5d32d7b707c1efa4006bd56d0228d03550a84235a
2025-12-03T12:00:45+00:00
Ice XXI appears in a diamond anvil cell
A new phase of water ice, dubbed ice XXI, has been discovered by researchers working at the European XFEL and PETRA III facilities. The ice, which exists at room temperature and is structurally distinct from all previously observed phases of ice, was produced by rapidly compressing water to high pressures of 2 GPa. The finding could shed light on how different ice phases form at high pressures, including on icy moons and planets. On Earth, ice can take many forms, and its properties depend strongly on its structure. The main type of naturally occurring ice is hexagonal ice (Ih), so-called because the water molecules arrange themselves in a hexagonal lattice (this is why snowflakes have six-fold symmetry). However, under certain conditions – usually involving very high pressures and low temperatures – ice can take on other structures. Indeed, 20 different forms of ice have been identified so far, denoted by roman numerals (ice I, II, III and so on up to ice XX). Researchers from the Korea Research Institute of Standards and Science (KRISS) have now produced a 21st form of ice by applying pressures of up to two gigapascals. Such high pressures are roughly 20 000 times higher than normal air pressure at sea level, and they allow ice to form even at room temperature – albeit only within a device known as a dynamic diamond anvil cell (dDAC) that is capable of producing such extremely high pressures. “In this special pressure cell, samples are squeezed between the tips of two opposing diamond anvils and can be compressed along a predefined pressure pathway,” explains Cornelius Strohm, a member of the DESY HIBEF team that set up the experiment using the High Energy Density (HED) instrument at the European XFEL. The structure of ice XXI is different from all previously observed phases of ice because its molecules are much more tightly packed. This gives it the largest unit cell volume of all currently known types of ice, says KRISS scientist Geun Woo Lee. It is also metastable, meaning that it can exist even though another form of ice (in this case ice VI) would be more stable under the conditions in the experiment. “This rapid compression of water allows it to remain liquid up to higher pressures, where it should have already crystallized to ice VI,” explains Lee. “Ice VI is an especially intriguing phase, thought to be present in the interior of icy moons such as Titan and Ganymede. Its highly distorted structure may allow complex transition pathways that lead to metastable ice phases.” To study how the new ice sample formed, the researchers rapidly compressed and decompressed it over 1000 times in the diamond anvil cell while imaging it every microsecond using the European XFEL, which produces X-ray pulses at megahertz rates. They found that the liquid water crystallizes into different structures depending on how supercompressed it is. The KRISS team then used the P02.2 beamline at PETRA III to determine that ice XXI has a body-centred tetragonal crystal structure with a large unit cell (a = b = 20.197 Å and c = 7.891 Å) at approximately 1.6 GPa. This unit cell contains 152 water molecules, resulting in a density of 1.413 g cm⁻³. The experiments were far from easy, recalls Lee. Upon crystallization, ice XXI grows upwards (that is, in the vertical direction), which makes it difficult to precisely analyse its crystal structure.
“The difficulty for us is to keep it stable for a long enough period to make precise structural measurements in a single-crystal diffraction study,” he says. The multiple pathways of ice crystallization unearthed in this work, which is detailed in Nature Materials, imply that many more ice phases may exist. Lee says it is therefore important to analyse the mechanism behind the formation of these phases. “This could, for example, help us better understand the formation and evolution of these phases on icy moons or planets,” he tells Physics World. The post Ice XXI appears in a diamond anvil cell appeared first on Physics World.
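The quoted density follows directly from the unit-cell data via ρ = ZM/(N_A V). This quick Python cross-check is our own, not the KRISS analysis:

```python
# Density of ice XXI from its body-centred tetragonal unit cell.
N_A = 6.02214076e23   # Avogadro constant (1/mol)
M = 18.015            # molar mass of H2O (g/mol)

a = 20.197e-8         # cell edges, converted from angstroms to cm
c = 7.891e-8
V = a * a * c         # unit-cell volume (cm^3)
Z = 152               # water molecules per unit cell

rho = Z * M / (N_A * V)
print(f"rho = {rho:.3f} g/cm^3")  # 1.413, matching the reported value
```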
https://physicsworld.com/a/ice-xxi-appears-in-a-diamond-anvil-cell/
Space & Physics
svg
a07042bd58d49d462a47680c619a8222f34b908181a0f4d54b97e58bf1d7de92
2025-12-03T10:00:03+00:00
Studying the role of the quantum environment in attosecond science
Attosecond science is undoubtedly one of the fastest growing branches of physics today. Its popularity was demonstrated by the award of the 2023 Nobel Prize in Physics to Anne L’Huillier, Paul Corkum and Ferenc Krausz for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter. One of the most important processes in this field is dephasing, which happens when an electron loses its phase coherence because of interactions with its surroundings. This loss of coherence can obscure the fine details of electron dynamics, making it harder to capture precise snapshots of these rapid processes. The most common way to model dephasing in light-matter interactions is the relaxation time approximation. This approach greatly simplifies the picture, as it avoids the need to model every single particle in the system. It works well for dilute gases, but less well with intense lasers and denser materials, such as solids, because it greatly overestimates ionisation. This is a significant problem, as ionisation is the first step in many processes such as electron acceleration and high-harmonic generation. To address this shortcoming, a team led by researchers from the University of Ottawa has developed a new method to correct it. By introducing a heat bath into the model, the team was able to represent the many-body environment that interacts with electrons without significantly increasing the complexity. This new approach should enable the identification of new effects in attosecond science, or wherever strong electromagnetic fields interact with matter. Strong field physics in open quantum systems – IOPscience N. Boroumand et al, 2025 Rep. Prog. Phys. 88 070501 The post Studying the role of the quantum environment in attosecond science appeared first on Physics World.
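For readers unfamiliar with the relaxation time approximation mentioned above, its textbook form (a standard expression, not one taken from this paper) adds a single decay term to the equation of motion for the density matrix ρ, relaxing the system back towards an equilibrium state ρ₀ on a characteristic timescale T:

```latex
\frac{\partial \rho}{\partial t}
  = -\frac{i}{\hbar}\left[\hat{H},\,\rho\right]
    - \frac{\rho - \rho_{0}}{T}
```

Collapsing every interaction with the environment into the single constant T is what makes the approximation cheap, and it is also the simplification that leads to the over-ionisation the Ottawa-led team set out to correct.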
https://physicsworld.com/a/studying-the-role-of-the-quantum-environment-in-attosecond-science/
Space & Physics
svg
bbf35e4421a394ab84e333f4ef2010b218e1c171233ebf780d42a37f514ba55a
2025-12-03T09:59:45+00:00
Characterising quantum many-body states
Describing the non-classical properties of a complex many-body system (such as entanglement or coherence) is an important part of quantum technologies. An ideal tool for this task would work well with large systems, be easily computable and easily measurable. Unfortunately, no such tool yet exists for every situation. With this goal in mind, a team of researchers – Marcin Płodzień and Maciej Lewenstein (ICFO, Barcelona, Spain) and Jan Chwedeńczuk (University of Warsaw, Poland) – began work on a special type of quantum state used in quantum computing: graph states. These states can be visualised as graphs or networks where each vertex represents a qubit, and each edge represents an interaction between pairs of qubits. The team studied four different shapes of graph states using new mathematical tools they developed. They found that one of these in particular, the Turán graph, could be very useful in quantum metrology. Their method is (relatively) straightforward and does not require many assumptions, which means that it could be applied to any shape of graph beyond the four studied here. The results will be useful in various quantum technologies wherever precise knowledge of many-body quantum correlations is necessary. Many-body quantum resources of graph states – IOPscience M. Płodzień et al, 2025 Rep. Prog. Phys. 88 077601 The post Characterising quantum many-body states appeared first on Physics World.
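To make the graph-state picture above concrete, the standard construction (a generic textbook recipe, not the authors’ code) prepares every qubit in the |+⟩ state and applies a controlled-Z (CZ) gate across each edge of the graph. A minimal NumPy sketch for a small, hypothetical example graph:

```python
import numpy as np

def graph_state(n_qubits, edges):
    """Statevector of a graph state: |+>^n followed by a CZ gate
    applied across every edge (i, j) of the graph."""
    dim = 2 ** n_qubits
    psi = np.full(dim, 1.0 / np.sqrt(dim))   # |+>^n: uniform superposition
    for (i, j) in edges:
        for k in range(dim):
            # CZ flips the amplitude's sign when qubits i and j are both 1
            if (k >> i) & 1 and (k >> j) & 1:
                psi[k] *= -1.0
    return psi

# Example: a 4-qubit ring graph (an arbitrary test case)
psi = graph_state(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(np.round(psi, 3))
```

Every amplitude has equal magnitude and only its sign pattern carries the correlations, which is why the entanglement structure of such states is set entirely by the shape of the graph.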
https://physicsworld.com/a/characterising-quantum-many-body-states/
Space & Physics
svg
5b6ba490dff036bca648b7eef27b6093a46625811c9dd7a9f8f59cdf5374ac04
2025-12-02T14:13:19+00:00
Extra carbon in the atmosphere may disrupt radio communications
Higher levels of carbon dioxide (CO2) in the Earth’s atmosphere could harm radio communications by enhancing a disruptive effect in the ionosphere. According to researchers at Kyushu University, Japan, who modelled the effect numerically for the first time, this little-known consequence of climate change could have significant impacts on shortwave radio systems such as those employed in broadcasting, air traffic control and navigation. “While increasing CO2 levels in the atmosphere warm the Earth’s surface, they actually cool the ionosphere,” explains study leader Huixin Liu of Kyushu’s Faculty of Science. “This cooling doesn’t mean it is all good: it decreases the air density in the ionosphere and accelerates wind circulation. These changes affect the orbits and lifespan of satellites and space debris and also disrupt radio communications through localized small-scale plasma irregularities.” One such irregularity is a dense but transient layer of metal ions that forms between 90 and 120 km above the Earth’s surface. This sporadic E-layer (Es), as it is known, is roughly 1‒5 km thick and can stretch from tens to hundreds of kilometres in the horizontal direction. Its density is highest during the day, and it peaks around the time of the summer solstice. The formation of the Es is hard to predict, and the mechanisms behind it are not fully understood. However, the prevailing “wind shear” theory suggests that vertical shears in horizontal winds, combined with the Earth’s magnetic field, cause metallic ions such as Fe⁺, Na⁺ and Ca⁺ to converge in the ionospheric dynamo region and form thin layers of enhanced ionization. The ions themselves largely come from metals in meteoroids that enter the Earth’s atmosphere and disintegrate at altitudes of around 80‒100 km. While previous research has shown that increases in CO2 trigger atmospheric changes on a global scale, relatively little is known about how these increases affect smaller-scale ionospheric phenomena like the Es. In the new work, which is published in Geophysical Research Letters, Liu and colleagues used a whole-atmosphere model to simulate the upper atmosphere at two different CO2 concentrations: 315 ppm and 667 ppm. “The 315 ppm represents the CO2 concentration in 1958, the year in which recordings started at the Mauna Loa observatory, Hawaii,” Liu explains. “The 667 ppm represents the projected CO2 concentration for the year 2100, based on a conservative assumption that the increase in CO2 is constant at a rate of around 2.5 ppm/year since 1958.” The researchers then evaluated how these different CO2 levels influence a phenomenon known as vertical ion convergence (VIC), which, according to the wind shear theory, drives the Es. The simulations revealed that the higher the atmospheric CO2 levels, the greater the VIC at altitudes of 100–120 km. “What is more, this increase is accompanied by the VIC hotspots shifting downwards by approximately 5 km,” says Liu. “The VIC patterns also change dramatically during the day and these diurnal variability patterns continue into the night.” According to the researchers, the physical mechanism underlying these changes depends on two factors. The first is reduced collisions between metallic ions and the neutral atmosphere as a direct result of cooling in the ionosphere. The second is changes in the zonal wind shear, which are likely caused by long-term trends in atmospheric tides.
“These results are exciting because they show that the impacts of CO2 increase can extend all the way from Earth’s surface to altitudes at which HF and VHF radio waves propagate and communications satellites orbit,” Liu tells Physics World. “This may be good news for ham radio amateurs, as you will likely receive more signals from faraway countries more often. For radio communications, however, especially at HF and VHF frequencies employed for aviation, ships and rescue operations, it means more noise and frequent disruption in communication and hence safety. The telecommunications industry might therefore need to adjust their frequencies or facility design in the future.” The post Extra carbon in the atmosphere may disrupt radio communications appeared first on Physics World.
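The 2100 scenario quoted above can be sanity-checked with a line of arithmetic using only the numbers given in the article:

```python
# Implied CO2 growth rate for the model's 2100 scenario (figures as quoted).
c0, year0 = 315.0, 1958   # ppm, start of the Mauna Loa record
c1, year1 = 667.0, 2100   # ppm, projected concentration used in the model
rate = (c1 - c0) / (year1 - year0)
print(f"implied growth rate: {rate:.2f} ppm/year")   # ~2.48, i.e. around 2.5
```

The implied rate of about 2.48 ppm per year matches the “around 2.5 ppm/year” assumption Liu describes.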
https://physicsworld.com/a/extra-carbon-in-the-atmosphere-may-disrupt-radio-communications/
Space & Physics
svg
51a744f46e1a00d3a14d75e7f654c06320e9bd3a182792c4db8780f3fee6bbcf
2025-12-02T12:00:32+00:00
Phase-changing material generates vivid tunable colours
Structural colours – created using nanostructures that scatter and reflect specific wavelengths of light – offer a non-toxic, fade-resistant and environmentally friendly alternative to chemical dyes. Large-scale production of structural colour-based materials, however, has been hindered by fabrication challenges and a lack of effective tuning mechanisms. In a step towards commercial viability, a team at the University of Central Florida has used vanadium dioxide (VO2) – a material with temperature-sensitive optical and structural properties – to generate tunable structural colour on both rigid and flexible surfaces, without requiring complex nanofabrication. Senior author Debashis Chanda and colleagues created their structural colour platform by stacking a thin layer of VO2 on top of a thick, reflective layer of aluminium to form a tunable thin-film cavity. At specific combinations of VO2 grain size and layer thickness, this structure strongly absorbs certain frequency bands of visible light, producing the appearance of vivid colours. The key enabler of this approach is the fact that at a critical transition temperature, VO2 reversibly switches from insulator to metal, accompanied by a change in its crystalline structure. This phase change alters the interference conditions in the thin-film cavity, varying the reflectance spectra and changing the perceived colour. Controlling the thickness of the VO2 layer enables the generation of a wide range of structural colours. The bilayer structures are grown via a combination of magnetron sputtering and electron-beam deposition, techniques compatible with large-scale production. By adjusting the growth parameters during fabrication, the researchers could broaden the colour palette and control the temperature at which the phase transition occurs. To expand the available colour range further, they added a third ultrathin layer of high-refractive-index titanium dioxide on top of the bilayer. The researchers describe a range of applications for their flexible coloration platform, including a colour-tunable maple leaf pattern, a thermal sensing label on a coffee cup and tunable structural coloration on flexible fabrics. They also demonstrated its use on complex shapes, such as a toy gecko with a flexible tunable colour coating and an embedded heater. “These preliminary demonstrations validate the feasibility of developing thermally responsive sensors, reconfigurable displays and dynamic colouration devices, paving the way for innovative solutions across fields such as wearable electronics, cosmetics, smart textiles and defence technologies,” the team concludes. The research is described in Proceedings of the National Academy of Sciences. The post Phase-changing material generates vivid tunable colours appeared first on Physics World.
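The thickness-to-colour mechanism described above can be sketched with the standard thin-film (Airy) reflectance formula for an absorbing film on a metallic reflector. The complex refractive indices and thickness below are rough illustrative assumptions, not values from the paper:

```python
import numpy as np

# Normal-incidence reflectance of a VO2-like film on an aluminium-like
# mirror, using the standard Airy formula for a single thin film.
lam = np.linspace(400e-9, 700e-9, 301)   # visible wavelengths (m)
n_air = 1.0
n_film = 2.9 + 0.4j      # assumed film index (insulating-phase VO2, rough)
n_mirror = 0.9 + 6.5j    # assumed aluminium index (rough)
d = 180e-9               # assumed film thickness

r12 = (n_air - n_film) / (n_air + n_film)        # air/film interface
r23 = (n_film - n_mirror) / (n_film + n_mirror)  # film/mirror interface
beta = 2 * np.pi * n_film * d / lam              # phase acquired in the film
r = (r12 + r23 * np.exp(2j * beta)) / (1 + r12 * r23 * np.exp(2j * beta))
R = np.abs(r) ** 2

print(f"reflectance dips to {R.min():.2f} at {lam[np.argmin(R)]*1e9:.0f} nm")
```

Changing d, or the film’s complex index (which is exactly what the insulator-metal transition does), moves the absorption dip across the visible band, shifting the perceived colour.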
https://physicsworld.com/a/phase-changing-material-generates-vivid-tunable-colours/
Space & Physics
svg
95bf6e884667502e51816bf2aac6b1736abcd22ac9d0a8fa32f4e6cdd164bba5
2025-12-02T09:00:41+00:00
Semiconductor laser pioneer Susumu Noda wins 2026 Rank Prize for Optoelectronics
Susumu Noda of Kyoto University has won the 2026 Rank Prize for Optoelectronics for the development of the Photonic Crystal Surface Emitting Laser (PCSEL). For more than 25 years, Noda developed this new form of laser, which has potential applications in high-precision manufacturing as well as in LIDAR technologies. Since the development of the laser in 1960, optical fibre lasers and semiconductor lasers have become competing technologies. A semiconductor laser works by pumping an electrical current into a region where an n-doped (excess of electrons) and a p-doped (excess of “holes”) semiconductor material meet, causing electrons and holes to combine and release photons. Semiconductor lasers have several advantages in terms of their compactness, high “wallplug” efficiency and ruggedness, but fall short in other areas, such as brightness and functionality. This means that conventional semiconductor lasers required external optical and mechanical elements to improve their performance, which results in large and impractical systems. In the late 1990s, Noda began working on a new type of semiconductor laser that could challenge the performance of optical fibre lasers. These so-called PCSELs employ a photonic crystal layer in between the semiconductor layers. Photonic crystals are nanostructured materials in which a periodic variation of the dielectric constant – formed, for example, by a lattice of holes – creates a photonic band-gap. Noda and his research group made a series of breakthroughs in the technology, such as demonstrating control of polarization and beam shape by tailoring the photonic crystal structure, and extending operation into blue–violet wavelengths. The resulting PCSELs emit a high-quality, symmetric beam with narrow divergence and boast high brightness and high functionality while maintaining the benefits of conventional semiconductor lasers. In 2013, 0.2 W PCSELs became available and a few years later watt-class PCSEL lasers became operational. Noda says that it is “a great honour and a surprise” to receive the prize. “I am extremely happy to know that more than 25 years of research on photonic-crystal surface-emitting lasers has been recognized in this way,” he adds. “I do hope to continue to further develop the research and its social implementation.” Susumu Noda received his BSc and then PhD in electronics from Kyoto University in 1982 and 1991, respectively. From 1984 he also worked at Mitsubishi Electric Corporation, before joining Kyoto University in 1988 where he is currently based. Founded in 1972 by the British industrialist and philanthropist Lord J Arthur Rank, the Rank Prize is awarded biennially in nutrition and optoelectronics. The 2026 Rank Prize for Optoelectronics, which has a cash award of £100 000, will be awarded formally at an event held in June. The post Semiconductor laser pioneer Susumu Noda wins 2026 Rank Prize for Optoelectronics appeared first on Physics World.
https://physicsworld.com/a/semiconductor-laser-pioneer-susumu-noda-wins-2026-rank-prize-for-optoelectronics/
Space & Physics
svg
8cf920c9c18705bb03ee91a25fdaaf5a3487d9234683967aee436a2c335e2772
2025-12-01T14:00:32+00:00
Staying the course with lockdowns could end future pandemics in months
As a theoretical and mathematical physicist at Imperial College London, UK, Bhavin Khatri spent years using statistical physics to understand how organisms evolve. Then the COVID-19 pandemic struck, and like many other scientists, he began searching for ways to apply his skills to the crisis. This led him to realize that the equations he was using to study evolution could be repurposed to model the spread of the virus – and, crucially, to understand how it could be curtailed. In a paper published in EPL, Khatri models the spread of a SARS-CoV-2-like virus using branching process theory, which he’d previously used to study how advantageous alleles (variations in a genetic sequence) become more prevalent in a population. He then uses this model to assess the duration that interventions such as lockdowns would need to be applied in order to completely eliminate infections, with the strength of the intervention measured in terms of the number of people each infected person goes on to infect (the virus’ effective reproduction number, R). Tantalizingly, the paper concludes that applying such interventions worldwide in June 2020 could have eliminated the COVID virus by January 2021, several months before the widespread availability of vaccines reduced its impact on healthcare systems and led governments to lift restrictions on social contact. Physics World spoke to Khatri to learn more about his research and its implications for future pandemics; his replies below are in his own words. One important finding is that we can accurately calculate the distribution of times required for a virus to become extinct by making a relatively simple approximation. This approximation amounts to assuming that people have relatively little population-level “herd” immunity to the virus – exactly the situation that many countries, including the UK, faced in March 2020. Making this approximation meant I could reduce the three coupled differential equations of the well-known SIR model (which models pandemics via the interplay between Susceptible, Infected and Recovered individuals) to a single differential equation for the number of infected individuals in the population. This single equation turned out to be the same one that physics students learn when studying radioactive decay. I then used the discrete stochastic version of exponential decay and standard approaches in branching process theory to calculate the distribution of extinction times. Alongside the formal theory, I also used my experience in population genetic theory to develop an intuitive approach for calculating the mean of this extinction time distribution. In population genetics, when a mutation is sufficiently rare, changes in its number of copies in the population are dominated by randomness. This is true even if the mutation has a large selective advantage: it has to grow by chance to a sufficient critical size – on the order of 1/(selection strength) – for selection to take hold. The same logic works in reverse when applied to a declining number of infections. Initially, they will decline deterministically, but once they go below a threshold number of individuals, changes in infection numbers become random. Using the properties of such random walks, I calculated an expression for the threshold number and the mean duration of the stochastic phase. These agree well with the formal branching process calculation.
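The stochastic picture Khatri describes – deterministic exponential decline followed by a random final phase – can be illustrated with a simple Galton–Watson branching process. This is a minimal sketch under assumed parameters (10 000 initial infections and a 5-day generation interval), not a reproduction of the paper’s model:

```python
import numpy as np

rng = np.random.default_rng(1)

def generations_to_extinction(n0, R=0.5, max_gen=10_000):
    """Galton-Watson branching process: each infected individual causes
    Poisson(R) new infections per generation; count generations to zero."""
    n, t = n0, 0
    while n > 0 and t < max_gen:
        n = rng.poisson(R * n)   # sum of n independent Poisson(R) draws
        t += 1
    return t

times = np.array([generations_to_extinction(10_000) for _ in range(2_000)])
days = times * 5    # assumed ~5-day generation (serial) interval
print(f"mean: {days.mean():.0f} days; 5th-95th percentile: "
      f"{np.percentile(days, 5):.0f}-{np.percentile(days, 95):.0f} days")
```

The narrow spread of the resulting distribution is the point: with R held at 0.5, extinction arrives on a timescale of months rather than years.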
In practical terms, the main result of this theoretical work is to show that for sufficiently strong lockdowns (where, on average, only one of every two infected individuals goes on to infect another person, R=0.5), this distribution of extinction times was narrow enough to ensure that the COVID pandemic virus would have gone extinct in a matter of months, or at most a year. Leaving politics and the likelihood of social acceptance aside for the moment, if a sufficiently strong lockdown could have been maintained for a period of roughly six months across the globe, then I am confident that the virus could have been reduced to very low levels, or even made extinct. The question then is: is this a stable situation? From the perspective of a single nation, if the rest of the world still has infections, then that nation either needs to maintain its lockdown or be prepared to re-impose it if there are new imported cases. From a global perspective, a COVID-free world should be a stable state, unless an animal reservoir of infections causes re-infections in humans. As for the practical success of such a strategy, that depends on politics and the willingness of individuals to remain in lockdown. Clearly, this is not in the model. One thing I do discuss, though, is that this strategy becomes far more difficult once more infectious variants of SARS-CoV-2 evolve. However, the problem I was working on before this one (which I eventually published in PNAS) concerned the probability of evolutionary rescue or resistance, and that work suggests that evolution of new COVID variants reduces significantly when there are fewer infections. So an elimination strategy should also be more robust against the evolution of new variants. I’d like them to conclude that pandemics with similar properties are, in principle, controllable to small levels of infection – or complete extinction – on timescales of months, not years, and that controlling them minimizes the chance of new variants evolving. So, although the question of the political and social will to enact such an elimination strategy is not in the scope of the paper, I think if epidemiologists, policy experts, politicians and the public understood that lockdowns have a finite time horizon, then it is more likely that this strategy could be adopted in the future. I should also say that my work makes no comment on the social harms of lockdowns, which shouldn’t be minimized and would need to be weighed against the potential benefits. I think the most interesting next avenue will be to develop theory that lets us better understand the stability of the extinct state at the national and global level, under various assumptions about declining infections in other countries that adopted different strategies and the role of an animal reservoir. It would also be interesting to explore the role of “superspreaders”, or infected individuals who infect many other people. There’s evidence that many infections spread primarily through relatively few superspreaders, and heuristic arguments suggest that taking this into account would decrease the time to extinction compared to the estimates in this paper. I’ve also had a long-term interest in understanding the evolution of viruses from the lens of what are known as genotype phenotype maps, where we consider the non-trivial and often redundant mapping from genetic sequences to function, where the role of stochasticity in evolution can be described by statistical physics analogies. 
For the evolution of the antibodies that help us avoid virus antigens, this would be a driven system, and theories of non-equilibrium statistical physics could play a role in answering questions about the evolution of new variants. The post Staying the course with lockdowns could end future pandemics in months appeared first on Physics World.
https://physicsworld.com/a/staying-the-course-with-lockdowns-could-end-future-pandemics-in-months/
Space & Physics
svg
87bf3cacc2c546fb9bf1b6062eb80d5fe92d443dc4b788179d646e8cf911f41b
2025-12-01T11:00:21+00:00
When is good enough ‘good enough’?
Whether you’re running a business project, carrying out scientific research, or doing a spot of DIY around the house, knowing when something is “good enough” can be a tough question to answer. To me, “good enough” means something that is fit for purpose. It’s about striking a balance between the effort required to achieve perfection and the cost of not moving forward. It’s an essential mindset when perfection is either not needed or – as is often the case – not attainable. When striving for good enough, the important thing to focus on is that your outcome should meet expectations, but not massively exceed them. Sounds simple, but how often have we heard people say things like they’re “polishing coal”, striving for “gold plated” or “trying to make a silk purse out of a sow’s ear”. It basically means they haven’t understood, defined or even accepted the requirements of the end goal. Trouble is, as we go through school, college and university, we’re brought up to believe that we should strive for the best in whatever we study. Those with the highest grades, we’re told, will probably get the best opportunities and career openings. Unfortunately, this approach means we think we need to aim for perfection in everything in life, which is not always a good thing. So why is aiming for “good enough” a good thing to do? First, there’s the notion of “diminishing returns”. It takes a disproportionate amount of effort to achieve the final, small improvements that most people won’t even notice. Put simply, time can be wasted on unnecessary refinements, as embodied by the 80/20 rule. Also known as the Pareto principle – in honour of the Italian economist Vilfredo Pareto who first came up with the idea – the 80/20 rule states that for many outcomes, 80% of consequences or results come from 20% of the causes or effort. The principle helps to identify where to prioritize activities to boost productivity and get better results. It is a guideline, and the ratios can vary, but it can be applied to many things in both our professional and personal lives. Examples from the world of business include the following: Business sales: 80% of a company’s revenue might come from 20% of its customers. Company productivity: 80% of your results may come from 20% of your daily tasks. Software development: 80% of bugs could be caused by 20% of the code. Quality control: 20% of defects may cause 80% of customer complaints. Good enough also helps us to focus efforts. When a consumer or customer doesn’t know exactly what they want, or a product development route is uncertain, it can be better to deliver things in small chunks. Providing something basic but usable is a way to solicit feedback that helps clarify requirements, or to make improvements and additions that can be incorporated into the next chunk. This is broadly along the lines of a “minimum viable product”. Not seeking perfection reminds us too that solutions to problems are often uncertain. If it’s not clear how, or even if, something might work, a proof of concept (PoC) can instead be a good way to try something out. Progress can be made by solving a specific technical challenge, whether via a basic experiment, demonstration or short piece of research. A PoC should help avoid committing significant time and resource to something that will never work. Aiming for “good enough” naturally leads us to the notion of “continuous improvement”.
It’s a personal favourite of mine because it allows for things to be improved incrementally as we learn or get feedback, rather than producing something in one go and then forgetting about it. It helps keep things current and relevant and encourages a culture of constantly looking for a better way to do things. Finally, when searching for good enough, don’t forget the idea of ballpark estimates. Making approximations sounds too simple to be effective, but sometimes a rough estimate is really all you need. If an approximate guess can inform and guide your next steps or determine whether further action will be necessary then go for it. Being good enough doesn’t just lead to practical outcomes – it can benefit our personal well-being too. Our time, after all, is a precious commodity and we can’t magically increase this resource. The pursuit of perfection can lead to stagnation, and ultimately burnout, whereas achieving good enough allows us to move on in a timely fashion. A good-enough approach will even make you less stressed. By getting things done sooner and achieving more, you’ll feel freer and happier about your work even if it means accepting imperfection. Mistakes and errors are inevitable in life, so don’t be afraid to make them; use them as learning opportunities, rather than seeing them as something bad. Remember – the person who never made a mistake never got out of bed. Recognizing that you’ve done the best you can for now is also crucial for starting new projects and making progress. By accepting good enough you can build momentum, get more things done, and consistently take actions toward achieving your goals. Finally, good enough is also about shared ownership. By inviting someone else to look at what you’ve done, you can significantly speed up the process. In my own career I’ve often found myself agonising over some obscure detail or feeling something is missing, only to have my quandary solved almost instantly simply by getting someone else involved – making me wish I’d asked them sooner. Good enough comes with some caveats. Regulatory or legislative requirements mean there will always be projects that have to reach a minimum standard, which will be your top priority. The precise nature of good enough will also depend on whether you’re making stuff (be it cars or computers) or dealing with intangible commodities such as software or services. So what’s the conclusion? Well, in the interests of my own time, I’ve decided to apply the 80/20 rule and leave it to you to draw your own conclusion. As far as I’m concerned, I think this article has been good enough, but I’m sure you’ll let me know if it hasn’t. Consider it a minimum viable product that I can update in a future column. The post When is good enough ‘good enough’? appeared first on Physics World.
https://physicsworld.com/a/when-is-good-enough-good-enough/
Space & Physics
svg
b6fb450402aaac463a160fcfefb00cfa9acd01bdaa5a5ea33bc3cf52d3ef45aa
2025-12-01T09:06:19+00:00
Looking for inconsistencies in the fine structure constant
New high-precision laser spectroscopy measurements on thorium-229 nuclei could shed more light on the fine structure constant, which determines the strength of the electromagnetic interaction, say physicists at TU Wien in Austria. The electromagnetic interaction is one of the four known fundamental forces in nature, with the others being gravity and the strong and weak nuclear forces. Each of these fundamental forces has an interaction constant that describes its strength in comparison with the others. The fine structure constant, α, has a value of approximately 1/137. If it had any other value, charged particles would behave differently, chemical bonding would manifest in another way and light-matter interactions as we know them would not be the same. “As the name ‘constant’ implies, we assume that these forces are universal and have the same values at all times and everywhere in the universe,” explains study leader Thorsten Schumm from the Institute of Atomic and Subatomic Physics at TU Wien. “However, many modern theories, especially those concerning the nature of dark matter, predict small and slow fluctuations in these constants. Demonstrating a non-constant fine-structure constant would shatter our current understanding of nature, but to do this, we need to be able to measure changes in this constant with extreme precision.” With thorium spectroscopy, he says, we now have a very sensitive tool to search for such variations. The new work builds on a project that led, last year, to the world’s first nuclear clock, and is based on precisely determining how the thorium-229 (²²⁹Th) nucleus changes shape when one of its neutrons transitions from its ground state to a higher-energy state. “When excited, the ²²⁹Th nucleus becomes slightly more elliptic,” Schumm explains. “Although this shape change is small (at the 2% level), it dramatically shifts the contributions of the Coulomb interactions (the repulsion between protons in the nucleus) to the nuclear quantum states.” The result is a change in the geometry of the ²²⁹Th nucleus’ electric field, to a degree that depends very sensitively on the value of the fine structure constant. By precisely observing this thorium transition, it is therefore possible to measure whether the fine-structure constant is actually a constant or whether it varies slightly. After making crystals of ²²⁹Th doped in a CaF2 matrix at TU Wien, the researchers performed the next phase of the experiment in a JILA laboratory at the University of Colorado, Boulder, US, firing ultrashort laser pulses at the crystals. While they did not measure any changes in the fine structure constant, they did succeed in determining how such changes, if they exist, would translate into modifications to the energy of the first nuclear excited state of ²²⁹Th. “It turns out that this change is huge, a factor of 6000 larger than in any atomic or molecular system, thanks to the high energy governing the processes inside nuclei,” Schumm says. “This means that we are by a factor of 6000 more sensitive to fine structure variations than previous measurements.” Researchers in the field have debated the likelihood of such an “enhancement factor” for decades, and theoretical predictions of its value have varied between zero and 10 000. “Having confirmed such a high enhancement factor will now allow us to trigger a ‘hunt’ for the observation of fine structure variations using our approach,” Schumm says.
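For reference, the fine-structure constant at the heart of this search is the dimensionless combination (a standard definition, not specific to this work):

```latex
\alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137.036}
```

Because α is dimensionless, a measured drift in it cannot be blamed on a change of units, which is what makes it the natural target for this kind of precision search.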
Andrea Caputo of CERN’s theoretical physics department, who was not involved in this work, calls the experimental result “truly remarkable”, as it probes nuclear structure with a precision that has never been achieved before. However, he adds that the theoretical framework is still lacking. “In a recent work published shortly before this work, my collaborators and I showed that the nuclear-clock enhancement factor K is still subject to substantial theoretical uncertainties,” Caputo says. “Much progress is therefore still required on the theory side to model the nuclear structure reliably.” Schumm and colleagues are now working on increasing the spectroscopic accuracy of their ²²⁹Th transition measurement by another one to two orders of magnitude. “We will then start hunting for fluctuations in the transition energy,” he reveals, “tracing it over time and – through the Earth’s movement around the Sun – space.” The present work is detailed in Nature Communications. The post Looking for inconsistencies in the fine structure constant appeared first on Physics World.
https://physicsworld.com/a/looking-for-inconsistencies-in-the-fine-structure-constant/
Space & Physics
svg
7735e3de81982f958ca48ffe7ef1d2e3eeb784b389997521200c030242f21680
2025-11-28T14:45:22+00:00
Heat engine captures energy as Earth cools at night
A new heat engine driven by the temperature difference between Earth’s surface and outer space has been developed by Tristan Deppe and Jeremy Munday at the University of California, Davis. In an outdoor trial, the duo showed how their engine could offer a reliable source of renewable energy at night. While solar cells do a great job of converting the Sun’s energy into electricity, they have one major drawback, as Munday explains: “Lack of power generation at night means that we either need storage, which is expensive, or other forms of energy, which often come from fossil fuel sources.” One solution is to exploit the fact that the Earth’s surface absorbs heat from the Sun during the day and then radiates some of that energy into space at night. While space has a temperature of around −270 °C, the average temperature of Earth’s surface is a balmy 15 °C. Together, these two heat reservoirs provide the essential ingredients of a heat engine, which is a device that extracts mechanical work as thermal energy flows from a heat source to a heat sink. “At first glance, these two entities appear too far apart to be connected through an engine. However, by radiatively coupling one side of the engine to space, we can achieve the needed temperature difference to drive the engine,” Munday explains. For the concept to work, the engine must radiate the energy it extracts from the Earth within the atmospheric transparency window. This is a narrow band of infrared wavelengths that pass directly into outer space without being absorbed by the atmosphere. To demonstrate this concept, Deppe and Munday created a Stirling engine, which operates through the cyclical expansion and contraction of an enclosed gas as it moves between hot and cold ends. In their setup, the ends were aligned vertically, with a pair of plates connecting each end to the corresponding heat reservoir. For the hot end, an aluminium mount was pressed into soil, transferring the Earth’s ambient heat to the engine’s bottom plate. At the cold end, the researchers attached a black-coated plate that emitted an upward stream of infrared radiation within the transparency window. In a series of outdoor experiments performed throughout the year, this setup maintained a temperature difference greater than 10 °C between the two plates during most months. This was enough to extract more than 400 mW per square metre of mechanical power throughout the night. “We were able to generate enough power to run a mechanical fan, which could be used for air circulation in greenhouses or residential buildings,” says Munday. “We also configured the device to produce both mechanical and electrical power simultaneously, which adds to the flexibility of its operation.” With this promising early demonstration, the researchers now predict that future improvements could enable the system to extract as much as 6 W per square metre under the same conditions. If rolled out commercially, the heat engine could help reduce the reliance of solar power on night-time energy storage – potentially opening a new route to cutting carbon emissions. The research is described in Science Advances. The post Heat engine captures energy as Earth cools at night appeared first on Physics World.
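A rough Carnot-limit estimate shows why the measured 400 mW per square metre and the projected 6 W per square metre are plausible orders of magnitude. The radiative cooling flux used below is an assumed, typical night-sky value, not a number from the paper:

```python
# Back-of-envelope Carnot estimate (assumed inputs, not the authors' analysis).
T_hot = 288.0   # K, ground-coupled plate at ~15 C
dT = 10.0       # K, reported temperature difference between the plates
Q_rad = 100.0   # W/m^2, assumed radiative cooling through the sky window

eta_carnot = dT / T_hot                        # ideal efficiency, ~3.5%
print(f"Carnot efficiency: {eta_carnot:.1%}")
print(f"ideal output: {eta_carnot * Q_rad:.1f} W/m^2")   # a few W/m^2
```

An ideal engine under these assumptions would deliver a few watts per square metre, so the measured 0.4 W per square metre sits comfortably below the thermodynamic ceiling, while reaching the predicted 6 W per square metre would require a larger temperature difference or stronger radiative coupling.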
https://physicsworld.com/a/heat-engine-captures-energy-as-earth-cools-at-night/
Space & Physics
svg
626a4b8234f761f3de474447c11379d470f92b6e38e341f269d22c0e97f75ebb
2025-11-28T09:40:36+00:00
Microscale ‘wave-on-a-chip’ device sheds light on nonlinear hydrodynamics
A new microscale version of the flumes that are commonly used to reproduce wave behaviour in the laboratory will make it far easier to study nonlinear hydrodynamics. The device consists of a layer of superfluid helium just a few atoms thick on a silicon chip, and its developers at the University of Queensland, Australia, say it could help us better understand phenomena ranging from oceans and hurricanes to weather and climate. “The physics of nonlinear hydrodynamics is extremely hard to model because of instabilities that ultimately grow into turbulence,” explains study leader Warwick Bowen of Queensland’s Quantum Optics Laboratory. “It is also very hard to study in experiments since these often require hundreds-of-metre-long wave flumes.” While such flumes are good for studying shallow-water dynamics like tsunamis and rogue waves, Bowen notes that they struggle to access many of the complex wave behaviours, such as turbulence, found in nature. The team say that the geometrical structure of the new wave-on-a-chip device can be designed at will using lithographic techniques and built in a matter of days. Superfluid helium placed on its surface can then be controlled optomechanically. Thanks to these innovations, the researchers were able to experimentally measure nonlinear hydrodynamics millions of times faster than would be possible using traditional flumes. They could also “amplify” the nonlinearities of complex behaviours, making them orders of magnitude stronger than is possible in even the largest wave flumes. “This promises to change the way we do nonlinear hydrodynamics, with the potential to discover new equations that better explain the complex physics behind it,” Bowen says. “Such a technique could be used widely to improve our ability to predict both natural and engineered hydrodynamic behaviours.” So far, the team has used the chip to measure several effects, including wave steepening, shock fronts and solitary wave fission. While these nonlinear behaviours had been predicted in superfluids, they had never been directly observed there until now. The Quantum Optics Laboratory researchers have been studying superfluid helium for over a decade. A key feature of this quantum liquid is that it flows without resistance, similar to the way electrons move without resistance in a superconductor. “We realized that this behaviour could be exploited in experimental studies of nonlinear hydrodynamics because it allows waves to be generated in a very shallow depth – even down to just a few atoms deep,” Bowen explains. In conventional fluids, Bowen continues, resistance to motion becomes hugely important at small scales, and ultimately limits the nonlinear strengths accessible in traditional flume-based testing rigs. “Moving from the tens-of-centimetre depths of these flumes to tens-of-nanometres, we realized that superfluid helium could allow us to achieve many orders of magnitude stronger nonlinearities – comparable to the largest flows in the ocean – while also greatly increasing measurement speeds. It was this potential that attracted us to the project.” The experiments were far from simple, however. To do them, the researchers needed to cryogenically cool the system to near absolute zero temperatures. They also needed to fabricate exceptionally thin superfluid helium films that interact very weakly with light, as well as optical devices with structures smaller than a micron.
Combining all these components required what Bowen describes as “something of a hero experiment”, with important contributions coming from the team’s co-leader, Christopher Baker, and Walter Wasserman, who was then a PhD student in the group. The wave dynamics themselves, Bowen adds, were “exceptionally complex” and were analysed by Matthew Reeves, the first author of a Science paper describing the device. As well as the application areas mentioned earlier, the team say the new work, which is supported by the US Defense Advanced Research Projects Agency’s APAQuS Program, could also advance our understanding of strongly interacting quantum structures that are difficult to model theoretically. “Superfluid helium is a classic example of such a system,” explains Bowen, “and our measurements represent the most precise measurements of wave physics in these [systems]. Other applications may be found in quantum technologies, where the flow of superfluid helium could – somewhat speculatively – replace superconducting electron flow in future quantum computing architectures.” The researchers now plan to use the device and machine learning techniques to search for new hydrodynamics equations. The post Microscale ‘wave-on-a-chip’ device sheds light on nonlinear hydrodynamics appeared first on Physics World.
https://physicsworld.com/a/microscale-wave-on-a-chip-device-sheds-light-on-nonlinear-hydrodynamics/
Space & Physics
svg
aaf6374dd4795a6acc8fa2b4a3c0ddfb3c1ce482ef15149f20c0d2700f8ae7dd
2025-11-27T16:21:23+00:00
Electrical charge on objects in optical tweezers can be controlled precisely
An effect first observed decades ago by Nobel laureate Arthur Ashkin has been used to fine-tune the electrical charge on objects held in optical tweezers. Developed by an international team led by Scott Waitukaitis of the Institute of Science and Technology Austria, the new technique could improve our understanding of aerosols and clouds. Optical tweezers use focused laser beams to trap and manipulate small objects about 100 nm to 1 micron in size. Their precision and versatility have made them a staple across fields from quantum optics to biochemistry. Ashkin shared the 2018 Nobel prize for inventing optical tweezers, and in the 1970s he noticed that trapped objects can be electrically charged by the laser light. “However, his paper didn’t get much attention, and the observation has essentially gone ignored,” explains Waitukaitis. Waitukaitis’ team rediscovered the effect while using optical tweezers to study how charges build up in the ice crystals accumulating inside clouds. In their experiment, micron-sized silica spheres stood in for the ice, but Ashkin’s charging effect got in their way. “Our goal has always been to study charged particles in air in the context of atmospheric physics – in lightning initiation or aerosols, for example,” Waitukaitis recalls. “We never intended for the laser to charge the particle, and at first we were a bit bummed out that it did so.” Their next thought was that they had discovered a new and potentially useful phenomenon. “Out of due diligence we of course did a deep dive into the literature to be sure that no one had seen it, and that’s when we found the old paper from Ashkin,” says Waitukaitis. In 1976, Ashkin described how optically trapped objects become charged through a nonlinear process whereby electrons absorb two photons simultaneously. These electrons can acquire enough energy to escape the object, leaving it with a positive charge. Yet beyond this insight, Ashkin “wasn’t able to make much sense of the effect,” Waitukaitis explains. “I have the feeling he found it an interesting curiosity and then moved on.” To study the effect in more detail, the team modified their optical tweezers setup so its two copper lens holders doubled as electrodes, allowing them to apply an electric field along the axis of the confining, opposite-facing laser beams. If the silica sphere became charged, this field would cause it to shake, scattering a portion of the laser light back towards each lens. The researchers picked off this portion of the scattered light using a beam splitter, then diverted it to a photodiode, allowing them to track the sphere’s position. Finally, they converted the measured amplitude of the shaking particle into a real-time charge measurement. This allowed them to track the relationship between the sphere’s charge and the laser’s tuneable intensity. Their measurements confirmed Ashkin’s 1976 hypothesis that electrons on optically-trapped objects undergo two-photon absorption, allowing them to escape. Waitukaitis and colleagues improved on this model and showed how the charge on a trapped object can be controlled precisely by simply adjusting the laser’s intensity. As for the team’s original research goal, the effect has actually been very useful for studying the behaviour of charged aerosols. “We can get [an object] so charged that it shoots off little ‘microdischarges’ from its surface due to breakdown of the air around it, involving just a few or tens of electron charges at a time,” Waitukaitis says.
“This is going to be really cool for studying electrostatic phenomena in the context of particles in the atmosphere.” The study is described in Physical Review Letters. The post Electrical charge on objects in optical tweezers can be controlled precisely appeared first on Physics World.
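The charge readout described above – converting the sphere’s driven shaking amplitude into a charge – can be sketched as a driven, damped harmonic oscillator. All parameter values here are illustrative assumptions, not the team’s numbers:

```python
import numpy as np

# Infer the charge q on a trapped sphere from its response to an applied
# oscillating field: x_amp = q*E0 / (m*sqrt((w0^2 - w^2)^2 + (gamma*w)^2)).
m = 2.0e-15              # kg, ~micron-scale silica sphere (assumed)
w0 = 2 * np.pi * 100e3   # rad/s, trap resonance frequency (assumed)
w = 2 * np.pi * 20e3     # rad/s, drive frequency (assumed)
gamma = 2 * np.pi * 5e3  # rad/s, damping rate (assumed)
E0 = 1.0e4               # V/m, applied field amplitude (assumed)
x_amp = 5.0e-9           # m, measured shaking amplitude (assumed)

q = x_amp * m * np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2) / E0
print(f"inferred charge: {q:.2e} C (~{q/1.602e-19:.0f} elementary charges)")
```

Inverting the oscillator response in this way is what turns a position measurement into a real-time charge measurement.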
https://physicsworld.com/a/electrical-charge-on-objects-in-optical-tweezers-can-be-controlled-precisely/
Space & Physics
svg
e102a718457e5964eb5569a10975a7f8c6ee490364d51336b0d9562ffa075056
2025-11-27T15:00:43+00:00
Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge
Earlier this autumn I had the pleasure of visiting the Perimeter Institute for Theoretical Physics in Waterloo, Canada – where I interviewed four physicists about their research. This is the second of those conversations to appear on the podcast – and it is with Bianca Dittrich, whose research focuses on quantum gravity. Albert Einstein’s general theory of relativity does a great job of explaining gravity, but it is thought to be incomplete because it is incompatible with quantum mechanics. This is an important shortcoming because quantum mechanics is widely considered to be one of science’s most successful theories. Developing a theory of quantum gravity is a crucial goal in physics, but it is proving to be extremely difficult. In this episode, Dittrich explains some of the challenges and talks about ways forward – including her current research on spin foams. We also chat about the intersection of quantum gravity and condensed matter physics, and the difficulties of testing theories against observational data. IOP Publishing’s new Progress In Series: Research Highlights website offers quick, accessible summaries of top papers from leading journals like Reports on Progress in Physics and Progress in Energy. Whether you’re short on time or just want the essentials, these highlights help you expand your knowledge of leading topics. The post Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge appeared first on Physics World.
https://physicsworld.com/a/quantum-gravity-we-explore-spin-foams-and-other-potential-solutions/
Space & Physics
svg
106d04364b6e1845754cd604d86386aafdb87c94371ead9b979c5bc37873192b
2025-11-27T10:32:21+00:00
Can fast qubits also be robust?
Qubits – the building blocks of quantum computers – are plagued with a seemingly insurmountable dilemma. If they’re fast, they aren’t robust. And if they’re robust, they aren’t fast. Both qualities are important, because all potentially useful quantum algorithms rely on being able to perform many manipulations on a qubit before its state decays. But whereas faster qubits are typically realized by strongly coupling them to the external environment, enabling them to interact more strongly with the driving field, robust qubits with long coherence times are typically achieved by isolating them from their environment. These seemingly contradictory requirements made simultaneously fast and robust qubits an unsolved challenge – until now. In an article published in Nature Communications, a team of physicists led by Dominik Zumbühl from the University of Basel, Switzerland, show that it is, in fact, possible to increase both the coherence time and operational speed of a qubit, demonstrating a pathway out of this long-standing impasse. The key ingredient driving this discovery is something called the direct Rashba spin-orbit interaction. The best-known example of spin-orbit interaction comes from atomic physics. Consider a hydrogen atom, in which a single electron revolves around a single proton in the nucleus. During this orbital motion, the electron interacts with the static electric field generated by the positively charged nucleus. The electron in turn experiences an effective magnetic field that couples to the electron’s intrinsic magnetic moment, or spin. This coupling of the electron’s orbital motion to its spin is called spin-orbit (SO) interaction. Aided by collaborators at the University of Oxford, UK, and TU Eindhoven in the Netherlands, Zumbühl and colleagues chose to replace this simple SO interaction with a far more complex landscape of electrostatic potential generated by a 10-nanometre-thick germanium wire coated with a thin silicon shell. By removing a single electron from this wire, they create states known as holes that can be used as qubits, with quantum information being encoded in the hole’s spin. Importantly, the underlying crystal structure of the silicon-coated germanium wire constrains these holes to discrete energy levels called bands. “If you were to mathematically model a low-level hole residing in one of these bands using perturbation theory – a commonly applied method in which more remote bands are treated as corrections to the ground state – you would find a term that looks structurally similar to the spin–orbit interaction known from atomic physics,” explains Miguel Carballido, who conducted the work during his PhD at Basel, and is now a senior research associate at the University of New South Wales’ School of Electrical Engineering and Telecommunications in Sydney, Australia. By encoding the quantum states in these energy levels, the spin-orbit interaction can be used to drive the hole-qubit between its two spin states. What makes this interaction special is that it can be tuned using an external electric field. Thus, by applying a stronger electric field, the interaction can be strengthened – resulting in faster qubit manipulation. This ability to make a qubit faster by tuning an external parameter isn’t new. The difference is that whereas in other approaches, a stronger interaction also means higher sensitivity to fluctuations in the driving field, the Basel researchers found a way around this problem.
As they increase the electric field, the spin-orbit interaction increases up to a certain point. Beyond this point, any further increase in the electric field will cause the hole to remain stuck within a low energy band. This restricts the hole’s ability to interact with other bands to change its spin, causing the SO interaction strength to drop. By tuning the electric field to this peak, they can therefore operate in a “plateau” region where the SO interaction is the strongest, but the sensitivity to noise is the lowest. This leads to high coherence times, meaning that the qubit remains in the desired quantum state for longer. By reaching this plateau, where the qubit is both fast and robust, the researchers demonstrate the ability to operate their device in the “compromise-free” regime. So, is quantum computing now a solved problem? The researchers’ answer is “not yet”, as there are still many challenges to overcome. “A lot of the heavy lifting is being done by the quasi 1D system provided by the nanowire,” remarks Carballido, “but this also limits scalability.” He also notes that the success of the experiment depends on being able to fabricate each qubit device very precisely, and doing this reproducibly remains a challenge. The post Can fast qubits also be robust? appeared first on Physics World.
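The “plateau” logic described above – operating where the coupling is maximal, so that its first derivative with respect to the noisy control field vanishes – can be illustrated with a toy peaked coupling curve. The functional form below is an arbitrary assumption for illustration, not the authors’ model:

```python
import numpy as np

# Toy spin-orbit coupling s(E) that rises with electric field E, peaks,
# then falls. At the peak, ds/dE = 0, so small field fluctuations change
# the coupling only to second order - a noise "sweet spot".
E = np.linspace(0.1, 10.0, 1000)   # arbitrary field units
s = E / (1.0 + (E / 4.0) ** 2)     # assumed peaked form, maximum at E = 4

dsdE = np.gradient(s, E)
i_peak = np.argmax(s)
print(f"peak at E ~ {E[i_peak]:.2f}; |ds/dE| there = {abs(dsdE[i_peak]):.2e}")
print(f"|ds/dE| at E = 1 (off the plateau) = {abs(dsdE[np.searchsorted(E, 1.0)]):.2e}")
```

At the peak, the qubit can be driven hard while remaining first-order insensitive to fluctuations in the field, which is the “compromise-free” operating point the Basel team identified.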
https://physicsworld.com/a/can-fast-qubits-also-be-robust/
Space & Physics
svg
411d4c1e594ed0915708c44feb0b48470b56c7db37647f44e0711e72e6174309
2025-11-26T13:52:29+00:00
Did cannibal stars and boson stars populate the early universe?
In the early universe, moments after the Big Bang and cosmic inflation, clusters of exotic, massive particles could have collapsed to form bizarre objects called cannibal stars and boson stars. In turn, these could have then collapsed to form primordial black holes – all before the first elements were able to form. This curious chain of events is predicted by a new model proposed by a trio of scientists at SISSA, the International School for Advanced Studies in Trieste, Italy. Their proposal involves a hypothetical moment in the early universe called the early matter-dominated (EMD) epoch. This would have lasted only a few seconds after the Big Bang, but could have been dominated by exotic particles, such as the massive, supersymmetric particles predicted by string theory. “There are no observations that hint at the existence of an EMD epoch – yet!” says SISSA’s Pranjal Ralegankar. “But many cosmologists are hoping that an EMD phase occurred because it is quite natural in many models.” Some models of the early universe predict the formation of primordial black holes from quantum fluctuations in the inflationary field. Now, Ralegankar and his colleagues Daniele Perri and Takeshi Kobayashi propose a new and more natural pathway for forming primordial black holes via an EMD epoch. They postulate that in the first second of existence, when the universe was small and incredibly hot, exotic massive particles emerged and clustered in dense haloes. The SISSA physicists propose that the haloes then collapsed into hypothetical objects called cannibal stars and boson stars. Cannibal stars are powered by particles annihilating each other, which would have allowed the objects to resist further gravitational collapse for a few seconds. However, they would not have produced light like normal stars. “The particles in a cannibal star can only talk to each other, which is why they are forced to annihilate each other to counter the immense pressure from gravity,” Ralegankar tells Physics World. “They are immensely hot, simply because the particles that we consider are so massive. The temperature of our cannibal stars can range from a few GeV to on the order of 10¹⁰ GeV. For comparison, the Sun is on the order of keV.” Boson stars, meanwhile, would be made from a pure Bose–Einstein condensate, which is a state of matter whereby the individual particles quantum mechanically act as one. Both the cannibal stars and boson stars would exist within larger haloes that would quickly collapse to form primordial black holes with masses about the same as asteroids (about 10¹⁴–10¹⁹ kg). All of this could have taken place just 10 s after the Big Bang. Ralegankar, Perri and Kobayashi point out that the total mass of primordial black holes that their model produces matches the amount of dark matter in the universe. “Current observations rule out black holes to be dark matter, except in the asteroid-mass range,” says Ralegankar. “We showed that our models can produce black holes in that mass range.” Richard Massey, who is a dark-matter researcher at Durham University in the UK, agrees that microlensing observations by projects such as the Optical Gravitational Lensing Experiment (OGLE) have ruled out a population of black holes with planetary masses, but not asteroid masses. However, Massey is doubtful that these black holes could make up dark matter. “It would be pretty contrived for them to make up a large fraction of what we call dark matter,” he says.
“It’s possible that dark matter could be these primordial black holes, but they’d need to have been created with the same mass no matter where they were and whatever environment they were in, and that mass would have to be tuned to evade current experimental evidence.” In the coming years, upgrades to OGLE and the launch of NASA’s Roman Space Telescope should finally provide sensitivity to microlensing events produced by objects in the asteroid mass range, allowing researchers to settle the matter. It is also possible that cannibal and boson stars exist today, produced by collapsing haloes of dark matter. But unlike those proposed for the early universe, modern cannibal and boson stars would be stable and long-lasting. “Much work has already been done for boson stars from dark matter, and we are simply suggesting that future studies should also think about the possibility of cannibal stars from dark matter,” explains Ralegankar. “Gravitational lensing would be one way to search for them, and depending on models, maybe also gamma rays from dark-matter annihilation.” The research is described in Physical Review D. The post Did cannibal stars and boson stars populate the early universe? appeared first on Physics World.
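To get a sense of scale for the asteroid-mass range quoted above, the textbook Schwarzschild radius r_s = 2GM/c² gives the physical size of such black holes (a standard formula, not a calculation from the paper):

```python
# Schwarzschild radii for the asteroid-mass primordial black hole range.
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
for M in (1e14, 1e19):   # kg, the quoted mass range
    r_s = 2 * G * M / c**2
    print(f"M = {M:.0e} kg  ->  r_s = {r_s:.1e} m")
# ~1.5e-13 m to ~1.5e-8 m: from subatomic to tens-of-nanometre scale
```

Objects this small lens starlight only weakly and briefly, which is why dedicated microlensing campaigns such as OGLE and the Roman Space Telescope are needed to detect them or rule them out.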
https://physicsworld.com/a/did-cannibal-stars-and-boson-stars-populate-the-early-universe/
Space & Physics
svg
b3f302fe2fd3089923243eb102dc083a05ea03e160712360ffdc008c6a11ff8a
2025-11-26T11:00:31+00:00
Academic assassinations are a threat to global science
The deliberate targeting of scientists in recent years has become one of the most disturbing, and overlooked, developments in modern conflict. In particular, Iranian physicists and engineers have been singled out for almost two decades, with sometimes fatal consequences. In 2007 Ardeshir Hosseinpour, a nuclear physicist at Shiraz University, died in mysterious circumstances that were widely attributed to poisoning or radioactive exposure. Over the following years, at least five more Iranian researchers have been killed. They include particle physicist Masoud Ali-Mohammadi, who was Iran’s representative at the Synchrotron-light for Experimental Science and Applications in the Middle East project. Known as SESAME, it is the only scientific project in the Middle East where Iran and Israel collaborate. Others to have died include nuclear engineer Majid Shahriari, another Iranian representative at SESAME, and nuclear physicist Mohsen Fakhrizadeh, who were both killed by bombing or gunfire in Tehran. These attacks were never formally acknowledged, nor were they condemned by international scientific institutions. The message, however, was implicit: scientists in politically sensitive fields could be treated as strategic targets, even far from battlefields. What began as covert killings of individual researchers has now escalated, dangerously, into open military strikes on academic communities. Israeli airstrikes on residential areas in Tehran and Isfahan during the 12-day conflict between the two countries in June led to at least 14 Iranian scientists and engineers and members of their families being killed. The scientists worked in areas such as materials science, aerospace engineering and laser physics. I believe this shift, from covert assassinations to mass casualties, crossed a line. It treats scientists as enemy combatants simply because of their expertise. The assassinations of scientists are not just isolated tragedies; they are a direct assault on the global commons of knowledge, corroding both international law and international science. Unless the world responds, I believe the precedent being set will endanger scientists everywhere and undermine the principle that knowledge belongs to humanity, not the battlefield. International humanitarian law is clear: civilians, including academics, must be protected. Targeting scientists based solely on their professional expertise undermines the Geneva Convention and erodes the civilian–military distinction at the heart of international law. Iran, whatever its politics, remains a member of the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Its scientists are entitled under international law to conduct peaceful research in medicine, energy and industry. Their work is no more inherently criminal than research that other countries carry out in artificial intelligence (AI), quantum technology or genetics. If we normalize the preemptive assassination of scientists, what stops global rivals from targeting, say, AI researchers in Silicon Valley, quantum physicists in Beijing or geneticists in Berlin? Once knowledge itself becomes a liability, no researcher is safe. Equally troubling is the silence of the international scientific community: organizations such as the UN, UNESCO and the European Research Council, as well as leading national academies, have not condemned these killings, past or present. Silence is not neutral. It legitimizes the treatment of scientists as military assets.
It discourages international collaboration in sensitive but essential research and it creates fear among younger researchers, who may abandon high-impact fields to avoid risk. Science is built on openness and exchange, and when researchers are murdered for their expertise, the very idea of science as a shared human enterprise is undermined. I believe that international scientific organizations should act. At a minimum, they should publicly condemn the assassination of scientists and their families; support independent investigations under international law; as well as advocate for explicit protections for scientists and academic facilities in conflict zones. Importantly, voices within Israel’s own scientific community can play a critical role too. Israeli academics, deeply committed to collaboration and academic freedom, understand the costs of blurring the boundary between science and war. Solidarity cannot be selective. Recent events are a test case for the future of global science. If the international community tolerates the targeting of scientists, it sets a dangerous precedent that others will follow. What appears today as a regional assault on scientists from the Global South could tomorrow endanger researchers in China, Europe, Russia or the US. Science without borders can only exist if scientists are recognized and protected as civilians without borders. That principle is now under direct threat and the world must draw a red line – killing scientists for their expertise is unacceptable. To ignore these attacks is to invite a future in which knowledge itself becomes a weapon, and the people who create it expendable. That is a world no-one should accept. The post Academic assassinations are a threat to global science appeared first on Physics World.
https://physicsworld.com/a/academic-assassinations-are-a-threat-to-global-science/
Space & Physics
svg
bc6e936df7df61ed6659085e2f12dfee3d06416ef06df57fc739f3d3829e3808
2025-11-26T08:39:10+00:00
DNA as a molecular architect
DNA is a fascinating macromolecule that guides protein production and enables cell replication. It has also found applications in nanoscience and materials design. Colloidal crystals are ordered structures made from tiny particles suspended in a fluid; these particles can bond to one another and add functionality to materials. By controlling colloidal particles, we can build advanced nanomaterials using a bottom-up approach. There are several ways to control colloidal particle design, ranging from experimental conditions such as pH and temperature to external controls like light and magnetic fields. An exciting approach is to use DNA-mediated processes. DNA binds to colloidal surfaces and regulates how the colloids organize, providing molecular-level control. These connections are reversible and can be broken using standard experimental conditions (e.g., temperature), allowing for dynamic and adaptable systems. One important motivation is their good biocompatibility, which has enabled applications in biomedicine such as drug delivery, biosensing, and immunotherapy. Programmable Atom Equivalents (PAEs) are large colloidal particles whose surfaces are functionalized with single-stranded DNA, while separate, much smaller DNA-coated linkers, called Electron Equivalents (EEs), roam in solution and mediate bonds between PAEs. In typical PAE-EE systems, the EEs carry multiple identical DNA ends that can all bind the same type of PAE, which limits the complexity of the assemblies and makes it harder to program highly specific connections between different PAE types. In this study, the researchers investigate how EEs with arbitrary valency, carrying many DNA arms, regulate interactions in a binary mixture of two types of PAEs. Each EE has multiple single-stranded DNA ends of two different types, each complementary to the DNA on one of the PAE species. The team develops a statistical mechanical model to predict how EEs distribute between the PAEs and to calculate the effective interaction, a measure of how strongly the PAEs attract each other, which in turn controls the structures that can form. They use this model to inform Monte Carlo simulations that predict system behaviour under different conditions. The model shows quantitative agreement with simulation results and reveals an anomalous dependence of PAE-PAE interactions on EE valency, with interactions converging at high valency. Importantly, the researchers identify an optimal valency that maximizes selectivity between targeted and non-targeted binding pairs. This groundbreaking research provides design principles for programmable self-assembly and offers a framework that can be integrated into DNA nanoscience. Designed self-assembly of programmable colloidal atom-electron equivalents Xiuyang Xia et al 2025 Rep. Prog. Phys. 88 078101 Do you want to learn more about this topic? Assembly of colloidal particles in solution by Kun Zhao and Thomas G Mason (2018) The post DNA as a molecular architect appeared first on Physics World.
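To give a flavour of the statistical mechanics at play, here is a deliberately crude toy calculation (our illustration, not the authors' model; the function and its parameters are invented for the example). It shows how a combinatorial sum over the ways a multi-armed linker can bind converts many weak DNA bonds into a single strong effective attraction. Note that this toy has no saturation of binding sites, so it does not reproduce the convergence at high valency reported in the paper.

from math import comb, exp, log

# Toy model: one EE with `valency` arms can bridge a pair of PAEs. Each
# bound arm gains a free energy -eps (in units of kT); `conc` is a
# dimensionless linker concentration setting the entropic cost of
# recruiting the EE in the first place.
def effective_attraction(valency, eps=2.0, conc=1e-3):
    # Partition sum over k = number of arms bound (1..valency), with a
    # binomial factor counting which of the arms are engaged.
    Z = sum(comb(valency, k) * exp(k * eps) for k in range(1, valency + 1))
    return -log(1.0 + conc * Z)  # effective PAE-PAE free energy, in kT

for v in (2, 4, 8, 16, 32):
    print(f"valency {v:2d}: F_eff = {effective_attraction(v):7.2f} kT")

Even with weak individual bonds (eps = 2 kT here), the effective attraction deepens rapidly with valency; capturing the saturation and selectivity effects is what the full model is for.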
https://physicsworld.com/a/dna-as-a-molecular-architect/
Space & Physics
svg
176b05fc3aa34e83e43da626a96305d7f7e81161dcc3b4fa04967a7fe8ff6d82
2025-11-26T08:37:20+00:00
The link between protein evolution and statistical physics
Proteins are made up of a sequence of building blocks called amino acids. Understanding these sequences is crucial for studying how proteins work, how they interact with other molecules, and how changes (mutations) can lead to diseases. These mutations happen over vastly different time periods and are not completely random but strongly correlated, both in space (distinct sites along the sequences) and in time (subsequent mutations of the same site). It turns out that these correlations are very reminiscent of those found in disordered physical systems, notably glasses, emulsions, and foams. A team of researchers from Italy and France have now used this similarity to build a new statistical model to simulate protein evolution. They went on to study the role of different factors causing these mutations. They found that the initial (ancestral) protein sequence has a significant influence on the evolution process, especially in the short term. This means that information from the ancestral sequence can be traced back over a certain period and is not completely lost. The strength of interactions between different amino acids within the protein affects how long this information persists. Although ultimately the team did find differences between the evolution of physical systems and that of protein sequences, this kind of insight would not have been possible without using the language of statistical physics, i.e. space-time correlations. The researchers expect that their results will soon be tested in the lab thanks to upcoming advances in experimental techniques. Fluctuations and the limit of predictability in protein evolution – IOPscience S. Rossi et al, 2025 Rep. Prog. Phys. 88 078102 The post The link between protein evolution and statistical physics appeared first on Physics World.
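As a cartoon of the "ancestral memory" result (our illustration, not the authors' model), one can evolve a random sequence by single-site mutations and watch its overlap with the ancestor decay towards the random baseline of 1/q; in the paper it is the couplings between amino acids, absent here, that slow and reshape this decay.

import numpy as np

# Evolve a sequence of L sites over an alphabet of q amino acids by random
# single-site mutations, tracking the overlap with the ancestral sequence
# (a crude time correlation function, averaged over trials).
rng = np.random.default_rng(0)
L, q, steps, trials = 100, 20, 2000, 200

overlaps = np.zeros(steps)
for _ in range(trials):
    seq = rng.integers(q, size=L)
    ancestor = seq.copy()
    for t in range(steps):
        i = rng.integers(L)                      # pick a random site
        seq[i] = rng.integers(q)                 # mutate it at random
        overlaps[t] += np.mean(seq == ancestor)  # fraction unchanged
overlaps /= trials

# For independent sites the overlap decays towards 1/q = 0.05.
print(overlaps[::400].round(3))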
https://physicsworld.com/a/the-link-between-protein-evolution-and-statistical-physics/
Space & Physics
svg
3e01cb23b1a9e67bdef4d76c2192a94c2e97b4f19ced5011d8f88c8beacec3d5
2025-11-25T17:00:07+00:00
‘Caustic’ light patterns inspire new glass artwork
UK artist Alison Stott has created a new glass and light artwork – entitled Naturally Focused – that is inspired by the work of theoretical physicist Michael Berry from the University of Bristol. Stott, who recently completed an MA in glass at Arts University Plymouth, spent over two decades previously working in visual effects for film and television, where she focussed on creating photorealistic imagery. Her studies touched on how complex phenomena can arise from seemingly simple set-ups, for example in a rotating glass sculpture lit by LEDs. “My practice inhabits the spaces between art and science, glass and light, craft and experience,” notes Stott. “Working with molten glass lets me embrace chaos, indeterminacy, and materiality, and my work with caustics explores the co-creation of light, matter, and perception.” The new artwork is based on “caustics” – the curved patterns that form when light is reflected or refracted by curved surfaces or objects. The focal point of the artwork is a hand-blown glass lens that was waterjet-cut into a circle and polished so that its internal structure and optical behaviour are clearly visible. The lens is suspended within stainless steel gyroscopic rings and held by a brass support and stainless steel backplate. The rings can be tilted or rotated to “activate a shifting field of caustic projections that ripple across” the artwork. Mathematical equations that describe the “singularities of light” visible on the glass surface are also engraved onto the brass. The work is inspired by Berry’s research into the relationship between classical and quantum behaviour and how subtle geometric structures govern how waves and particles behave. Berry recently won the 2025 Isaac Newton Medal and Prize, which is presented by the Institute of Physics, for his “profound contributions across mathematical and theoretical physics in a career spanning over 60 years”. Stott says that working with Berry has pushed her understanding of caustics. “The more I learn about how these structures emerge and why they matter across physics, the more compelling they become,” notes Stott. “My aim is to let the phenomena speak for themselves, creating conditions where people can directly encounter physical behaviour and perhaps feel the same awe and wonder I do.” The artwork will go on display at the University of Bristol following a ceremony to be held on 27 November. The post ‘Caustic’ light patterns inspire new glass artwork appeared first on Physics World.
https://physicsworld.com/a/caustic-light-patterns-inspire-new-glass-artwork/
Space & Physics
svg
a0fc29b7fc81899ac526e52eb9fbad4180bd8fcbd71bc43811ba1ecb1689048f
2025-11-25T16:00:12+00:00
Is your WiFi spying on you?
WiFi networks could pose significant privacy risks even to people who aren’t carrying or using WiFi-enabled devices, say researchers at the Karlsruhe Institute of Technology (KIT) in Germany. According to their analysis, the current version of the technology passively records information that is detailed enough to identify individuals moving through networks, prompting them to call for protective measures in the next iteration of WiFi standards. Although wireless networks are ubiquitous and highly useful, they come with certain privacy and security risks. One such risk stems from a phenomenon known as WiFi sensing, which the researchers at KIT’s Institute of Information Security and Dependability (KASTEL) define as “the inference of information about the networks’ environment from its signal propagation characteristics”. “As signals propagate through matter, they interfere with it – they are either transmitted, reflected, absorbed, polarized, diffracted, scattered, or refracted,” they write in their study, which is published in the Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security (CCS ’25). “By comparing an expected signal with a received signal, the interference can be estimated and used for error correction of the received data.” An under-appreciated consequence, they continue, is that this estimation contains information about any humans who may have unwittingly been in the signal’s path. By carefully analysing the signal’s interference with the environment, they say, “certain aspects of the environment can be inferred” – including whether humans are present, what they are doing and even who they are. The KASTEL team terms this an “identity inference attack” and describes it as a threat that is as widespread as it is serious. “This technology turns every router into a potential means for surveillance,” says Julian Todt, who co-led the study with his KIT colleague Thorsten Strufe. “For example, if you regularly pass by a café that operates a WiFi network, you could be identified there without noticing it and be recognized later – for example by public authorities or companies.” While Todt acknowledges that security services, cybercriminals and others do have much simpler ways of tracking individuals – for example by accessing data from CCTV cameras or video doorbells – he argues that the ubiquity of wireless networks lends itself to being co-opted as a near-permanent surveillance infrastructure. There is, he adds, “one concerning property” about wireless networks: “They are invisible and raise no suspicion.” Although the possibility of using WiFi networks in this way is not new, most previous WiFi-based security attacks worked by analysing so-called channel state information (CSI). These data indicate how a radio signal changes when it reflects off walls, furniture, people or animals. However, the KASTEL researchers note that the widely deployed WiFi 5 standard (802.11ac) changes the picture by enabling a new and potentially easier form of attack based on beamforming feedback information (BFI). While beamforming uses similar information as CSI, Todt explains that it does so on the sender’s side instead of the receiver’s. This means that a BFI-based surveillance method would require nothing more than standard devices connected to the WiFi network. “The BFI could be used to create images from different perspectives that might then serve to identify persons that find themselves in the WiFi signal range,” Todt says.
“The identity of individuals passing through these radio waves could then be extracted using a machine-learning model. Once trained, this model would make an identification in just a few seconds.” In their experiments, Todt and colleagues studied 197 participants as they walked through a WiFi field while being simultaneously recorded with CSI and BFI from four different angles. The participants had five different “walking styles” (such as walking normally and while carrying a backpack) as well as different gaits. The researchers found that they could identify individuals with nearly 100% accuracy, regardless of the recording angle or the individual’s walking style or gait. “The technology is powerful, but at the same time entails risks to our fundamental rights, especially to privacy,” says Strufe. He warns that authoritarian states could use the technology to track demonstrators and members of opposition groups, prompting him and his colleagues to “urgently call” for protective measures and privacy safeguards to be included in the forthcoming IEEE 802.11bf WiFi standard. “The literature on all novel sensing solutions highlights their utility for various novel applications,” says Todt, “but the privacy risks that are inherent to such sensing are often overlooked, or worse — these sensors are claimed to be privacy-friendly without any rationale for these claims. As such, we feel it necessary to point out the privacy risks that novel solutions such as WiFi sensing bring with them.” The researchers say they would like to see approaches developed that can mitigate the risk of identity inference attack. However, they are aware that this will be difficult, since this type of attack exploits the physical properties of the actual WiFi signal. “Ideally, we would influence the WiFi standard to contain privacy-protections in future versions,” says Todt, “but even the impact of this would be limited as there are already millions of WiFi devices out there that are vulnerable to such an attack.” The post Is your WiFi spying on you? appeared first on Physics World.
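To make the identification pipeline concrete, here is a heavily simplified sketch (ours, not the KASTEL team's code; the synthetic "signatures" stand in for real channel measurements). Each walk past the router is reduced to a feature vector, and an off-the-shelf classifier learns to match walks to people:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for channel-derived features: each person has a
# characteristic "signature" vector, and every walk past the router is
# observed as that signature plus measurement noise.
rng = np.random.default_rng(1)
n_people, walks_per_person, n_features = 20, 30, 64

signatures = rng.normal(size=(n_people, n_features))
X = np.repeat(signatures, walks_per_person, axis=0) + 0.5 * rng.normal(
    size=(n_people * walks_per_person, n_features))
y = np.repeat(np.arange(n_people), walks_per_person)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"identification accuracy: {clf.score(X_te, y_te):.2f}")

With per-person signatures this separable, the toy classifier scores near 100%, echoing the accuracy the researchers report on real CSI and BFI data.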
https://physicsworld.com/a/is-your-wifi-spying-on-you/
Space & Physics
svg
83bc627d6f190752a0d768f7a1073c2a0a6a0d4b0c744a4cc8f4ec12adfeb928
2025-11-25T14:10:42+00:00
Reversible degradation phenomenon in PEMWE cells
In proton exchange membrane water electrolysis (PEMWE) systems, voltage cycles dropping below a threshold are associated with reversible performance improvements, which remain poorly understood despite being documented in the literature. The distinction between reversible and irreversible performance changes is crucial for accurate degradation assessments. One approach in the literature to explain this behaviour is the oxidation and reduction of iridium. The activity and stability of iridium-based electrocatalysts in PEMWE hinge on their oxidation state, which is influenced by the applied voltage. Yet, full-cell PEMWE dynamic performance remains under-explored, with a focus typically on stability rather than activity. This study systematically investigates reversible performance behaviour in PEMWE cells using Ir-black as an anodic catalyst. Results reveal a recovery effect when the low voltage level drops below 1.5 V, with further enhancements observed as the voltage decreases, even with a short holding time of 0.1 s. This reversible recovery is primarily driven by improved anode reaction kinetics, likely due to changing iridium oxidation states, and is supported by alignment between the experimental data and a dynamic model that links iridium oxidation/reduction processes to performance metrics. This model allows distinguishing between reversible and irreversible effects and enables the derivation of optimized operation schemes utilizing the recovery effect. Tobias Krenz is a simulation and modelling engineer at Siemens Energy in the Transformation of Industry business area focusing on reducing energy consumption and carbon-dioxide emissions in industrial processes. He completed his PhD at Leibniz University Hannover in February 2025. He earned a degree from Berlin University of Applied Sciences in 2017 and an MSc from Technische Universität Darmstadt in 2020. Alexander Rex is a PhD candidate at the Institute of Electric Power Systems at Leibniz University Hannover. He holds a degree in mechanical engineering from Technische Universität Braunschweig, an MEng from Tongji University, and an MSc from Karlsruhe Institute of Technology (KIT). He was a visiting scholar at Berkeley Lab from 2024 to 2025. The post Reversible degradation phenomenon in PEMWE cells appeared first on Physics World.
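One minimal way to write such a dynamic model (a generic sketch in the spirit of the paper, not necessarily the authors' actual equations) is a two-state rate law for the oxidized fraction θ of iridium sites:

\[
\frac{d\theta}{dt}=k_{\mathrm{ox}}(V)\,(1-\theta)-k_{\mathrm{red}}(V)\,\theta,
\]

where the rate constants depend exponentially on the cell voltage V. Holding the cell below about 1.5 V drives θ down; if the anode reaction kinetics improve at lower θ, this reproduces a reversible recovery that relaxes away once θ re-equilibrates at operating voltage.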
https://physicsworld.com/a/reversible-degradation-phenomenon-in-pemwe-cells/
Space & Physics
svg
d29a4be380718588952eaf040228d48c7700ef0483f85a909bdebed7e2b9d207
2025-11-25T13:59:34+00:00
Ramy Shelbaya: the physicist and CEO capitalizing on quantum randomness
Ramy Shelbaya has been hooked on physics ever since he was a 12-year-old living in Egypt and read about the Joint European Torus (JET) fusion experiment in the UK. Biology and chemistry were interesting to him but never quite as “satisfying”, especially as they often seemed to boil down to physics in the end. “So I thought, maybe that’s where I need to go,” Shelbaya recalls. His instincts seem to have led him in the right direction. Shelbaya is now chief executive of Quantum Dice, an Oxford-based start-up he co-founded in 2020 to develop quantum hardware for exploiting the inherent randomness in quantum mechanics. It closed its first funding round in 2021 with a seven-figure investment from a consortium of European investors, while also securing grant funding on the same scale. Now providing cybersecurity hardware systems for clients such as BT, Quantum Dice is launching a piece of hardware for probabilistic computing, based on the same core innovation. Full of joy and zeal for his work, Shelbaya admits that his original decision to pursue physics was “scary”. Back then, he didn’t know anyone who had studied the subject and was not sure where it might lead. Fortunately, Shelbaya’s parents were onboard from the start and their encouragement proved “incredibly helpful”. His teachers also supported him to explore physics in his extracurricular reading, instilling a confidence in the subject that eventually led Shelbaya to do undergraduate and master’s degrees in physics at École normale supérieure PSL in France. He then moved to the UK to do a PhD in atomic and laser physics at the University of Oxford. Just as he was wrapping up his PhD, Oxford University Innovation (OUI) – which manages its technology transfer and consulting activities – launched a new initiative that proved pivotal to Shelbaya’s career. OUI had noted that the university generated a lot of IP and research results that could be commercialized, but that the academics producing them often favoured academic work over progressing the technology transfer themselves. On the other hand, lots of students were interested in entering the world of business. To encourage those who might be business-minded to found their own firms, while also fostering more spin-outs from the university’s patents and research, OUI launched the Student Entrepreneurs’ Programme (StEP). A kind of talent show to match budding entrepreneurs with technology ready for development, StEP invited participants to team up, choose commercially promising research from the university, and pitch for support and mentoring to set up a company. As part of Oxford’s atomic and laser physics department, Shelbaya was aware that it had been developing a quantum random number generator. So when the competition was launched, he collaborated with other competition participants to pitch the device. “My team won, and this is how Quantum Dice was born.” The initial technology was geared towards quantum random number generation, for particular use in cybersecurity. Random numbers are at the heart of all encryption algorithms, but generating truly random numbers has been a stumbling block: the “pseudorandom” numbers people typically make do with are prone to prediction and hence to security violations. Quantum mechanics provides a potential solution because there is inherent randomness in the values of certain quantum properties.
Although for a long time this randomness was “a bane to quantum physicists”, as Shelbaya puts it, Quantum Dice and other companies producing quantum random number generators are now harnessing it for useful technologies. Where Quantum Dice sees itself as having an edge over its competitors is in its real-time quality assurance of the true quantum randomness of the device’s output. This means it should be able to spot any corruption to the output due to environmental noise or someone tampering with the device, which is an issue with current technologies. Quantum Dice already offers Quantum Random Number Generator (QRNG) products in a range of form factors that integrate directly within servers, PCs and hardware security systems. Clients can also access the company’s cloud-based solution – Quantum Entropy-as-a-Service – which, powered by its QRNG hardware, integrates into the Internet of Things and cloud infrastructure. Recently, Quantum Dice has also launched a probabilistic computing processor based on its QRNG for use in algorithms centred on probabilities. These are often geared towards optimization problems that apply in a number of sectors, including supply chains and logistics, finance, telecommunications and energy, as well as simulating quantum systems, and Boltzmann machines – a type of energy-based machine learning model for which Shelbaya says researchers “have long sought efficient hardware”. After winning the start-up competition in 2019, things got trickier when Quantum Dice was ready to be incorporated, which occurred just at the start of the first COVID-19 lockdown. Shelbaya moved the prototype device into his living room because it was the only place they could ensure access to it, but it turned out the real challenges lay elsewhere. “One of the first things we needed was investments, and really, at that stage of the company, what investors are investing in is you,” explains Shelbaya, highlighting how difficult this is when you cannot meet in person. On the plus side, since all his meetings were remote, he could speak to investors in Asia in the morning, Europe in the afternoon and the US in the evening, all within the same day. Another challenge was how to present the technology simply enough so that people would understand and trust it, while not making it seem so simple that anyone could be doing it. “There’s that sweet spot in the middle,” says Shelbaya. “That is something that took time, because it’s a muscle that I had never worked.” The company performed well for its size and sector in terms of securing investments when their first round of funding closed in 2021. Shelbaya is shy of attributing the success to his or even the team’s abilities alone, suggesting this would “underplay a lot of other factors”. These include the rising interest in quantum technologies, and the advantages of securing government grant funding programmes at the same time, which he feels serves as “an additional layer of certification”. For Shelbaya every day is different and so are the challenges. Quantum Dice is a small new company, where many of the 17 staff are still fresh from university, so nurturing trust among clients, particularly in the high-stakes world of cybersecurity, is no small feat. Managing a group of ambitious, energetic and driven young people can be complicated too.
But there are many rewards, ranging from seeing a piece of hardware finally work as intended and closing a deal with a client, to simply seeing a team “you have been working to develop, working together without you”. For others hoping to follow a similar career path, Shelbaya’s advice is to do what you enjoy – not just because you will have fun but because you will be good at it too. “Do what you enjoy,” he says, “because you will likely be great at it.” The post Ramy Shelbaya: the physicist and CEO capitalizing on quantum randomness appeared first on Physics World.
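The real-time quality assurance mentioned earlier has a simple classical cousin in the health tests that entropy-source standards such as NIST SP 800-90B mandate. The sketch below (ours, not Quantum Dice's firmware) implements a repetition-count-style check that flags a stuck or tampered source:

import os

# Fail the stream if any byte value repeats `cutoff` or more times in a
# row -- the signature of a stuck or manipulated entropy source.
def repetition_count_test(stream: bytes, cutoff: int = 8) -> bool:
    run, last = 1, None
    for b in stream:
        run = run + 1 if b == last else 1
        last = b
        if run >= cutoff:
            return False  # health test failed
    return True           # no suspicious runs seen

print(repetition_count_test(os.urandom(4096)))  # OS entropy should pass
print(repetition_count_test(b"\x00" * 16))      # stuck-at-zero fails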
https://physicsworld.com/a/ramy-shelbaya-the-physicist-and-ceo-capitalizing-on-quantum-randomness/
Space & Physics
svg
216684f569997e3c120c2cacd77b88609067bd208cbfe37ebb912673a6a82931
2025-11-25T09:00:39+00:00
‘Patchy’ nanoparticles emerge from new atomic stencilling technique
Researchers in the US and Korea have created nanoparticles with carefully designed “patches” on their surfaces using a new atomic stencilling technique. These patches can be controlled with incredible precision, and could find use in targeted drug delivery, catalysis, microelectronics and tissue engineering. The first step in the stencilling process is to create a mask on the surface of gold nanoparticles. This mask prevents a “paint” made from grafted-on polymers from attaching to certain areas of the nanoparticles. “We then use iodide ions as a stencil,” explains Qian Chen, a materials scientist and engineer at the University of Illinois at Urbana-Champaign, US, who led the new research effort. “These adsorb (stick) to the surface of the nanoparticles in specific patterns that depend on the shape and atomic arrangement of the nanoparticles’ facets. That’s how we create the patches – the areas where the polymers selectively bind.” Chen adds that she and her collaborators can then tailor the surface chemistry of these tiny patchy nanoparticles in a very controlled way. The team decided to develop the technique after realizing there was a gap in the field of microfabrication stencilling. While techniques in this area have advanced considerably in recent years, allowing ever-smaller microdevices to be incorporated into ever-faster computer chips, most of them rely on top-down approaches for precisely controlling nanoparticles. By comparison, Chen says, bottom-up methods have been largely unexplored even though they are low-cost, solution-processable, scalable and compatible with complex, curved and three-dimensional surfaces. Reporting their work in Nature, the researchers say they were inspired by the way proteins naturally self-assemble. “One of the holy grails in the field of nanomaterials is making complex, functional structures from nanoscale building blocks,” explains Chen. “It’s extremely difficult to control the direction and organization of each nanoparticle. Proteins have different surface domains, and thanks to their interactions with each other, they can make all the intricate machines we see in biology. We therefore adopted that strategy by creating patches or distinct domains on the surface of the nanoparticles.” Philip Moriarty, a physicist at the University of Nottingham, UK, who was not involved in the project, describes it as “elegant and impressive” work. “Chen and colleagues have essentially introduced an entirely new mode of self-assembly that allows for much greater control of nanoparticle interactions,” he says, “and the ‘atomic stencil’ concept is clever and versatile.” The team, which includes researchers at the University of Michigan, Pennsylvania State University, Cornell, Brookhaven National Laboratory and Korea’s Chonnam National University as well as Urbana-Champaign, agrees that the potential applications are vast. “Since we can now precisely control the surface properties of these nanoparticles, we can design them to interact with their environment in specific ways,” explains Chen. “That opens the door for more effective drug delivery, where nanoparticles can target specific cells. It could also lead to new types of catalysts, more efficient microelectronic components and even advanced materials with unique optical and mechanical properties.” She and her colleagues say they now want to extend their approach to different types of nanoparticles and different substrates to find out how versatile it truly is.
They will also be developing computational models that can predict the outcome of the stencilling process – something that would allow them to design and synthesize patchy nanoparticles for specific applications on demand. The post ‘Patchy’ nanoparticles emerge from new atomic stencilling technique appeared first on Physics World.
https://physicsworld.com/a/patchy-nanoparticles-emerge-from-new-atomic-stencilling-technique/
Space & Physics
svg
594166fa8cda7e73feec02291ae491fa4c64f9ec151f3010540719df985099ca
2025-11-24T17:00:14+00:00
Scientists in China celebrate the completion of the underground JUNO neutrino observatory
The $330m Jiangmen Underground Neutrino Observatory (JUNO) has released its first results following the completion of the huge underground facility in August. JUNO is located in Kaiping City, Guangdong Province, in the south of the country around 150 km west of Hong Kong. Construction of the facility began in 2015 and was set to be complete some five years later. Yet the project suffered from serious flooding, which delayed construction. JUNO, which is expected to run for more than 30 years, aims to study the relationship between the three types of neutrino: electron, muon and tau. Although JUNO will be able to detect neutrinos produced by supernovae as well as those from Earth, the observatory will mainly measure the energy spectrum of electron antineutrinos released by the Yangjiang and Taishan nuclear power plants, which both lie 52.5 km away. To do this, the facility has an 80 m-high and 50 m-diameter experimental hall located 700 m underground. Its main feature is a 35 m-diameter spherical neutrino detector, containing 20,000 tonnes of liquid scintillator. When an electron antineutrino occasionally bumps into a proton in the liquid, it triggers a reaction that results in two flashes of light that are detected by the 43,000 photomultiplier tubes that observe the scintillator. On 18 November, a paper was submitted to the arXiv preprint server concluding that the detector’s key performance indicators fully meet or surpass design expectations. Neutrinos oscillate from one flavour to another as they travel near the speed of light, rarely interacting with matter. This oscillation is a result of each flavour being a combination of three neutrino mass states. Yet scientists do not know the absolute masses of the three neutrinos but can measure neutrino oscillation parameters, known as θ₁₂, θ₂₃ and θ₁₃, as well as the squared mass differences (Δm²) between two different types of neutrinos. A second JUNO paper submitted on 18 November used data collected between 26 August and 2 November to measure the solar neutrino oscillation parameters θ₁₂ and Δm²₂₁ with a factor of 1.6 better precision than previous experiments. Those earlier results, which used solar neutrinos instead of reactor antineutrinos, showed a 1.5 “sigma” discrepancy with the Standard Model of particle physics. The new JUNO measurements confirmed this difference, dubbed the solar neutrino tension, but further data will be needed to prove or disprove the finding. “Achieving such precision within only two months of operation shows that JUNO is performing exactly as designed,” says Yifang Wang from the Institute of High Energy Physics of the Chinese Academy of Sciences, who is JUNO project manager and spokesperson. “With this level of accuracy, JUNO will soon determine the neutrino mass ordering, test the three-flavour oscillation framework, and search for new physics beyond it.” JUNO, which is an international collaboration of more than 700 scientists from 75 institutions across 17 countries including China, France, Germany, Italy, Russia, Thailand, and the US, is the second neutrino experiment in China, after the Daya Bay Reactor Neutrino Experiment. Daya Bay successfully measured a key neutrino oscillation parameter called θ₁₃ in 2012 before being closed down in 2020. JUNO is also one of three next-generation neutrino experiments, the other two being the Hyper-Kamiokande in Japan and the Deep Underground Neutrino Experiment in the US. Both are expected to become operational later this decade.
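For reference, the physics behind those parameters can be written, in the standard two-flavour approximation (a textbook simplification of JUNO's full three-flavour analysis), as a survival probability for reactor antineutrinos:

\[
P(\bar{\nu}_{e}\rightarrow\bar{\nu}_{e})\approx 1-\sin^{2}(2\theta_{12})\,\sin^{2}\!\left(\frac{\Delta m^{2}_{21}L}{4E}\right),
\]

where L is the baseline (52.5 km here) and E is the antineutrino energy. Fitting the wiggles that this formula imprints on the measured energy spectrum is what pins down θ₁₂ and Δm²₂₁.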
The post Scientists in China celebrate the completion of the underground JUNO neutrino observatory appeared first on Physics World.
https://physicsworld.com/a/scientists-in-china-celebrate-the-completion-of-the-underground-juno-neutrino-observatory/
Space & Physics
svg
7991c30e08e11ddc6ceb8da62735878050da7ea3ced9400cadedecea2a742035
2025-11-24T15:08:30+00:00
Accelerator experiment sheds light on missing blazar radiation
New experiments at CERN by an international team have ruled out one proposed alternative to intergalactic magnetic fields. The existence of such fields is invoked to explain why we do not observe secondary gamma rays originating from blazars. Led by Charles Arrowsmith at the UK’s University of Oxford, the team suggests the absence of gamma rays could be the result of an unexplained phenomenon that took place in the early universe. A blazar is an extraordinarily bright object with a supermassive black hole at its core. Some of the matter falling into the black hole is accelerated outwards in a pair of opposing jets, creating intense beams of radiation. If a blazar jet points towards Earth, we observe a bright source of light including high-energy teraelectronvolt gamma rays. During their journey across intergalactic space, these gamma-ray photons will occasionally collide with the background starlight that permeates the universe. These collisions can create cascades of electrons and positrons that can then scatter off photons to create gamma rays in the gigaelectronvolt energy range. These gamma rays should travel in the direction of the original jet, but this secondary radiation has never been detected. Magnetic fields could be the reason for this dearth, as Arrowsmith explains: “The electrons and positrons in the pair cascade would be deflected by an intergalactic magnetic field, so if this is strong enough, we could expect these pairs to be steered away from the line of sight to the blazar, along with the reprocessed gigaelectronvolt gamma rays.” It is not clear, however, that such fields exist – and if they do, what could have created them. Another explanation for the missing gamma rays involves the extremely sparse plasma that permeates intergalactic space. The beam of electron–positron pairs could interact with this plasma, generating magnetic fields that separate the pairs. Over millions of years of travel, this process could lead to beam–plasma instabilities that reduce the beam’s ability to create gigaelectronvolt gamma rays that are focused on Earth. Oxford’s Gianluca Gregori explains, “We created an experimental platform at the HiRadMat facility at CERN to create electron–positron pairs and transport them through a metre-long ambient argon plasma, mimicking the interaction of pair cascades from blazars with the intergalactic medium”. Once the pairs had passed through the plasma, the team measured the degree to which they had been separated. Called Fireball, the experiment found that the beams remained far more tightly focused than expected. “When these laboratory results are scaled up to the astrophysical system, they confirm that beam–plasma instabilities are not strong enough to explain the absence of the gigaelectronvolt gamma rays from blazars,” Arrowsmith explains. Unless the pair beam was perfectly collimated, or composed of pairs with exactly equal energies, instabilities were actively suppressed in the plasma. While the experiment suggests that an intergalactic magnetic field remains the best explanation for the lack of gamma rays, the mystery is far from solved. Gregori explains, “The early universe is believed to be extremely uniform – but magnetic fields require electric currents, which in turn need gradients and inhomogeneities in the primordial plasma.” As a result, confirming the existence of such a field could point to new physics beyond the Standard Model, which may have dominated in the early universe.
More information could come with the opening of the Cherenkov Telescope Array Observatory. This will comprise ground-based gamma-ray detectors planned across facilities in Spain and Chile, which will vastly improve on the resolution of current-generation detectors. The research is described in PNAS. The post Accelerator experiment sheds light on missing blazar radiation appeared first on Physics World.
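As an order-of-magnitude check on the cascade energies quoted above (our estimate, not a calculation from the paper): a teraelectronvolt photon that pair-produces on a background-light photon shares its energy between an electron and a positron, giving a Lorentz factor

\[
\gamma\approx\frac{E_{\gamma}}{2m_{e}c^{2}}\approx\frac{1\,\mathrm{TeV}}{2\times0.511\,\mathrm{MeV}}\approx10^{6},
\]

and inverse-Compton scattering of cosmic microwave background photons (mean energy roughly 6 × 10⁻⁴ eV) then yields

\[
E_{\mathrm{IC}}\approx\tfrac{4}{3}\gamma^{2}\varepsilon_{\mathrm{CMB}}\approx\tfrac{4}{3}\times10^{12}\times6\times10^{-4}\,\mathrm{eV}\approx0.8\,\mathrm{GeV},
\]

which is why the missing secondary radiation is expected in the gigaelectronvolt band.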
https://physicsworld.com/a/accelerator-experiment-sheds-light-on-missing-blazar-radiation/
Space & Physics
svg
0385111be7b2215cf28e00e55298c423cec5928733137b655fd3248ff53f12c1
2025-11-24T11:10:28+00:00
Why quantum metrology is the driving force for best practice in quantum standardization
The standardization process helps to promote the legitimacy of emerging quantum technologies by distilling technical inputs and requirements from all relevant stakeholders across industry, research and government. Put simply: if you understand a technology well enough to standardize elements of it, that’s when you know it’s moved beyond hype and theory into something of practical use for the economy and society. Standards will, over time, help the quantum technology industry achieve critical mass on the supply side, with those economies of scale driving down prices and increasing demand. As the nascent quantum supply chain evolves – linking component manufacturers, subsystem developers and full-stack quantum computing companies – standards will also ensure interoperability between products from different vendors and different regions. Those benefits flow downstream as well because standards, when implemented properly, increase trust among end-users by defining a minimum quality of products, processes and services. Equally important, as new innovations are rolled out into the marketplace by manufacturers, standards will ensure compatibility across current and next-generation quantum systems, reducing the likelihood of lock-ins to legacy technologies. I have strategic oversight of our core technical programmes in quantum computing, quantum networking, quantum metrology and quantum-enabled PNT (position, navigation and timing). It’s a broad-scope remit that spans research and training, as well as responsibility for standardization and international collaboration, with the latter two often going hand-in-hand. Right now, we have over 150 people working within the NPL quantum metrology programme. Their collective focus is on developing the measurement science necessary to build, test and evaluate a wide range of quantum devices and systems. Our research helps innovators, whether in an industry or university setting, to push the limits of quantum technology by providing leading-edge capabilities and benchmarking to measure the performance of new quantum products and services. That’s right. For starters, we have a team focusing on the inter-country strategic relationships, collaborating closely with colleagues at other National Metrology Institutes (like NIST in the US and PTB in Germany). A key role in this regard is our standards specialist who, given his background working in the standards development organizations (SDOs), acts as a “connector” between NPL’s quantum metrology teams and, more widely, the UK’s National Quantum Technology Programme and the international SDOs. We also have a team of technical experts who sit on specialist working groups within the SDOs. Their inputs to standards development are not about promoting NPL’s interests; rather, they provide expertise and experience gained from cutting-edge metrology, and build a consolidated set of requirements gathered from stakeholders across the quantum community to further the UK’s strategic and technical priorities in quantum. Absolutely. We believe that quantum metrology and standardization are key enablers of quantum innovation, fast-tracking the adoption and commercialization of quantum technologies while building confidence among investors and across the quantum supply chain and early-stage user base. For NPL and its peers, the task right now is to agree on the terminology and best practice as we figure out the performance metrics, benchmarks and standards that will enable quantum to go mainstream.
Front-and-centre is the UK Quantum Standards Network Pilot. This initiative – which is being led by NPL – brings together representatives from industry, academia and government to work on all aspects of standards development: commenting on proposals and draft standards; discussing UK standards policy and strategy; and representing the UK in the European and international SDOs. The end-game? To establish the UK as a leading voice in quantum standardization, both strategically and technically, and to ensure that UK quantum technology companies have access to global supply chains and markets. The Quantum Standards Network Pilot also provides a direct line to prospective end-users of quantum technologies in business sectors like finance, healthcare, pharmaceuticals and energy. What’s notable is that the end-users are often preoccupied with questions that link in one way or another to standardization. For example: how well do quantum technologies stack up against current solutions? Are quantum systems reliable enough yet? What does quantum cost to implement and maintain, including long-term operational costs? Are there other emerging technologies that could do the same job? Is there a solid, trustworthy supply chain? The quantum landscape is changing fast, with huge scope for disruptive innovation in quantum computing, quantum communications and quantum sensing. Faced with this level of complexity, NMI-Q leverages the combined expertise of the world’s leading National Metrology Institutes – from the G7 countries and Australia – to accelerate the development and adoption of quantum technologies. No one country can do it all when it comes to performance metrics, benchmarks and standards in quantum science and technology. As such, NMI-Q’s priorities are to conduct collaborative pre-standardization research; develop a set of “best measurement practices” needed by industry to fast-track quantum innovation; and, ultimately, shape the global standardization effort in quantum. NPL’s prominent role within NMI-Q (I am the co-chair along with Barbara Goldstein of NIST) underscores our commitment to evidence-based decision-making in standards development and, ultimately, to the creation of a thriving quantum ecosystem. Every day, our measurement scientists address cutting-edge problems in quantum – as challenging as anything they’ll have encountered previously in an academic setting. What’s especially motivating, however, is that the NPL is a mission-driven endeavour with measurement outcomes linking directly to wider societal and economic benefits – not just in the UK, but internationally as well. Measurement for Quantum (M4Q) is a flagship NPL programme that provides industry partners with up to 20 days of quantum metrology expertise to address measurement challenges in applied R&D and product development. The service – which is free of charge for projects approved after peer review – helps companies to bridge the gap from technology prototype to full commercialization. To date, more than two-thirds of the companies to participate in M4Q report that their commercial opportunity has increased as a direct result of NPL support. Further reading: Performance metrics and benchmarks point the way to practical quantum advantage. End note: NPL retains copyright on this article. The post Why quantum metrology is the driving force for best practice in quantum standardization appeared first on Physics World.
https://physicsworld.com/a/why-quantum-metrology-is-the-driving-force-for-best-practice-in-quantum-standardization/
Space & Physics
svg
059bb92c3590b6fa49b617f8713a8bc238cd176667a996f4b4467770872b58f6
2025-11-24T11:00:09+00:00
Ask me anything: Jason Palmer – ‘Putting yourself in someone else’s shoes is a skill I employ every day’
One thing I can say for sure that I got from working in academia is the ability to quickly read, summarize and internalize information from a bunch of sources. Journalism requires a lot of that. Being able to skim through papers – reading the abstract, reading the conclusion, picking the right bits from the middle and so on – that is a life skill. In terms of other skills, I’m always considering who’s consuming what I’m doing rather than just thinking about how I’d like to say something. You have to think about how it’s going to be received – what’s the person on the street going to hear? Is this clear enough? If I were hearing this for the first time, would I understand it? Putting yourself in someone else’s shoes – be it the listener, reader or viewer – is a skill I employ every day. The best thing is the variety. I ended up in this business and not in scientific research because of a desire for a greater breadth of experience. And boy, does this job have it. I get to talk to people around the world about what they’re up to, what they see, what it’s like, and how to understand it. And I think that makes me a much more informed person than I would be had I chosen to remain a scientist. When I did research – and even when I was a science journalist – I thought “I don’t need to think about what’s going on in that part of the world so much because that’s not my area of expertise.” Now I have to, because I’m in this chair every day. I need to know about lots of stuff, and I like that feeling of being more informed. I suppose what I like the least about my job is the relentlessness of it. It is a newsy time. It’s the flip side of being well informed: you’re forced to confront lots of bad things – the horrors that are going on in the world, the fact that in a lot of places the bad guys are winning. When I started in science journalism, I wasn’t a journalist – I was a scientist pretending to be one. So I was always trying to show off what I already knew as a sort of badge of legitimacy. I would call some professor on a topic that I wasn’t an expert in yet just to have a chat to get up to speed, and I would spend a bunch of time showing off, rabbiting on about what papers I’d read and what I knew, just to feel like I belonged in the room or on that call. And it’s a waste of time. You have to swallow your ego and embrace the idea that you may sound like you don’t know stuff even if you do. You might sound dumber, but that’s okay – you’ll learn more and faster, and you’ll probably annoy people less. In journalism in particular, you don’t want to preload the question with all of the things that you already know because then the person you’re speaking to can fill in those blanks – and they’re probably going to talk about things you didn’t know you didn’t know, and take your conversation in a different direction. It’s one of the interesting things about science in general. If you go into a situation with experts, and are open and comfortable about not knowing it all, you’re showing that you understand that nobody can know everything and that science is a learning process. The post Ask me anything: Jason Palmer – ‘Putting yourself in someone else’s shoes is a skill I employ every day’ appeared first on Physics World.
https://physicsworld.com/a/ask-me-anything-jason-palmer-putting-yourself-in-someone-elses-shoes-is-a-skill-i-employ-every-day/
Space & Physics
svg
e3a47617815943e9d356a7472fbeb5549ad2f4d39d12fb2a43b4df6a36c0f07a
2025-11-21T14:20:32+00:00
Sympathetic cooling gives antihydrogen experiment a boost
Physicists working on the Antihydrogen Laser Physics Apparatus (ALPHA) experiment at CERN have trapped and accumulated 15,000 antihydrogen atoms in less than 7 h. This accumulation rate is more than 20 times the previous record. Large ensembles of antihydrogen could be used to search for tiny, unexpected differences between matter and antimatter – which if discovered could point to physics beyond the Standard Model. According to the Standard Model every particle has an antimatter counterpart – or antiparticle. It also says that roughly equal amounts of matter and antimatter were created in the Big Bang. But today there is much more matter than antimatter in the visible universe, and the reason for this “baryon asymmetry” is one of the most important mysteries of physics. The Standard Model predicts the properties of antiparticles. An antiproton, for example, has the same mass as a proton and the opposite charge. The Standard Model also predicts how antiparticles interact with matter and antimatter. If physicists could find discrepancies between the measured and predicted properties of antimatter, it could help explain the baryon asymmetry and point to other new physics beyond the Standard Model. Just as a hydrogen atom comprises a proton bound to an electron, an antihydrogen antiatom comprises an antiproton bound to an antielectron (positron). Antihydrogen offers physicists several powerful ways to probe antimatter at a fundamental level. Trapped antiatoms can be released in freefall to determine if they respond to gravity in the same way as atoms. Spectroscopy can be used to make precise measurements of how the electromagnetic force binds the antiproton and positron in antihydrogen with the aim of finding differences compared to hydrogen. So far, antihydrogen’s gravitational and electromagnetic properties appear to be identical to those of hydrogen. However, these experiments were done using small numbers of antiatoms, and having access to much larger ensembles would improve the precision of such measurements and could reveal tiny discrepancies. However, creating and storing antihydrogen is very difficult. Today, antihydrogen can only be made in significant quantities at CERN in Switzerland. There, a beam of protons is fired at a solid target, creating antiprotons that are then cooled and stored using electromagnetic fields. Meanwhile, positrons are gathered from the decay of radioactive nuclei and cooled and stored using electromagnetic fields. These antiprotons and positrons are then combined in a special electromagnetic trap to create antihydrogen. This process works best when the antiprotons and positrons have very low kinetic energies (temperatures) when combined. If the energy is too high, many antiatoms will escape the trap. So it is crucial for the positrons and antiprotons to be as cold as possible. Recently, ALPHA physicists have used a technique called sympathetic cooling on positrons, and in a new paper they describe their success. Sympathetic cooling has been used for several decades to cool atoms and ions. It originally involved mixing a hard-to-cool atomic species with atoms that are relatively easy to cool using lasers. Energy is transferred between the two species via the electromagnetic interaction, which chills the hard-to-cool atoms. The ALPHA team used beryllium ions to sympathetically cool positrons to 10 K, which is five degrees colder than previously achieved using other techniques.
These cold positrons boosted the efficiency of the creation and trapping of antihydrogen, allowing the team to accumulate 15,000 antihydrogen atoms in less than 7 h. This is more than a 20-fold improvement over their previous record of accumulating 2000 antiatoms in 24 h. “These numbers would have been considered science fiction 10 years ago,” says ALPHA spokesperson Jeffrey Hangst, who is at Denmark’s Aarhus University. Team member Maria Gonçalves, a PhD student at the UK’s Swansea University, says, “This result was the culmination of many years of hard work. The first successful attempt instantly improved the previous method by a factor of two, giving us 36 antihydrogen atoms”. The effort was led by Niels Madsen of the UK’s Swansea University. He enthuses, “It’s more than a decade since I first realized that this was the way forward, so it’s incredibly gratifying to see the spectacular outcome that will lead to many new exciting measurements on antihydrogen”. The cooling technique is described in Nature Communications. The post Sympathetic cooling gives antihydrogen experiment a boost appeared first on Physics World.
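The quoted improvement follows directly from the two accumulation rates:

\[
\frac{15\,000/7\,\mathrm{h}}{2000/24\,\mathrm{h}}=\frac{15\,000\times24}{2000\times7}\approx 26,
\]

comfortably more than the 20-fold figure.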
https://physicsworld.com/a/sympathetic-cooling-gives-antihydrogen-experiment-a-boost/
Space & Physics
svg
0d96542895c916e565cec7be3864ef784429292e44502642877e4016d057937e
2025-11-21T09:00:02+00:00
Plasma bursts from young stars could shed light on the early life of the Sun
The Sun frequently ejects high-energy bursts of plasma that then travel through interplanetary space. These so-called coronal mass ejections (CMEs) are accompanied by strong magnetic fields, which, when they interact with the Earth’s atmosphere, can trigger solar storms that can severely damage satellite systems and power grids. In the early days of the solar system, the Sun was far more active than it is today and ejected much bigger CMEs. These might have been energetic enough to affect our planet’s atmosphere and therefore influence how life emerged and evolved on Earth, according to some researchers. Since it is impossible to study the early Sun, astronomers use proxies – that is, stars that resemble it. These “exo-suns” are young G-, K- and M-type stars and are far more active than our Sun is today. They frequently produce CMEs with energies far larger than the most energetic solar flares recorded in recent times, which might affect not only their planets’ atmospheres but also the chemistry on those planets. Until now, direct observational evidence for eruptive CME-like phenomena on young solar analogues has been limited. This is because clear signatures of stellar eruptions are often masked by the brightness of their host stars and by flares on those stars. Measurements of Doppler shifts in optical lines have allowed astronomers to detect a few possible stellar eruptions associated with giant superflares on a young solar analogue, but these detections have been limited to single-wavelength data at “low temperatures” of around 10⁴ K. Studies at higher temperatures have been few and far between. And although scientists have tried out promising techniques, such as X-ray and UV dimming, to advance their understanding of these “cool” stars, few simultaneous multi-wavelength observations have been made. On 29 March 2024, astronomers at Kyoto University in Japan detected a large Carrington-class flare – or superflare – in the far-ultraviolet from EK Draconis, a G-type star located approximately 112 light-years away from the Sun. Thanks to simultaneous observations in the ultraviolet and optical ranges of the electromagnetic spectrum, they say they have now been able to obtain the first direct evidence for a multi-temperature CME from this young solar analogue (which is around 50 to 125 million years old and has a radius similar to the Sun’s). The researchers’ campaign spanned four consecutive nights from 29 March to 1 April 2024. They made their ultraviolet observations with the Hubble Space Telescope and performed optical monitoring using the Transiting Exoplanet Survey Satellite (TESS) and three ground-based telescopes in Japan, Korea and the US. They found that the far-ultraviolet and optical lines were Doppler shifted during and just before the superflare, with the ultraviolet observations showing blueshifted emission indicative of hot plasma. About 10 minutes later, the optical telescopes observed blueshifted absorption in the hydrogen Hα line, which indicates cooler gases. According to the team’s calculations, the hot plasma had a temperature of 100 000 K and was ejected at speeds of 300–550 km/s, while the “cooler” gas (with a temperature of 10 000 K) was ejected at 70 km/s. “These findings imply that it is the hot plasma rather than the cool plasma that carries kinetic energy into planetary space,” explains study leader Kosuke Namekata. 
“The existence of this plasma suggests that such CMEs from our Sun in the past, if frequent and strong, could have driven shocks and energetic particles capable of eroding or chemically altering the atmosphere of the early Earth and the other planets in our solar system.” “The discovery,” he tells Physics World, “provides the first observational link between solar and stellar eruptions, bridging stellar astrophysics, solar physics and planetary science.” Looking forward, the researchers, who report their work in Nature Astronomy, now plan to conduct similar, multiwavelength campaigns on other young solar analogues to determine how frequently such eruptions occur and how they vary from star to star. “In the near future, next-generation ultraviolet space telescopes such as JAXA’s LAPYUTA and NASA’s ESCAPADE, coordinated with ground-based facilities, will allow us to trace these events more systematically and understand their cumulative impact on planetary atmospheres,” says Namekata. The post Plasma bursts from young stars could shed light on the early life of the Sun appeared first on Physics World.
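For readers wanting a feel for the numbers, the ejection speeds quoted above follow from the classical Doppler relation v ≈ cΔλ/λ. A minimal sketch, using the Hα rest wavelength and an assumed wavelength shift chosen to land near the reported ~70 km/s of the cool gas:

```python
# Ejection speed from a Doppler-blueshifted spectral line, v ≈ c * Δλ/λ
# (non-relativistic). The example shift below is illustrative; the study
# reported ~70 km/s for the cool Hα-absorbing gas.

C_KM_S = 299_792.458          # speed of light in km/s
H_ALPHA_NM = 656.28           # rest wavelength of hydrogen Hα

def doppler_speed(delta_lambda_nm, rest_nm=H_ALPHA_NM):
    """Line-of-sight speed (km/s) from a wavelength shift (nm)."""
    return C_KM_S * delta_lambda_nm / rest_nm

# A blueshift of ~0.15 nm in Hα corresponds to roughly 70 km/s
print(f"{doppler_speed(0.15):.0f} km/s")
```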
https://physicsworld.com/a/plasma-bursts-from-young-stars-could-shed-light-on-the-early-life-of-the-sun/
Space & Physics
svg
d1fda71261ef85be13d42e363f7e8e691ea494234fa0556d4529db50ecc3dc71
2025-11-20T17:00:54+00:00
Flattened halo of dark matter could explain high-energy ‘glow’ at Milky Way’s heart
Astronomers have long puzzled over the cause of a mysterious “glow” of very high energy gamma radiation emanating from the centre of our galaxy. One possibility is that dark matter – the unknown substance thought to make up more than 25% of the universe’s mass – might be involved. Now, a team led by researchers at Germany’s Leibniz Institute for Astrophysics Potsdam (AIP) says that a flattened rather than spherical distribution of dark matter could account for the glow’s properties, bringing us a step closer to solving the mystery. Dark matter is believed to be responsible for holding galaxies together. However, since it does not interact with light or other electromagnetic radiation, it can only be detected through its gravitational effects. Hence, while astrophysical and cosmological evidence has confirmed its presence, its true nature remains one of the greatest mysteries in modern physics. “It’s extremely consequential and we’re desperately thinking all the time of ideas as to how we could detect it,” says Joseph Silk, an astronomer at Johns Hopkins University in the US and the Institut d’Astrophysique de Paris and Sorbonne University in France who co-led this research together with the AIP’s Moorits Mihkel Muru. “Gamma rays, and specifically the excess light we’re observing at the centre of our galaxy, could be our first clue.” The problem, Muru explains, is that the way scientists have usually modelled dark matter to account for the excess gamma-ray radiation in astronomical observations was highly simplified. “This, of course, made the calculations easier, but simplifications always fuzzy the details,” he says. “We showed that in this case, the details are important: we can’t model dark matter as a perfectly symmetrical cloud and instead have to take into account the asymmetry of the cloud.” Muru adds that the team’s findings, which are detailed in Phys. Rev. Lett., provide a boost to the “dark matter annihilation” explanation of the excess radiation. According to the standard model of cosmology, all galaxies – including our own Milky Way – are nested inside huge haloes of dark matter. The density of this dark matter is highest at the centre, and while it primarily interacts through gravity, some models suggest that it could be made of massive, neutral elementary particles that are their own antimatter counterparts. In these dense regions, therefore, such dark matter species could be mutually annihilating, producing substantial amounts of radiation. Pierre Salati, an emeritus professor at the Université Savoie Mont Blanc, France, who was not involved in this work, says that in these models, annihilation plays a crucial role in generating a dark matter component with an abundance that agrees with cosmological observations. “Big Bang nucleosynthesis sets stringent bounds on these models as a result of the overall concordance between the predicted elemental abundances and measurements, although most models do survive,” Salati says. “One of the most exciting aspects of such explanations is that dark matter species might be detected through the rare antimatter particles – antiprotons, positrons and anti-deuterons – that they produce as they currently annihilate inside galactic halos.” Silvia Manconi of the Laboratoire de Physique Théorique et Hautes Energies (LPTHE), France, who was also not involved in the study, describes it as “interesting and stimulating”. 
However, she cautions that – as is often the case in science – reality is probably more complex than even advanced simulations can capture. “This is not the first time that galaxy simulations have been used to study the implications of the excess and found non-spherical shapes,” she says, though she adds that the simulations in the new work offer “significant improvements” in terms of their spatial resolution. Manconi also notes that the study does not demonstrate how the proposed distribution of dark matter would appear in data from the Fermi Gamma-ray Space Telescope’s Large Area Telescope (LAT), or how it would differ quantitatively from observations of a distribution of old stars. Forthcoming observations with radio telescopes such as MeerKat and FAST, she adds, may soon identify pulsars in this region of the galaxy, shedding further light on other possible contributions to the excess of gamma rays. Muru acknowledges that better modelling and observations are still needed to rule out other possible hypotheses. “Studying dark matter is very difficult, because it doesn’t emit or block light, and despite decades of searching, no experiment has yet detected dark matter particles directly,” he tells Physics World. “A confirmation that this observed excess radiation is caused by dark matter annihilation through gamma rays would be a big leap forward.” New gamma-ray telescopes with higher resolution, such as the Cherenkov Telescope Array, could help settle this question, he says. If these telescopes, which are currently under construction, fail to find star-like sources for the glow and only detect diffuse radiation, that would strengthen the alternative dark matter annihilation explanation. Muru adds that a “smoking gun” for dark matter would be a signal that matches current theoretical predictions precisely. In the meantime, he and his colleagues plan to work on predicting where dark matter should be found in several of the dwarf galaxies that circle the Milky Way. “It’s possible we will see the new data and confirm one theory over the other,” Silk says. “Or maybe we’ll find nothing, in which case it’ll be an even greater mystery to resolve.” The post Flattened halo of dark matter could explain high-energy ‘glow’ at Milky Way’s heart appeared first on Physics World.
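The reason halo shape matters is that the annihilation signal scales as the square of the dark matter density. The toy sketch below, using a generic NFW-like profile with an assumed flattening parameter q (not the paper’s fitted model), shows how squashing the halo concentrates the ρ² emissivity towards the Galactic plane:

```python
import numpy as np

# Toy illustration: dark matter annihilation emissivity scales as rho^2,
# so the predicted gamma-ray morphology depends on the halo shape.
# The profile and flattening q below are illustrative, not the paper's fit.

def rho(R, z, q=1.0, rs=20.0):
    """NFW-like density on an ellipsoidal radius m; q < 1 flattens the halo."""
    m = np.sqrt(R**2 + (z / q)**2) + 1e-6
    return 1.0 / (m / rs * (1 + m / rs)**2)

# Compare rho^2 at a point above the plane vs in the plane (same distance)
r = 1.0  # kpc from the Galactic Centre
for q in (1.0, 0.7):
    ratio = rho(0, r, q)**2 / rho(r, 0, q)**2
    print(f"q={q}: (rho^2 off-plane)/(rho^2 in-plane) = {ratio:.2f}")
```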
https://physicsworld.com/a/flattened-halo-of-dark-matter-could-explain-high-energy-glow-at-milky-ways-heart/
Space & Physics
svg
9ca7f73088dea10a010642391aa00d473b0c648800e6a12a3326514a24469ac8
2025-11-20T13:55:11+00:00
Talking physics with an alien civilization: what could we learn?
It is book week here at Physics World and over the course of three days we are presenting conversations with the authors of three fascinating and fun books about physics. Today, my guest is the physicist Daniel Whiteson, who along with the artist Andy Warner has created the delightful book Do Aliens Speak Physics?. Is physics universal, or is it shaped by human perspective? This will be a very important question if and when we are visited by an advanced alien civilization. Would we recognize our visitors’ alien science – or indeed, could a technologically-advanced civilization have no science at all? And would we even be able to communicate about science with our alien guests? Whiteson, who is a particle physicist at the University of California Irvine, tackles these profound questions and much more in this episode of the Physics World Weekly podcast.   This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March, 2026, in Denver, Colorado, and online. The post Talking physics with an alien civilization: what could we learn? appeared first on Physics World.
https://physicsworld.com/a/talking-physics-with-an-alien-civilization-what-could-we-learn/
Space & Physics
svg
b851205a68503b088173eed7b0835d1b06c63a897b12980f784a85b3528f8dd8
2025-11-20T10:05:07+00:00
International Quantum Year competition for science journalists begins
This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications. Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ. Find out more on our quantum channel. The post International Quantum Year competition for science journalists begins appeared first on Physics World.
https://physicsworld.com/a/international-quantum-year-competition-for-science-journalists-begins/
Space & Physics
svg
285935ee98338172f482b29fe161094ed284cd79af7338587b7df5097096600c
2025-11-20T09:00:58+00:00
New cylindrical metamaterials could act as shock absorbers for sensitive equipment
A 3D-printed structure called a kagome tube could form the backbone of a new system for muffling damaging vibrations. The structure is part of a class of materials known as topological mechanical metamaterials, and unlike previous materials in this group, it is simple enough to be deployed in real-world situations. According to lead developer James McInerney of the Wright-Patterson Air Force Base in Ohio, US, it could be used as shock protection for sensitive systems found in civil and aerospace engineering applications. McInerney and colleagues’ tube-like design is made from a lattice of beams arranged in such a way that low-energy vibrational modes called floppy modes become localized to one side. “This provides good properties for isolating vibrations because energy input into the system on the floppy side does not propagate to the other side,” McInerney says. The key to this desirable behaviour, he explains, is the arrangement of the beams that form the lattice structure. Using a pattern first proposed by the 19th century physicist James Clerk Maxwell, the beams are organized into repeating sub-units to form stable, two-dimensional structures known as topological Maxwell lattices. Previous versions of these lattices could not support their own weight. Instead, they were attached to rigid external mounts, making it impractical to integrate them into devices. The new design, in contrast, is made by folding a flat Maxwell lattice into a cylindrical tube that is self-supporting. The tube features a connected inner and outer layer – a kagome bilayer – and its radius can be precisely engineered to give it the topological behaviour desired. The researchers, who detail their work in Physical Review Applied, first tested their structure numerically by attaching a virtual version to a mechanically sensitive sample and a source of low-energy vibrations. As expected, the tube diverted the vibrations away from the sample and towards the other end of the tube. Next, they developed a simple spring-and-mass model to understand the tube’s geometry by considering it as a simple monolayer. This modelling indicated that the polarization of the tube should be similar to the polarization of the monolayer. They then added rigid connectors to the tube’s ends and used a finite-element method to calculate the frequency-dependent patterns of vibrations propagating across the structure. They also determined the effective stiffness of the lattice as they applied loads parallel and perpendicular to it. The researchers are targeting vibration-isolation applications that would benefit from a passive support structure, especially in cases where the performance of alternative passive mechanisms, such as viscoelastomers, is temperature-limited. “Our tubes do not necessarily need to replace other vibration isolation mechanisms,” McInerney explains. “Rather, they can enhance the capabilities of these by having the load-bearing structure assist with isolation.” The team’s first and most important task, McInerney adds, will be to explore the implications of physically mounting the kagome tube on its vibration isolation structures. “The numerical study in our paper uses idealized mounting conditions so that the input and output are perfectly in phase with the tube vibrations,” he says. 
“Accounting for the potential impedance mismatch between the mounts and the tube will enable us to experimentally validate our work and provide realistic design scenarios.” The post New cylindrical metamaterials could act as shock absorbers for sensitive equipment appeared first on Physics World.
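To see in miniature how a lattice can act as a passive vibration isolator, consider a dimerized spring-and-mass chain driven at a frequency inside its phonon band gap: the response decays along the chain rather than propagating. This one-dimensional sketch is only an analogue – it is not the kagome geometry or the team’s model – and all parameters are illustrative:

```python
import numpy as np

# Minimal spring-and-mass sketch (not the actual kagome tube): a dimerized
# chain with fixed ends, driven at a frequency inside its phonon band gap.
# In-gap excitation decays along the chain - the basic physics behind using
# a lattice as a passive vibration isolator. All values are illustrative.

N = 20                                                  # unit masses
ks = [1.0 if i % 2 == 0 else 0.3 for i in range(N + 1)]  # alternating springs

D = np.zeros((N, N))                 # stiffness matrix, walls at both ends
for i, k in enumerate(ks):           # spring i joins mass i-1 and mass i
    if i > 0:
        D[i - 1, i - 1] += k
    if i < N:
        D[i, i] += k
    if 0 < i < N:
        D[i - 1, i] -= k
        D[i, i - 1] -= k

omega = 1.0                          # drive frequency; omega^2 lies in the gap
f = np.zeros(N); f[0] = 1.0          # harmonic force applied to the first mass
x = np.linalg.solve(D - omega**2 * np.eye(N), f)   # steady-state response
print("response amplitude at masses 0, 10, 19:",
      [f"{abs(x[j]):.1e}" for j in (0, N // 2, N - 1)])
```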
https://physicsworld.com/a/new-cylindrical-metamaterials-could-act-as-shock-absorbers-for-sensitive-equipment/
Space & Physics
svg
66b5d87ba5c7ecf7450f3297f76856df9290e3fe29829fe1134269a664feca16
2025-11-19T14:00:00+00:00
Breakfast physics, delving into quantum 2.0, the science of sound, an update to everything: micro reviews of recent books
Physics Around the Clock: Adventures in the Science of Everyday Living By Michael Banks Why do Cheerios tend to stick together while floating in a bowl of milk? Why does a runner’s ponytail swing side to side? These might not be the most pressing questions in physics, but getting to the answers is fun and provides insights into important scientific concepts. These are just two examples of everyday physics that Physics World news editor Michael Banks explores in his book Physics Around the Clock, which begins with the physics (and chemistry) of your morning coffee and ends with a formula for predicting the winner of those cookery competitions that are mainstays of evening television. Hamish Johnston   Quantum 2.0: the Past, Present and Future of Quantum Physics By Paul Davies You might wonder why the world needs yet another book about quantum mechanics, but for physicists there’s no better guide than Paul Davies. Based for the last two decades at Arizona State University in the US, in Quantum 2.0 Davies tackles the basics of quantum physics – along with its mysteries, applications and philosophical implications – with great clarity and insight. The book ends with truly strange topics such as quantum Cheshire cats and delayed-choice quantum erasers – see if you prefer his descriptions to those we’ve attempted in Physics World this year. Matin Durrani   Can You Get Music on the Moon? the Amazing Science of Sound and Space By Sheila Kanani, illustrated by Liz Kay Why do dogs bark but wolves howl? How do stars “sing”? Why does thunder rumble? This delightful, fact-filled children’s book answers these questions and many more, taking readers on an adventure through sound and space. Written by planetary scientist Sheila Kanani and illustrated by Liz Kay, Can You Get Music on the Moon? reveals not only how sound is produced but why it can make us feel certain things. Each of the 100 or so pages brims with charming illustrations that illuminate the many ways that sound is all around us. Michael Banks A Short History of Nearly Everything 2.0 By Bill Bryson Alongside books such as Stephen Hawking’s A Brief History of Time and Carl Sagan’s Cosmos, British-American author Bill Bryson’s A Short History of Nearly Everything is one of the bestselling popular-science books of the last 50 years. First published in 2003, the book became a fan favourite of readers across the world and across disciplines as Bryson wove together a clear and humorous narrative of our universe. Now, 22 years later, he has released an updated and revised volume – A Short History of Nearly Everything 2.0 – that covers major updates in science from the past two decades. This includes the discovery of the Higgs boson and the latest on dark-matter research. The new edition is still imbued with all the wit and wisdom of the original, making it the perfect Christmas present for scientists and anyone else curious about the world around us. Tushna Commissariat The post Breakfast physics, delving into quantum 2.0, the science of sound, an update to everything: micro reviews of recent books appeared first on Physics World.
https://physicsworld.com/a/breakfast-physics-delving-into-quantum-2-0-the-science-of-sound-an-update-to-everything-micro-reviews-of-recent-books/
Space & Physics
svg
87aef23e812fd02fdfbe23b402b1afd92340240fae389e8c68031b072c1854cc
2025-11-19T13:00:07+00:00
Quantum 2.0: Paul Davies on the next revolution in physics
In this episode of Physics World Stories, theoretical physicist, cosmologist and author Paul Davies discusses his latest book, Quantum 2.0: the Past, Present and Future of Quantum Physics. A Regents Professor at Arizona State University, Davies reflects on how the first quantum revolution transformed our understanding of nature – and what the next one might bring. He explores how emerging quantum technologies are beginning to merge with artificial intelligence, raising new ethical and philosophical questions. Could quantum AI help tackle climate change or address issues like hunger? And how far should we go in outsourcing planetary management to machines that may well prioritize their own survival? Davies also turns his gaze to the arts, imagining a future where quantum ideas inspire music, theatre and performance. From jazz improvised by quantum algorithms to plays whose endings depend on quantum outcomes, creativity itself could enter a new superposition. Hosted by Andrew Glester, this episode blends cutting-edge science and imagination in trademark Paul Davies style. This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications. Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ. Find out more on our quantum channel.   The post Quantum 2.0: Paul Davies on the next revolution in physics appeared first on Physics World.
https://physicsworld.com/a/quantum-2-0-paul-davies-on-the-next-revolution-in-physics/
Space & Physics
svg
8047782249d06ed6ec3f49ce0b8b0bdcf3890443ecf503282d017b6911d9bc3e
2025-11-19T08:04:53+00:00
Flexible electrodes for the future of light detection
Photodetectors convert light into electrical signals and are essential in technologies ranging from consumer electronics and communications to healthcare. They also play a vital role in scientific research. Researchers are continually working to improve their sensitivity, response speed, spectral range, and design efficiency. Since the discovery of graphene’s remarkable electrical properties, there has been growing interest in using graphene and other two-dimensional (2D) materials to advance photodetection technologies. When light interacts with these materials, it excites electrons that must travel to a nearby contact electrode to generate an electrical signal. The ease with which this occurs depends on the work functions of the materials involved – specifically, on the difference between them, known as the Schottky barrier height. Selecting an optimal combination of 2D material and electrode can minimize this barrier, enhancing the photodetector’s sensitivity and speed. Unfortunately, traditional electrode materials have fixed work functions, which limits 2D photodetector technology. PEDOT:PSS is a widely used electrode material in photodetectors due to its low cost, flexibility, and transparency. In this study, the researchers have developed PEDOT:PSS electrodes with tunable work functions ranging from 5.1 to 3.2 eV, making them compatible with a variety of 2D materials and ideal for optimizing device performance in metal-semiconductor-metal architectures. In addition, their thorough investigation demonstrates that the produced photodetectors performed excellently, with a significant forward current flow (rectification ratio ~10⁵), a strong conversion of light to electrical output (responsivity up to 1.8 A/W), and an exceptionally high ratio of light to dark current of 10⁸. Furthermore, the detectors were highly sensitive with low noise, had very fast response times (as fast as 3.2 μs), and, thanks to the transparency of PEDOT:PSS, showed extended sensitivity into the near-infrared region. This study demonstrates a tunable, transparent polymer electrode that enhances the performance and versatility of 2D photodetectors, offering a promising path toward flexible, self-powered, and wearable optoelectronic systems, and paving the way for next-generation intelligent interactive technologies. A homogenous polymer design with widely tunable work functions for high-performance two-dimensional photodetectors Youchen Chen et al 2025 Rep. Prog. Phys. 88 068003 Do you want to learn more about this topic? Two-dimensional material/group-III nitride hetero-structures and devices by Tingting Lin, Yi Zeng, Xinyu Liao, Jing Li, Changjian Zhou and Wenliang Wang (2025) The post Flexible electrodes for the future of light detection appeared first on Physics World.
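In the ideal Schottky–Mott picture, the electron barrier at a contact is simply the electrode work function minus the semiconductor’s electron affinity. The sketch below assumes an illustrative MoS₂-like electron affinity of 4.0 eV and ignores real-world effects such as Fermi-level pinning; it shows how sweeping an electrode work function across the reported 5.1–3.2 eV range tunes the barrier:

```python
# Schottky-Mott estimate of the electron barrier at an electrode/2D-
# semiconductor contact: phi_B = W_electrode - chi_semiconductor.
# The electron affinity below (~4.0 eV, MoS2-like) is an assumed
# illustrative value; real interfaces also show Fermi-level pinning.

CHI_2D = 4.0  # eV, assumed electron affinity of the 2D semiconductor

def schottky_barrier(work_function_eV, chi_eV=CHI_2D):
    """Ideal (Schottky-Mott) electron barrier height in eV."""
    return work_function_eV - chi_eV

for W in (5.1, 4.5, 4.0, 3.2):   # span of the tunable PEDOT:PSS electrode
    print(f"W = {W:.1f} eV  ->  phi_B = {schottky_barrier(W):+.1f} eV")
```

A negative ideal barrier suggests an ohmic-like contact, which is why a widely tunable work function lets one electrode material match many different 2D semiconductors.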
https://physicsworld.com/a/flexible-electrodes-for-the-future-of-light-detection/
Space & Physics
svg
9dc033f10d04cd097300702dc7780fbc7fb1105aee10eb2c4e97a5ed7a922d57
2025-11-19T08:02:58+00:00
Quantum cryptography in practice
Quantum Conference Key Agreement (QCKA) is a cryptographic method that allows multiple parties to establish a shared secret key using quantum technology. This key can then be used for secure communication among the parties. Unlike traditional methods that rely on classical cryptographic techniques, QCKA leverages the principles of quantum mechanics, particularly multipartite entanglement, to ensure security. A key aspect of QCKA is creating and distributing entangled quantum states among the parties. These entangled states have unique properties that make it impossible for an eavesdropper to intercept the key without being detected. Researchers measure the efficiency and performance of the key agreement protocol using a metric known as the key rate. One problem with state-of-the-art QCKA schemes is that this key rate decreases exponentially with the number of users. Previous solutions to this problem, based on single-photon interference, have come at the cost of requiring global phase locking. This makes them impractical to put in place experimentally. However, the authors of this new study have been able to circumvent this requirement, by adopting an asynchronous pairing strategy. Put simply, this means that measurements taken by different parties in different places do not need to happen at exactly the same time. Their solution effectively removes the need for global phase locking while still maintaining the favourable scaling of the key rate as in other protocols based on single-photon interference. The new scheme represents an important step towards realizing QCKA at long distances by allowing for much more practical experimental configurations. Repeater-like asynchronous measurement-device-independent quantum conference key agreement – IOPscience Yu-Shuo Lu et al., 2025 Rep. Prog. Phys. 88 067901 The post Quantum cryptography in practice appeared first on Physics World.
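The scaling problem can be illustrated with a toy model: if each user’s channel transmits a photon with probability η, distributing an N-photon entangled state succeeds only when all N photons arrive, so the rate falls as η^N, whereas single-photon-interference schemes avoid this exponential penalty. The sketch below is a deliberately crude comparison (the single-photon rate is idealized here as N-independent) and uses illustrative numbers, not the protocol’s:

```python
# Toy scaling comparison for an N-user conference key: distributing an
# N-photon entangled state needs all N photons to survive (rate ~ eta^N),
# whereas single-photon-interference schemes escape that exponential
# penalty (idealized here as N-independent). Numbers are illustrative.

eta = 0.1  # assumed per-user channel transmittance (10%)

for n_users in (2, 4, 8):
    multiphoton = eta ** n_users       # all N photons must arrive
    single_photon = eta                # repeater-like, interference-based
    print(f"N={n_users}: multiphoton ~ {multiphoton:.1e}, "
          f"single-photon interference ~ {single_photon:.1e}")
```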
https://physicsworld.com/a/quantum-cryptography-in-practice/
Space & Physics
svg
a534e2bc884f45e1d3ff22d0b6da0f7ddf04a30a7097a185da2b11426d1f1b52
2025-11-18T16:00:17+00:00
Scientists realize superconductivity in traditional semiconducting material
The ability to induce superconductivity in materials that are inherently semiconducting has been a longstanding research goal. Improving the conductivity of semiconductor materials could help develop quantum technologies with high speed and energy efficiency, including superconducting quantum bits (qubits) and cryogenic CMOS control circuitry. However, this task has proved challenging in traditional semiconductors – such as silicon or germanium – as it is difficult to maintain the optimal superconductive atomic structure. In a new study, published in Nature Nanotechnology, researchers have used molecular beam epitaxy (MBE) to grow gallium-hyperdoped germanium films that retain their superconductivity. When asked about the motivation for this latest work, Peter Jacobson from the University of Queensland tells Physics World about his collaboration with Javad Shabani from New York University. “I had been working on superconducting circuits when I met Javad and discovered the new materials their team was making,” he explains. “We are all trying to understand how to control materials and tune interfaces in ways that could improve quantum devices.” Germanium is a group IV element, so its properties bridge those of both metals and insulators. Superconductivity can be induced in germanium by manipulating its atomic structure to introduce more electrons into the atomic lattice. These extra electrons interact with the germanium lattice to create electron pairs that move without resistance, or in other words, they become superconducting. Hyperdoping germanium (at concentrations well above the solid solubility limit) with gallium induces a superconducting state. However, this material is traditionally unstable due to the presence of structural defects, dopant clustering and poor thickness control. There have also been many questions raised as to whether these materials are intrinsically superconducting, or whether it is actually gallium clusters and unintended phases that are solely responsible for the superconductivity of gallium-doped germanium. Considering these issues and looking for a potential new approach, Jacobson notes that X-ray absorption measurements at the Australian Synchrotron were “the first real sign” that Shabani’s team had grown something special. “The gallium signal was exceptionally clean, and early modelling showed that the data lined up almost perfectly with a purely substitutional picture,” he explains. “That was a genuine surprise. Once we confirmed and extended those results, it became clear that we could probe the mechanism of superconductivity in these films without the usual complications from disorder or spurious phases.” In a new approach, Jacobson, Shabani and colleagues used MBE to grow the crystals instead of relying on ion implantation techniques, allowing the germanium to be hyperdoped with gallium. Using MBE forces the gallium atoms to replace germanium atoms within the crystal lattice at levels much higher than previously seen. The process also provided better control over parasitic heating during film growth, allowing the researchers to achieve the structural precision required to understand and control the superconductivity of these germanium:gallium (Ge:Ga) materials, which were found to become superconducting at 3.5 K with a carrier concentration of 4.15 × 10²¹ holes/cm³. The critical gallium dopant threshold to achieve this was 17.9%. 
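As a rough consistency check (not a calculation from the paper), one can compare the reported hole density with what 17.9% substitutional gallium would supply if every dopant donated one hole, taking germanium’s atomic density as about 4.42 × 10²² atoms/cm³:

```python
# Back-of-the-envelope check: what hole density would 17.9% substitutional
# Ga give if every dopant donated one hole? Ge has ~4.42e22 atoms/cm^3.

N_GE = 4.42e22          # atoms/cm^3 in crystalline germanium
x_ga = 0.179            # reported critical Ga fraction
p_measured = 4.15e21    # reported hole concentration, holes/cm^3

p_full = x_ga * N_GE    # one hole per substitutional Ga, ideal case
print(f"ideal hole density : {p_full:.2e} /cm^3")
print(f"measured           : {p_measured:.2e} /cm^3")
print(f"implied activation : {p_measured / p_full:.0%}")
```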
Using synchrotron-based X-ray absorption, the team found that the gallium dopants were substitutionally incorporated into the germanium lattice and induced a tetragonal distortion to the unit cell. Density functional theory calculations showed that this causes a shift in the Fermi level into the valence band and flattens electronic bands. This suggests that the structural order of gallium in the germanium lattice creates a narrow band that facilitates superconductivity in germanium, and that this superconductivity arises intrinsically in the germanium, rather than being governed by defects and gallium clusters. The researchers tested trilayer heterostructures – Ge:Ga/Si/Ge:Ga and Ge:Ga/Ge/Ge:Ga – as proof-of-principle designs for vertical Josephson junction device architectures. In the future, they hope to develop these into fully fledged Josephson junction devices. Commenting on the team’s future plans for this research, Jacobson concludes: “I’m very keen to examine this material with low-temperature scanning tunnelling microscopy (STM) to directly measure the superconducting gap, because STM adds atomic-scale insights that complement our other measurements and will help clarify what sets hyperdoped germanium apart”. The post Scientists realize superconductivity in traditional semiconducting material appeared first on Physics World.
https://physicsworld.com/a/scientists-realize-superconductivity-in-traditional-semiconducting-material/
Space & Physics
svg
f405aa61d5c6cd7df9263cfaca6159944a7c5e1742df62f2cba48bed5c999e80
2025-11-18T14:20:21+00:00
Better coffee, easier parking and more: the fascinating physics of daily life
It is book week here at Physics World and over the course of three days we are presenting conversations with the authors of three fascinating and fun books about physics. First up is my Physics World colleague Michael Banks, whose book Physics Around the Clock: Adventures in the Science of Everyday Living starts with your morning coffee and ends with a formula for making your evening television viewing more satisfying. As well as the rich physics of coffee, we chat about strategies for finding the best parking spot and the efficient boarding of aeroplanes. If you have ever wondered why a runner’s ponytail swings from side-to-side when they reach a certain speed – we have the answer for you. Other daily mysteries that we explore include how a hard steel razor blade can be dulled by cutting relatively soft hairs and why quasiparticles called “jamitons” are helping physicists understand the spontaneous appearance of traffic jams. And a warning for squeamish listeners, we do talk about the amazing virus-spreading capabilities of a flushing toilet.   This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March, 2026, in Denver, Colorado, and online. The post Better coffee, easier parking and more: the fascinating physics of daily life appeared first on Physics World.
https://physicsworld.com/a/better-coffee-easier-parking-and-more-the-fascinating-physics-of-daily-life/
Space & Physics
svg
8a7f1c3cd2f30ed779036df1cd59d8d96efe7f0c1bfa0d3dd83fa6e21e1a3ade
2025-11-18T11:00:36+00:00
Cosmic dawn: the search for the primordial hydrogen signal
“This is one of the big remaining frontiers in astronomy,” says Phil Bull, a cosmologist at the Jodrell Bank Centre for Astrophysics at the University of Manchester. “It’s quite a pivotal era of cosmic history that, it turns out, we don’t actually understand.” Bull is referring to the vital but baffling period in the early universe – from 380,000 years to one billion years after the Big Bang – when its structure went from simple to complex. To lift the veil on this epoch, experiments around the world – from Australia to the Arctic – are racing to find a specific but elusive signal from the earliest hydrogen atoms. This signal could confirm or disprove scientists’ theories of how the universe evolved and the physics that governs it. Hydrogen is the most abundant element in the universe. As neutral hydrogen atoms change states, they can emit or absorb photons. This spectral transition, which can be stimulated by radiation, produces an emission or absorption radio wave signal with a wavelength of 21 cm. To find out what happened during that early universe, astronomers are searching for these 21 cm photons that were emitted by primordial hydrogen atoms. But despite more teams joining the hunt every year, no-one has yet had a confirmed detection of this radiation. So who will win the race to find this signal and how is the hunt being carried out? Let’s first return to about 380,000 years after the Big Bang, when the universe had expanded and cooled to below 3000 K. At this stage, neutral atoms, including atomic hydrogen, could form. Thanks to the absence of free electrons, ordinary matter particles could decouple from light, allowing it to travel freely across the universe. This ancient radiation that permeates the sky is known as the cosmic microwave background (CMB). But after that we don’t know much about what happened for the next few hundred million years. Meanwhile, the oldest known galaxy MoM-z14 – which existed about 280 million years after the Big Bang – was observed in April 2025 by the James Webb Space Telescope. So there is currently a gap of just under 280 million years in our observations of the early universe. “It’s one of the last blank spots,” says Anastasia Fialkov, an astrophysicist at the Institute of Astronomy of the University of Cambridge. This “blank spot” is a bridge between the early, simple universe and today’s complex structured cosmos. During this early epoch, the universe went from being filled with a thick cloud of neutral hydrogen, to being diversely populated with stars, black holes and everything in between. It covers the end of the cosmic dark ages, the cosmic dawn, and the epoch of reionization – and is arguably one of the most exciting periods in our universe’s evolution. During the cosmic dark ages, after the CMB flooded the universe, the only “ordinary” matter (made up of protons, neutrons and electrons) was neutral hydrogen (75% by mass) and neutral helium (25%), and there were no stellar structures to provide light. It is thought that gravity then magnified any slight fluctuations in density, causing some of this primordial gas to clump and eventually form the first stars and galaxies – a time called the cosmic dawn. Next came the epoch of reionization, when ultraviolet and X-ray emissions from those first celestial objects heated and ionized the hydrogen atoms, turning the neutral gas into a charged plasma of electrons and protons. 
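The key to observing this epoch is that the 21 cm line’s rest frequency of about 1420 MHz is stretched by cosmic expansion, so each observing frequency corresponds to a particular redshift and hence a particular cosmic age. A quick sketch of the mapping f_obs = 1420.4 MHz/(1 + z):

```python
# The 1420 MHz (21 cm) line is redshifted by cosmic expansion:
# f_obs = 1420.4 MHz / (1 + z), so observing frequency maps to epoch.

F_REST_MHZ = 1420.4

def redshift_of(f_obs_mhz):
    return F_REST_MHZ / f_obs_mhz - 1

for f in (50, 78, 170):        # REACH band edges and the EDGES dip
    print(f"{f} MHz  ->  z = {redshift_of(f):.1f}")
```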
One way scientists are trying to observe this imprint is to measure the average – or “global” – signal across the sky, looking at how it shifts from absorption to emission compared to the CMB. Normally, a 21 cm radio wave signal has a frequency of about 1420 MHz. But this ancient signal, according to theory, has been emitted and absorbed at different intensities throughout this cosmic “blank spot”, depending on the universe’s evolutionary processes at the time. The expanding universe has also stretched and distorted the signal as it travelled to Earth. Theories predict that it would now be in the 1 to 200 MHz frequency range – with lower frequencies corresponding to older eras – and would have a wavelength of metres rather than centimetres. Importantly, the shape of the global 21 cm signal over time could confirm the lambda-cold dark matter (ΛCDM) model, which is the most widely accepted theory of the cosmos; or it could upend it. Many astronomers have dedicated their careers to finding this radiation, but it is challenging for a number of reasons. [Figure: (a) a simulation of the sky-averaged (global) signal as a function of time (horizontal) and space (vertical); (b) a typical model of the global 21 cm line with the main cosmic events highlighted. Each experiment searching for the global 21 cm signal focuses on a particular frequency band – the Radio Experiment for the Analysis of Cosmic Hydrogen (REACH), for example, covers 50–170 MHz. Credits: (a) CC BY 4.0 The Royal Society/A Fialkov et al. 2024 Philos. Trans. A Math. Phys. Eng. Sci. 382 20230068; (b) © Springer Nature, reused with permission from E de Lera Acedo et al. 2022 Nature Astronomy 6 984] There is also no single source of this emission, so, like the CMB, it permeates the universe. “If it was the only signal in the sky, we would have found it by now,” says Eloy de Lera Acedo, head of Cavendish Radio Astronomy and Cosmology at the University of Cambridge. But the universe is full of contamination, with the Milky Way being a major culprit. Scientists are searching for 0.1 K in an environment “that’s a million times brighter”, he explains. And even before this signal reaches the radio-noisy Earth, it has to travel through the atmosphere, which further distorts and contaminates it. “It’s a very difficult measurement,” says Rigel Cappallo, a research scientist at the MIT Haystack Observatory. “It takes a really, really well calibrated instrument that you understand really well, plus really good modelling.” In 2018 the Experiment to Detect the Global EoR Signature (EDGES) – a collaboration between Arizona State University and MIT Haystack Observatory – hit the headlines when it claimed to have detected the global 21 cm signal (Nature 555 67). The EDGES instrument is a dipole antenna, which resembles a ping-pong table with a gap in the middle (see photo at top of article for the 2024 set-up). It is mounted on a large metal groundsheet, which is about 30 × 30 m. Its ground-breaking observation was made at a remote site in western Australia, far from radio frequency interference. But in the intervening seven years, no-one else has been able to replicate the EDGES results. The spectrum dip that EDGES detected was very different from what theorists had expected. “There is a whole family of models that are predicted by the different cosmological scenarios,” explains Ravi Subrahmanyan, a research scientist at Australia’s national science agency CSIRO. 
“When we take measurements, we compare them with the models, so that we can rule those models in or out.” In general, the current models predict a very specific envelope of signal possibilities (see figure 1). First, they anticipate an absorption dip in brightness temperature of around 0.1 to 0.2 K, caused by the temperature difference between the cold hydrogen gas (in an expanding universe) and the warmer CMB. Then, a speedy rise and photon emission is predicted as the gas starts to warm when the first stars form, and the signal should spike dramatically when the first X-ray binary stars fire up and heat up the surrounding gas. The signal is then expected to fade as the epoch of reionization begins, because ionized particles cannot undergo the spectral transition. With models, scientists theorize when this happened, how many stars there were, and how the cosmos unfurled. [Figure: the 21 cm signals predicted by current cosmology models (coloured lines) and the detection by the EDGES experiment (dashed black line). (Courtesy: SARAS Team)] “It’s just one line, but it packs in so many physical phenomena,” says Fialkov, referring to the shape of the 21 cm signal’s brightness temperature over time. The timing of the dip, its gradient and magnitude all represent different milestones in cosmic history, which affect how it evolved. The EDGES team, however, reported a dip of more than double the predicted size, at about 78 MHz (see figure 2). While the frequency was consistent with predictions, the very wide and deep dip of the signal took the community by surprise. “It would be a revolution in physics, because that signal will call for very, very exotic physics to explain it,” says de Lera Acedo. “Of course, the first thing we need to do is to make sure that that is actually the signal.” The EDGES claim has galvanized the cosmology community. “It set a cat among the pigeons,” says Bull. “People realized that, actually, there’s some very exciting science to be done here.” Some groups are trying to replicate the EDGES observation, while others are trying new approaches to detect the signal that the models promise. The Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) – a collaboration between the University of Cambridge and Stellenbosch University in South Africa – focuses on the 50–170 MHz frequency range. Sitting on the dry and empty plains of South Africa’s Northern Cape, it is targeting the EDGES observation (Nature Astronomy 6 984). In this radio-quiet environment, REACH has set up two antennas: one looks like EDGES’ dipole ping-pong table, while the other is a spiral cone. They sit on top of a giant metallic mesh – the ground plate – in the shape of a many-pointed star, which aims to minimize reflections from the ground. Hunting for this signal “requires precision cosmology and engineering”, says de Lera Acedo, the principal investigator on REACH. Reflections from the ground or mesh, calibration errors, and signals from the soil are the kryptonite of cosmic dawn measurements. “You need to reduce your systemic noise, do better analysis, better calibration, better cleaning [to remove other sources from observations],” he says. Another radio telescope, dubbed the Shaped Antenna measurement of the background Radio Spectrum (SARAS) – which was established in the late 2000s by the Raman Research Institute (RRI) in Bengaluru, India – has undergone a number of transformations to reduce noise and limit other sources of radiation. 
Over time, it has morphed from a dipole on the ground to a metallic cone floating on a raft. It is looking at 40 to 200 MHz (Exp. Astron. 51 193). After the EDGES claim, SARAS pivoted its attention to verifying the detection, explains Saurabh Singh, a research scientist at the RRI. “Initially, we were not able to get down to the required sensitivity to be able to say anything about their detection,” he explains. “That’s why we started floating our radiometer on water.” Buoying the experiment reduces ground contamination and creates a more predictable surface to include in calculations. Using data from their floating radiometer, in 2022 Singh and colleagues disfavoured EDGES’ claim (Nature Astronomy 6 607), but for many groups the detection still remains a target for observations. While SARAS has yet to detect a cosmic-dawn signal of its own, Singh says that non-detection is also an important element of finding the global 21 cm signal. “Non-detection gives us an opportunity to rule out a lot of these models, and that has helped us to reject a lot of properties of these stars and galaxies,” he says. Raul Monsalve Jara – a cosmologist at the University of California, Berkeley – has been part of the EDGES collaboration since 2012, but decided to also explore other ways to detect the signal. “My view is that we need several experiments doing different things and taking different approaches,” he says. The Mapper of the IGM Spin Temperature (MIST) experiment, of which Monsalve is co-principal investigator, is a collaboration between Chilean, Canadian, Australian and American researchers. Its instruments are looking at 25 to 105 MHz (MNRAS 530 4125). “Our approach was to simplify the instrument, get rid of the metal ground plate, and to take small, portable instruments to remote locations,” he explains. These locations have to fulfil very specific requirements – everything around the instrument, from mountains to the soil, can impact the instrument’s performance. “If the soil itself is irregular, that will be very difficult to characterize and its impact will be difficult to remove [from observations],” Monsalve says. So far, the MIST instrument, which is also a dipole ping-pong table, has visited a desert in California, another in Nevada, and even the Arctic. The instrument is portable and easy to set up, and each time the researchers spend a few weeks at the site collecting data, Monsalve explains. The team is planning more observations in Chile. “If you suspect that your environment could be doing something to your measurements, then you need to be able to move around,” continues Monsalve. “And we are contributing to the field by doing that.” Aaron Parsons, also from the University of California, Berkeley, decided that the best way to detect this elusive signal would be to try and eliminate the ground entirely – by suspending a rotating antenna over a giant canyon with 100 m of empty space in every direction. His Electromagnetically Isolated Global Signal Estimation Platform (EIGSEP) includes an antenna hanging four storeys above the ground, attached to a Kevlar cable strung across a canyon in Utah. It’s observing at 50 to 250 MHz. “It continuously rotates around and twists every which way,” Parsons explains. The team hopes this will allow them to calibrate the instrument very accurately. Two antennas on the ground cross-correlate observations. EIGSEP began making observations last year. More experiments are expected to come online in the next year. 
The Remote HI eNvironment Observer (RHINO), an initiative of the University of Manchester, will have a horn-shaped receiver made of a metal mesh that is usually used to construct skyscrapers. Horn shapes are particularly good for calibration, allowing for very precise measurements. The most famous horn-shaped antenna is Bell Laboratories’ Holmdel Horn Antenna in the US, with which two scientists accidentally discovered the CMB in 1965. Initially, RHINO will be based at Jodrell Bank Observatory in the UK, but like other experiments, it could travel to other remote locations to hunt for the 21 cm signal. Similarly, Subrahmanyan – who established the SARAS experiment in India and is now with CSIRO in Australia – is working to design a new radiometer from scratch. The instrument, which will focus on 40–160 MHz, is called Global Imprints from Nascent Atoms to Now (GINAN). He says that it will feature a recently patented self-calibrating antenna. “It gives a much more authentic measurement of the sky signal as measured by the antenna,” he explains. In the meantime, the EDGES collaboration has not been idle. MIT Haystack Observatory’s Cappallo is project manager of EDGES, which is currently in its third iteration. It is still the size of a desk, but its top now looks like a box, with closed sides and its electronics tucked inside, and an even larger metal ground plate. The team has now made observations from islands in the Canadian archipelago and in Alaska’s Aleutian island chain (see photo at top of article). “The 2018 EDGES result is not going to be accepted by the community until somebody completely independently verifies it,” Cappallo explains. “But just for our own sanity and also to try to improve on what we can do, we want to see it from as many places as possible and as many conditions as possible.” The EDGES team has replicated its results using the same data analysis pipeline, but no-one else has been able to reproduce the unusual signal. All the astronomers interviewed welcomed the introduction of new experiments. “I think it’s good to have a rich field of people trying to do this experiment because nobody is going to trust any one measurement,” says Parsons. “We need to build consensus here.” Some astronomers have decided to avoid the struggles of trying to detect the global 21 cm signal from Earth – instead, they have their sights set on the Moon. Earth’s atmosphere is one of the reasons why the 21 cm signal is so difficult to measure. The ionosphere, a charged region of the atmosphere, distorts and contaminates this incredibly faint signal. On the far side of the Moon, any antenna would also be shielded from the cacophony of radio-frequency interference from Earth. “This is why some experiments are going to the Moon,” says Parsons, adding that he is involved in NASA’s LuSEE-Night experiment. LuSEE-Night, or the Lunar Surface Electromagnetics Experiment, aims to land a low-frequency experiment on the Moon next year. In July, at the National Astronomical Meeting in Durham, the University of Cambridge’s de Lera Acedo presented a proposal to put a miniature radiometer into lunar orbit. Dubbed “Cosmocube”, it will be a nanosatellite that will orbit the Moon searching for this 21 cm signal. “It is just in the making,” says de Lera Acedo, adding that it will not be in operation for at least a decade. “But it is the next step.” Meanwhile, groups here on Earth are in a race to detect this elusive signal. 
The instruments are getting more sensitive, the modelling is improving, and the unknowns are reducing. “If we do the experiments right, we will find the signal,” Monsalve believes. The big question is who, of the many groups with their hat in the ring, is doing the experiment “right”. The post Cosmic dawn: the search for the primordial hydrogen signal appeared first on Physics World.
https://physicsworld.com/a/cosmic-dawn-the-search-for-the-primordial-hydrogen-signal/
Space & Physics
svg
d62045537171eb0d4d63b58525254791fc8baa720ead11dd645ade2fbadab38a
2025-11-17T16:15:26+00:00
Ten-ion system brings us a step closer to large-scale qubit registers
Researchers in Austria have entangled matter-based qubits with photonic qubits in a ten-ion system. The technique is scalable to larger ion-qubit registers, paving the way for the creation of larger and more complex quantum networks. Quantum networks consist of matter-based nodes that store and process quantum information and are linked through photons (quanta of light). Already, Ben Lanyon’s group at the University of Innsbruck has made advances in this direction by entangling two ions in different systems. Now, in a new paper published in Physical Review Letters, they describe how they have developed and demonstrated a new method to entangle a string of ten ions with photons. In the future, this approach could enable the entanglement of sets of ions in different locations through light, rather than one ion at a time. To achieve this, Lanyon and colleagues trapped a chain of 10 calcium ions in a linear trap in an optical cavity. By changing the trapping voltages in the trap, each ion was moved, one-by-one, into the cavity. Once inside, the ion was placed in the “sweet spot”, where the ion’s interaction with the cavity is the strongest. There, the ion emitted a single photon when exposed to a 393 nm Raman laser beam. This beam was tightly focused on one ion, guaranteeing that the emitted photon – collected in a single-mode optical fibre – comes out from one ion at a time. This process was carried out ten times, once per ion, to obtain a train of ten photons. By using quantum state tomography, the researchers reconstructed the density matrix, which describes the correlation between the states of ions (i) and photons (j). To do so, they measured every ion and photon state in three different bases, resulting in nine Pauli-basis configurations of quantum measurements. From the density matrix, the concurrence (a measure of entanglement) between the ion (i) and photon (j) was found to be positive only when i = j, and equal to zero otherwise. This implies that each ion is uniquely entangled with the photon it produced, and unentangled with the photons produced by other ions. From the density matrix, they also calculated the fidelity with the Bell state (a state of maximum entanglement), yielding an average of 92%. As Marco Canteri points out, “this fidelity characterizes the quality of entanglement between the ion-photon pair for i=j”. This work developed and demonstrated a technique whereby matter-based qubits and photonic qubits can be entangled, one at a time, in ion strings. Now, the group aims to “demonstrate universal quantum logic within the photon-interfaced 10-ion register and, building up towards entangling two remote 10-ion processors through the exchange of photons between them,” explains team member Victor Krutyanskiy. If this method effectively scales to larger systems, more complex quantum networks could be built. This would lead to applications in quantum communication and quantum sensing. The post Ten-ion system brings us a step closer to large-scale qubit registers appeared first on Physics World.
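For readers who want to see how such figures of merit are computed, the sketch below evaluates the Bell-state fidelity and the Wootters concurrence for a two-qubit density matrix. The noise model (a Werner state with an assumed mixing parameter) is purely illustrative, not the experiment’s reconstructed matrix:

```python
import numpy as np

# Sketch: Bell-state fidelity and Wootters concurrence for a two-qubit
# density matrix. The Werner state rho = p|Phi+><Phi+| + (1-p) I/4 is an
# assumed noise model for illustration, not the experiment's data.

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)          # |Phi+> Bell state
p = 0.9
rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4

fidelity = np.real(phi @ rho @ phi)                 # <Phi+| rho |Phi+>

sy = np.array([[0, -1j], [1j, 0]])
Y = np.kron(sy, sy)
R = rho @ Y @ rho.conj() @ Y                        # Wootters construction
lam = np.sqrt(np.abs(np.sort(np.linalg.eigvals(R).real)[::-1]))
concurrence = max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

print(f"Bell-state fidelity: {fidelity:.3f}")       # ~0.925 for p = 0.9
print(f"concurrence:         {concurrence:.3f}")    # ~0.85 for p = 0.9
```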
https://physicsworld.com/a/ten-ion-system-brings-us-a-step-closer-to-large-scale-qubit-registers/
Space & Physics
svg
33a08be5b0f1eac18e9b48afb7baf43e9177c9a6e62b7cb3cb46730de255e5cd
2025-11-17T09:45:35+00:00
Non-invasive wearable device measures blood flow to the brain
Measuring blood flow to the brain is essential for diagnosing and developing treatments for neurological disorders such as stroke, vascular dementia or traumatic brain injury. Performing this measurement non-invasively is challenging, however, and achieved predominantly using costly MRI and nuclear medicine imaging techniques. Emerging as an alternative, modalities based on optical transcranial measurement are cost-effective and easy to use. In particular, speckle contrast optical spectroscopy (SCOS) – an offshoot of laser speckle contrast imaging, which uses laser light speckles to visualize blood vessels – can measure cerebral blood flow (CBF) with high temporal resolution, typically above 30 Hz, and cerebral blood volume (CBV) through optical signal attenuation. Researchers at the California Institute of Technology (Caltech) and the Keck School of Medicine’s USC Neurorestoration Center have designed a lightweight SCOS system that accurately measures blood flow to the brain, distinguishing it from blood flow to the scalp. Co-senior author Charles Liu of the Keck School of Medicine and team describe the system and their initial experimentation with it in APL Bioengineering. The SCOS system consists of a 3D-printed head mount designed for secure placement over the temple region. It holds a single 830 nm laser illumination fibre and seven detector fibres positioned at seven different source-to-detector (S–D) distances (between 0.6 and 2.6 cm) to simultaneously capture blood flow dynamics across layers of the scalp, skull and brain. Fibres with shorter S–D distances acquire shallower optical data from the scalp, while those with greater distances obtain deeper and broader data. The seven channels are synchronized to exhibit identical oscillation frequencies corresponding to the heart rate and cardiac cycle. When the SCOS system directs the laser light onto a sample, multiple random scattering events occur before the light exits the sample, creating speckles. These speckles, which materialize on rapid timescales, are the result of interference of light travelling along different trajectories. Movement within the sample (of red blood cells, for instance) causes dynamic changes in the speckle field. These changes are captured by a multi-million-pixel camera with a frame rate above 30 frames/s and quantified by calculating the speckle contrast value for each image. The researchers used the SCOS system to perform CBF and CBV measurements in 20 healthy volunteers. To isolate and obtain surface blood dynamics from brain signals, the researchers gently pressed on the superficial temporal artery (a terminal branch of the external carotid artery that supplies blood to the face and scalp) to block blood flow to the scalp. In tests on the volunteers, when temporal artery blood flow was occluded for 8 s, scalp-sensitive channels exhibited significant decreases in blood flow while brain-sensitive channels showed minimal change, enabling signals from the internal carotid artery that supplies blood to the brain to be clearly distinguished. Additionally, the team found that positioning the detector 2.3 cm or more away from the source allowed for optimal brain blood flow measurement while minimizing interference from the scalp. 
“Combined with the simultaneous measurements at seven S–D separations, this approach enables the first quantitative experimental assessment of how scalp and brain signal contributions vary with depth in SCOS-based CBF measurements and, more broadly, in optical measurements,” they write. “This work also provides crucial insights into the optimal device S–D distance configuration for preferentially probing brain signal over scalp signal, with a practical and subject-friendly alternative for evaluating depth sensitivity, and complements more advanced, hardware-intensive strategies such as time-domain gating.” The researchers are now working to improve the signal-to-noise ratio of the system. They plan to introduce a compact, portable laser and develop a custom-designed extended camera that spans over 3 cm in one dimension, enabling simultaneous and continuous measurement of blood dynamics across S–D distances from 0.5 to 3.5 cm. These design advancements will enhance spatial resolution and enable deeper brain measurements. “This crucial step will help transition the system into a compact, wearable form suitable for clinical use,” comments Liu. “Importantly, the measurements described in this publication were achieved in human subjects in a very similar manner to how the final device will be used, greatly reducing barriers to clinical application.” “I believe this study will advance the engineering of SCOS systems and bring us closer to a wearable, clinically practical device for monitoring brain blood flow,” adds co-author Simon Mahler, now at Stevens Institute of Technology. “I am particularly excited about the next stage of this project: developing a wearable SCOS system that can simultaneously measure both scalp and brain blood flow, which will unlock many fascinating new experiments.” The post Non-invasive wearable device measures blood flow to the brain appeared first on Physics World.
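As a rough illustration of the speckle-contrast principle behind SCOS, the sketch below computes a windowed speckle contrast K = σ/⟨I⟩ from a single camera frame and converts it into a simple flow index using the common approximation that the blood-flow index scales as 1/(T·K²) for exposure time T. It assumes numpy and scipy and uses synthetic data; the real system's processing pipeline is more involved.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, win=7):
    """Local speckle contrast K = sigma/mean over win x win pixel windows."""
    f = frame.astype(float)
    mean = uniform_filter(f, win)
    var = np.maximum(uniform_filter(f**2, win) - mean**2, 0.0)
    return np.sqrt(var) / mean

def blood_flow_index(frame, exposure_s, win=7):
    """Faster flow blurs the speckle within one exposure, lowering K,
    so a simple flow index is taken proportional to 1/(T * K^2)."""
    k = speckle_contrast(frame, win)
    return 1.0 / (exposure_s * np.mean(k**2))

frame = np.random.gamma(shape=1.0, scale=100.0, size=(512, 512))  # synthetic speckle
print(blood_flow_index(frame, exposure_s=1/30))                   # arbitrary units
```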
https://physicsworld.com/a/non-invasive-wearable-device-measures-blood-flow-to-the-brain/
Space & Physics
svg
fb83d7ca82a2640910997db228631728a272e1a9db07dda4f7327c47c2d4c4dc
2025-11-14T16:41:34+00:00
The future of quantum physics and technology debated at the Royal Institution
As we enter the final stretch of the International Year of Quantum Science and Technology (IYQ), I hope you’ve enjoyed our extensive quantum coverage over the last 12 months. We’ve tackled the history of the subject, explored some of the unexplained mysteries that still make quantum physics so exciting, and examined many of the commercial applications of quantum technology. You can find most of our coverage collected into two free-to-read digital Quantum Briefings, available here and here on the Physics World website. In the 100 years since Werner Heisenberg first developed quantum mechanics on the island of Helgoland in June 1925, it has proved to be an incredibly powerful, successful and logically consistent theory. Our understanding of the subatomic world is no longer the “lamentable hodgepodge of hypotheses, principles, theorems and computational recipes”, as the Israeli physicist and philosopher Max Jammer memorably once described it. In fact, quantum mechanics has not just transformed our understanding of the natural world; it has immense practical ramifications too, with so-called “quantum 1.0” technologies – lasers, semiconductors and electronics – underpinning our modern world. But as was clear from the UK National Quantum Technologies Showcase in London last week, organized by Innovate UK, the “quantum 2.0” revolution is now in full swing. The day-long event, which is now in its 10th year, featured over 100 exhibitors, including many companies that are already using fundamental quantum concepts such as entanglement and superposition to support the burgeoning fields of quantum computing, quantum sensing and quantum communication. The show was attended by more than 3000 delegates, some of whom almost had to be ushered out of the door at closing time, so keen were they to keep talking. Last week also saw a two-day conference at the historic Royal Institution (RI) in central London that was a centrepiece of IYQ in the UK and Ireland. Entitled Quantum Science and Technology: the First 100 Years; Our Quantum Future and attended by over 300 people, it was organized by the History of Physics and the Business Innovation and Growth groups of the Institute of Physics (IOP), which publishes Physics World. The first day, focusing on the foundations of quantum mechanics, ended with a panel discussion – chaired by my colleague Tushna Commissariat and Daisy Shearer from the UK’s National Quantum Computing Centre – with physicists Fay Dowker (Imperial College), Jim Al-Khalili (University of Surrey) and Peter Knight. They talked about whether the quantum wavefunction provides a complete description of physical reality, prompting much discussion with the audience. As Al-Khalili wryly noted, if entanglement has emerged as the fundamental feature of quantum reality, then “decoherence is her annoying and ever-present little brother”. Knight, meanwhile, who is a powerful figure in quantum-policy circles, went as far as to say that the limit of decoherence – and indeed the boundary between the classical and quantum worlds – is not a fixed and yet-to-be revealed point. Instead, he mused, it will be determined by how much money and ingenuity and time physicists have at their disposal.
On the second day of the IOP conference at the RI, I chaired a discussion that brought together four future leaders of the subject: Mehul Malik (Heriot-Watt University) and Sarah Malik (University College London) along with industry insiders Nicole Gillett (Riverlane) and Muhammad Hamza Waseem (Quantinuum). As well as outlining the technical challenges in their fields, the speakers all stressed the importance of developing a “skills pipeline” so that the quantum sector has enough talented people to meet its needs. Also vital will be the need to communicate the mysteries and potential of quantum technology – not just to the public but to industrialists, government officials and venture capitalists. By many measures, the UK is at the forefront of quantum tech – and it is a lead it should not let slip. The week ended with Al-Khalili giving a public lecture, also at the Royal Institution, entitled “A new quantum world: ‘spooky’ physics to tech revolution”. It formed part of the RI’s famous Friday night “discourses”, which this year celebrate their 200th anniversary. Al-Khalili, who also presents A Life Scientific on BBC Radio 4, is now the only person ever to have given three RI discourses. After the lecture, which was sold out, he took part in a panel discussion with Knight and Elizabeth Cunningham, a former vice-president for membership at the IOP. Al-Khalili was later presented with a special bottle of “Glentanglement” whisky made by Glasgow-based Fraunhofer UK for the Scottish Quantum Technology cluster. The post The future of quantum physics and technology debated at the Royal Institution appeared first on Physics World.
https://physicsworld.com/a/the-future-of-quantum-physics-and-technology-debated-at-the-royal-institution/
Space & Physics
svg
572ff333af3f6742e90341761a62dbf3e281349c00fb1543777f89f419d42532
2025-11-14T08:56:19+00:00
Neural networks discover unstable singularities in fluid systems
Significant progress towards answering one of the Clay Mathematics Institute’s seven Millennium Prize Problems has been achieved using deep learning. The challenge is to establish whether or not the Navier-Stokes equation of fluid dynamics develops singularities. The work was done by researchers in the US and UK – including some at Google Deepmind. Some team members had already shown that simplified versions of the equation could develop stable singularities, which reliably form. In the new work, the researchers found unstable singularities, which form only under very specific conditions. The Navier–Stokes partial differential equation was developed in the 19th century by Claude-Louis Navier and George Stokes. It has proved its worth for modelling incompressible fluids in scenarios including water flow in pipes; airflow around aeroplanes; blood moving in veins; and magnetohydrodynamics in plasmas. No-one has yet proved, however, whether smooth, non-singular solutions to the equation always exist in three dimensions. “In the real world, there is no singularity…there is no energy going to infinity,” says fluid dynamics expert Pedram Hassanzadeh of the University of Chicago. “So if you have an equation that has a singularity, it tells you that there is some physics that is missing.” In 2000, the Clay Mathematics Institute in Denver, Colorado, listed this proof as one of seven key unsolved problems in mathematics, offering a reward of $1,000,000 for an answer. Researchers have traditionally tackled the problem analytically, but in recent decades high-level computational simulations have been used to assist in the search. In a 2023 paper, mathematician Tristan Buckmaster of New York University and colleagues used a special type of machine learning algorithm called a physics-informed neural network to address the question. “The main difference is…you represent [the solution] in a highly non-linear way in terms of a neural network,” explains Buckmaster. This allows it to occupy a lower-dimensional space with fewer free parameters, and therefore to be optimized more efficiently. Using this approach, the researchers successfully obtained the first stable singularity in the Euler equation. This is an analogue of the Navier-Stokes equation that does not include viscosity. A stable singularity will still occur if the initial conditions of the fluid are changed slightly – although the time taken for it to form may be altered. An unstable singularity, however, may never occur if the initial conditions are perturbed even infinitesimally. Some researchers have hypothesized that any singularities in the Navier-Stokes equation must be unstable, but finding unstable singularities in a computer model is extraordinarily difficult. “Before our result there hadn’t been an unstable singularity for an incompressible fluid equation found numerically,” says geophysicist Ching-Yao Lai of California’s Stanford University. In the new work the authors of the original paper and others teamed up with researchers at Google Deepmind to search for unstable singularities in a bounded 3D version of the Euler equation using a physics-informed neural network. “Unlike conventional neural networks that learn from vast datasets, we trained our models to match equations that model the laws of physics,” writes Yongji Wang of New York University and Stanford on Deepmind’s blog.
“The network’s output is constantly checked against what the physical equations expect, and it learns by minimizing its ‘residual’, the amount by which its solution fails to satisfy the equations.” After an exhaustive search at a precision that is orders of magnitude higher than a normal deep learning protocol, the researchers discovered new families of singularities in the 3D Euler equation. They also found singularities in the related incompressible porous media equation used to model fluid flows in soil or rock; and in the Boussinesq equation that models atmospheric flows. The researchers also gleaned insights into the strength of the singularities. This could be important as stronger singularities might be less readily smoothed out by viscosity when moving from the Euler equation to the Navier-Stokes equation. The researchers are now seeking to model more open systems to study the problem in a more realistic space. Hassanzadeh, who was not involved in the work, believes that it is significant – although the results are not unexpected. “If the Euler equation tells you that ‘Hey, there is a singularity,’ it just tells you that there is physics that is missing and that physics becomes very important around that singularity,” he explains. “In the case of Euler we know that you get the singularity because, at the very smallest scales, the effects of viscosity become important…Finding a singularity in the Euler equation is a big achievement, but it doesn’t answer the big question of whether Navier-Stokes is a representation of the real world, because for us Navier-Stokes represents everything.” He says the extension to studying the full Navier-Stokes equation will be challenging but that “they are working with the best AI people in the world at Deepmind,” and concludes “I’m sure it’s something they’re thinking about”. The work is available on the arXiv pre-print server. The post Neural networks discover unstable singularities in fluid systems appeared first on Physics World.
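To make the residual-minimization idea concrete, here is a toy physics-informed neural network in PyTorch that learns u(t) satisfying du/dt = −u with u(0) = 1, whose exact solution is e^(−t). It only illustrates the training principle quoted above; the networks used to hunt Euler-equation singularities are far larger and trained to much higher precision.

```python
import torch

# Toy PINN: learn u(t) obeying the "physics" du/dt = -u with u(0) = 1.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    t = torch.rand(128, 1, requires_grad=True)               # collocation points in [0, 1]
    u = net(t)
    du_dt, = torch.autograd.grad(u.sum(), t, create_graph=True)
    residual = du_dt + u                                     # vanishes if the physics holds
    u0_err = (net(torch.zeros(1, 1)) - 1.0) ** 2             # initial condition u(0) = 1
    loss = (residual ** 2).mean() + u0_err.mean()            # no training data needed
    opt.zero_grad(); loss.backward(); opt.step()

print(net(torch.tensor([[1.0]])).item())                     # approaches exp(-1) ~ 0.368
```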
https://physicsworld.com/a/neural-networks-discover-unstable-singularities-in-fluid-systems/
Space & Physics
svg
77c41523ea2f8d3eea8e23a914a8f5c935d5f658fc4925279199475901dea5e2
2025-11-13T15:48:44+00:00
NASA’s Goddard Space Flight Center hit by significant downsizing
NASA’s Goddard Space Flight Center (GSFC) looks set to lose a big proportion of its budget as a two-decade reorganization plan for the centre is being accelerated. The move, which is set to be complete by March, has left the Goddard campus with empty buildings and disillusioned employees. Some staff even fear that the actions during the 43-day US government shutdown, which ended on 12 November, could see the end of much of the centre’s activities. Based in Greenbelt, Maryland, the GSFC has almost 10 000 scientists and engineers, about 7000 of whom are directly employed by NASA contractors. Responsible for many of NASA’s most important uncrewed missions, telescopes, and probes, the centre is currently working on the Nancy Grace Roman Space Telescope, which is scheduled to launch in 2027, as well as the Dragonfly mission that is due to head for Saturn’s largest moon Titan in 2028. The ability to meet those schedules has now been put in doubt by the Trump administration’s proposed budget for financial year 2026, which started in October. It calls for NASA to receive almost $19bn – far less than the $25bn it has received for the past two years. If passed, Goddard would lose more than 42% of its staff. Congress, which passes the final budget, is not planning to cut NASA so deeply as it prepares its 2026 budget proposal. But on 24 September, Goddard managers began what they told employees was “a series of moves…that will reduce our footprint into fewer buildings”. The shift is intended to “bring down overall operating costs while maintaining the critical facilities we need for our core capabilities of the future”. While this is part of a 20-year “master plan” for the GSFC that NASA’s leadership approved in 2019, the management’s memo stated that “all planned moves will take place over the next several months and be completed by March 2026”. A report in September by Democratic members of the Senate Committee on Commerce, Science, and Transportation, which is responsible for NASA, asserts that the cuts are “in clear violation of the [US] constitution [without] regard for the impacts on NASA’s science missions and workforce”. On 3 November, the Goddard Engineers, Scientists and Technicians Association, a union representing NASA workers, reported that the GSFC had already closed over a third of its buildings, including some 100 labs. This had been done, it says, “with extreme haste and with no transparent strategy or benefit to NASA or the nation”. The union adds that the “closures are being justified as cost-saving but no details are being provided and any short-term savings are unlikely to offset a full account of moving costs and the reduced ability to complete NASA missions”. Zoe Lofgren, the lead Democrat on the House of Representatives Science Committee, has demanded of Sean Duffy, NASA’s acting administrator, that the agency “must now halt” any laboratory, facility and building closure and relocation activities at Goddard. In a letter to Duffy dated 10 November, she also calls for the “relocation, disposal, excessing, or repurposing of any specialized equipment or mission-related activities, hardware and systems” to also end immediately. Lofgren now wants NASA to carry out a “full accounting of the damage inflicted on Goddard thus far” by 18 November. Owing to the government shutdown, no GSFC or NASA official was available to respond to Physics World’s requests for comment.
Meanwhile, the Trump administration has renominated billionaire entrepreneur Jared Isaacman as NASA’s administrator. Trump had originally nominated Isaacman, who had flown on a private SpaceX mission and carried out a spacewalk, on the recommendation of SpaceX founder Elon Musk. But the administration withdrew the nomination in May following concerns among some Republicans that Isaacman had funded the Democratic Party. The post NASA’s Goddard Space Flight Center hit by significant downsizing appeared first on Physics World.
https://physicsworld.com/a/nasas-goddard-space-flight-center-hit-by-significant-downsizing/
Space & Physics
svg
aea11afe18d214f9570bbdc003283bc4a6a71afd5a5e127d65228e8482a9c602
2025-11-13T14:53:34+00:00
Designing better semiconductor chips: NP hard problems and forever chemicals
Like any major endeavour, designing and fabricating semiconductor chips requires compromise. As well as trade-offs between cost and performance, designers also consider carbon emissions and other environmental impacts. In this episode of the Physics World Weekly podcast, Margaret Harris reports from the Heidelberg Laureate Forum where she spoke to two researchers who are focused on some of these design challenges. Up first is Mariam Elgamal, who’s doing a PhD at Harvard University on the development of environmentally sustainable computing systems. She explains why sustainability goes well beyond energy efficiency and must consider the manufacturing process and the chemicals used therein. Harris also chats with Andrew Gunter, who is doing a PhD at the University of British Columbia on circuit design for computer chips. He talks about the maths-related problems that must be solved in order to translate a desired functionality into a chip that can be fabricated.   The post Designing better semiconductor chips: NP hard problems and forever chemicals appeared first on Physics World.
https://physicsworld.com/a/designing-better-semiconductor-chips-np-hard-problems-and-forever-chemicals/
Space & Physics
svg
2489bd47ae80e9568f3bf4eba1f8c24430b0e5d2ad3d3dd6c9d3ee08102dbaa4
2025-11-13T12:00:35+00:00
High-resolution PET scanner visualizes mouse brain structures with unprecedented detail
Positron emission tomography (PET) is used extensively within preclinical research, enabling molecular imaging of rodent brains, for example, to investigate neurodegenerative disease. Such imaging studies require the highest possible spatial resolution to resolve the tiny structures in the animal’s brain. A research team at the National Institutes for Quantum Science and Technology (QST) in Japan has now developed the first PET scanner to achieve sub-0.5 mm spatial resolution. Submillimetre-resolution PET has been demonstrated by several research groups. Indeed, the QST team previously built a PET scanner with 0.55 mm resolution – sufficient to visualize the thalamus and hypothalamus in the mouse brain. But identification of smaller structures such as the amygdala and cerebellar nuclei has remained a challenge. “Sub-0.5 mm resolution is important to visualize mouse brain structures with high quantification accuracy,” explains first author Han Gyu Kang. “Moreover, this research work will change our perspective about the fundamental limit of PET resolution, which had been regarded to be around 0.5 mm due to the positron range of [the radioisotope] fluorine-18”. With Monte Carlo simulations revealing that sub-0.5 mm resolution could be achievable with optimal detector parameters and system geometry, Kang and colleagues performed a series of modifications to their submillimetre-resolution PET (SR-PET) to create the new high-resolution PET (HR-PET) scanner. The HR-PET, described in IEEE Transactions on Medical Imaging, is based around two 48 mm-diameter detector rings with an axial coverage of 23.4 mm. Each ring contains 16 depth-of-interaction (DOI) detectors (essential to minimize parallax error in a small ring diameter) made from three layers of LYSO crystal arrays stacked in a staggered configuration, with the outer layer coupled to a silicon photomultiplier (SiPM) array. Compared with their previous design, the researchers reduced the detector ring diameter from 52.5 to 48 mm, which served to improve geometrical efficiency and minimize the noncollinearity effect. They also reduced the crystal pitch from 1.0 to 0.8 mm and the SiPM pitch from 3.2 to 2.4 mm, improving the spatial resolution and crystal decoding accuracy, respectively. Other changes included optimizing the crystal thicknesses to 3, 3 and 5 mm for the first, second and third arrays, as well as use of a narrow energy window (440–560 keV) to reduce the scatter fraction and inter-crystal scattering events. “The optimized staggered three-layer crystal array design is also a key factor to enhance the spatial resolution by improving the spatial sampling accuracy and DOI resolution compared with the previous SR-PET,” Kang points out. Performance tests showed that the HR-PET scanner had a system-level energy resolution of 18.6% and a coincidence timing resolution of 8.5 ns. Imaging a NEMA 22Na point source revealed a peak sensitivity at the axial centre of 0.65% for the 440–560 keV energy window and a radial resolution of 0.67±0.06 mm from the centre to 10 mm radial offset (using 2D filtered-back-projection reconstruction) – a 33% improvement over that achieved by the SR-PET. To further evaluate the performance of the HR-PET, the researchers imaged a rod-based resolution phantom. Images reconstructed using a 3D ordered-subset-expectation-maximization (OSEM) algorithm clearly resolved all of the rods. 
This included the smallest rods with diameters of 0.5 and 0.45 mm, with average valley-to-peak ratios of 0.533 and 0.655, respectively – a 40% improvement over the SR-PET. The researchers then used the HR-PET for in vivo mouse brain imaging. They injected 18F-FITM, a tracer used to image the central nervous system, into an awake mouse and performed a 30 min PET scan (with the animal anesthetized) 42 min after injection. For comparison, they scanned the same mouse for 30 min with a preclinical Inveon PET scanner. After OSEM reconstruction, strong tracer uptake in the thalamus, hypothalamus, cerebellar cortex and cerebellar nuclei was clearly visible in the coronal HR-PET images. A zoomed image distinguished the cerebellar nuclei and flocculus, while sagittal and axial images visualized the cortex and striatum. Images from the Inveon, however, could barely resolve these brain structures. The team also imaged the animal’s glucose metabolism using the tracer 18F-FDG. A 30 min HR-PET scan clearly delineated glucose transporter expression in the cortex, thalamus, hypothalamus and cerebellar nuclei. Here again, the Inveon could hardly identify these small structures. The researchers note that the 18F-FITM and 18F-FDG PET images matched well with the anatomy seen in a preclinical CT scan. “To the best of our knowledge, this is the first separate identification of the hypothalamus, amygdala and cerebellar nuclei of mouse brain,” they write. Future plans for the HR-PET scanner, says Kang, include using it for research on neurodegenerative disorders, with tracers that bind to amyloid beta or tau protein. “In addition, we plan to extend the axial coverage over 50 mm to explore the whole body of mice with sub-0.5 mm resolution, especially for oncological research,” he says. “Finally, we would like to achieve sub-0.3 mm PET resolution with more optimized PET detector and system designs.” The post High-resolution PET scanner visualizes mouse brain structures with unprecedented detail appeared first on Physics World.
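The valley-to-peak ratios quoted above are a standard way to quantify whether neighbouring rods are resolved. The sketch below shows one common way to compute the ratio from a 1D line profile drawn across two adjacent rods in the reconstructed image; it is an illustration assuming numpy, not the QST team's analysis code, and the profile values are invented.

```python
import numpy as np

def valley_to_peak(profile):
    """Valley-to-peak ratio of a line profile across two adjacent hot rods:
    the interior minimum divided by the mean of the peaks either side.
    Smaller values indicate better-resolved rods."""
    p = np.asarray(profile, dtype=float)
    i_valley = np.argmin(p[1:-1]) + 1                       # minimum away from the edges
    left_peak, right_peak = p[:i_valley].max(), p[i_valley:].max()
    return p[i_valley] / (0.5 * (left_peak + right_peak))

profile = [0.2, 0.9, 1.0, 0.5, 0.95, 1.0, 0.3]              # synthetic two-rod profile
print(f"valley-to-peak = {valley_to_peak(profile):.2f}")    # 0.50 here
```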
https://physicsworld.com/a/high-resolution-pet-scanner-visualizes-mouse-brain-structures-with-unprecedented-detail/
Space & Physics
svg
1c23525a74162028af5307d87172923e13c1c8661049232fe5213e72226ab168
2025-11-13T09:00:30+00:00
New experiments on static electricity cast doubt on previous studies in the field
Static electricity is an everyday phenomenon, but it remains poorly understood. Researchers at the Institute of Science and Technology Austria (ISTA) have now shed new light on it by capturing an “image” of charge distributions as charge transfers from one surface to another. Their conclusions challenge longstanding interpretations of previous experiments and enhance our understanding of how charge behaves on insulating surfaces. Static electricity is also known as contact electrification because it occurs when charge is transferred from one object to another by touch. The most common laboratory example involves rubbing a balloon on someone’s head to make their hair stand on end. However, static electricity is also associated with many other activities, including coffee grinding, pollen transport and perhaps even the formation of rocky planets. One of the most useful ways of studying contact electrification is to move a metal tip slowly over the surface of a sample without touching it, recording a voltage all the while. These so-called scanning Kelvin methods produce an “image” of voltages created by the transferred charge. At the macroscale, around 100 μm to 10 cm, the main method is termed scanning Kelvin probe microscopy (SKPM). At the nanoscale, around 10 nm to 100 μm, a related but distinct variant known as Kelvin probe force microscopy (KPFM) is used instead. In previous fundamental physics studies using these techniques, the main challenges have been to make sense of the stationary patterns of charge left behind after contact electrification, and to investigate how these patterns evolve over space and time. In the latest work, the ISTA team chose to ask a slightly different question: when are the dynamics of charge transfer too fast for measured stationary patterns to yield meaningful information? To find out, ISTA PhD student Felix Pertl built a special setup that could measure a sample’s surface charge with KPFM; transfer it below a linear actuator so that it could exchange charge when it contacted another material; and then transfer it underneath the KPFM again to image the resulting change in the surface charge. “In a typical set-up, the sample transfer, moving the AFM to the right place and reinitiation and recalibration of the KPFM parameters can easily take as long as tens of minutes,” Pertl explains. “In our system, this happens in as little as around 30 s. As all aspects of the system are completely automated, we can repeat this process, and quickly, many times.” This speed-up is important because static electricity dissipates relatively rapidly. In fact, the researchers found that the transferred charge disappeared from the sample’s surface quicker than the time required for most KPFM scans. Their data also revealed that the deposited charge was, in effect, uniformly distributed across the surface and that its dissipation depended on the material’s electrical conductivity. Additional mathematical modelling and subsequent experiments confirmed that the more insulating a material is, the slower it dissipates charge. Pertl says that these results call into question the validity of some previous static electricity studies that used KPFM to study charge transfer. “The most influential paper in our field to date reported surface charge heterogeneity using KPFM,” he tells Physics World. At first, the ISTA team’s goal was to understand the origin of this heterogeneity. 
But when their own experiments showed an essentially homogeneous distribution of surface charge, the researchers had to change tack. “The biggest challenge in our work was realizing – and then accepting – that we could not reproduce the results from this previous study,” Pertl says. “Convincing both my principal investigator and myself that our data revealed a very different physical mechanism required patience, persistence and trust in our experimental approach.” The discrepancy, he adds, implies that the surface heterogeneity previously observed was likely not a feature of static electricity, as was claimed. Instead, he says, it was probably “an artefact of the inability to image the charge before it had left the sample surface”. Studies of contact electrification go back a long way. Philippe Molinié of France’s GeePs Laboratory, who was not involved in this work, notes that the first experiments were performed by the English scientist William Gilbert as far back as the sixteenth century. As well as coining the term “electricity” (from the Greek “elektron”, meaning amber), Gilbert was also the first to establish that magnets maintain their attraction over time, while the forces produced by contact-charged insulators slowly decrease. “Four centuries later, many mysteries remain unsolved in the contact electrification phenomenon,” Molinié observes. He adds that the surfaces of insulating materials are highly complex and usually strongly disordered, which affects their ability to transfer charge at the molecular scale. “The dynamics of the charge neutralization, as Pertl and colleagues underline, is also part of the process and is much more complex than could be described by a simple resistance-capacitor model,” Molinié says. Although the ISTA team studied these phenomena with sophisticated Kelvin probe microscopy rather than the rudimentary tools available to Gilbert, it is, Molinié says, “striking that the competition between charge transfer and charge screening that comes from the conductivity of an insulator, first observed by Gilbert, is still at the very heart of the scientific interrogations that this interesting new work addresses.” The Austrian researchers, who detail their work in Phys. Rev. Lett., say they hope their experiments will “encourage a more critical interpretation” of KPFM data in the future, with a new focus on the role of sample grounding and bulk conductivity in shaping observed charge patterns. “We hope it inspires KPFM users to reconsider how they design and analyse experiments, which could lead to more accurate insights into charge behaviour in insulators,” Pertl says. “We are now planning to deliberately engineer surface charge heterogeneity into our samples,” he reveals. “By tuning specific surface properties, we aim to control the sign and spatial distribution of charge on defined regions of these.” The post New experiments on static electricity cast doubt on previous studies in the field appeared first on Physics World.
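A feel for why slow KPFM scans can miss the charge comes from the simple resistor-capacitor picture mentioned above, in which the surface potential decays roughly exponentially with a time constant set by the material's conductivity. The sketch below fits such a decay to synthetic data; the numbers are invented for illustration, and the ISTA paper argues the real dynamics are more complex than this model.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, v0, tau):
    """Surface potential decaying as V(t) = V0 * exp(-t / tau)."""
    return v0 * np.exp(-t / tau)

t = np.linspace(0, 600, 50)                                   # seconds
v = decay(t, 5.0, 120.0) + np.random.normal(0, 0.05, t.size)  # synthetic KPFM trace
(v0, tau), _ = curve_fit(decay, t, v, p0=(1.0, 100.0))
print(f"fitted decay time: {tau:.0f} s")  # if a scan takes longer, the charge is gone
```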
https://physicsworld.com/a/new-experiments-on-static-electricity-cast-doubt-on-previous-studies-in-the-field/
Space & Physics
svg
942a1477ab8bdf5d5ce69a2a14d48298839a6a33208bf37d31825f1ac7a67db5
2025-11-12T16:38:14+00:00
SEMICON Europa 2025 presents cutting-edge technology for semiconductor R&D and production
“Global collaborations for European economic resilience” is the theme of SEMICON Europa 2025. The event is coming to Munich, Germany on 18–21 November and it will attract 25,000 semiconductor professionals who will enjoy presentations from over 200 speakers. The TechARENA portion of the event will cover a wide range of technology-related issues including new materials, future computing paradigms and the development of hi-tech skills in the European workforce. There will also be an Executive Forum, which will feature leaders in industry and government and will cover topics including silicon geopolitics and the use of artificial intelligence in semiconductor manufacturing. SEMICON Europa will be held at the Messe München, where it will feature a huge exhibition with over 500 exhibitors from around the world. The exhibition is spread out over three halls and here are some of the companies and product innovations to look out for on the show floor. As the boundaries between electronic and photonic technologies continue to blur, the semiconductor industry faces a growing challenge: how to test and align increasingly complex electro-photonic chip architectures efficiently, precisely, and at scale. At SEMICON Europa 2025, SmarAct will address this challenge head-on with its latest innovation – Fast Scan Align. This is a high-speed and high-precision alignment solution that redefines the limits of testing and packaging for integrated photonics. In the emerging era of heterogeneous integration, electronic and photonic components must be aligned and interconnected with sub-micrometre accuracy. Traditional positioning systems often struggle to deliver both speed and precision, especially when dealing with the delicate coupling between optical and electrical domains. SmarAct’s Fast Scan Align solution bridges this gap by combining modular motion platforms, real-time feedback control, and advanced metrology into one integrated system. At its core, Fast Scan Align leverages SmarAct’s electromagnetic and piezo-driven positioning stages, which are capable of nanometre-resolution motion in multiple degrees of freedom. Fast Scan Align’s modular architecture allows users to configure systems tailored to their application – from wafer-level testing to fibre-to-chip alignment with active optical coupling. Integrated sensors and intelligent algorithms enable scanning and alignment routines that drastically reduce setup time while improving repeatability and process stability. Fast Scan Align’s compact modules allow various measurement techniques to be integrated, opening up unprecedented possibilities. This has become decisive for the increasing level of integration of complex electro-photonic chips. Beyond wafer-level testing and packaging, wafer positioning with extreme precision is more crucial than ever for the highly integrated chips of the future. SmarAct’s PICOSCALE interferometer addresses this challenge by delivering picometre-level displacement measurements directly at the point of interest. When combined with SmarAct’s precision wafer stages, the PICOSCALE interferometer ensures highly accurate motion tracking and closed-loop control during dynamic alignment processes. This synergy between motion and metrology gives users unprecedented insight into the mechanical and optical behaviour of their devices – which is a critical advantage for high-yield testing of photonic and optoelectronic wafers.
Visitors to SEMICON Europa will also experience how all of SmarAct’s products – from motion and metrology components to modular systems and up to turn-key solutions – integrate seamlessly, offering intuitive operation, full automation capability, and compatibility with laboratory and production environments alike. For more information visit SmarAct at booth B1.860 or explore more of SmarAct’s solutions in the semiconductor and photonics industry. Thyracont Vacuum Instruments will be showcasing its precision vacuum metrology systems in exhibition hall C1. Made in Germany, the company’s broad portfolio combines diverse measurement technologies – including piezo, Pirani, capacitive, cold cathode, and hot cathode – to deliver reliable results across a pressure range from 2000 to 3e-11 mbar. Front-and-centre at SEMICON Europa will be Thyracont’s new series of VD800 compact vacuum meters. These instruments provide precise, on-site pressure monitoring in industrial and research environments. Featuring a direct pressure display and real-time pressure graphs, the VD800 series is ideal for service and maintenance tasks, laboratory applications, and test setups. The VD800 series combines high accuracy with a highly intuitive user interface. This delivers real-time measurement values; pressure diagrams; and minimum and maximum pressure – all at a glance. The VD800’s 4+1 membrane keypad ensures quick access to all functions. USB-C and optional Bluetooth LE connectivity deliver seamless data readout and export. The VD800’s large internal data logger can store over 10 million measured values with their RTC data, with each measurement series saved as a separate file. Data sampling rates can be set from 20 ms to 60 s to achieve dynamic pressure tracking or long-term measurements. Leak rates can be measured directly by monitoring the rise in pressure in the vacuum system. Intelligent energy management gives the meters extended battery life and longer operation times. Battery charging is done conveniently via USB-C. The vacuum meters are available in several different sensor configurations, making them adaptable to a wide range of different uses. Model VD810 integrates a piezo ceramic sensor for making gas-type-independent measurements for rough vacuum applications. This sensor is insensitive to contamination, making it suitable for rough industrial environments. The VD810 measures absolute pressure from 2000 to 1 mbar and relative pressure from −1060 to +1200 mbar. Model VD850 integrates a piezo/Pirani combination sensor, which delivers high resolution and accuracy in the rough and fine vacuum ranges. Optimized temperature compensation ensures stable measurements in the absolute pressure range from 1200 to 5e-5 mbar and in the relative pressure range from −1060 to +340 mbar. The model VD800 is a standalone meter designed for use with Thyracont’s USB-C vacuum transducers, which are available in two models. The VSRUSB USB-C transducer is a piezo/Pirani combination sensor that measures absolute pressure in the 2000 to 5.0e-5 mbar range. The other is the VSCUSB USB-C transducer, which measures absolute pressures from 2000 down to 1 mbar and has a relative pressure range from -1060 to +1200 mbar. A USB-C cable connects the transducer to the VD800 for quick and easy data retrieval. The USB-C transducers are ideal for hard-to-reach areas of vacuum systems. The transducers can be activated while a process is running, enabling continuous monitoring and improved service diagnostics. 
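As a simple illustration of the pressure-rise leak test mentioned above: with the pumped volume valved off, the leak rate follows from Q = V·dp/dt. The sketch below fits a slope to logged readings; the chamber volume and pressure values are invented for illustration, not Thyracont specifications.

```python
import numpy as np

V = 25.0                                         # chamber volume in litres (assumed)
t = np.array([0, 60, 120, 180, 240])             # time stamps in seconds
p = np.array([1.0e-3, 1.6e-3, 2.2e-3, 2.8e-3, 3.4e-3])   # logged pressures in mbar

dpdt = np.polyfit(t, p, 1)[0]                    # slope of the pressure rise
print(f"leak rate Q = V * dp/dt = {V * dpdt:.1e} mbar*l/s")
```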
With its blend of precision, flexibility, and ease of use, the Thyracont VD800 series defines the next generation of compact vacuum meters. The devices’ intuitive interface, extensive data capabilities, and modern connectivity make them an indispensable tool for laboratories, service engineers, and industrial operators alike. To experience the future of vacuum metrology in Munich, visit Thyracont at SEMICON Europa hall C1, booth 752. There you will discover how the VD800 series can optimize your pressure monitoring workflows. The post SEMICON Europa 2025 presents cutting-edge technology for semiconductor R&D and production appeared first on Physics World.
https://physicsworld.com/a/semicon-europa-2025-presents-cutting-edge-technology-for-semiconductor-rd-and-production/
Space & Physics
svg
6c83ac30a54a8290a43c7b4320e410d0c4b16a089401fbba339b758334d47fcf
2025-11-12T15:00:51+00:00
Physicists discuss the future of machine learning and artificial intelligence
IOP Publishing’s Machine Learning series is the world’s first open-access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences. Part of the series is Machine Learning: Science and Technology, launched in 2019, which bridges applications and advances in machine learning across the sciences. Machine Learning: Earth is dedicated to the application of ML and AI across all areas of Earth, environmental and climate sciences, while Machine Learning: Health covers healthcare, medical, biological, clinical and health sciences and Machine Learning: Engineering focuses on applying AI and non-traditional machine learning to the most complex engineering challenges. Here, the editors-in-chief (EiC) of the four journals discuss the growing importance of machine learning and their plans for the future. Kyle Cranmer is a particle physicist and data scientist at the University of Wisconsin-Madison and is EiC of Machine Learning: Science and Technology (MLST). Pierre Gentine is a geophysicist at Columbia University and is EiC of Machine Learning: Earth. Jimeng Sun is a biophysicist at the University of Illinois at Urbana-Champaign and is EiC of Machine Learning: Health. Mechanical engineer Jay Lee is from the University of Maryland and is EiC of Machine Learning: Engineering. Kyle Cranmer (KC): It is due to a convergence of multiple factors. The initial success of deep learning was driven largely by benchmark datasets, advances in computing with graphics processing units, and some clever algorithmic tricks. Since then, we’ve seen a huge investment in powerful, easy-to-use tools that have dramatically lowered the barrier to entry and driven extraordinary progress. Pierre Gentine (PG): Machine learning has been transforming many fields of physics, as it can accelerate physics simulations, better handle diverse sources of data (multimodality) and help us make better predictions. Jimeng Sun (JS): Over the past decade, we have seen machine learning models consistently reach — and in some cases surpass — human-level performance on real-world tasks. This is not just in benchmark datasets, but in areas that directly impact operational efficiency and accuracy, such as medical imaging interpretation, clinical documentation, and speech recognition. Once ML proved it could perform reliably at human levels, many domains recognized its potential to transform labour-intensive processes. Jay Lee (JL): Traditionally, ML growth is based on the development of three elements: algorithms, big data, and computing. The past decade’s growth in ML research is due to the perfect storm of abundant data, powerful computing, open tools, commercial incentives, and groundbreaking discoveries—all occurring in a highly interconnected global ecosystem. KC: The advances in generative AI and self-supervised learning are very exciting. By generative AI, I don’t mean Large Language Models — though those are exciting too — but probabilistic ML models that can be useful in a huge number of scientific applications. The advances in self-supervised learning also allow us to engage our imagination about the potential uses of ML beyond well-understood supervised learning tasks. PG: I am very interested in the use of ML for climate simulations and fluid dynamics simulations. JS: The emergence of agentic systems in healthcare — AI systems that can reason, plan, and interact with humans to accomplish complex goals. A compelling example is in clinical trial workflow optimization.
An agentic AI could help coordinate protocol development, automatically identify eligible patients, monitor recruitment progress, and even suggest adaptive changes to trial design based on interim data. This isn’t about replacing human judgment — it’s about creating intelligent collaborators that amplify expertise, improve efficiency, and ultimately accelerate the path from research to patient benefit. JL: One exciting area is generative and multimodal ML — integrating text, images, video, and more — which is transforming human–AI interaction, robotics, and autonomous systems. Equally exciting is applying ML to nontraditional domains like semiconductor fabs, smart grids, and electric vehicles, where complex engineering systems demand new kinds of intelligence. KC: The need for a venue to propagate advances in AI/ML in the sciences is clear. The large AI conferences are under stress, and their review system is designed to be a filter, not a mechanism to ensure quality, improve clarity and disseminate progress. The large AI conferences also aren’t very welcoming to user-inspired research, often casting that work as purely applied. Similarly, innovation in AI/ML often takes a back seat in physics journals, which slows the propagation of those ideas to other fields. My vision for MLST is to fill this gap and nurture the community that embraces AI/ML research inspired by the physical sciences. PG: I hope we can demonstrate that machine learning is more than a nice tool and that it can play a fundamental role in physics and Earth sciences, especially when it comes to better simulating and understanding the world. JS: I see Machine Learning: Health becoming the premier venue for rigorous ML–health research — a place where technical novelty and genuine clinical impact go hand in hand. We want to publish work that not only advances algorithms but also demonstrates clear value in improving health outcomes and healthcare delivery. Equally important, we aim to champion open and reproducible science. That means encouraging authors to share code, data, and benchmarks whenever possible, and setting high standards for transparency in methods and reporting. By doing so, we can accelerate the pace of discovery, foster trust in AI systems, and ensure that our field’s breakthroughs are accessible to — and verifiable by — the global community. JL: Machine Learning: Engineering envisions becoming the global platform where ML meets engineering. By fostering collaboration, ensuring rigour and interpretability, and focusing on real-world impact, we aim to redefine how AI addresses humanity’s most complex engineering challenges. The post Physicists discuss the future of machine learning and artificial intelligence appeared first on Physics World.
https://physicsworld.com/a/physicists-discuss-the-future-of-machine-learning-and-artificial-intelligence/
Space & Physics
svg
d4abe783dc6348a00f502e8766a5fc4b515d04953fa4aa9231497468c0ec6d0d
2025-11-12T09:00:15+00:00
Playing games by the quantum rulebook expends less energy
Games played under the laws of quantum mechanics dissipate less energy than their classical equivalents. This is the finding of researchers at Singapore’s Nanyang Technological University (NTU), who worked with colleagues in the UK, Austria and the US to apply the mathematics of game theory to quantum information. The researchers also found that for more complex game strategies, the quantum-classical energy difference can increase without bound, raising the possibility of a “quantum advantage” in energy dissipation. Game theory is the field of mathematics that aims to formally understand the payoff or gains that a person or other entity (usually called an agent) will get from following a certain strategy. Concepts from game theory are often applied to studies of quantum information, especially when trying to understand whether agents who can use the laws of quantum physics can achieve a better payoff in the game. In the latest work, which is published in Physical Review Letters, Jayne Thompson, Mile Gu and colleagues approached the problem from a different direction. Rather than focusing on differences in payoffs, they asked how much energy must be dissipated to achieve identical payoffs for games played under the laws of classical versus quantum physics. In doing so, they were guided by Landauer’s principle, an important concept in thermodynamics and information theory that states that there is a minimum energy cost to erasing a piece of information. This Landauer minimum is known to hold for both classical and quantum systems. However, in practice systems will spend more than the minimum energy erasing memory to make space for new information, and this energy will be dissipated as heat. What the NTU team showed is that this extra heat dissipation can be reduced in the quantum system compared to the classical one. To understand why, consider that when a classical agent creates a strategy, it must plan for all possible future contingencies. This means it stores possibilities that never occur, wasting resources. Thompson explains this with a simple analogy. Suppose you are packing to go on a day out. Because you are not sure what the weather is going to be, you must pack items to cover all possible weather outcomes. If it’s sunny, you’d like sunglasses. If it rains, you’ll need your umbrella. But if you only end up using one of these items, you’ll have wasted space in your bag. “It turns out that the same principle applies to information,” explains Thompson. “Depending on future outcomes, some stored information may turn out to be unnecessary – yet an agent must still maintain it to stay ready for any contingency.” For a classical system, this can be a very wasteful process. Quantum systems, however, can use superposition to store past information more efficiently. When systems in a quantum superposition are measured, they probabilistically reveal an outcome associated with only one of the states in the superposition. Hence, while superposition can be used to store both pasts, upon measurement all excess information is automatically erased “almost as if they had never stored this information at all,” Thompson explains. The upshot is that because information erasure has close ties to energy dissipation, this gives quantum systems an energetic advantage. “This is a fantastic result focusing on the physical aspect that many other approaches neglect,” says Vlatko Vedral, a physicist at the University of Oxford, UK who was not involved in the research. 
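For scale, Landauer's bound is easy to evaluate: erasing one bit at temperature T costs at least k_B·T·ln 2. The back-of-envelope sketch below gives the room-temperature floor against which any excess dissipation, classical or quantum, is measured.

```python
import numpy as np

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # roughly room temperature, K

E_min = k_B * T * np.log(2)         # Landauer minimum per erased bit
print(f"{E_min:.2e} J per bit")     # ~2.87e-21 J; real systems dissipate more
```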
Gu and Thompson say their result could have implications for the large language models (LLMs) behind popular AI tools such as ChatGPT, as it suggests there might be theoretical advantages, from an energy consumption point of view, in using quantum computers to run them. Another, more foundational question they hope to understand regarding LLMs is the inherent asymmetry in their behaviour. “It is likely a lot more difficult for an LLM to write a book from back cover to front cover, as opposed to in the more conventional temporal order,” Thompson notes. When considered from an information-theoretic point of view, the two tasks are equivalent, making this asymmetry somewhat surprising. In Thompson and Gu’s view, taking waste into consideration could shed light on this asymmetry. “It is likely we have to waste more information to go in one direction over the other,” Thompson says, “and we have some tools here which could be used to analyse this”. For Vedral, the result also has philosophical implications. If quantum agents are more optimal, he says, it “surely is telling us that the most coherent picture of the universe is the one where the agents are also quantum and not just the underlying processes that they observe”. The post Playing games by the quantum rulebook expends less energy appeared first on Physics World.
https://physicsworld.com/a/playing-games-by-the-quantum-rulebook-expends-less-energy/
Space & Physics
svg
3aeb194eea1e40f74cc2389d969e200797e74895f26a436b736200f53c8098c2
2025-11-12T08:05:08+00:00
Teaching machines to understand complexity
Complex systems model real-world behaviour that is dynamic and often unpredictable. They are challenging to simulate because of nonlinearity, where small changes in conditions can lead to disproportionately large effects; many interacting variables, which make computational modelling cumbersome; and randomness, where outcomes are probabilistic. Machine learning is a powerful tool for understanding complex systems. It can be used to find hidden relationships in high-dimensional data and predict the future state of a system based on previous data. This research develops a novel machine learning approach for complex systems that allows the user to extract a few collective descriptors of the system, referred to as inherent structural variables. The researchers used an autoencoder (a type of machine learning tool) to examine snapshots of how atoms are arranged in a system at any moment (called instantaneous atomic configurations). Each snapshot is then matched to a more stable version of that structure (an inherent structure), which represents the system’s underlying shape or pattern after thermal noise is removed. These inherent structural variables enable the analysis of structural transitions both in and out of equilibrium and the computation of high-resolution free-energy landscapes. These are detailed maps that show how a system’s energy changes as its structure or configuration changes, helping researchers understand stability, transitions, and dynamics in complex systems. The model is versatile, and the authors demonstrate how it can be applied to metal nanoclusters and protein structures. In the case of Au147 nanoclusters (well-organised structures made up of 147 gold atoms), the inherent structural variables reveal three main types of stable structures that the gold nanocluster can adopt: fcc (face-centred cubic), Dh (decahedral), and Ih (icosahedral). These structures represent different stable states that a nanocluster can switch between, and on the high-resolution free-energy landscape, they appear as valleys. Moving from one valley to another isn’t easy: there are narrow paths or barriers between them, known as kinetic bottlenecks. The researchers validated their machine learning model using Markov state models, which are mathematical tools that help analyse how a system moves between different states over time, and electron microscopy, which images atomic structures and can confirm that the predicted structures exist in the gold nanoclusters. The approach also captures non-equilibrium melting and freezing processes, offering insights into polymorph selection and metastable states. Scalability is demonstrated up to Au309 clusters. The generality of the method is further demonstrated by applying it to the bradykinin peptide, a completely different type of system, identifying distinct structural motifs and transitions. Applying the method to a biological molecule provides further evidence that the machine learning approach is a flexible, powerful technique for studying many kinds of complex systems. This work contributes to machine learning strategies, as well as experimental and theoretical studies of complex systems, with potential applications across liquids, glasses, colloids, and biomolecules. Inherent structural descriptors via machine learning Emanuele Telari et al 2025 Rep. Prog. Phys. 88 068002 Do you want to learn more about this topic?
Complex systems in the spotlight: next steps after the 2021 Nobel Prize in Physics by Ginestra Bianconi et al (2023) The post Teaching machines to understand complexity appeared first on Physics World.
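To give a flavour of the approach, the sketch below trains a bare-bones autoencoder in PyTorch that compresses a high-dimensional structural descriptor into two latent variables, in the spirit of the inherent structural variables described above. The dimensions and random training data are placeholders, not the paper's actual descriptors or architecture.

```python
import torch

n_in, n_latent = 441, 2     # e.g. 147 atoms x 3 coordinates -> 2 collective variables

encoder = torch.nn.Sequential(
    torch.nn.Linear(n_in, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, n_latent),
)
decoder = torch.nn.Sequential(
    torch.nn.Linear(n_latent, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, n_in),
)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

x = torch.randn(1024, n_in)                      # stand-in for quenched inherent structures
for epoch in range(200):
    z = encoder(x)                               # low-dimensional structural variables
    loss = ((decoder(z) - x) ** 2).mean()        # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()

# Histogramming z along a trajectory yields a free-energy landscape via F = -kT ln P(z).
```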
https://physicsworld.com/a/teaching-machines-to-understand-complexity/
Space & Physics
svg
53571cdf596dfa9671b5fb5d56a816e213423285bfee6553dfa61a673df9648c
2025-11-12T08:03:41+00:00
Using AI to find new particles at the LHC
The Standard Model of particle physics is a very well-tested theory that describes the fundamental particles and their interactions. However, it does have several key limitations. For example, it doesn’t account for dark matter or why neutrinos have masses. One of the main aims of experimental particle physics at the moment is therefore to search for signs of new physical phenomena beyond the Standard Model. Finding something new like this would point us towards a better theoretical model of particle physics: one that can explain things that the Standard Model isn’t able to. These searches often involve looking for rare or unexpected signals in high-energy particle collisions such as those at CERN’s Large Hadron Collider (LHC). In a new paper published by the CMS collaboration, a new analysis method was used to search for new particles produced by proton-proton collisions at the LHC. These particles would decay into two jets, but with unusual internal structure not typical of known particles like quarks or gluons. The researchers used advanced machine learning techniques to identify jets with different substructures, applying various anomaly detection methods to maximise sensitivity to unknown signals. Unlike traditional strategies, anomaly detection methods allow the AI models to identify anomalous patterns in the data without being provided specific simulated examples, giving them increased sensitivity to a wider range of potential new particles. This time, they didn’t find any significant deviations from expected background values. Although no new particles were found, the results enabled the team to put several new theoretical models to the test for the first time. They were also able to set upper bounds on the production rates of several hypothetical particles. Most importantly, the study demonstrates that machine learning can significantly enhance the sensitivity of searches for new physics, offering a powerful tool for future discoveries at the LHC. Model-agnostic search for dijet resonances with anomalous jet substructure in proton–proton collisions at √s = 13 TeV The CMS Collaboration 2025 Rep. Prog. Phys. 88 067802 The post Using AI to find new particles at the LHC appeared first on Physics World.
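As a generic illustration of anomaly detection without simulated signal examples, the sketch below trains an isolation forest on background-like jet-substructure features and flags the most anomalous jets. This is a stand-in method with synthetic data, chosen for brevity; the CMS analysis itself uses dedicated neural-network approaches.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(50_000, 4))   # QCD-like jets (synthetic features)
signal = rng.normal(3.0, 0.5, size=(50, 4))           # anomalous jets (synthetic)
data = np.vstack([background, signal])

model = IsolationForest(random_state=0).fit(background)   # learns only "typical" jets
scores = model.score_samples(data)                        # lower score = more anomalous
cut = np.quantile(scores, 0.001)                          # keep the strangest 0.1%
print(f"{(scores < cut).sum()} jets selected for the resonance search")
```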
https://physicsworld.com/a/using-ai-to-find-new-particles-at-the-lhc/
Space & Physics
svg
1e3713fc33dd47fac8b6435112bb136042fed90b55d68178aea919f4c34dd661
2025-11-11T13:00:28+00:00
Researchers pin down the true cost of precision in quantum clocks
Classical clocks have to obey the second law of thermodynamics: the higher their precision, the more entropy they produce. For a while, it seemed like quantum clocks might beat this trade-off, at least in theory. This is because although quantum fluctuations produce no entropy, if you can count those fluctuations as clock “ticks”, you can make a clock with nonzero precision. Now, however, a collaboration of researchers across Europe has pinned down where the entropy-precision trade-off balances out: it’s in the measurement process. As project leader Natalia Ares observes, “There’s no such thing as a free lunch.” The clock the team used to demonstrate this principle consists of a pair of quantum dots coupled by a thin tunnelling barrier. In this double quantum dot system, a “tick” occurs whenever an electron tunnels from one side of the system to the other, through both dots. Applying a bias voltage gives ticks a preferred direction. This might not seem like the most obvious kind of clock. Indeed, as an actual timekeeping device, collaboration member Florian Meier describes it as “quite bad”. However, Ares points out that although the tunnelling process is random (stochastic), the period between ticks does have a mean and a standard deviation. Hence, given enough ticks, the number of ticks recorded will tell you something about how much time has passed. In any case, Meier adds, they were not setting out to build the most accurate clock. Instead, they wanted to build a playground to explore basic principles of energy dissipation and clock precision, and for that, it works really well. “The really cool thing I like about what they did was that with that particular setup, you can really pinpoint the entropy dissipation of the measurement somehow in this quantum dot,” says Meier, a PhD student at the Technical University of Vienna, Austria. “I think that’s really unique in the field.” To measure the entropy of each quantum tick, the researchers measured the voltage drop (and associated heat dissipation) for each electron tunnelling through the double quantum dot. Vivek Wadhia, a DPhil student in Ares’s lab at the University of Oxford, UK, who performed many of the measurements, points out that this single unit of charge does not equate to very much entropy. However, measuring the entropy of the tunnelling electron was not the full story. Because the ticks of the quantum clock were, in effect, continuously monitored by the environment, the coherence time for each quantum tunnelling event was very short. However, because the time on this clock could not be observed directly by humans – unlike, say, the hands of a mechanical clock – the researchers needed another way to measure and record each tick. For this, they turned to the electronics they were using in the lab and compared the power in versus the power out on a macroscopic scale. “That’s the cost of our measurement, right?” says Wadhia, adding that this cost includes both the measuring and recording of each tick. He stresses that they were not trying to find the most thermodynamically efficient measurement technique: “The idea was to understand how the readout compares to the clockwork.” This classical entropy associated with measuring and recording each tick turns out to be nine orders of magnitude larger than the quantum entropy of a tick – more than enough for the system to operate as a clock with some level of precision.
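The statistics behind that trade-off are easy to sketch. In the toy simulation below, ticks arrive after exponentially distributed waiting times, as for biased tunnelling events, and simply counting them estimates the elapsed time with a fractional error that shrinks as the number of ticks grows. The tick rate is an invented number, not the experiment’s.

```python
import numpy as np

# Toy stochastic clock: ticks arrive after random (exponential) waiting times.
# The mean interval is an assumed, illustrative figure.
rng = np.random.default_rng(2)
mean_interval = 1.0e-3                    # assumed mean time between ticks (s)

for n_ticks in (10, 1000, 100000):
    intervals = rng.exponential(mean_interval, size=n_ticks)
    elapsed = intervals.sum()             # the true elapsed time
    estimate = n_ticks * mean_interval    # "reading" the clock: count the ticks
    rel_err = abs(estimate - elapsed) / elapsed
    print(f"{n_ticks:>6} ticks: fractional timing error ~ {rel_err:.2%}")

# For such a process the fractional uncertainty falls as 1/sqrt(N): more
# precision demands more ticks, and each tick carries a thermodynamic price.
```

Each recorded tick, in other words, buys precision at exactly the kind of entropy cost the experiment set out to quantify.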
“The interesting thing is that such simple systems sometimes reveal how you can maybe improve precision at a very low cost thermodynamically,” Meier says. As a next step, Ares plans to explore different arrangements of quantum dots, using Meier’s previous theoretical work to improve the clock’s precision. “We know that, for example, clocks in nature are not that energy intensive,” Ares tells Physics World. “So clearly, for biology, it is possible to run a lot of processes with stochastic clocks.” The research is reported in Physical Review Letters. The post Researchers pin down the true cost of precision in quantum clocks appeared first on Physics World.
https://physicsworld.com/a/researchers-pin-down-the-true-cost-of-precision-in-quantum-clocks/
Space & Physics
svg
4b0ee959a004b96c864c25777665a98e929a70a430d9269f50f9f171aba1476f
2025-11-11T10:00:50+00:00
The forgotten pioneers of computational physics
When you look back at the early days of computing, some familiar names pop up, including John von Neumann, Nicholas Metropolis and Richard Feynman. But they were not lonely pioneers – they were part of a much larger group, using mechanical and then electronic computers to do calculations that had never been possible before. These people, many of whom were women, were the first scientific programmers and computational scientists. Skilled in the complicated operation of early computing devices, they often had degrees in maths or science, and were an integral part of research efforts. And yet, their fundamental contributions are mostly forgotten. This was in part because of their gender – it was an age when sexism was rife, and it was standard for women to be fired from their job after getting married. However, there is another important factor that is often overlooked, even in today’s scientific community – people in technical roles are often underappreciated and underacknowledged, even though they are the ones who make research possible. Originally, a “computer” was a human being who did calculations by hand or with the help of a mechanical calculator. It is thought that the world’s first computational lab was set up in 1937 at Columbia University. But it wasn’t until the Second World War that the demand for computation really exploded, with the need for artillery calculations, new technologies and code breaking. In the US, the development of the atomic bomb during the Manhattan Project (established in 1943) required huge computational efforts, so it wasn’t long before the New Mexico site had a hand-computing group. Called the T-5 group of the Theoretical Division, it initially consisted of about 20 people. Most were women, including the spouses of other scientific staff. Among them were Mary Frankel, a mathematician married to physicist Stan Frankel; mathematician Augusta “Mici” Teller, who was married to Edward Teller, the “father of the hydrogen bomb”; and Jean Bacher, the wife of physicist Robert Bacher. As the war continued, the T-5 group expanded to include civilian recruits from the nearby towns and members of the Women’s Army Corps. Its staff worked around the clock, using printed mathematical tables and desk calculators in four-hour shifts – but that was not enough to keep up with the computational needs for bomb development. In the early spring of 1944, IBM punch-card machines were brought in to supplement the human power. They became so effective that the machines were soon being used for all large calculations, 24 hours a day, in three shifts. The computational group continued to grow, and among the new recruits were Naomi Livesay and Eleonor Ewing. Livesay held an advanced degree in mathematics and had done a course in operating and programming IBM electric calculating machines, making her an ideal candidate for the T-5 division. She in turn recruited Ewing, a fellow mathematician who was a former colleague. The two young women supervised the running of the IBM machines around the clock. The frantic pace of the T-5 group continued until the end of the war in September 1945. The development of the atomic bomb required an immense computational effort, which was made possible through hand and punch-card calculations. Shortly after the war ended, the first fully electronic, general-purpose computer – the Electronic Numerical Integrator and Computer (ENIAC) – became operational at the University of Pennsylvania, following two years of development.
The project had been led by physicist John Mauchly and electrical engineer J Presper Eckert. The machine was operated and coded by six women – mathematicians Betty Jean Jennings (later Bartik); Kathleen, or Kay, McNulty (later Mauchly, then Antonelli); Frances Bilas (Spence); Marlyn Wescoff (Meltzer) and Ruth Lichterman (Teitelbaum); as well as Betty Snyder (Holberton), who had studied journalism. Polymath John von Neumann also got involved when looking for more computing power for projects at Los Alamos, which was reorganized as the Los Alamos Scientific Laboratory in 1947. In fact, although the ENIAC was originally designed to solve ballistic trajectory problems, the first problem to be run on it was “the Los Alamos problem” – a thermonuclear feasibility calculation for Teller’s group studying the H-bomb. As in the Manhattan Project, several husband-and-wife teams worked on the ENIAC, the most famous being von Neumann and his wife Klara Dán, and mathematicians Adele and Herman Goldstine. Dán von Neumann in particular worked closely with Nicholas Metropolis, who alongside mathematician Stanislaw Ulam had coined the term Monte Carlo to describe numerical methods based on random sampling. Indeed, between 1948 and 1949 Dán von Neumann and Metropolis ran the first series of Monte Carlo simulations on an electronic computer. Work began on a new machine at Los Alamos in 1948 – the Mathematical Analyzer Numerical Integrator and Automatic Computer (MANIAC) – which ran its first large-scale hydrodynamic calculation in March 1952. Many of its users were physicists, and its operators and coders included mathematicians Mary Tsingou (later Tsingou-Menzel), Marjorie Jones (Devaney) and Elaine Felix (Alei); plus Verna Ellingson (later Gardiner) and Lois Cook (Leurgans). The Los Alamos scientists tried all sorts of problems on the MANIAC, including a chess-playing program – the first documented case of a machine defeating a human at the game. However, two of these projects stand out because they had profound implications for computational science. In 1953 the Tellers, together with Metropolis and physicists Arianna and Marshall Rosenbluth, published the seminal article “Equation of state calculations by fast computing machines” (J. Chem. Phys. 21 1087). The work introduced the ideas behind the “Metropolis (later renamed Metropolis–Hastings) algorithm”, which is a Monte Carlo method that is based on the concept of “importance sampling”. (While Metropolis was involved in the development of Monte Carlo methods, it appears that he did not contribute directly to the article, but provided access to the MANIAC nightshift.) This is the progenitor of the Markov Chain Monte Carlo methods, which are widely used today throughout science and engineering. Marshall later recalled how the research came about when he and Arianna had proposed using the MANIAC to study how solids melt (AIP Conf. Proc. 690 22). Edward Teller meanwhile had the idea of using statistical mechanics and taking ensemble averages instead of following detailed kinematics for each individual disk, and Mici helped with programming during the initial stages. However, the Rosenbluths did most of the work, with Arianna translating and programming the concepts into an algorithm. The 1953 article is remarkable, not only because it led to the Metropolis algorithm, but also as one of the earliest examples of using a digital computer to simulate a physical system. The main innovation of this work was in developing “importance sampling”.
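In modern notation the scheme fits in a dozen lines of Python. The quadratic “energy” below is an arbitrary stand-in chosen so the sketch is self-contained – the 1953 paper simulated interacting hard disks on the MANIAC – but the propose/accept rule is the algorithm’s heart.

```python
import numpy as np

# Minimal Metropolis sampler. The harmonic energy is an illustrative stand-in;
# only the accept/reject rule below is the historical algorithm itself.
rng = np.random.default_rng(3)
beta = 1.0                        # inverse temperature 1/kT
energy = lambda x: 0.5 * x**2     # toy energy landscape

x, samples = 0.0, []
for step in range(100_000):
    x_new = x + rng.uniform(-0.5, 0.5)            # propose a small random move
    dE = energy(x_new) - energy(x)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        x = x_new                                 # accept with Boltzmann probability
    samples.append(x)                             # rejected moves recount the old state

# Chain averages converge to canonical-ensemble averages:
print(f"<x^2> from the chain: {np.mean(np.square(samples[1000:])):.3f} (exact: 1.000)")
```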
Instead of sampling configurations at random, the algorithm draws them with a bias toward the physically important configurations that contribute most to the integral. In the summer of 1953, physicists Enrico Fermi and John Pasta, together with Ulam and Tsingou, also made a significant breakthrough using the MANIAC. They ran a “numerical experiment” as part of a series meant to illustrate possible uses of electronic computers in studying various physical phenomena. The team modelled a 1D chain of oscillators with a small nonlinearity to see if it would behave as hypothesized, reaching an equilibrium with the energy redistributed equally across the modes (doi.org/10.2172/4376203). However, their work showed that this was not guaranteed for small perturbations – a non-trivial and non-intuitive observation that would not have been apparent without the simulations. It is the first example of a physics discovery made not by theoretical or experimental means, but through a computational approach. It would later lead to the discovery of solitons and integrable models, the development of chaos theory, and a deeper understanding of ergodic limits. Although the paper says the work was done by all four scientists, Tsingou’s role was forgotten, and the results became known as the Fermi–Pasta–Ulam problem. It was not until 2008, when French physicist Thierry Dauxois advocated for giving her credit in a Physics Today article, that Tsingou’s contribution was properly acknowledged. Today the finding is called the Fermi–Pasta–Ulam–Tsingou problem. The year 1953 also saw IBM’s first commercial, fully electronic computer – an IBM 701 – arrive at Los Alamos. Soon the theoretical division had two of these machines, which, alongside the MANIAC, gave the scientists unprecedented computing power. Among those to take advantage of the new devices were Martha Evans (about whom very little is known) and theoretical physicist Francis Harlow, who began to tackle the largely unexplored subject of computational fluid dynamics. The idea was to use a mesh of cells through which the fluid, represented as particles, would move. This computational method made it possible to solve complex hydrodynamics problems (involving large distortions and compressions of the fluid) in 2D and 3D. Indeed, the method proved so effective that it became a standard tool in plasma physics, where it has been applied to every conceivable topic from astrophysical plasmas to fusion energy. The resulting internal Los Alamos report – The Particle-in-cell Method for Hydrodynamic Calculations, published in 1955 – showed Evans as first author and acknowledged eight people (including Evans) for the machine calculations. However, while Harlow is remembered as one of the pioneers of computational fluid dynamics, Evans was forgotten. In an age when women had very limited access to the frontlines of research, the computational war effort brought many female researchers and technical staff in. As their contributions come more into the light, it becomes clearer that their role was not a simple clerical one. There is a view that the coders’ work was “the vital link between the physicist’s concepts (about which the coders more often than not didn’t have a clue) and their translation into a set of instructions that the computer was able to perform, in a language about which, more often than not, the physicists didn’t have a clue either”, as physicists Giovanni Battimelli and Giovanni Ciccotti wrote in 2018 (Eur. Phys. J. H 43 303).
But the examples we have seen show that some of the coders had a solid grasp of the physics, and some of the physicists had a good understanding of the machine operation. Rather than a skilled–non-skilled/men–women separation, the division of labour was blurred. Indeed, it was more of an effective collaboration between physicists, mathematicians and engineers. Even in the early days of the T-5 division, before electronic computers existed, Livesay and Ewing, for example, attended maths lectures from von Neumann, and introduced him to punch-card operations. As has been documented in books including Their Day in the Sun by Ruth Howes and Caroline Herzenberg, they also took part in the weekly colloquia held by J Robert Oppenheimer and other project leaders. This shows they should not be dismissed as mere human calculators and machine operators who supposedly “didn’t have a clue” about physics. Verna Ellingson (Gardiner) is another forgotten coder who worked at Los Alamos. While little information about her can be found, she appears as the last author on a 1955 paper (Science 122 465) written with Metropolis and physicist Joseph Hoffman – “Study of tumor cell populations by Monte Carlo methods”. The next year she was first author of “On certain sequences of integers defined by sieves” with mathematical physicist Roger Lazarus, Metropolis and Ulam (Mathematics Magazine 29 117). She also worked with physicist George Gamow on attempts to discover the code for DNA selection of amino acids, which shows the breadth of projects she was involved in. Evans not only worked with Harlow but took part in a 1959 conference on self-organizing systems, where she queried AI pioneer Frank Rosenblatt on his ideas about human and machine learning. Her attendance at such a meeting, in an age when women were not common attendees, implies we should not view her as “just a coder”.
What’s in a name
Marjory Jones (later Devaney), a mathematician, shown in 1952 punching a program onto paper tape to be loaded into the MANIAC. The name of this role evolved to programmer during the 1950s. (Courtesy: US government / Los Alamos National Laboratory)
In the 1950s there was no computational physics or computer science, so it’s unsurprising that the practitioners of these disciplines went by different names, and their identity has evolved over the decades since. Originally a “computer” was a person doing calculations by hand or with the help of a mechanical calculator. A “coder” was a person who translated mathematical concepts into a set of instructions in machine language. John von Neumann and Herman Goldstine distinguished between “coding” and “planning”, with the former being the lower-level work of turning flow diagrams into machine language (and doing the physical configuration) while the latter did the mathematical analysis of the problem. Meanwhile, an “operator” would physically handle the computer (replacing punch cards, doing the rewiring, etc). In the late 1940s coders were also operators. As historians note in the book ENIAC in Action, this was an age in which “It was hard to devise the mathematical treatment without a good knowledge of the processes of mechanical computation…It was also hard to operate the ENIAC without understanding something about the mathematical task it was undertaking.” For the ENIAC a “programmer” was not a person but “a unit combining different sequences in a coherent computation”. The term would later shift and eventually overlap with the meaning of coder as a person’s job.
Computer scientist Margaret Hamilton, who led the development of the on-board flight software for NASA’s Apollo program, coined the term “software engineering” to distinguish the practice of designing, developing, testing and maintaining software from the engineering tasks associated with the hardware. Using the term “programmer” for someone who coded computers peaked in popularity in the 1980s, but by the 2000s it had given way to other job titles, such as various flavours of “developer” or “software architect”. A “research software engineer” is a person who combines professional software engineering expertise with an intimate understanding of scientific research.
Overlooked then, overlooked now
Credited or not, these pioneering women and their contributions have been mostly forgotten, and only in recent decades have their roles come to light again. But why were they obscured by history in the first place? Gender, as we have seen, is part of the answer. But another often overlooked reason is the widespread underappreciation of the key role of computational scientists and research software engineers, a term that was only coined just over a decade ago. Even today, these non-traditional research roles end up being undervalued. A 2022 survey by the UK Software Sustainability Institute, for example, showed that only 59% of research software engineers were named as authors, with barely a quarter (24%) mentioned in the acknowledgements or main text, while a sixth (16%) were not mentioned at all. The separation between those who understand the physics and those who write the code and understand and operate the hardware goes back to the early days of computing (see box above), but it wasn’t entirely accurate even then. People who implement complex scientific computations are not just coders or skilled operators of supercomputers, but truly multidisciplinary scientists who have a deep understanding of the scientific problems, mathematics, computational methods and hardware. Such people – whatever their gender – play a key role in advancing science and yet remain the unsung heroes of the discoveries their work enables. Perhaps what this story of the forgotten pioneers of computational physics tells us is that some views rooted in the 1950s are still influencing us today. It’s high time we moved on. The post The forgotten pioneers of computational physics appeared first on Physics World.
https://physicsworld.com/a/the-forgotten-pioneers-of-computational-physics/
Space & Physics
svg
c4cb16a9c46e6f8e596bcc8b551533cabab30ccc0d9e8c90e9db18930916878a
2025-11-11T08:30:02+00:00
Classical gravity may entangle matter, new study claims
Gravity might be able to quantum-entangle particles even if the gravitational field itself is classical. That is the conclusion of a new study by Joseph Aziz and Richard Howl at Royal Holloway, University of London. This challenges a popular view that such entanglement would necessarily imply that gravity must be quantized. This could be important in the ongoing attempt to develop a theory of quantum gravity that unites quantum mechanics with Einstein’s general theory of relativity. “When you try to quantize the gravitational interaction in exactly the same way we tried to mathematically quantize the other forces, you end up with mathematically inconsistent results – you end up with infinities in your calculations that you can’t do anything about,” Howl tells Physics World. “With the other interactions, we quantized them assuming they live within an independent background of classical space and time,” Howl explains. “But with quantum gravity, arguably you cannot do this [because] gravity describes space–time itself rather than something within space–time.” Quantum entanglement occurs when two particles share linked quantum states even when separated. While it has become a powerful probe of the gravitational field, the central question is whether gravity can mediate entanglement only if it is itself quantum in nature. “It has generally been considered that the gravitational interaction can only entangle matter if the gravitational field is quantum,” Howl says. “We have argued that you could treat the gravitational interaction as more general than just the mediation of the gravitational field such that even if the field is classical, you could in principle entangle matter.” Quantum field theory postulates that entanglement between masses arises through the exchange of virtual gravitons. These are hypothetical, transient quantum excitations of the gravitational field. Aziz and Howl propose that even if the field remains classical, virtual-matter processes can still generate entanglement indirectly. These processes, Howl says, “will persist even when the gravitational field is considered classical and could in principle allow for entanglement”. The idea of probing the quantum nature of gravity through entanglement goes back to a suggestion by Richard Feynman in the 1950s. He envisioned placing a tiny mass in a superposition of two locations and checking whether its gravitational field was also superposed. Though elegant, the idea seemed untestable at the time. Recent proposals – most notably by teams led by Sougato Bose and by Chiara Marletto and Vlatko Vedral – revived Feynman’s insight in a more practical form. “Recently, two proposals showed that one way you could test that the field is in a superposition (and thus quantum) is by putting two masses in a quantum superposition of two locations and seeing if they become entangled through the gravitational interaction,” says Howl. “This also seemed to be much more feasible than Feynman’s original idea.” Such experiments might use levitated diamonds, metallic spheres, or cold atoms – systems where both position and gravitational effects can be precisely controlled. Aziz and Howl’s work, however, considers whether such entanglement could arise even if gravity is not quantum. They find that certain classical-gravity processes can in principle entangle particles, though the predicted effects are extremely small. “These classical-gravity entangling effects are likely to be very small in near-future experiments,” Howl says.
“This though is actually a good thing: it means that if we see entanglement…we can be confident that this means that gravity is quantized.” The paper has drawn a strong response from some leading figures in the field, including Marletto at the University of Oxford, who co-developed the original idea of using gravitationally induced entanglement as a test of quantum gravity. “The phenomenon of gravitationally induced entanglement … is a game changer in the search for quantum gravity, as it provides a way to detect quantum effects in the gravitational field indirectly, with laboratory-scale equipment,” she says. Detecting it would, she adds, “constitute the first experimental confirmation that gravity is quantum, and the first experimental refutation of Einstein’s relativity as an adequate theory of gravity”. However, Marletto disputes Aziz and Howl’s interpretation. “No classical theory of gravity can mediate entanglement via local means, contrary to what the study purports to show,” she says. “What the study actually shows is that a classical theory with direct, non-local interactions between the quantum probes can get them entangled.” In her view, that mechanism “is not new and has been known for a long time”. Despite the controversy, Howl and Marletto agree that experiments capable of detecting gravitationally induced entanglement would be transformative. “We see our work as strengthening the case for these proposed experiments,” Howl says. Marletto concurs that “detecting gravitationally induced entanglement will be a major milestone … and I hope and expect it will happen within the next decade.” Howl hopes the work will encourage further discussion about quantum gravity. “It may also lead to more work on what other ways you could argue that classical gravity can lead to entanglement,” he says. The research is described in Nature. The post Classical gravity may entangle matter, new study claims appeared first on Physics World.
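For readers who want the arithmetic behind gravitationally induced entanglement, here is a toy version of the two-mass protocol described above: each mass sits in a superposition of two positions, each branch of the joint state acquires a phase, and the entropy of one mass’s reduced state diagnoses entanglement. The phase values are arbitrary illustrations, not modelled on any proposed experiment.

```python
import numpy as np

# Toy two-mass protocol: joint state (1/2) * sum_ij exp(i*phi[i,j]) |i>|j>.
# Phases separable as phi_ij = a_i + b_j give no entanglement; branch-dependent
# (non-separable) phases do. All phase values here are illustrative.
def entanglement_entropy(phi):
    """Von Neumann entropy (in bits) of one mass's reduced state."""
    psi = np.exp(1j * phi) / 2.0               # 2x2 amplitude matrix, normalized
    rho_a = psi @ psi.conj().T                 # reduced density matrix of mass A
    vals = np.linalg.eigvalsh(rho_a)
    vals = vals[vals > 1e-12]
    return float(-(vals * np.log2(vals)).sum())

separable = np.add.outer([0.3, 0.7], [0.1, 0.9])   # phi_ij = a_i + b_j
entangling = np.array([[0.0, 1.2], [1.2, 0.0]])    # phases depend on both branches
print(f"separable phases:   S = {entanglement_entropy(separable):.3f} bits")
print(f"entangling phases:  S = {entanglement_entropy(entangling):.3f} bits")
```

The dispute reported above is, roughly, not about this arithmetic – which both sides accept – but about what physical machinery, a quantized field or a classical field plus other processes, is allowed to generate the non-separable phases.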
https://physicsworld.com/a/classical-gravity-may-entangle-matter-new-study-claims/
Space & Physics
svg
24d850805fbb155bb1571352ef20dd4f052c9a1c3dc408d307ef32ad77751359
2025-11-10T15:00:06+00:00
Is Donald Trump conducting a ‘blitzkrieg’ on science?
“Drain the swamp!” In the intense first few months of his second US presidency, Donald Trump has been enacting his old campaign promise with a vengeance. He’s ridding the American federal bureaucracy of all its muck, he claims, and finally bringing it back under control. Scientific projects and institutions are particular targets of his, with one recent casualty being the High Energy Physics Advisory Panel (HEPAP). Outsiders might shrug their shoulders at a panel of scientists being axed. Panels come and go. Also, any development in Washington these days is accompanied by confusion, uncertainty, and the possibility of reversal. But HEPAP’s dissolution is different. Set up in 1967, it’s been a valuable and long-standing advisory committee of the Office of Science at the US Department of Energy (DOE). HEPAP has a distinguished track record of developing, supporting and reviewing high-energy physics programmes, setting priorities and balancing different areas. Many scientists are horrified by its axing. Since taking office in January 2025, Trump has issued a flurry of executive orders – presidential decrees that do not need Congressional approval, legislative review or public debate. One order, which he signed in February, was entitled “Commencing the Reduction of the Federal Bureaucracy”. It sought to reduce parts of the government “that the President has determined are unnecessary”, seeking to eliminate “waste and abuse, reduce inflation, and promote American freedom and innovation”. While supporters see those as laudable goals, opponents believe the order is driving a stake into the heart of US science. Hugely valuable, long-standing scientific advisory committees have been axed at key federal agencies, including NASA, the National Science Foundation, the Environmental Protection Agency, the National Oceanic and Atmospheric Administration, the US Geological Survey, the National Institutes of Health, the Food and Drug Administration, and the Centers for Disease Control and Prevention. What’s more, the committees were terminated without warning or debate, eliminating load-bearing pillars of the US science infrastructure. It was, as the Columbia University sociologist Gil Eyal put it in a recent talk, the “Trump 2.0 Blitzkrieg”. Then, on 30 September, Trump’s enablers took aim at advisory committees at the DOE Office of Science. According to the DOE’s website, a new Office of Science Advisory Committee (SCAC) will take over functions of the six former discretionary (non-legislatively mandated) Office of Science advisory committees. “Any current charged responsibilities of these former committees will be transferred to the SCAC,” the website states matter-of-factly. The committee will provide “independent, consensus advice regarding complex scientific and technical issues” to the entire Office of Science. Its members will be appointed by under secretary for science Dario Gil – a political appointee. Apart from HEPAP, others axed without warning were the Nuclear Science Advisory Committee, the Basic Energy Sciences Advisory Committee, the Fusion Energy Sciences Advisory Committee, the Advanced Scientific Computing Advisory Committee, and the Biological and Environmental Research Advisory Committee. Over the years, each committee served a different community and was represented by prominent research scientists who were closely in touch with other researchers.
Each committee could therefore assemble the awareness of – and technical knowledge about – emerging promising initiatives and identify the less promising ones. Many committee members only learned of the changes when they received letters or e-mails out of the blue informing them that their committee had been dissolved, that a new committee had replaced them, and that they were not on it. No explanation was given. Physicists whom I have spoken to are appalled for two main reasons. One is that closing HEPAP and the other Office of Science committees will hamper both the technical support and community input that it has relied on to promote the efficient, effective and robust growth of physics. “Speaking just for high-energy physics, HEPAP gave feedback on the DOE and NSF funding strategies and priorities for the high-energy physics experiments,” says Kay Kinoshita from the University of Cincinnati, a former HEPAP member. “The panel system provided a conduit for information between the agencies and the community, so the community felt heard and the agencies were (mostly) aligned with the community consensus”. As Kinoshita continued: “There are complex questions that each panel has to deal with, even within the topical area. It’s hard to see how a broader panel is going to make better strategic decisions, ‘better’ meaning in terms of scientific advancement. In terms of community buy-in I expect it will be worse.” Other physicists cite a second reason for alarm. The elimination of the advisory committees spreads the expertise so thinly as to increase the likelihood of political pressure on decisions. “If you have one committee you are not going to get the right kind of fine detail,” says Michael Lubell, a physicist and science-policy expert at the City College of New York, who has sat in on meetings of most of the Office of Science advisory committees. “You’ll get opinions from people outside that area and you won’t be able to get information that you need as a policy maker to decide how the resources are to be allocated,” he adds. “A condensed-matter physicist, for example, would probably have insufficient knowledge to advise DOE on particle physics. Instead, new committee members would be expected to vet programs based on ideological conformity to what the Administration wants.” At the end of the Second World War, the US began to construct an ambitious long-range plan to promote science, a plan that began with the establishment of the National Science Foundation in 1950 and has been developed and extended ever since. The plan aimed to incorporate both the ability of elected politicians to direct science towards social needs and the independence of scientists to explore what is possible. US presidents have, of course, had pet scientific projects: the War on Cancer (Nixon), the Moon Shot (Kennedy), promoting renewable energy (Carter), to mention a few. But it is one thing for a president to set science to producing a socially desirable product and another to manipulate the scientific process itself. “This is another sad day for American science,” says Lubell. “If I were a young person just embarking on a career, I would get the hell out of the country. I would not want to waste the most creative years of my life waiting for things to turn around, if they ever do.
What a way to destroy a legacy!” The end of HEPAP is not draining a swamp but creating one. The post Is Donald Trump conducting a ‘blitzkrieg’ on science? appeared first on Physics World.
https://physicsworld.com/a/is-donald-trump-conducting-a-blitzkrieg-on-science/
Space & Physics
svg
3f9219a06ef4d5cc86d60d7f5adff250857ca82aed0be17550648485f6ff63dc
2025-11-10T09:48:06+00:00
Delft Circuits, Bluefors: the engine-room driving joined-up quantum innovation
Better together. That’s the headline take on a newly inked technology partnership between Bluefors, a heavyweight Finnish supplier of cryogenic measurement systems, and Delft Circuits, a Dutch manufacturer of specialist I/O cabling solutions designed for the scale-up and industrial deployment of next-generation quantum computers. The drivers behind the tie-up are clear: as quantum systems evolve – think vastly increased qubit counts plus ever-more exacting requirements on gate fidelity – developers in research and industry will reach a point where current coax cabling technology doesn’t cut it anymore. The answer? Collaboration, joined-up thinking and product innovation. In short, by integrating Delft Circuits’ Cri/oFlex® cabling technology into Bluefors’ dilution refrigerators, the vendors’ combined customer base will benefit from a complete, industrially proven and fully scalable I/O solution for their quantum systems. The end-game: to overcome the quantum tech industry’s biggest bottleneck, forging a development pathway from quantum computing systems with hundreds of qubits today to tens of thousands of qubits by 2030. For context, Cri/oFlex® cryogenic RF cables comprise a stripline (a type of transmission line) based on planar microwave circuitry – essentially a conducting strip encapsulated in dielectric material and sandwiched between two conducting ground planes. The use of the polyimide Kapton® as the dielectric ensures Cri/oFlex® cables remain flexible in cryogenic environments (which are necessary to generate quantum states, manipulate them and read them out), with silver or superconducting NbTi providing the conductive strip and ground layer. The standard product comes as a multichannel flex (eight channels per flex) with a range of I/O channel configurations tailored to the customer’s application needs, including flux bias lines, microwave drive lines, signal lines or read-out lines. “Reliability is a given with Cri/oFlex®,” says Robby Ferdinandus, global chief commercial officer for Delft Circuits and a driving force behind the partnership with Bluefors. “By integrating components such as attenuators and filters directly into the flex,” he adds, “we eliminate extra parts and reduce points of failure. Combined with fast thermalization at every temperature stage, our technology ensures stable performance across thousands of channels, unmatched by any other I/O solution.” Technology aside, the new partnership is informed by a “one-stop shop” mindset, offering the high-density Cri/oFlex® solution pre-installed and fully tested in Bluefors cryogenic measurement systems. For the end-user, think turnkey efficiency: streamlined installation, commissioning, acceptance and, ultimately, enhanced system uptime. Scalability is front-and-centre too, thanks to Delft Circuits’ pre-assembled and tested side-loading systems. The high-density I/O cabling solution delivers up to 50% more channels per side-loading port compared with Bluefors’ current High Density Wiring, providing a total of 1536 input or control lines to an XLDsl cryostat. In addition, more wiring lines can be added to multiple KF ports as a custom option. Reciprocally, there’s significant commercial upside to this partnership. Bluefors is the quantum industry’s leading cryogenic systems OEM and, by extension, Delft Circuits now has access to the former’s established global customer base, amplifying its channels to market by orders of magnitude.
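The capacity figures quoted above imply some simple arithmetic, sketched below. The baseline value is an inference from the “50% more” claim, and the flex count assumes the quoted eight channels per flex; neither is a vendor specification.

```python
# Back-of-envelope wiring arithmetic from the figures quoted in the article.
# The ~1024-line baseline is inferred from "50% more", not a published spec.
channels_per_flex = 8          # quoted Cri/oFlex figure
total_lines = 1536             # quoted total for an XLDsl cryostat

baseline = total_lines / 1.5   # "50% more" implies ~1024 lines today
flexes = total_lines / channels_per_flex
print(f"implied current High Density Wiring capacity: {baseline:.0f} lines")
print(f"multichannel flexes needed for {total_lines} lines: {flexes:.0f}")
```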
“We have stepped into the big league here and, working together, we will ensure that Cri/oFlex® becomes a core enabling technology on the journey to quantum advantage,” notes Ferdinandus. That view is amplified by Reetta Kaila, director for global technical sales and new products at Bluefors (and, alongside Ferdinandus, a main-mover behind the partnership). “Our market position in cryogenics is strong, so we have the ‘muscle’ and specialist know-how to integrate innovative technologies like Cri/oFlex® into our dilution refrigerators,” she explains. A win-win, it seems, along several coordinates. “The Bluefors sales teams are excited to add Cri/oFlex® into the product portfolio,” Kaila adds. “It’s worth noting, though, that the collaboration extends across multiple functions – technical and commercial – and will therefore ensure close alignment of our respective innovation roadmaps.” Deconstructed, Delft Circuits’ value proposition is all about enabling, from an I/O perspective, the transition of quantum technologies out of the R&D lab into at-scale practical applications. More specifically: Cri/oFlex® technology allows quantum scientists and engineers to increase the I/O cabling density of their systems easily – and by a lot – while guaranteeing high gate fidelities (minimizing noise and heating) as well as market-leading uptime and reliability. To put some hard-and-fast performance milestones against that claim, the company has published a granular product development roadmap that aligns Cri/oFlex® cabling specifications against the anticipated evolution of quantum computing systems – from 150+ qubits today out to 40,000 qubits and beyond in 2029 (see figure below, “Quantum alignment”). The resulting milestones are based on a study of the development roadmaps of more than 10 full-stack quantum computing vendors – a consolidated view that will ensure the “guiding principles” of Delft Circuits’ innovation roadmap align with the aggregate quantity and quality of qubits targeted by the system developers over time.
Quantum alignment
The new product development roadmap from Delft Circuits starts with the guiding principles, highlighting performance milestones to be achieved by the quantum computing industry over the next five years – specifically, the number of physical qubits per system and gate fidelities. By extension, cabling metrics in the Delft Circuits roadmap focus on “quantity”: the number of I/O channels per loader (i.e. the wiring trees that insert into a cryostat, with typical cryostats having 6–24 slots for loaders) and the number of channels per cryostat (summing across all loaders); also on “quality” (the crosstalk in the cabling flex). To complete the picture, the roadmap outlines product introductions at a conceptual level to enable both the quantity and quality timelines. (Courtesy: Delft Circuits) The post Delft Circuits, Bluefors: the engine-room driving joined-up quantum innovation appeared first on Physics World.
https://physicsworld.com/a/delft-circuits-bluefors-the-engine-room-driving-joined-up-quantum-innovation/
Space & Physics
svg
07c318bf7cd5ea29386b438e8f73d3174dcda4eaf55c2a18a86d191276c88c34
2025-11-10T09:30:18+00:00
Microbubbles power soft, programmable artificial muscles
Artificial muscles that offer flexible functionality could prove invaluable for a range of applications, from soft robotics and wearables to biomedical instrumentation and minimally invasive surgery. Current designs, however, are limited by complex actuation mechanisms and challenges in miniaturization. Aiming to overcome these obstacles, a research team headed up at the Acoustic Robotics Systems Lab (ETH Zürich) in Switzerland is using microbubbles to create soft, programmable artificial muscles that can be wirelessly controlled via targeted ultrasound activation. Gas-filled microbubbles can concentrate acoustic energy, providing a means to initiate movement with rapid response times and high spatial accuracy. In this study, reported in Nature, team leader Daniel Ahmed and colleagues built a synthetic muscle from a thin flexible membrane containing arrays of more than 10,000 microbubbles. When acoustically activated, the microbubbles generate thrust and cause the membrane to deform. And as different-sized microbubbles resonate at different ultrasound frequencies, the arrays can be designed to provide programmable motion. “Ultrasound is safe, non-invasive, can penetrate deep into the body and can generate large forces. However, without microbubbles, a much higher force is needed to deform the muscle, and selective activation is difficult,” Ahmed explains. “To overcome this limitation, we use microbubbles, which amplify force generation at specific sites and act as resonant systems. As a result, we can activate the artificial muscle at safe ultrasound power levels and generate complex motion.” The team created the artificial muscles from a thin silicone membrane patterned with an array of cylindrical microcavities with the dimensions of the desired microbubbles. Submerging this membrane in a water-filled acoustic chamber trapped tens of thousands of gas bubbles within the cavities (one per cavity). The final device contains around 3000 microbubbles per mm² and weighs just 0.047 mg/mm². To demonstrate acoustic activation, the researchers fabricated an artificial muscle containing uniform-sized microbubbles on one surface. They fixed one end of the muscle and exposed it to resonant-frequency ultrasound, simultaneously exciting the entire microbubble array. The resulting oscillations generated acoustic streaming and radiation forces, causing the muscle to flex upward, with an amplitude dependent upon the ultrasound excitation voltage. Next, the team designed an 80 µm-thick, 3 × 0.5 cm artificial muscle containing arrays of three different-sized microbubbles. Stimulation at 96.5, 82.3 and 33.2 kHz induced deformations in regions containing bubbles with diameters of 12, 16 and 66 µm, respectively. Exposure to swept-frequency ultrasound covering the three resonant frequencies sequentially activated the different arrays, resulting in an undulatory motion. Ahmed and colleagues showcased a range of applications for the artificial muscle by integrating microbubble arrays into functional devices, such as a miniature soft gripper for trapping and manipulating fragile live animals. The gripper comprises six to ten microbubble array-based “tentacles” that, when subjected to ultrasound, gently gripped a zebrafish larva with sub-100 ms response time. When the ultrasound was switched off, the tentacles opened and the larva swam away with no adverse effects.
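The frequency selectivity that makes this addressing possible is ordinary resonance physics, sketched below with one driven, damped oscillator standing in for each bubble population. The resonance frequencies are those quoted above; the damping values (quality factor of roughly 20) are invented for illustration and are not the paper’s parameters.

```python
import numpy as np

# Selective activation as driven-oscillator resonance: each bubble size is
# modelled as an oscillator, and a single drive frequency mainly excites the
# array whose resonance it matches. Damping is an assumed figure (Q ~ 20).
f0 = np.array([96.5e3, 82.3e3, 33.2e3])    # array resonances from the study (Hz)
w0 = 2 * np.pi * f0
gamma = 0.05 * w0                          # assumed damping rates

def response(f_drive):
    """Steady-state amplitude (arbitrary units) of each array."""
    w = 2 * np.pi * f_drive
    return 1.0 / np.sqrt((w0**2 - w**2) ** 2 + (gamma * w) ** 2)

for f in (96.5e3, 82.3e3, 33.2e3):
    a = response(f)
    print(f"drive {f/1e3:5.1f} kHz -> relative response {np.round(a / a.max(), 2)}")
```

Sweeping the drive through all three resonances, as the researchers did, then activates the arrays one after another, producing the undulation described above.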
The artificial muscle can function as a conformable robotic skin that sticks to a stationary object and imparts motion to it, which the team demonstrated by attaching it to the surface of an excised pig heart. It can also be employed for targeted drug delivery – shown by the use of a microbubble-array robotic patch for ultrasound-enhanced delivery of dye into an agar block. The researchers also built an ultrasound-powered “stingraybot”, a soft surgical robot with artificial muscles (arrays of differently sized microbubbles) on either side to mimic the pectoral fins of a stingray. Exposure to swept-frequency ultrasound induced an undulatory motion that wirelessly propelled the 4 cm-long robot forward at a speed of about 0.8 body lengths per second. To demonstrate future practical biomedical applications, such as supporting minimally invasive surgery or site-specific drug release within the gastrointestinal tract, the researchers encapsulated a rolled-up stingraybot within a 27 × 12 mm edible capsule. Once released into the stomach, the robot could be propelled on demand under ultrasound actuation. They also pre-folded a linear artificial muscle into a wheel shape and showed that swept ultrasound frequencies could propel it along the complex mucosal surfaces of the stomach and intestine. “Through the strategic use of microbubble configurations and voltage and frequency as ultrasound excitation parameters, we engineered a diverse range of preprogrammed movements and demonstrated their applicability across various robotic platforms,” the researchers write. “Looking ahead, these artificial muscles hold transformative potential across cutting-edge fields such as soft robotics, haptic medical devices and minimally invasive surgery.” Ahmed says that the team is currently developing soft patches that can conform to biological surfaces for drug delivery inside the bladder. “We are also designing soft, flexible robots that can wrap around a tumour and release drugs directly at the target site,” he tells Physics World. “Basically, we’re creating mobile conformable drug-delivery patches.” The post Microbubbles power soft, programmable artificial muscles appeared first on Physics World.
https://physicsworld.com/a/microbubbles-power-soft-programmable-artificial-muscles/
Space & Physics
svg
97a18898d3c4db8d70f581b388515d9f2e6f1c913cc154f27d1b01863249fd11
2025-11-07T15:00:02+00:00
China’s Shenzhou-20 crewed spacecraft return delayed by space debris impact
China has delayed the return of a crewed mission to the country’s space station over fears that the astronauts’ spacecraft has been struck by space debris. The craft was supposed to return to Earth on 5 November but the China Manned Space Agency says it will now carry out an impact analysis and risk assessment before making any further decisions about when the astronauts will return. The Shenzhou programme involves taking astronauts to and from China’s Tiangong space station, which was constructed in 2022, for six-month stays. Shenzhou-20, carrying three crew, launched on 24 April from Jiuquan Satellite Launch Center on board a Long March 2F rocket. Once docked with Tiangong, the three-member crew of Shenzhou-19 began handing over control of the station to the crew of Shenzhou-20 before returning to Earth on 30 April. The three-member crew of Shenzhou-21 launched on 31 October and underwent the same hand-over process with the crew of Shenzhou-20, who were set to return to Earth on Wednesday. Yet pre-operation checks revealed that the craft had been hit by “a small piece of debris”, though the location and scale of the damage to Shenzhou-20 have not been released. If the craft is deemed unsafe following the assessment, it is possible that the crew of Shenzhou-20 will return to Earth aboard Shenzhou-21. Another option is to launch a back-up Shenzhou spacecraft, which remains on stand-by and could be launched within eight days. Space debris is of increasing concern and this marks the first time that a crewed craft has been delayed due to a potential space debris impact. In 2021, for example, China noted that Tiangong had to perform two emergency avoidance manoeuvres to avoid fragments produced by Starlink satellites that were launched by SpaceX. The post China’s Shenzhou-20 crewed spacecraft return delayed by space debris impact appeared first on Physics World.
https://physicsworld.com/a/chinas-shenzhou-20-crewed-spacecraft-return-delayed-by-space-debris-impact/
Space & Physics
svg
0a98c89af64543ff35020a2e7f0ba889f628735552cbaa9da245f25654047cc3
2025-11-07T13:57:32+00:00
Twistelastics controls how mechanical waves move in metamaterials
By simply placing two identical elastic metasurfaces atop each other and then rotating them relative to each other, physicists can change the topology of the elastic waves dispersing through the resulting stacked structure – from elliptic to hyperbolic. This new control technique, from physicists at the CUNY Advanced Science Research Center in the US, works over a broad frequency range and has been dubbed “twistelastics”. It could allow for advanced reconfigurable phononic devices with potential applications in microelectronics, ultrasound sensing and microfluidics. The researchers, led by Andrea Alù, say they were inspired by the recent advances in “twistronics” and its “profound impact” on electronic and photonic systems. “Our goal in this work was to explore whether similar twist-induced topological phenomena could be harnessed in elastodynamics in which phonons (vibrations of the crystal lattice) play a central role,” says Alù. In twistelastics, the rotations between layers of identical, elastic engineered surfaces are used to manipulate how mechanical waves travel through the materials. The new approach, say the CUNY researchers, allows them to reconfigure the behaviour of these waves and precisely control them. “This opens the door to new technologies for sensing, communication and signal processing,” says Alù. In their work, the researchers used computer simulations to design metasurfaces patterned with micron-sized pillars. When they stacked one such metasurface atop the other and rotated them at different angles, the resulting combined structure changed the way phonons spread. Indeed, their dispersion topology went from elliptic to hyperbolic. At a specific rotation angle, known as the “magic angle” (just like in twistronics), the waves become highly focused and begin to travel in one direction. This effect could allow for more efficient signal processing, says Alù, with the signals being easier to control over a wide range of frequencies. “The new twistelastic platform offers broadband, reconfigurable, and robust control over phonon propagation,” he tells Physics World. “This may be highly useful for a wide range of application areas, including surface acoustic wave (SAW) technologies, ultrasound imaging and sensing, microfluidic particle manipulation and on-chip phononic signal processing.” Since the twist-induced transitions are topologically protected – again like in twistronics – the system is resilient to fabrication imperfections, meaning it can be miniaturized and integrated into real-world devices, he adds. “We are part of an exciting science and technology centre called ‘New Frontiers of Sound’, of which I am one of the leaders. The goal of this ambitious centre is to develop new acoustic platforms for the above applications enabling disruptive advances for these technologies.” Looking ahead, the researchers say they are looking into miniaturizing their metasurface design for integration into microelectromechanical systems (MEMS). They will also be studying multi-layer twistelastic architectures to improve how they can control wave propagation and investigating active tuning mechanisms, such as electromechanical actuation, to dynamically control twist angles. “Adding piezoelectric phenomena for further control and coupling to the electromagnetic waves” is also on the agenda, says Alù. The present work is detailed in PNAS. The post Twistelastics controls how mechanical waves move in metamaterials appeared first on Physics World.
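As a cartoon of what the elliptic-to-hyperbolic transition means, consider a toy quadratic dispersion w² = A·kx² + B(θ)·ky², in which the coefficient along one direction changes sign with the twist angle θ: closed iso-frequency contours (ellipses) then open up into hyperbolas, and waves start channelling along preferred directions. The functional form and the 20° “magic angle” below are invented for illustration; the real metasurface dispersion comes from full elastodynamic simulations.

```python
import numpy as np

# Toy twist-tuned dispersion w^2 = A*kx^2 + B(theta)*ky^2. The sign of B sets
# the iso-frequency contour topology. B(theta) and the magic angle are invented.
A = 1.0
magic = np.deg2rad(20.0)
B = lambda theta: np.cos(theta) - np.cos(magic)   # changes sign at the magic angle

for deg in (0, 10, 19, 20, 21, 30):
    b = B(np.deg2rad(deg))
    kind = ("elliptic (closed contours)" if b > 1e-9 else
            "hyperbolic (open contours)" if b < -1e-9 else
            "critical (flat, highly directional)")
    print(f"twist {deg:2d} deg: B = {b:+.3f} -> {kind}")
```

At the critical angle the contours flatten, which is the toy-model analogue of the highly focused, unidirectional propagation reported at the magic angle.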
https://physicsworld.com/a/twistelastics-controls-how-mechanical-waves-move-in-metamaterials/
Space & Physics
svg
b02fae72db975832014560510b8897f243e07acb4a7a1e7139f777bfdb6ba87b
2025-11-07T09:00:33+00:00
Ternary hydride shows signs of room-temperature superconductivity at high pressures
Researchers in China claim to have made the first ever room-temperature superconductor by compressing an alloy of lanthanum-scandium (La-Sc) and the hydrogen-rich material ammonia borane (NH3BH3) together at pressures of 250–260 GPa, observing superconductivity with a maximum onset temperature of 298 K. While these high pressures are akin to those at the centre of the Earth, the work marks a milestone in the field of superconductivity, they say. Superconductors conduct electricity without resistance, and many materials do this when cooled below a certain transition temperature, Tc. In most cases this temperature is very low – for example, solid mercury, the first superconductor to be discovered, has a Tc of 4.2 K. Researchers have therefore been looking for superconductors that operate at higher temperatures – perhaps even at room temperature. Such materials could revolutionize a host of application areas, including increasing the efficiency of electrical generators and transmission lines through lossless electricity transmission. They would also greatly simplify technologies such as MRI, for instance, that rely on the generation or detection of magnetic fields. Researchers made considerable progress towards this goal in the 1980s and 1990s with the discovery of the “high-temperature” copper oxide superconductors, which have Tc values between 30 and 133 K. Fast-forward to 2015 and the maximum known critical temperature rose even higher thanks to the discovery of a sulphide material, H3S, that has a Tc of 203 K when compressed to pressures of 150 GPa. This result sparked much interest in solid materials containing hydrogen atoms bonded to other elements, and in 2019 the record was broken again, this time by lanthanum decahydride (LaH10), which was found to have a Tc of 250–260 K, albeit again at very high pressures. Then in 2021, researchers observed high-temperature superconductivity in the cerium hydrides, CeH9 and CeH10, which are remarkable because they are stable and boast high-temperature superconductivity at lower pressures (about 80 GPa, or 0.8 million atmospheres) than the other so-called “superhydrides”. In recent years, researchers have started turning their attention to ternary hydrides – substances that comprise three different atomic species rather than just two. Compared with binary hydrides, ternary hydrides are more structurally complex, which may allow them to have higher Tc values. Indeed, Li2MgH16 has been predicted to exhibit “hot” superconductivity with a Tc of 351–473 K under multimegabar pressures, and several other high-Tc hydrides, including MBxHy, MBeH8 and Mg2IrH6-7, have been predicted to be stable under comparatively lower pressures. In the new work, a team led by physicist Yanming Ma of Jilin University studied LaSc2H24 – a compound that’s made by doping Sc into the well-known La-H binary system. Ma and colleagues had already predicted in theory – using the crystal structure prediction (CALYPSO) method – that this ternary material should feature a hexagonal P6/mmm symmetry. Introducing Sc into La-H results in the formation of two novel interlinked H24 and H30 hydrogen clathrate “cages”, with the H24 surrounding Sc and the H30 surrounding La.
The researchers predicted that these two novel hydrogen frameworks should produce an exceptionally large hydrogen-derived density of states at the Fermi level (the highest energy level that electrons can occupy in a solid at a temperature of absolute zero), as well as enhancing coupling between electrons and phonons (vibrations of the crystal lattice) in the material, leading to an exceptionally high Tc of up to 316 K at high pressure. To characterize their material, the researchers placed it in a diamond-anvil cell, a device that generates extreme pressures as it squeezes the sample between two tiny, gem-grade crystals of diamond (one of the hardest substances known) while heating it with a laser. In situ X-ray diffraction experiments revealed that the compound crystallizes into a hexagonal structure, in excellent agreement with the predicted P6/mmm LaSc2H24 structure. A key piece of experimental evidence for superconductivity in the La-Sc-H ternary system, says co-author Guangtao Liu, came from measurements that repeatedly demonstrated the onset of zero electrical resistance below the Tc. Another significant proof, Liu adds, is that the Tc decreases monotonically with the application of an external magnetic field in a number of independently synthesized samples. “This behaviour is consistent with the conventional theory of superconductivity since an external magnetic field disrupts Cooper pairs – the charge carriers responsible for the zero-resistance state – thereby suppressing superconductivity.” “These two main observations demonstrate the superconductivity in our synthesized La-Sc-H compound,” he tells Physics World. The experiments were not easy, Liu recalls. The first six months of attempting to synthesize LaSc2H24 below 200 GPa yielded no obvious Tc enhancement. “We then tried higher pressure and above 250 GPa, we had to manually deposit three precursor layers and ensure that four electrodes (for subsequent conductance measurements) were properly connected to the alloy in an extremely small sample chamber, just 10 to 15 µm in size,” he says. “This required hundreds of painstaking repetitions.” And that was not all: to synthesize the LaSc2H24, the researchers had to prepare the correct molar ratios of a precursor alloy. The Sc and La elements cannot form a solid solution because of their different atomic radii, so using a normal melting method makes it hard to control this ratio. “After about a year of continuous investigations, we finally used the magnetron sputtering method to obtain films of LaSc2H24 with the molar ratios we wanted,” Liu explains. “During the entire process, most of our experiments failed and we ended up damaging at least 70 pairs of diamonds.” Sven Friedemann of the University of Bristol, who was not involved in this work, says that the study is “an important step forward” for the field of superconductivity with a new record transition temperature of 295 K. “The new measurements show zero resistance (within resolution) and suppression in magnetic fields, thus strongly suggesting superconductivity,” he comments. “It will be exciting to see future work probing other signatures of superconductivity. 
The X-ray diffraction measurements could be more comprehensive and leave some room for uncertainty as to whether it is indeed the claimed LaSc2H24 structure giving rise to the superconductivity.” Ma and colleagues say they will continue to study the properties of this compound – and in particular, verify the isotope effect (a signature of conventional superconductors) or measure the superconducting critical current. “We will also try to directly detect the Meissner effect – a key goal for high-temperature superhydride superconductors in general,” says Ma. “Guided by rapidly advancing theoretical predictions, we will also synthesize new multinary superhydrides to achieve better superconducting properties under much lower pressures.” The study is available on the arXiv pre-print server. The post Ternary hydride shows signs of room-temperature superconductivity at high pressures appeared first on Physics World.
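Liu’s point that a magnetic field steadily suppresses Tc can be made concrete with the empirical Ginzburg–Landau form for the upper critical field, Hc2(T) ≈ Hc2(0)[1 − (T/Tc)²]. The short sketch below fits that form to invented numbers to show the principle – it is illustrative only, and none of the values are the team’s actual data.

```python
# Illustrative only: fit the empirical Ginzburg-Landau form
#   Hc2(T) = Hc2(0) * (1 - (T/Tc)^2)
# to hypothetical field-vs-onset-temperature points. The real La-Sc-H
# measurements are in the team's arXiv preprint; these values are made up.
import numpy as np
from scipy.optimize import curve_fit

def hc2(T, Hc2_0, Tc):
    """Upper critical field (tesla) at temperature T (kelvin)."""
    return Hc2_0 * (1.0 - (T / Tc) ** 2)

T_onset = np.array([250.0, 270.0, 285.0, 293.0, 297.0])  # K (hypothetical)
B_applied = np.array([9.0, 6.5, 4.0, 1.5, 0.3])          # T (hypothetical)

popt, _ = curve_fit(hc2, T_onset, B_applied, p0=(10.0, 298.0))
print(f"Hc2(0) ~ {popt[0]:.1f} T, zero-field Tc ~ {popt[1]:.0f} K")
```

A monotonic fall of the onset temperature with field, consistent with a fit of this kind across independently synthesized samples, is the behaviour Liu describes as evidence for conventional Cooper-pair superconductivity.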
https://physicsworld.com/a/ternary-hydride-shows-signs-of-room-temperature-superconductivity-at-high-pressures/
Space & Physics
svg
ca34a12b16b7c3ff806060605270202f369456eee515d8aa3d8f99a7a2cf7fa6
2025-11-06T15:09:37+00:00
Scientific collaborations increasingly more likely to be led by Chinese scientists, finds study
International research collaborations will be increasingly led by scientists in China over the coming decade. That is according to a new study by researchers at the University of Chicago, which finds that the power balance in international science has shifted markedly away from the US and towards China over the last 25 years (Proc. Natl. Acad. Sci. 122 e2414893122). To explore China’s role in global science, the team used a machine-learning model to predict the lead researchers of almost six million scientific papers involving international collaboration, as listed in the online bibliographic catalogue OpenAlex. The model was trained on author data from 80 000 papers published in high-profile journals that routinely detail author contributions, including team leadership. The study found that between 2010 and 2012 there were only 4429 scientists from China who were likely to have led China-US collaborations. By 2023, this number had risen to 12 714, meaning that the proportion of team leaders affiliated with Chinese institutions had risen from 30% to 45%. If this trend continues, China will hit “leadership parity” with the US in chemistry, materials science and computer science by 2028, and in maths, physics and engineering by 2031. The analysis also suggests that China will achieve leadership parity with the US in eight “critical technology” areas by 2030, including AI, semiconductors, communications, energy and high-performance computing. For China-UK partnerships, the model found that parity had already been reached in 2019, while EU and China leadership roles will be on par this year or next. The authors also found that China has been actively training scientists in nations in the “Belt and Road Initiative”, which seeks to connect China more closely to the world through investments and infrastructure projects. This, the researchers warn, limits the ability of other nations to isolate science done in China. Instead, they suggest that it could inspire a different course of action, with the US and other countries expanding their engagement with the developing world to train a global workforce and accelerate scientific advancements beneficial to their economies. The post Scientific collaborations increasingly more likely to be led by Chinese scientists, finds study appeared first on Physics World.
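The summary above does not spell out how the Chicago team’s model works, so the following is only a schematic of the general approach – train a classifier on papers whose contribution statements name the team leader, then apply it to papers that lack them. Every feature and number below is a hypothetical stand-in, not something taken from the study.

```python
# Schematic of the general approach (not the study's actual model):
# learn who led a paper from examples where author-contribution
# statements say so, then predict leadership where they do not.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One row per (paper, author): [is last author, is corresponding author,
# years since first publication, prior papers led] -- all hypothetical
X_train = [
    [1, 1, 20, 35],
    [0, 0, 3, 1],
    [1, 0, 15, 22],
    [0, 1, 4, 2],
]
y_train = [1, 0, 1, 0]  # 1 = named as team leader in the contribution statement

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Probability that a senior corresponding last author led an unlabelled paper
print(model.predict_proba([[1, 1, 18, 30]])[0, 1])
```

Aggregating such per-author predictions over millions of papers, by country and year, is what lets a study of this kind chart how leadership shares shift over time.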
https://physicsworld.com/a/scientific-collaborations-increasingly-more-likely-to-be-led-by-chinese-scientists-finds-study/
Space & Physics
svg
62fb82ae742368b1d457ddec64d912a98b545463db6761c93bbc688975feffa9
2025-11-06T14:49:34+00:00
Unlocking the potential of 2D materials: graphene and much more
This episode explores the scientific and technological significance of 2D materials such as graphene. My guest is Antonio Rossi, who is a researcher in 2D materials engineering at the Italian Institute of Technology in Genoa. Rossi explains why 2D materials are fundamentally different from their 3D counterparts – and how these differences are driving scientific progress and the development of new and exciting technologies. Graphene is the most famous 2D material and Rossi talks about its real-world applications in coatings today. We also chat about the challenges facing scientists and engineers who are trying to exploit graphene’s unique electronic properties. Rossi’s current research focuses on two other promising 2D materials – tungsten disulphide and hexagonal boron nitride. He explains why tungsten disulphide shows great technological promise because of its favourable electronic and optical properties, and why hexagonal boron nitride is emerging as an ideal substrate for creating 2D devices. Artificial intelligence (AI) is becoming an important tool in developing new 2D materials. Rossi explains how his team is developing feedback loops that connect AI with the fabrication and characterization of new materials. Our conversation also touches on the use of 2D materials in quantum science and technology. IOP Publishing’s new Progress In Series: Research Highlights website offers quick, accessible summaries of top papers from leading journals like Reports on Progress in Physics and Progress in Energy. Whether you’re short on time or just want the essentials, these highlights help you expand your knowledge of leading topics. The post Unlocking the potential of 2D materials: graphene and much more appeared first on Physics World.
https://physicsworld.com/a/unlocking-the-potential-of-2d-materials-graphene-and-much-more/
Space & Physics
svg
16f07e8bb95f24d84f125ee6952fe866af114b8ca4dacbdf37d4a74ed26f555c
2025-11-06T09:35:19+00:00
Ultrasound probe maps real-time blood flow across entire organs
Microcirculation – the flow of blood through the smallest vessels – is responsible for distributing oxygen and nutrients to tissues and organs throughout the body. Mapping this flow at the whole-organ scale could enhance our understanding of the circulatory system and improve diagnosis of vascular disorders. With this aim, researchers at the Physics for Medicine Paris institute (Inserm, ESPCI-PSL, CNRS) have combined 3D ultrasound localization microscopy (ULM) with a multi-lens array method to image blood flow dynamics in entire organs with micrometric resolution, reporting their findings in Nature Communications. “Beyond understanding how an organ functions across different spatial scales, imaging the vasculature of an entire organ reveals the spatial relationships between macro- and micro-vascular networks, providing a comprehensive assessment of its structural and functional organization,” explains senior author Clement Papadacci. The 3D ULM technique works by localizing intravenously injected microbubbles. Offering a spatial resolution roughly ten times finer than conventional ultrasound, 3D ULM can map and quantify micro-scale vascular structures. But while the method has proved valuable for mapping whole organs in small animals, visualizing entire organs in large animals or humans is hindered by the limitations of existing technology. To enable wide field-of-view coverage while maintaining high-resolution imaging, the team – led by PhD student Nabil Haidour under Papadacci’s supervision – developed a multi-lens array probe. The probe comprises an array of 252 large (4.5 mm²) ultrasound transducer elements. The use of large elements increases the probe’s sensitive area to a total footprint of 104 x 82 mm, while maintaining a relatively low element count. Each transducer element is equipped with an individual acoustic diverging lens. “Large elements alone are too directive to create an image, as they cannot generate sufficient overlap or interference between beams,” Papadacci explains. “The acoustic lenses reduce this directivity, allowing the elements to focus and coherently combine signals in reception, thus enabling volumetric image formation.” After validating their method via numerical simulations and phantom experiments, the team used a multi-lens array probe driven by a clinical ultrasound system to perform 3D dynamic ULM of an entire explanted porcine heart – considered an ideal cardiac model as its vascular anatomy and dimensions are comparable to those of humans. The heart was perfused with microbubble solution, enabling the probe to visualize the whole coronary microcirculation network over a large volume of 120 x 100 x 82 mm, with a spatial resolution of around 125 µm. The technique enabled visualization of both large vessels and the finest microcirculation in real time. The team also used a skeletonization algorithm to measure vessel radii at each voxel, which ranged from approximately 75 to 600 µm. As well as structural imaging, the probe can also assess flow dynamics across all vascular scales, with a high temporal resolution of 312 frames/s. By tracking the microbubbles, the researchers estimated absolute flow velocities ranging from 10 mm/s in small vessels to over 300 mm/s in the largest. They could also differentiate arteries and veins based on the flow direction in the coronary network. 
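At its core, ULM velocimetry localizes each microbubble frame by frame, links detections between frames and converts displacements into speeds using the frame rate. The sketch below shows that idea in miniature, with a crude nearest-neighbour linker and invented centroid positions – real pipelines, including the one described here, use far more sophisticated tracking.

```python
# Minimal sketch of ULM velocity estimation (not the Paris team's code):
# link each bubble centroid to its nearest neighbour in the next frame
# and convert the displacement to a speed via the frame rate.
import numpy as np

FRAME_RATE = 312.0  # frames/s, as reported for the multi-lens probe

def track_speeds(frame0, frame1, max_link_mm=1.0):
    """Nearest-neighbour linking of bubble centroids (positions in mm).

    Returns one speed (mm/s) per linked bubble. Real ULM pipelines use
    much more robust tracking (e.g. Kalman filters, Hungarian matching).
    """
    speeds = []
    for p in frame0:
        dists = np.linalg.norm(frame1 - p, axis=1)
        j = np.argmin(dists)
        if dists[j] < max_link_mm:                # reject implausible jumps
            speeds.append(dists[j] * FRAME_RATE)  # mm/frame -> mm/s
    return np.array(speeds)

# Hypothetical 3D centroids (mm) in two consecutive frames
f0 = np.array([[10.0, 5.0, 2.0], [12.0, 7.0, 3.0]])
f1 = np.array([[10.05, 5.0, 2.0], [12.0, 7.9, 3.0]])
print(track_speeds(f0, f1))  # ~[15.6, 280.8] mm/s, spanning the reported range
```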
Next, the researchers used the multi-lens array probe to image the entire kidney and liver of an anaesthetized pig at the veterinary school of Maisons-Alfort, with the probe positioned in front of the kidney or liver, respectively, and held using an articulated arm. They employed electrocardiography to synchronize the ultrasound acquisitions with periods of minimal respiratory motion and injected microbubble solution intravenously into the animal’s ear. The probe mapped the vascular network of the kidney over a 60 x 80 x 40 mm volume with a spatial resolution of 147 µm. The maximum 3D absolute flow velocity was approximately 280 mm/s in the large vessels and the vessel radii ranged from 70 to 400 µm. The team also used directional flow measurements to identify the arterial and venous flow systems. Liver imaging is more challenging due to respiratory, cardiac and stomach motions. Nevertheless, 3D dynamic ULM enabled high-depth visualization of a large volume of liver vasculature (65 x 100 x 82 mm) with a spatial resolution of 200 µm. Here, the researchers used dynamic velocity measurements to identify the liver’s three blood networks (arterial, venous and portal). “The combination of whole-organ volumetric imaging with high-resolution vascular quantification effectively addresses key limitations of existing modalities, such as ultrasound Doppler imaging, CT angiography and 4D flow MRI,” they write. Clinical applications of 3D dynamic ULM still need to be demonstrated, but Papadacci suggests that the technique has strong potential for evaluating kidney transplants, coronary microcirculation disorders, stroke, aneurysms and neoangiogenesis in cancer. “It could also become a powerful tool for monitoring treatment response and vascular remodelling over time,” he adds. Papadacci and colleagues anticipate that translation to human applications will be possible in the near future and plan to begin a clinical trial early in 2026. The post Ultrasound probe maps real-time blood flow across entire organs appeared first on Physics World.
https://physicsworld.com/a/ultrasound-probe-maps-real-time-blood-flow-across-entire-organs/
Space & Physics
svg
3f459902d8bbeeb27eb154de634063de1673cd6dea76a7736c1fb0829ba01373
2025-11-05T14:00:18+00:00
Inge Lehmann: the ground-breaking seismologist who faced a rocky road to success
In the 1930s a little-known Danish seismologist calculated that the Earth has a solid inner core, within the liquid outer core identified just a decade earlier. The international scientific community welcomed Inge Lehmann as a member of the relatively new field of geophysics – yet in her home country, Lehmann was never really acknowledged as more than a very competent keeper of instruments. It was only after retiring from her seismologist job aged 65 that Lehmann was able to devote herself full time to research. For the next 30 years, Lehmann worked and published prolifically, finally receiving awards and plaudits that were well deserved. However, this remarkable scientist, who died in 1993 aged 104, rarely appears in short histories of her field. A brief glance at the chronology of Lehmann’s education and career would suggest that she was a late starter. She was 32 when she graduated with a bachelor’s degree in mathematics from the University of Copenhagen, and 40 when she received her master’s degree in geodesy and was appointed state geodesist for Denmark. Lehmann faced a litany of struggles in her younger years, from health problems and money issues to the restrictions placed on most women’s education in the first decades of the 20th century. The limits did not come from her family. Lehmann and her sister were sent to good schools, she was encouraged to attend university, and she was never pressed to get married, which would likely have meant the end of her education. When she asked her father’s permission to go to the University of Cambridge, his objection was the cost – though the money was found and Lehmann duly went to Newnham College in 1910. While there she passed all the preliminary exams to study for Cambridge’s legendarily tough mathematical tripos, but then her health forced her to leave. Lehmann was suffering from stomach pains; she had trouble sleeping; her hair was falling out. And this was not her first breakdown. She had previously studied for a year at the University of Copenhagen before then, too, dropping out and moving to the countryside to recover her health. The cause of Lehmann’s recurrent breakdowns is unknown. They unfortunately fed into the prevailing view of the time that women were too fragile for the rigours of higher learning. Her biographer, Strager, attempts to unpick these historical attitudes from Lehmann’s very real medical issues. She posits that Lehmann had severe anxiety or a physical limit to how hard she could push herself. But this conclusion fails to address the hostile conditions Lehmann was working in. At Cambridge, Lehmann formed firm friendships that lasted the rest of her life. But women there did not have the same access to learning as men. They were barred from most libraries and laboratories; could not attend all the lectures; were often mocked and belittled by professors and male students. They could sit exams but, even if they passed, would not be awarded a degree. This was a contributing factor when, after the First World War, Lehmann decided to complete her undergraduate studies in Copenhagen rather than Cambridge. Lehmann is described as quiet, shy, reticent. But she could be eloquent in writing, and once her career began she established connections with scientists all over the world by writing to them frequently. She was also not the wallflower she initially appeared to be. 
When she was hired as an assistant at Denmark’s Institute for the Measurement of Degrees, she quickly complained that she was being used as an office clerk, not a scientist, and that she would not have accepted the job had she known this was the role. She was instead given geometry tasks that she found intellectually stimulating, which led her to seismology. Unfortunately, soon after this Lehmann’s career development stalled. While her title of “state geodesist” sounds impressive, she was the only seismologist in Denmark for decades, responsible for all the seismographs in Denmark and Greenland. Her days were filled with the practicalities of instrument maintenance and with publishing reports of all the data collected. Despite repeated requests, Lehmann never received an assistant, which meant she never got round to completing a PhD, though she did work towards one in her evenings and weekends. Time and again, opportunities for career advancement went to men who had the title of doctor but far less real experience in geophysics. Even after she co-founded the Danish Geophysical Society in 1934, her native country overlooked her. The breakthrough that should have changed this attitude from the men around her came in 1936, when she published “P’ ”. This innocuous-sounding paper was revolutionary, but based firmly on the P-wave and S-wave measurements that Lehmann routinely monitored. In If I Am Right, and I Know I Am, Strager clearly explains what P and S waves are. She also highlights why they were being studied by both state seismologist Lehmann and Cambridge statistician Harold Jeffreys, and how they led to both scientists’ biggest breakthroughs. After any seismological disturbance, P and S waves propagate through the Earth. P waves move at different speeds according to the material they encounter, while S waves cannot pass through liquid or air. This knowledge allowed Lehmann to calculate whether any fluctuations in seismograph readings were earthquakes and, if so, where the epicentre was located. And it led to Jeffreys’ insight that the Earth must have a liquid core. Lehmann’s attention to detail meant she spotted a “discontinuity” in P waves that did not quite match a purely liquid core. She immediately wrote to Jeffreys that she believed there was another layer to the Earth, a solid inner core, but he was dismissive – which led to her writing the statement that forms the title of this book. Undeterred, she published her discovery in the journal of the International Union of Geodesy and Geophysics. In 1951 Lehmann visited the institution that would become her second home: the Lamont Geological Observatory in New York state. Its director Maurice Ewing invited her to work there on a sabbatical, arranging all the practicalities of travel and housing on her behalf. Here, Lehmann finally had something she had lacked her entire career: friendly collaboration with colleagues who not only took her seriously but also revered her. Lehmann retired from her job in Denmark and began to spend months of every year at the Lamont Observatory until well into her 80s. Though Strager tells us this “second phase” of Lehmann’s career was prolific, she provides little detail about the work Lehmann did. She initially focused on detecting nuclear tests during the Cold War. But her later work was more varied, and continued after she lost most of her vision. Lehmann published her final paper aged 99. 
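To see how a seismologist turns arrival times into a location, here is a back-of-the-envelope version of the S–P method with illustrative crustal wave speeds (assumed values, not figures from the book): because P waves outrun S waves, the lag between the two arrivals fixes the distance to the epicentre.

```python
# Back-of-the-envelope S-P location, with illustrative crustal speeds.
# Since d/vS - d/vP = dt, the epicentral distance is
#   d = dt / (1/vS - 1/vP)
vP, vS = 6.0, 3.5   # km/s, typical crustal values (assumed, not Lehmann's)
dt = 30.0           # s, S-wave arrival minus P-wave arrival

d = dt / (1.0 / vS - 1.0 / vP)
print(f"epicentral distance ~ {d:.0f} km")  # ~252 km; readings from three
# stations then triangulate the epicentre itself
```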
If I Am Right, and I Know I Am is bookended with accounts of Strager’s research into one particular letter sent to Lehmann – an anonymous declaration of love (anonymous because the final page has been lost). It’s an insight into the lengths Strager went to – reading all the surviving correspondence to and from Lehmann; interviewing living relatives and colleagues; working with historians both professional and amateur; visiting archives in several countries. But for me it strikes the wrong tone. The preface and epilogue are mostly speculation about Lehmann’s love life. Lehmann destroyed much of her personal correspondence towards the end of her life, and chose which papers to donate to an archive. To me those are the actions of a woman who wants to control the narrative of her life – and does not want her romances to be written about. I would instead have preferred another chapter about her later work, of which we know she was proud. But for the majority of its pages, this is a book of which Strager can be proud. The post Inge Lehmann: the ground-breaking seismologist who faced a rocky road to success appeared first on Physics World.
https://physicsworld.com/a/inge-lehmann-the-ground-breaking-seismologist-who-faced-a-rocky-road-to-success/
Space & Physics
svg
1b04e208e97b99561f156aaf580b8121988da490774a543a85b83e37caae348e
2025-11-05T12:28:51+00:00
Rapidly spinning black holes put new limit on ultralight bosons
The LIGO–Virgo–KAGRA collaboration has found strong evidence for second-generation black holes, which were formed in earlier mergers of smaller black holes. The two gravitational-wave signals provide one of the strongest confirmations to date of how Einstein’s general theory of relativity describes rotating black holes. Studying such objects also provides a testbed for probing new physics beyond the Standard Model. Over the past decade, the global network of interferometers operated by LIGO, Virgo and KAGRA has detected close to 300 gravitational waves (GWs) – mostly from the mergers of binary black holes. In October 2024 the network detected a clear signal that pointed back to a merger that occurred 700 million light-years away. The progenitor black holes were 20 and 6 solar masses, and the larger object was spinning at 370 Hz, making it one of the fastest-spinning black holes ever observed. Just one month later, the collaboration detected the coalescence of another highly imbalanced binary (17 and 8 solar masses), 2.4 billion light-years away. This signal was even more unusual, showing for the first time that the larger companion was spinning in the opposite direction to the binary orbit. While conventional wisdom says black holes should not spin at such high rates, the observations were not entirely unexpected. “With both events having one black hole, which is both significantly more massive than the other and rapidly spinning, [the observations] provide tantalizing evidence that these black holes were formed from previous black hole mergers,” explains Stephen Fairhurst at Cardiff University, spokesperson of the LIGO Collaboration. If this is the case, the two GW signals – called GW241011 and GW241110 – are the first observations of second-generation black holes. This is because when a binary merges, the resulting second-generation object tends to have a large spin. The GW241011 signal was particularly clear, which allowed the team to make the third-ever observation of higher harmonic modes. These are overtones in the GW signal that become far clearer when the masses of the coalescing bodies are highly imbalanced. The precision of the GW241011 measurement provides one of the most stringent verifications so far of general relativity. The observations also support Roy Kerr’s prediction that rapid rotation distorts the shape of a black hole. “We now know that black holes are shaped like Einstein and Kerr predicted, and general relativity can add two more checkmarks in its list of many successes,” says team member Carl-Johan Haster at the University of Nevada, Las Vegas. “This discovery also means that we’re more sensitive than ever to any new physics that might lie beyond Einstein’s theory.” This new physics could include hypothetical particles called ultralight bosons. These could form in clouds just outside the event horizons of spinning black holes, and would gradually drain a black hole’s rotational energy via a quantum effect called superradiance. The key point is that the observed second-generation black holes had been spinning for billions of years before their mergers occurred, so if ultralight bosons were present, they cannot have carried away much of the black holes’ angular momentum. This places the tightest constraint to date on the mass of ultralight bosons. “Planned upgrades to the LIGO, Virgo and KAGRA detectors will enable further observations of similar systems,” Fairhurst says. 
“They will enable us to better understand both the fundamental physics governing these black hole binaries and the astrophysical mechanisms that lead to their formation.” Haster adds, “Each new detection provides important insights about the universe, reminding us that each observed merger is both an astrophysical discovery and an invaluable laboratory for probing the fundamental laws of physics”. The observations are described in The Astrophysical Journal Letters. The post Rapidly spinning black holes put new limit on ultralight bosons appeared first on Physics World.
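The boson-mass scale that such observations probe can be estimated on the back of an envelope: superradiance extracts spin most efficiently when the boson’s Compton wavelength is comparable to the black hole’s gravitational radius, i.e. when the coupling α = GMμ/(ħc³) is of order unity. The sketch below evaluates that scale for the masses quoted above – an order-of-magnitude illustration, not the collaboration’s analysis.

```python
# Order-of-magnitude sketch (not the LIGO-Virgo-KAGRA analysis):
# superradiance is efficient when alpha = G*M*mu / (hbar*c^3) ~ 1,
# so a black hole of mass M probes boson masses mu ~ hbar*c^3 / (G*M).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar_eV = 6.582e-16  # reduced Planck constant, eV s
M_sun = 1.989e30     # solar mass, kg

def boson_mass_scale_eV(M_solar, alpha=1.0):
    """Boson mass (eV) at which the gravitational coupling equals alpha."""
    t_grav = G * (M_solar * M_sun) / c**3  # GM/c^3, the light-crossing timescale, s
    return alpha * hbar_eV / t_grav

for M in (6, 17, 20):  # progenitor masses quoted in the article
    print(f"M = {M:>2} solar masses -> mu ~ {boson_mass_scale_eV(M):.1e} eV")
# Stellar-mass black holes probe mu ~ 1e-12 eV; long-lived rapid spins
# disfavour bosons in this window, which would have spun the holes down.
```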
https://physicsworld.com/a/rapidly-spinning-black-holes-put-new-limit-on-ultralight-bosons/
Space & Physics
svg