diff --git "a/raw_rss_feeds/https___physicsworld_com_feed_.xml" "b/raw_rss_feeds/https___physicsworld_com_feed_.xml"
--- "a/raw_rss_feeds/https___physicsworld_com_feed_.xml"
+++ "b/raw_rss_feeds/https___physicsworld_com_feed_.xml"
@@ -14,9 +14,9 @@ xmlns:rawvoice="https://blubrry.com/developer/rawvoice-rss/"
The post The pros and cons of patenting appeared first on Physics World.
+]]> +But there are more reasons for holding a patent than IP protection alone. In particular, patents go some way to protecting the investment that may have been necessary to generate the IP in the first place, such as the cost of R&D facilities, materials, labour and expertise. Those factors need to be considered when you’re deciding whether patenting is the right approach.
+
Patents are in effect a form of currency. Counting as tangible assets that add to the overall value of a company, they can be sold to other businesses or licensed for royalties to provide regular income. Some companies, in fact, build up or acquire significant patent portfolios, which can be used for bargaining with competitors, potentially leading to cross-licensing agreements where both parties agree to use each other’s technology.
+Patents also say something about the competitive edge of a company, by demonstrating technical expertise and market position through the control of a specific technology. Essentially, patents give credibility to a company’s claims of its technical know-how: a patent shows investors that a firm has a unique, protected asset, making the business more appealing and attractive to further investment.
+ +However, it’s not all one-way traffic and there are obligations on the part of the patentee. Firstly, a patent holder has to reveal to the world exactly how their invention works. Governments favour this kind of public disclosure as it encourages broader participation in innovation. The downside is that whilst your competitors cannot directly copy you, they can enhance and improve upon your invention, provided those changes aren’t covered by the original patent.
+It’s also worth bearing in mind that a patent holder is responsible for patent enforcement and any ensuing litigation; a patent office will not do this for you. So you’ll have to monitor what your competitors are up to and decide on what course of action to take if you suspect your patent’s been infringed. Trouble is, it can sometimes be hard to prove or disprove an infringement – and getting the lawyers in can be expensive, even if you win.
+Probably the biggest consideration of all is the cost and time involved in making a patent application. Filing a patent requires a rigorous understanding of “prior art” – the existing body of relevant knowledge on which novelty is judged. You’ll therefore need to do a lot of work finding out about relevant established patents, any published research and journal articles, along with products or processes publicly disclosed before the patent’s filing date.
+Before it can be filed with a patent office, a patent needs to be written as a legal description, which includes an abstract, background, detailed specifications, drawings and the claims of the invention. Once filed, an expert in the relevant technical field will be assigned to assess the worth of the claim; this examiner must be satisfied that the application is both unique and “non-obvious” before it’s granted.
+ +Even when the invention is judged to be technically novel, in order to be non-obvious, it must also involve an “inventive step” that would not be obvious to a person with “ordinary skill” in that technical field at the time of filing. The assessment phase can result in significant to-ing and fro-ing between the examiner and the applicant to determine exactly what is patentable. If insufficient evidence is found, the patent application will be refused.
+Patents are only ever granted in a particular country or region, such as Europe, and the application process has to be repeated for each new place (although the information required is usually pretty similar). Translations may be required for some countries, there are fees for each application and, even if a patent is granted, you have to pay an additional annual bill to maintain the patent (which in the UK rises year on year).
+
Patent applications, in other words, can be expensive and can take years to process. That’s why many companies pay specialized firms to support their patent applications. Those firms employ patent attorneys – legal experts with a technical background who help inventors and companies manage their IP rights by drafting patent applications, navigating patent office procedures and advising on IP strategy. Attorneys can also represent their clients in disputes or licensing deals, thereby acting as a crucial bridge between science/engineering and law.
+It’s impossible to write about patents without mentioning the impact that Thomas Edison had as an inventor. During the 20th century, he became the world’s most prolific inventor with a staggering 1093 US patents granted in his lifetime. This monumental achievement remained unsurpassed until 2003, when it was overtaken by the Japanese inventor Shunpei Yamazaki and, more recently, by the Australian “patent titan” Kia Silverbrook in 2008.
+Edison clearly saw there was a lot of value in patents, but how did he achieve so much? His approach was grounded in systematic problem solving, which he accomplished through his Menlo Park lab in New Jersey. Dedicated to technological development and invention, it was effectively the world’s first corporate R&D lab. And whilst Edison’s name appeared on all the patents, they were often primarily the work of his staff; he was effectively being credited for inventions made by his employees.
+
I will be honest; I have a love-hate relationship with patents or at least the process of obtaining them. As a scientist or engineer, it’s easy to think all the hard work is getting an invention over the line, slogging your guts out in the lab. But applying for a patent can be just as expensive and time-consuming, which is why you need to be clear on what and when to patent. Even Edison grew tired of being hailed a genius, stating that his success was “1% inspiration and 99% perspiration”.
+Still, without the sweat of patents, your success might be all but 99% aspiration.
+The post The pros and cons of patenting appeared first on Physics World.
+]]>The post Practical impurity analysis for biogas producers appeared first on Physics World.
+]]>Strict rules apply to the amount of impurities allowed in biogas and biomethane, as these contaminants can damage engines, turbines, and catalysts during upgrading or combustion. EN 16723 is the European standard that sets maximum allowable levels of siloxanes and sulfur‑containing compounds for biomethane injected into the natural gas grid or used as vehicle fuel. These limits are extremely low, meaning highly sensitive analytical techniques are required. However, most biogas plants do not have the advanced equipment needed to measure these impurities accurately.
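To make the compliance requirement concrete, here is a minimal check in Python. This is an illustrative sketch only: the limit values are placeholders of our own, since the actual EN 16723 thresholds differ between grid injection and vehicle-fuel use and are not quoted here.

```python
# Minimal impurity compliance check (illustrative sketch).
# NOTE: the limits below are PLACEHOLDER values, not the real
# EN 16723 thresholds, which depend on the application.

LIMITS_MG_M3 = {
    "silicon_from_siloxanes": 0.3,   # placeholder, mg/m^3
    "total_sulfur": 20.0,            # placeholder, mg/m^3
}

def check_compliance(measured_mg_m3):
    """Return pass/fail per impurity for measured values in mg/m^3."""
    return {name: measured_mg_m3[name] <= limit
            for name, limit in LIMITS_MG_M3.items()}

print(check_compliance({"silicon_from_siloxanes": 0.05, "total_sulfur": 12.0}))
# -> {'silicon_from_siloxanes': True, 'total_sulfur': True}
```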
+
The researchers developed a new, simpler method to sample and analyse biogas using GC‑ICP‑MS. Gas chromatography (GC) separates chemical compounds in a gas mixture based on how quickly they travel through a column. Inductively coupled plasma mass spectrometry (ICP‑MS) then detects the elements within those compounds at very low concentrations. Crucially, this combined method can measure both siloxanes and sulfur compounds simultaneously. It avoids matrix effects that can limit other detectors and cause biased or ambiguous results. It also achieves the very low detection limits required by EN 16723.
+The sampling approach and centralized measurement enable biogas plants to meet regulatory standards using an efficient, less complex, and more cost‑effective method with fewer errors. Overall, this research provides a practical, high‑accuracy tool that makes reliable biogas impurity monitoring accessible to plants of all sizes, strengthening biomethane quality, protecting infrastructure, and accelerating the transition to cleaner energy systems.
+Ayush Agarwal et al 2026 Prog. Energy 8 015001
+
Household biogas technology in the cold climate of low-income countries: a review of sustainable technologies for accelerating biogas generation Sunil Prasad Lohani et al. (2024)
+The post Practical impurity analysis for biogas producers appeared first on Physics World.
+]]>The post Cavity-based X-ray laser delivers high-quality pulses appeared first on Physics World.
+]]>In recent decades, XFELs have delivered pulses of monochromatic and coherent X-rays for a wide range of science including physics, chemistry, biology and materials science.
+ +Despite their name, XFELs do not work like conventional lasers. In particular, there is no gain medium or resonator cavity. Instead, XFELs rely on the fact that when a free electron is accelerated, it will emit electromagnetic radiation. In an XFEL, pulses of high-energy electrons are sent through an undulator, which deflects the electrons back and forth. These wiggling electrons radiate X-rays at a specific energy. As the X-rays and electrons travel along the undulator, they interact in such a way that the emitted X-ray pulse has a high degree of coherence.
+While these XFELs have proven very useful, they do not deliver radiation that is as monochromatic or as coherent as radiation from conventional lasers. One reason why conventional lasers perform better is that the radiation is reflected back and forth many times in a mirrored cavity that is tuned to resonate at a specific frequency – whereas XFEL radiation only makes one pass through an undulator.
+Practical X-ray cavities, however, are difficult to create. This is because X-rays penetrate deep into materials, where they are usually absorbed – making reflection with conventional mirrors impossible.
+Now, researchers working at the European XFEL at DESY in Germany have created a proof-of-concept hybrid system that places an undulator within a mirrored resonator cavity. X-ray pulses that are created in the undulator are directed at a downstream mirror and reflected back to a mirror upstream of the undulator. The X-ray pulses are then reflected back downstream through the undulator. Crucially, a returning X-ray pulse overlaps with a subsequent electron pulse in the undulator, amplifying the X-ray pulse. As a result, the X-ray pulses circulating within the cavity quickly become more monochromatic and more coherent than pulses created by an undulator alone.
+The team solved the mirror challenge by using diamond crystals that achieve Bragg reflection of X-rays at a specific frequency. These are used at either end of the cavity in conjunction with Kirkpatrick–Baez mirrors, which help focus the reflected X-rays back into the cavity.
+Some of the X-ray radiation circulating in the cavity is allowed to escape downstream, providing a beam of monochromatic and coherent X-ray pulses. The team calls its system an X-ray free-electron laser oscillator (XFELO). The cavity is about 66 m long.
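The cavity length sets a strict timing requirement: a circulating pulse must arrive back at the undulator just as a later electron bunch does. A quick consistency check of the quoted figures – our arithmetic, assuming the 66 m is the one-way cavity length and using the European XFEL’s 4.5 MHz intra-train bunch rate – shows how the numbers line up:

```python
# Back-of-the-envelope timing check (our numbers, not the paper's).
c = 2.998e8                       # speed of light, m/s
cavity_length = 66.0              # one-way cavity length, m (as quoted)
round_trip = 2 * cavity_length / c            # ~440 ns per round trip

bunch_rate = 4.5e6                # European XFEL intra-train bunch rate, Hz
spacing = 1 / bunch_rate                      # ~222 ns between bunches

print(f"round trip {round_trip*1e9:.0f} ns, bunch spacing {spacing*1e9:.0f} ns")
# 440 ns is twice 222 ns: the returning X-ray pulse can overlap
# with every second electron bunch in the undulator.
```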
+DESY accelerator scientist Patrick Rauer explains, “With every round trip, the noise in the X-ray pulse gets less and the concentrated light more defined”. Rauer pioneered the design of the cavity in his PhD work and is now the DESY lead on its implementation. “It gets more stable and you start to see this single, clear frequency – this spike.” Indeed, the frequency width of XFELO X-ray pulses is about 1% that of pulses created by the undulators alone.
+Ensuring the overlap of electron and X-ray pulses within the cavity was also a significant challenge. This required a high degree of stability within the accelerator that provides electron pulses to the XFELO. “It took years to bring the accelerator to that state, which is now unique in the world of high-repetition-rate accelerators”, explains Rauer.
+Team member Harald Sinn says, “The successful demonstration shows that the resonator principle is practical to implement”. Sinn is head of European XFEL’s instrumentation department and he adds, “In comparison with methods used up to now, it delivers X-ray pulses with a very narrow wavelength as well as a much higher stability and coherence.”
+The team will now work towards improving the stability of XFELO so that in the future it can be used to do experiments by European XFEL’s research community.
+XFELO is described in Nature.
+The post Cavity-based X-ray laser delivers high-quality pulses appeared first on Physics World.
+]]>The post The physics of an unethical daycare model that uses illness to maximize profits appeared first on Physics World.
+]]>That same dilemma faced mathematician Lauren Smith from the University of Auckland. She has two children at a “wonderful daycare centre” who often fall ill. As many parents juggling work and parenting will understand, Smith is frequently faced with the issue of whether her kids are well enough to attend daycare.
+ +Smith then thought about how an unethical daycare centre might take advantage of this to maximize its profits – under the assumption that if there are not enough children attending (who still pay) then staff get sent home without pay, and also don’t get sick pay themselves.
+“It occurred to me that a sick kid attending daycare could actually be financially beneficial to the centre, while clearly being a detriment to the wellbeing of the other children as well as the staff and the broader community,” Smith told Physics World.
+For a hypothetical daycare centre that is solely focused on making as much money as possible, Smith realized that full attendance of sick children is not optimal financially as this requires maximal staffing at all times, whereas zero attendance of sick children does not give an opportunity for the disease to spread such that other children are then sent home.
+But in between these two extremes, Smith thought there should be an optimal attendance rate so that the disease is still able to spread and some children – and staff – are sent home. “As a mathematician I knew I had the tools to find it,” adds Smith.
+Using the so-called Susceptible-Infected-Recovered model for 100 children, a teacher-to-child ratio of 1:6 and a recovery time from illness of 10 days, Smith found that the more infectious the disease, the lower the optimal attendance rate for sick children is, and so the more savings the unethical daycare centre can make.
+In other words, the more infectious a disease, the fewer ill children are required to attend to spread it around – so the centre can keep more of them, and importantly staff, at home while still making sure the disease spreads to non-infected kids.
+For a measles outbreak with a basic reproduction number of 12–18, for example, the model resulted in a potential staff saving of 90 working days, whereas for seasonal flu with a basic reproduction number of 1.2 to 1.3, the potential saving is 4.4 days.
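To make this concrete, here is a minimal toy implementation of the idea in Python – our own sketch with an assumed transmission rate, not Smith’s code:

```python
import numpy as np

# Toy SIR model: 100 children, 10-day recovery, 1 teacher per 6 children.
# Infected children attend with probability `attendance`; children kept
# home still pay fees but need no staffing, so absent children translate
# into teacher wages saved.
N, BETA, GAMMA, DAYS = 100, 0.5, 1 / 10, 120   # BETA is an assumed value

def staff_days_saved(attendance):
    S, I, R = N - 1.0, 1.0, 0.0
    saved = 0.0
    for _ in range(DAYS):
        present_sick = attendance * I
        present = S + R + present_sick           # healthy children all attend
        saved += np.ceil(N / 6) - np.ceil(present / 6)
        new_inf = BETA * S * present_sick / max(present, 1e-9)
        S, I, R = S - new_inf, I + new_inf - GAMMA * I, R + GAMMA * I
    return saved

# Neither extreme pays: at 0 the disease never spreads, at 1 everyone
# attends and full staffing is always needed - the optimum lies between.
rates = np.linspace(0, 1, 21)
best = max(rates, key=staff_days_saved)
print(f"optimal sick-child attendance ~ {best:.2f}, "
      f"saving ~ {staff_days_saved(best):.0f} staff-days")
```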
+Smith writes in the paper that the work is “not intended as a recipe for an unethical daycare centre” but rather illustrates the financial incentive that exists for daycare centres to propagate diseases among children, which would lead to more infections of at-risk populations in the wider community.
+“I hope that as well as being an interesting topic, it can show that mathematics itself is interesting and is useful for describing the real world,” adds Smith.
+The post The physics of an unethical daycare model that uses illness to maximize profits appeared first on Physics World.
+]]>The post Saving the <em>Titanic</em>: the science of icebergs and unsinkable ships appeared first on Physics World.
+]]>These are the premises of two separate papers published independently this week by Chunlei Guo and colleagues at the University of Rochester, and by Daisuke Noto and Hugo N Ulloa of the University of Pennsylvania, both in the US. The Rochester group’s paper, which appears in Advanced Functional Materials, describes how applying a superhydrophobic coating to an open-ended metallic tube can make it literally unsinkable – a claim supported by extensive tests in a water tank. Noto and Ulloa’s research, which they describe in Science Advances, likewise involved a water tank. Theirs, however, was equipped with cameras, lasers and thermochromic liquid crystals that enabled them to track a freely floating miniature iceberg as it melted.
+Each study is surprising in its own way. For the iceberg paper, arguably the biggest surprise is that no-one had ever done such experiments before. After all, water and ice are readily available. Fancy tanks, lasers, cameras and temperature-sensitive crystals are less so, yet surely someone, somewhere, must have stuck some ice in a tank and monitored what happened to it?
+ +Noto and Ulloa’s answer is, in effect, no. “Despite the relevance of melting of floating ice in calm and energetic environments…most experimental and numerical efforts to examine this process, even to date, have either fixed or tightly constrained the position and posture of ice,” they write. “Consequently, the relationships between ice dissolution rate and background fluid flow conditions inferred from these studies are meaningful only when a one-way interaction, from the liquid to the solid phase, dominates the melting dynamics.”
+The problem, they continue, is that eliminating these approximations “introduces a significant technical challenge for both laboratory experiments and numerical simulations” thanks to a slew of interactions that would otherwise get swept under the rug. These interactions, in turn, lead to complex dynamics such as drifting, spinning and even flipping that must be incorporated into the model. Consequently, they write, “fundamental questions persist: ‘How long does an ice body last?’”
+To answer this question, Noto and Ulloa used their water-tank observations (see video) to develop a model that incorporates the thermodynamics of ice melting and mass balance conservation. Based on this model, they correctly predict both the melting rate and the lifespan of freely floating ice under self-driven convective flows that arise from interactions between the ice and the calm, fresh water surrounding it. Though the behaviour of ice in tempestuous salty seas is, they write, “beyond our scope”, their model nevertheless provides a useful upper bound on iceberg longevity, with applications for climate modelling as well as (presumably) shipping forecasts for otherwise-doomed ocean liners.
+In the unsinkable tube study, the big surprise is that a metal tube, divided in the middle but open at both ends, can continue to float after being submerged, corroded with salt, tossed about on a turbulent sea and peppered with holes. How is that even possible?
+“The inside of the tube is superhydrophobic, so water can’t enter and wet the walls,” Guo explains. “As a result, air remains trapped inside, providing buoyancy.”
+Importantly, this buoyancy persists even if the tube is damaged. “When the tube is punctured, you can think of it as becoming two, three, or more smaller sections,” Guo tells Physics World. “Each section will work in the same way of preventing water from entering inside, so no matter how many holes you punch into it, the tube will remain afloat.”
+ +So, is there anything that could make these superhydrophobic structures sink? “I can’t think of any realistic real-world challenges more severe than what we have put them through experimentally,” he says.
+We aren’t in unsinkable ship territory yet: the largest structure in the Rochester study was a decidedly un-Titanic-like raft a few centimetres across. But Guo doesn’t discount the possibility. He points out that the tubes are made from ordinary aluminium, with a simple fabrication process. “If suitable applications call for it, I believe [human-scale versions] could become a reality within a decade,” he concludes.
+The post Saving the <em>Titanic</em>: the science of icebergs and unsinkable ships appeared first on Physics World.
+]]>The post Scientists quantify behaviour of micro- and nanoplastics in city environments appeared first on Physics World.
+]]>
Plastic has become a global pollutant concern over the last couple of decades: it is widespread in society, not often disposed of effectively, and generates both microplastics (1 µm to 5 mm in size) and nanoplastics (smaller than 1 µm) that have infiltrated many ecosystems – including being found inside humans and animals.
+Over time, bulk plastics break down into micro- and nanoplastics through fragmentation mechanisms that create much smaller particles with a range of shapes and sizes. Their small size has become a problem: the particles are increasingly finding their way into waterways, cities and other urban environments, and are now even being transported to remote polar and high-altitude regions.
+This poses potential health risks around the world. While the behaviour of micro- and nanoplastics in the atmosphere is poorly understood, it’s thought that they are transported by transcontinental and transoceanic winds, spreading plastic through the global carbon cycle.
+However, the lack of data on the emission, distribution and deposition of atmospheric micro- and nanoplastic particles makes it difficult to definitively say how they are transported around the world. It is also challenging to quantify their behaviour, because plastic particles can have a range of densities, sizes and shapes that undergo physical changes in clouds, all of which affect how they travel.
+ +A global team of researchers has developed a new semi-automated microanalytical method that can quantify atmospheric plastic particles present in air dustfall, rain, snow and dust resuspension. The research was performed across two Chinese megacities, Guangzhou and Xi’an.
+“As atmospheric scientists, we noticed that microplastics in the atmosphere have been the least reported among all environmental compartments in the Earth system due to limitations in detection methods, because atmospheric particles are smaller and more complex to analyse,” explains Yu Huang, from the Institute of Earth Environment of the Chinese Academy of Sciences (IEECAS) and one of the paper’s lead authors. “We therefore set out to develop a reliable detection technique to determine whether microplastics are present in the atmosphere, and if so, in what quantities.”
+For this new approach, the researchers employed a computer-controlled scanning electron microscopy (CCSEM) system equipped with energy-dispersive X-ray spectroscopy to reduce human bias in the measurements (which is an issue in manual inspections). They located and measured individual micro- and nanoplastic particles – enabling their concentration and physicochemical characteristics to be determined – in aerosols, dry and wet depositions, and resuspended road dust.
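The classification logic in such a pipeline can be pictured with a simple rule on the elemental composition from the X-ray spectra. The sketch below is purely illustrative – the thresholds and element list are ours, not the authors’:

```python
# Flag a particle as a plastic candidate when its energy-dispersive
# X-ray (EDX) spectrum is carbon-dominated with little mineral content.
# Thresholds are ILLUSTRATIVE, not those used in the study.

def is_plastic_candidate(edx_atomic_pct):
    """edx_atomic_pct: element symbol -> atomic %, e.g. {'C': 85, 'O': 12}."""
    carbon = edx_atomic_pct.get("C", 0.0)
    mineral = sum(edx_atomic_pct.get(el, 0.0)
                  for el in ("Si", "Al", "Fe", "Ca", "Mg"))
    return carbon > 70 and mineral < 5

print(is_plastic_candidate({"C": 85, "O": 12, "Si": 1}))    # True
print(is_plastic_candidate({"C": 20, "O": 45, "Si": 25}))   # False: mineral dust
```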
+“We believe the key contribution of this work lies in the development of a semi‑automated method that identifies the atmosphere as a significant reservoir of microplastics. By avoiding the human bias inherent in visual inspection, our approach provides robust quantitative data,” says Huang. “Importantly, we found that these microplastics often coexist with other atmospheric particles, such as mineral dust and soot – a mixing state that could enhance their potential impacts on climate and the environment.”
+The method could detect and quantify plastic particles as small as 200 nm, and revealed airborne concentrations of 1.8 × 10⁵ microplastics/m³ and 4.2 × 10⁴ nanoplastics/m³ in Guangzhou, and 1.4 × 10⁵ microplastics/m³ and 3.0 × 10⁴ nanoplastics/m³ in Xi’an. These microplastic and nanoplastic fluxes are two to six orders of magnitude higher than those reported previously via visual methods.
+The team also found that the deposition samples were more heterogeneously mixed with other particle types (such as dust and other pollution particles) than aerosols and resuspension samples, which showed that particles tend to aggregate in the atmosphere before being removed during atmospheric transport.
+ +The study revealed transport insights that could be beneficial for investigating the climate, ecosystem and human health impacts of plastic particles at all levels. The researchers are now advancing their method in two key directions.
+“First, we are refining sampling and CCSEM‑based analytical strategies to detect mixed states between microplastics and biological or water‑soluble components, which remain invisible with current techniques. Understanding these interactions is essential for accurately assessing microplastics’ climate and health effects,” Huang tells Physics World. “Second, we are integrating CCSEM with Raman analysis to not only quantify abundance but also identify polymer types. This dual approach will generate vital evidence to support environmental policy decisions.”
+The research was published in Science Advances.
+The post Scientists quantify behaviour of micro- and nanoplastics in city environments appeared first on Physics World.
+]]>The post Michele Dougherty steps aside as president of the Institute of Physics appeared first on Physics World.
+]]>Dougherty, who is based at Imperial College London, spent two years as IOP president-elect from October 2023 before becoming president in October 2025. Dougherty was appointed executive chair of the STFC in January 2025 and in July that year was also announced as the next Astronomer Royal – the first woman to hold the position.
+The changes at the IOP come in the wake of UK Research and Innovation (UKRI) stating last month that it will be adjusting how it allocates government funding for scientific research and infrastructure. Spending on curiosity-driven research will remain flat from 2026 to 2030, with UKRI prioritising funding in three key areas or “buckets”.
+ +The three buckets are: curiosity-driven research, which will be the largest; strategic government and societal priorities; and supporting innovative companies. There will also be a fourth “cross-cutting” bucket with funding for infrastructure, facilities and talent. In the four years to 2030, UKRI’s budget will be £38.6bn.
+While the detailed implications of the funding changes are still to be worked out, the IOP says its “top priority” is understanding and responding to them. With the STFC being one of nine research councils within UKRI, Dougherty is stepping aside as IOP president to ensure the IOP can play what it says is “a leadership role in advocating for physics without any conflict of interest”.
+In her role as STFC executive chair, Dougherty yesterday wrote to the UK’s particle physics, astronomy and nuclear physics community, asking researchers to identify by March how their projects would respond to flat cash as well as reductions of 20%, 40% and 60% – and to “identify the funding point at which the project becomes non-viable”. The letter says that a “similar process” will happen for facilities and labs.
+In her letter, Dougherty says that the UK’s science minister Lord Vallance and UKRI chief executive Ian Chapman want to protect curiosity-driven research, which they say is vital, and grow it “as the economy allows”. However, she adds, “the STFC will need to focus our efforts on a more concentrated set of priorities, funded at a level that can be maintained over time”.
+Tom Grinyer, chief executive officer of the IOP, says that the IOP is “fully focused on ensuring physics is heard clearly as these serious decisions are shaped”. He says the IOP is “gathering insight from across the physics community and engaging closely with government, UKRI and the research councils so that we can represent the sector with authority and evidence”.
+Grinyer warns, however, that UKRI’s shift in funding priorities and the subsequent STFC funding cuts will have “severe consequences” for physics. “The promised investment in quantum, AI, semiconductors and green technologies is welcome but these strengths depend on a stable research ecosystem,” he says.
+“I want to thank Michele for her leadership, and we look forward to working constructively with her in her capacity at STFC as this important period for physics unfolds,” adds Grinyer.
+The nuclear physicist Paul Howarth, who has been IOP president-elect since September, will now take on Dougherty’s responsibilities – as prescribed by the IOP’s charter – with immediate effect, with the IOP Council discussing its next steps at its February 2026 meeting.
+With a PhD in nuclear physics, Howarth has had a long career in the nuclear sector working on the European Fusion Programme and at British Nuclear Fuels, as well as co-founding the Dalton Nuclear Institute at the University of Manchester.
+He was a non-executive board director of the National Physical Laboratory and until his retirement earlier this year was chief executive officer of the National Nuclear Laboratory.
+In response to the STFC letter, Howarth says that the projected cuts “are a devastating blow for the foundations of UK physics”.
+“Physics isn’t a luxury we can afford to throw away through confusion,” says Howarth. “We urge the government to rethink these cuts, listen to the physics community, and deliver a 10-year strategy to secure physics for the future.”
+The post Michele Dougherty steps aside as president of the Institute of Physics appeared first on Physics World.
+]]>The post AI-based tool improves the quality of radiation therapy plans for cancer treatment appeared first on Physics World.
+]]>As well as discussing the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres, they examine its evolution from an idea developed by an academic collaboration to a clinical product offered today by Sun Nuclear, a US manufacturer of radiation equipment and software.
+This podcast is sponsored by Sun Nuclear.
+The post AI-based tool improves the quality of radiation therapy plans for cancer treatment appeared first on Physics World.
+]]>The post The Future Circular Collider is unduly risky – CERN needs a ‘Plan B’ appeared first on Physics World.
+]]>As noted by historian John Krige in his opening keynote address, “CERN is a European laboratory with a global footprint. Yet for all its success it now faces a turning point.” During the period under examination at the symposium, CERN essentially achieved the “world laboratory” status that various leaders of particle physics had dreamt of for decades.
+By building the Large Electron Positron (LEP) collider and then the Large Hadron Collider (LHC), the latter with contributions from Canada, China, India, Japan, Russia, the US and other non-European nations, CERN has attracted researchers from six continents. And as the Cold War ended in 1989–1991, two prescient CERN staff members developed the World Wide Web, helping knit this sprawling international scientific community together and enable extensive global collaboration.
+The LHC was funded and built during a unique period of growing globalization and democratization that emerged in the wake of the Cold War’s end. After the US terminated the Superconducting Super Collider in 1993, CERN was the only game in town if one wanted to pursue particle physics at the multi-TeV energy frontier. And many particle physicists wanted to be involved in the search for the Higgs boson, which by the mid-1990s looked as if it should show up at accessible LHC energies.
+ +Having discovered this long-sought particle at the LHC in 2012, CERN is now contemplating an ambitious construction project, the Future Circular Collider (FCC). Over three times larger than the LHC, it would study this all-important, mass-generating boson in greater detail using an electron–positron collider dubbed FCC-ee, estimated to cost $18bn and start operations by 2050.
+Later in the century, the FCC-hh, a proton–proton collider, would go in the same tunnel to see what, if anything, may lie at much higher energies. That collider, the cost of which is currently educated guesswork, would not come online until the mid-2070s.
+But the steadily worsening geopolitics of a fragmenting world order could make funding and building these colliders dicey affairs. After Russia’s expulsion from CERN, little in the way of its contributions can be expected. Chinese physicists had hoped to build an equivalent collider, but those plans seem to have been put on the backburner for now.
+And the “America First” political stance of the current US administration is hardly conducive to the multibillion-dollar contribution likely required from what is today the world’s richest (albeit debt-laden) nation. The ongoing collapse of the rules-based world order was recently put into stark relief by the US invasion of Venezuela and abduction of its president Nicolás Maduro, followed by Donald Trump’s menacing rhetoric over Greenland.
+While these shocking events have immediate significance for international relations, they also suggest how difficult it may become to fund gargantuan international scientific projects such as the FCC. Under such circumstances, it is very difficult to imagine non-European nations being able to contribute a hoped-for third of the FCC’s total costs.
+But Europe’s ascendant populist right-wing parties are no great friends of physics either, nor of international scientific endeavours. And Europeans face the not-insignificant costs of military rearmament in the face of Russian aggression and likely US withdrawal from Europe.
+So the other two thirds of the FCC’s many billions in costs cannot be taken for granted – especially not during the decades needed to construct its 91 km tunnel, 350 GeV electron–positron collider, the subsequent 100 TeV proton collider, and the massive detectors both machines require.
+According to former CERN director-general Chris Llewellyn Smith in his symposium lecture, “The political history of the LHC“, just under 12% of the material project costs of the LHC eventually came from non-member nations. It therefore warps the imagination to believe that a third of the much greater costs of the FCC can come from non-member nations in the current “Wild West” geopolitical climate.
+But particle physics desperately needs a Higgs factory. After the 1983 Z boson discovery at the CERN SPS Collider, it took just six years before we had not one but two Z factories – LEP and the Stanford Linear Collider – which proved very productive machines. It’s now been more than 13 years since the Higgs boson discovery. Must we wait another 20 years?
+CERN therefore needs a more modest, realistic, productive new scientific facility – a “Plan B” – to cope with the geopolitical uncertainties of an imperfect, unpredictable world. And I was encouraged to learn that several possible ideas are under consideration, according to outgoing CERN director-general Fabiola Gianotti in her symposium lecture, “CERN today and tomorrow“.
+Three of these ideas reflect the European Strategy for Particle Physics, which states that “an electron–positron Higgs factory is the highest-priority next CERN collider”. Two linear electron–positron colliders would require just 11–34 km of tunnelling and could begin construction in the mid-2030s, but would involve a fair amount of technical risk and cost roughly €10bn.
+The least costly and risky option, dubbed LEP3, involves installing superconducting radio-frequency cavities in the existing LHC tunnel once the high-luminosity proton run ends. Essentially an upgrade of the 200 GeV LEP2, this approach is based on well-understood technologies and would cost less than €5bn but can reach at most 240 GeV. The linear colliders could attain over twice that energy, enabling research on Higgs-boson decays into top quarks and the triple-Higgs self-interaction.
+Other proposed projects involving the LHC tunnel can produce large numbers of Higgs bosons with relatively minor backgrounds, but they can hardly be called “Higgs factories”. One of these, dubbed the LHeC, could only produce a few thousand Higgs bosons annually and would allow other important research on proton structure functions. Another idea is the proposed Gamma Factory, in which laser beams would be backscattered from LHC beams of partially stripped ions. If sufficient photon energies and intensity can be achieved, it will allow research on the γγ → H interaction. These alternatives would cost at most a few billion euros.
+As Krige stressed in his keynote address, CERN was meant to be more than a scientific laboratory at which European physicists could compete with their US and Soviet counterparts. As many of its founders intended, he said, it was “a cultural weapon against all forms of bigoted nationalism and anti-science populism that defied Enlightenment values of critical reasoning”. The same logic holds true today.
+In planning the next phase in CERN’s estimable history, it is crucial to preserve this cultural vitality, while of course providing unparalleled opportunities to do world-class science – lacking which, the best scientists will turn elsewhere.
+I therefore urge CERN planners to be daring but cognizant of financial and political reality in the fracturing world order. Don’t for a nanosecond assume that the future will be a smooth extrapolation from the past. Be fairly certain that whatever new facility you decide to build, there is a solid financial pathway to achieving it in a reasonable time frame.
+The future of CERN – and the bracing spirit of CERN – rests in your hands.
+The post The Future Circular Collider is unduly risky – CERN needs a ‘Plan B’ appeared first on Physics World.
+]]>The post Ion-clock transition could benefit quantum computing and nuclear physics appeared first on Physics World.
+]]>
An atomic transition in ytterbium-173 could be used to create an optical multi-ion clock that is both precise and stable. That is the conclusion of researchers in Germany and Thailand who have characterized a clock transition that is enhanced by the non-spherical shape of the ytterbium-173 nucleus. As well as applications in timekeeping, the transition could be used in quantum computing. Furthermore, the interplay between atomic and nuclear effects in the transition could provide insights into the physics of deformed nuclei.
+ +The ticking of an atomic clock is defined by the frequency of the electromagnetic radiation that is absorbed and emitted by a specific transition between atomic energy levels. These clocks play crucial roles in technologies that require precision timing – such as global navigation satellite systems and communications networks. Currently, the international definition of the second is given by the frequency of caesium-based clocks, which deliver microwave time signals.
+Today’s best clocks, however, work at higher optical frequencies and are therefore much more precise than microwave clocks. Indeed, at some point in the future metrologists will redefine the second in terms of an optical transition – but the international metrology community has yet to decide which transition will be used.
+Broadly speaking, there are two types of optical clock. One uses an ensemble of atoms that are trapped and cooled to ultralow temperatures using lasers; the other involves a single atomic ion (or a few ions) held in an electromagnetic trap. Clocks that use one ion are extremely precise but lack stability, whereas clocks that use many atoms are very stable but sacrifice precision.
+As a result, some physicists are developing clocks that use multiple ions with the aim of creating a clock that optimizes precision and stability.
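The motivation can be summarized with the standard quantum-projection-noise limit on a clock’s fractional frequency instability – a textbook expression, not one specific to this work:

```latex
% Projection-noise-limited Allan deviation for N ions interrogated
% for a Ramsey time T_R, with cycle time T_c, averaging time tau and
% clock frequency nu_0:
\sigma_y(\tau) \simeq \frac{1}{2\pi\nu_0 T_R}\,\sqrt{\frac{T_c}{N\,\tau}} .
% Stability improves as 1/sqrt(N), so trapping many ions at once can
% combine single-ion precision with ensemble-like stability.
```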
+Now, researchers at PTB and NIMT (the national metrology institutes of Germany and Thailand respectively) have characterized a clock transition in ions of ytterbium-173, and have shown that the transition could be used to create a multi-ion clock.
+“This isotope has a particularly interesting transition,” explains PTB’s Tanja Mehlstäubler – who is a pioneer in the development of multi-ion clocks.
+The ytterbium-173 nucleus is highly deformed with a shape that resembles a rugby ball. This deformation affects the electronic properties of the ion, which should make it much easier to use a laser to excite a specific transition that would be very useful for creating a multi-ion clock.
+This clock transition can also be excited in ytterbium-171 and has already been used to create a single-ion clock. However, excitation in a ytterbium-171 clock requires an intense laser pulse, which creates a strong electric field that shifts the clock frequency (called the AC Stark effect). This is a particular problem for multi-ion clocks because the intensity of the laser (and hence the clock frequency) can vary across the region in which the ions are trapped.
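The problem scales in a simple way, as the standard light-shift expression shows (again a textbook form, not taken from the paper):

```latex
% AC Stark (light) shift of a level with dynamic polarizability
% alpha(omega) in a laser field of intensity I:
\Delta E = -\tfrac{1}{2}\,\alpha(\omega)\,\langle E^{2}\rangle_{t}
         = -\frac{\alpha(\omega)}{2\varepsilon_{0}c}\,I ,
\qquad
\Delta\nu_{\mathrm{clock}} = \frac{\Delta E_{e}-\Delta E_{g}}{h}\;\propto\; I .
% The shift is linear in intensity, so any intensity variation across
% a multi-ion crystal maps directly onto a frequency spread between
% the ions - unless, as with ytterbium-173, a much weaker drive works.
```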
+To show that a much lower laser intensity can be used to excite the clock transition in ytterbium-173, the team studied a “Coulomb crystal” in which three ions were trapped in a line, separated by about 10 µm. They illuminated the ions with laser light that was not uniform in intensity across the crystal. They were able to excite the transition at a relatively low laser intensity, which resulted in very small AC Stark shifts between the frequencies of the three ions.
+According to the team, this means that as many as 100 trapped ytterbium-173 ions could be used to create a clock that serves as a time standard, helps redefine the second, and makes very precise measurements of the Earth’s gravitational field.
+As well as being useful for creating an optical ion clock, this multi-ion capability could also be exploited to create quantum-computing architectures based on multiple trapped ions. And because the observed effect is a result of the shape of the ytterbium-173 nucleus, further studies could provide insights into nuclear physics.
+The research is described in Physical Review Letters.
+
The post Ion-clock transition could benefit quantum computing and nuclear physics appeared first on Physics World.
+]]>The post The power of a poster appeared first on Physics World.
+]]>For years, I pestered my university to erect a notice board outside my office so that I could showcase my group’s recent research posters. Each time, for reasons of cost, my request was unsuccessful. At the same time, I would see similar boards placed outside the offices of more senior and better-funded researchers in my university. I voiced my frustrations to a mentor whose advice was, “It’s better to seek forgiveness than permission.” So, since I couldn’t afford to buy a notice board, I simply used drawing pins to mount some unauthorized posters on the wall beside my office door.
+ +Some weeks later, I rounded the corner to my office corridor to find the head porter standing with a group of visitors gathered around my posters. He was telling them all about my research using solar energy to disinfect contaminated drinking water in disadvantaged communities in Sub-Saharan Africa. Unintentionally, my illegal posters had been subsumed into the head porter’s official tour that he frequently gave to visitors.
+The group moved on but one man stayed behind, examining the poster very closely. I asked him if he had any questions. “No, thanks,” he said, “I’m not actually with the tour, I’m just waiting to visit someone further up the corridor and they’re not ready for me yet. Your research in Africa is very interesting.” We chatted for a while about the challenges of working in resource-poor environments. He seemed quite knowledgeable on the topic but soon left for his meeting.
+A few days later while clearing my e-mail junk folder I spotted an e-mail from an Asian “philanthropist” offering me €20,000 towards my research. To collect the money, all I had to do was send him my bank account details. I paused for a moment to admire the novelty and elegance of this new e-mail scam before deleting it. Two days later I received a second e-mail from the same source asking why I hadn’t responded to their first generous offer. While admiring their persistence, I resisted the urge to respond by asking them to stop wasting their time and mine, and instead just deleted it.
+So, you can imagine my surprise when the following Monday morning I received a phone call from the university deputy vice-chancellor inviting me to pop up for a quick chat. On arrival, he wasted no time before asking why I had been so foolish as to ignore repeated offers of research funding from one of the college’s most generous benefactors. And that is how I learned that those e-mails from the Asian philanthropist weren’t bogus.
+The gentleman that I’d chatted with outside my office was indeed a wealthy philanthropic funder who had been visiting our university. Having retrieved the e-mails from my deleted items folder, I re-engaged with him and subsequently received €20,000 to install 10,000-litre harvested-rainwater tanks in as many primary schools in rural Uganda as the money would stretch to.
+
About six months later, I presented the benefactor with a full report accounting for the funding expenditure, replete with photos of harvested-rainwater tanks installed in 10 primary schools, with their very happy new owners standing in the foreground. Since you miss 100% of the chances you don’t take, I decided I should push my luck and added a “wish list” of other research items that the philanthropist might consider funding.
+The list started small and grew steadily more ambitious. I asked for funds for more tanks in other schools, a travel bursary, PhD registration fees, student stipends and so on. All told, the list came to a total of several hundred thousand euros, but I emphasized that they had been very generous so I would be delighted to receive funding for any one of the listed items and, even if nothing was funded, I was still very grateful for everything he had already done. The following week my generous patron deposited a six-figure-euro sum into my university research account with instructions that it be used as I saw fit for my research purposes, “under the supervision of your university finance office”.
+In my career I have co-ordinated several large-budget, multi-partner, interdisciplinary, international research projects. In each case, that money was hard-earned, needing at least six months and many sleepless nights to prepare the grant submission. It still amuses me that I garnered such a large sum on the back of one research poster, one 10-minute chat and fewer than six e-mails.
+So, if you have learned nothing else from this story, please don’t underestimate the power of a strategically placed and impactful poster describing your research. You never know with whom it may resonate and down which road it might lead you.
+The post The power of a poster appeared first on Physics World.
+]]>The post ATLAS narrows the hunt for dark matter appeared first on Physics World.
+]]>Using 51.8 fb⁻¹ of proton–proton collision data at 13.6 TeV collected in 2022–2023, the ATLAS team looked for events containing two such emerging jets. They explored two possible production mechanisms: a vector mediator (Z′) produced in the s‑channel and a scalar mediator (Φ) exchanged in the t‑channel. The analysis combined two complementary strategies. A cut-based strategy, relying on high-level jet observables such as track-, vertex- and jet-substructure-based selections, enables straightforward reinterpretation for alternative theoretical models. A machine-learning approach employs a per-jet tagger, built on a transformer architecture trained on low-level tracking variables, to discriminate emerging jets from Standard Model jets, maximizing sensitivity for the specific models studied.
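For flavour, a cut-based emerging-jet selection might look like the following sketch; the variable names and thresholds here are illustrative choices of ours, not the values used by ATLAS:

```python
from dataclasses import dataclass

@dataclass
class Jet:
    pt: float                      # transverse momentum, GeV
    prompt_track_frac: float       # fraction of jet pT from prompt tracks
    median_track_d0: float         # median track impact parameter, mm

def is_emerging(jet):
    # Emerging jets contain few prompt tracks and many displaced ones,
    # because dark hadrons decay back to visible particles only after
    # travelling a macroscopic distance. Thresholds are ILLUSTRATIVE.
    return (jet.pt > 200
            and jet.prompt_track_frac < 0.1
            and jet.median_track_d0 > 1.0)

def select_event(jets):
    # the search looks for events with two emerging-jet candidates
    return sum(is_emerging(j) for j in jets) >= 2

jets = [Jet(450, 0.02, 2.5), Jet(300, 0.05, 1.8)]
print(select_event(jets))   # True: two candidates pass
```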
+No emerging‑jet signal excess was found, but the search set the first direct limits on emerging‑jet production via a Z′ mediator and the first constraints on t‑channel Φ production. Depending on the model assumptions, Z′ masses up to around 2.5 TeV and Φ masses up to about 1.35 TeV are excluded. These results significantly narrow the space in which dark sector particles could exist and form part of a broader ATLAS programme to probe dark quantum chromodynamics. The work sharpens future searches for dark matter and advances our understanding of how a dark sector might behave.
+Search for emerging jets in pp collisions at √s = 13.6 TeV with the ATLAS experiment
+The ATLAS Collaboration 2025 Rep. Prog. Phys. 88 097801
+
Dark matter and dark energy interactions: theoretical challenges, cosmological implications and observational signatures by B Wang, E Abdalla, F Atrio-Barandela and D Pavón (2016)
+The post ATLAS narrows the hunt for dark matter appeared first on Physics World.
+]]>The post How do bacteria produce entropy? appeared first on Physics World.
+This type of matter is commonly found in biology: swimming bacteria and migrating cells are both classic examples. In addition, a wide range of synthetic systems, such as active colloids or robotic swarms, also fall under this umbrella.
+Active matter has therefore been the focus of much research over the past decade, unveiling many surprising theoretical features and suggesting a plethora of applications.
+Perhaps most importantly, these systems’ ability to perform work leads to sustained non-equilibrium behaviour. This is distinctly different from that of relaxing equilibrium thermodynamic systems, commonly found in other areas of physics.
+The concept of entropy production is often used to quantify this difference and to calculate how much useful work can be performed. If we want to harvest and utilise this work, however, we need to understand the small-scale dynamics of the system. And it turns out this is rather complicated.
+One way to calculate entropy production is through field theory, the workhorse of statistical mechanics. Traditional field theories simplify the system by smoothing out details, which works well for predicting densities and correlations. However, these approximations often ignore the individual particle nature, leading to incorrect results for entropy production.
+The new paper details a substantial improvement on this method. By making use of Doi-Peliti field theory, the authors are able to keep track of microscopic particle dynamics, including reactions and interactions.
+The approach starts from the Fokker-Planck equation and provides a systematic way to calculate entropy production from first principles. It can be extended to include interactions between particles and produces general, compact formulas that work for a wide range of systems. These formulas are practical because they can be applied to both simulations and experiments.
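For orientation, the textbook form of this calculation for a single overdamped degree of freedom reads as follows (a standard result, not the paper’s more general compact formulas):

```latex
% Overdamped Langevin dynamics with mobility mu, force F and diffusion
% constant D obeys the Fokker-Planck (continuity) equation
\partial_t P(x,t) = -\partial_x J(x,t),
\qquad
J = \mu F(x)\,P - D\,\partial_x P .
% The entropy production rate (in units of k_B) is non-negative and
% vanishes only in equilibrium, where the probability current J = 0:
\dot{S}(t) = \int \mathrm{d}x\;\frac{J(x,t)^{2}}{D\,P(x,t)} \;\geq\; 0 .
```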
+The authors demonstrated their method with numerous examples, including systems of active Brownian particles, showing its broad usefulness. The big challenge going forward, though, is to extend the framework to non-Markovian systems – ones where future states depend not only on the present but also on the past.
+Field theories of active particle systems and their entropy production
+G. Pruessner and R. Garcia-Millan, 2025 Rep. Prog. Phys. 88 097601
+
The post How do bacteria produce entropy? appeared first on Physics World.
+]]>The post Einstein’s recoiling slit experiment realized at the quantum limit appeared first on Physics World.
+]]>The thought experiment dates back to the 1927 Solvay Conference, where Albert Einstein proposed a modification of the double-slit experiment in which one of the slits could recoil. He argued that if a photon caused the slit to recoil as it passed through, then measuring that recoil might reveal which path the photon had taken without destroying the interference pattern. Conversely, Niels Bohr argued that any such recoil would entangle the photon with the slit, washing out the interference fringes.
+For decades, this debate remained largely philosophical. The challenge was not about adding a detector or a label to track a photon’s path. Instead, the question was whether the “which-path” information could be stored in the motion of the slit itself. Until now, however, no physical slit was sensitive enough to register the momentum kick from a single photon.
+To detect the recoil from a single photon, the slit’s momentum uncertainty must be comparable to the photon’s momentum. For any ordinary macroscopic slit, its quantum fluctuations are significantly larger than the recoil, washing out the which-path information. To give a sense of scale, the authors note that even a 1 g object modelled as a 100 kHz oscillator (for example, a mirror on a spring) would have a ground-state momentum uncertainty of about 10⁻¹⁶ kg m s⁻¹, roughly 11 orders of magnitude larger than the momentum of an optical photon (approximately 10⁻²⁷ kg m s⁻¹).
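Both scales are quick to verify from the zero-point momentum spread of a harmonic oscillator, Δp = √(ħmω/2), and the photon momentum p = h/λ. The arithmetic below is ours; the 780 nm rubidium wavelength is our assumption for an “optical” photon:

```python
import math

hbar, h = 1.0546e-34, 6.626e-34   # J s
m = 1e-3                          # 1 g object, as in the authors' example
omega = 2 * math.pi * 100e3       # 100 kHz oscillator
wavelength = 780e-9               # Rb D2 line (our assumption)

dp_object = math.sqrt(hbar * m * omega / 2)   # ~1.8e-16 kg m/s
p_photon = h / wavelength                     # ~8.5e-28 kg m/s
gap = math.log10(dp_object / p_photon)
print(f"{dp_object:.1e} vs {p_photon:.1e}: ~{gap:.0f} orders of magnitude")
```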
+
In their study, published in Physical Review Letters, Yu-Chen Zhang and colleagues from the University of Science and Technology of China overcame this obstacle by replacing the movable slit with a single rubidium atom held in an optical tweezer and cooled to its three-dimensional motional ground state. In this regime, the atom’s momentum uncertainty reaches the quantum limit, making the recoil from a single photon directly measurable.
+Rather than using a conventional double-slit geometry, the researchers built an optical interferometer in which photons scattered off the trapped atom. By tuning the depth of this optical trap, the researchers were able to precisely control the atom’s intrinsic momentum uncertainty, effectively adjusting how “movable” the slit was.
+As the researchers decreased the atom’s momentum uncertainty, they observed a loss of interference in the scattered photons. Increasing the atom’s momentum uncertainty caused the interference to reappear.
+ +This behaviour directly revealed the trade-off between interference and which-path information at the heart of the Einstein–Bohr debate. The researchers note that the loss of interference arose not from classical noise, but from entanglement between the photon and the atom’s motion.
+“The main challenge was matching the slit’s momentum uncertainty to that of a single photon,” says corresponding author Jian-Wei Pan. “For macroscopic objects, momentum fluctuations are far too large – they completely hide the recoil. Using a single atom cooled to its motional ground state allows us to reach the fundamental quantum limit.”
+Maintaining interferometric phase stability was equally demanding. The team used active phase stabilization with a reference laser to keep the optical path length stable to within a few nanometres (roughly 3 nm) for over 10 h.
+ +Beyond settling a historical argument, the experiment offers a clean demonstration of how entanglement plays a key role in Bohr’s complementarity principle. As Pan explains, the results suggest that “entanglement in the momentum degree-of-freedom is the deeper reason behind the loss of interference when which-path information becomes available”.
+This experiment opens the door to exploring quantum measurement in a new regime. By treating the slit itself as a quantum object, future studies could probe how entanglement emerges between light and matter. Additionally, the same set-up could be used to gradually increase the mass of the slit, providing a new way to study the transition from quantum to classical behaviour.
+The post Einstein’s recoiling slit experiment realized at the quantum limit appeared first on Physics World.
+]]>The post European Space Agency unveils first images from Earth-observation ‘sounder’ satellite appeared first on Physics World.
+]]>Launched on 1 July 2025 from the Kennedy Space Center in Florida aboard a SpaceX Falcon 9 rocket, MTG-S operates from a geostationary orbit, about 36 000 km above Earth’s surface and is able to provide coverage of Europe and part of northern Africa on a 15-minute repeat cycle.
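The quoted altitude follows from requiring the orbital period to match one sidereal day; a quick check with Kepler’s third law (our arithmetic):

```python
import math

# Circular orbit with period equal to one sidereal day:
# r = (G*M*T^2 / (4*pi^2))^(1/3), measured from Earth's centre.
G, M_EARTH, R_EARTH = 6.674e-11, 5.972e24, 6.371e6   # SI units
T = 86164.1                                          # sidereal day, s

r = (G * M_EARTH * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"altitude ~ {(r - R_EARTH) / 1e3:.0f} km")    # ~35786 km, i.e. 'about 36 000 km'
```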
+The satellite carries a hyperspectral sounding instrument that uses interferometry to capture data on temperature and humidity as well as being able to measure wind and trace gases in the atmosphere. It can scan nearly 2,000 thermal infrared wavelengths every 30 minutes.
+The data will eventually be used to generate 3D maps of the atmosphere and help improve the accuracy of weather forecasting, especially for rapidly evolving storms.
+The “temperature” image, above, was taken in November 2025 and shows heat (red) from the African continent, while a dark blue weather front covers Spain and Portugal.
+The “humidity” image, below, was captured using the sounder’s medium-wave infrared channel. Blue colours represent regions in the atmosphere with higher humidity, while red colours correspond to lower humidity.
+
“Seeing the first infrared sounder images from MTG-S really brings this mission and its potential to life,” notes Simonetta Cheli, ESA’s director of Earth observation programmes. “We expect data from this mission to change the way we forecast severe storms over Europe – and this is very exciting for communities and citizens, as well as for meteorologists and climatologists.”
+ESA is expected to launch a second Meteosat Third Generation-Imaging satellite later this year following the launch of the first one – MTG-I1 – in December 2022.
+The post European Space Agency unveils first images from Earth-observation ‘sounder’ satellite appeared first on Physics World.
+]]>The post Uranus and Neptune may be more rocky than icy, say astrophysicists appeared first on Physics World.
+]]>Within our solar system, planets fall into three categories based on their internal composition. Mercury, Venus, Earth and Mars are deemed terrestrial rocky planets; Jupiter and Saturn are gas giants; and Uranus and Neptune are ice giants.
+The new work, which was led by PhD student Luca Morf in UZH’s astrophysics department, challenges this last categorization by numerically simulating the two planets’ interiors as a mixture of rock, water, hydrogen and helium. Morf explains that this modelling framework is initially “agnostic” – meaning unbiased – about what the density profiles of the planets’ interiors should be. “We then calculate the gravitational fields of the planets so that they match with observational measurements to infer a possible composition,” he says.
+ +This process, Morf continues, is then repeated and refined to ensure that each model satisfies several criteria. The first criterion is that the planet should be in hydrostatic equilibrium, meaning that its internal pressure is enough to counteract its gravity and keep it stable. The second is that the planet should have the gravitational moments observed in spacecraft data. These moments describe the gravitational field of a planet, which is complex because planets are not perfect spheres.
+The final criterion is that the modelled planets need to be thermodynamically and compositionally consistent with known physics. “For example, a simulation of the planets’ interiors must obey equations of state, which dictate how materials behave under given pressure and temperature conditions,” Morf explains.
+After each iteration, the researchers adjust the density profile of each planet and test it to ensure that the model continues to adhere to the three criteria. “We wanted to bridge the gap between existing physics-based models that are overly constrained and empirical approaches that are too simplified,” Morf explains. Avoiding strict initial assumptions about composition, he says, “lets the physics and data guide the solution [and] allows us to probe a larger parameter space.”
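To make the first criterion concrete, here is a minimal Python sketch of a hydrostatic-equilibrium check for a toy, uniform-density planet. The radius and density below are illustrative placeholders, not values from the UZH models, which iterate over far richer density profiles:

    import numpy as np

    # Hydrostatic equilibrium: dP/dr = -G * m(r) * rho(r) / r^2.
    # Toy case: uniform density, so m(r) = (4/3) * pi * r^3 * rho.
    G = 6.674e-11          # gravitational constant (SI units)
    R = 2.5e7              # assumed radius (m), roughly Uranus-sized
    rho = 1.3e3            # assumed uniform density (kg/m^3)

    r = np.linspace(R, 1.0, 100_000)           # integrate inward from the surface
    m_enc = (4.0 / 3.0) * np.pi * r**3 * rho   # mass enclosed within radius r
    dP_dr = -G * m_enc * rho / r**2

    dr = r[0] - r[1]                           # positive step size
    P = -np.cumsum(dP_dr) * dr                 # pressure, with P(surface) ~ 0
    print(f"central pressure ~ {P[-1]:.2e} Pa")  # ~1.5e11 Pa for these inputs

A model iteration in the spirit of the UZH approach would then adjust the density profile until this pressure balance, the observed gravitational moments and the equations of state are satisfied simultaneously.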
+Based on their models, the UZH astrophysicists concluded that the interiors of Uranus and Neptune could have a wide range of possible structures, encompassing both water-rich and rock-rich configurations. More specifically, their calculations yield rock-to-water ratios of between 0.04 and 3.92 for Uranus and between 0.20 and 1.78 for Neptune.
+
The models, which are detailed in Astronomy and Astrophysics, also contain convective regions with ionic water pockets. The presence of such pockets could explain the fact that Uranus and Neptune, unlike Earth, have more than two magnetic poles, as the pockets would generate their own local magnetic dynamos.
+Overall, the new findings suggest that the traditional “ice giant” label may oversimplify the true nature of Uranus and Neptune, Morf tells Physics World. Instead, these planets could have complex internal structures with compositional gradients and different heat transport mechanisms. Though much uncertainty remains, Morf stresses that Uranus and Neptune – and, by extension, similar intermediate-class planets that may exist in other solar systems – are so poorly understood that any new information about their internal structure is valuable.
+ +A dedicated space mission to these outer planets would yield more accurate measurements of the planets’ gravitational and magnetic fields, enabling scientists to refine the limited existing observational data. In the meantime, the UZH researchers are looking for more solutions for the possible interiors of Uranus and Neptune and improving their models to account for additional constraints, such as atmospheric conditions. “Our work will also guide laboratory and theoretical studies on the way materials behave in general at high temperatures and pressures,” Morf says.
+The post Uranus and Neptune may be more rocky than icy, say astrophysicists appeared first on Physics World.
+]]>The post String-theory concept boosts understanding of biological networks appeared first on Physics World.
+]]>Biological transport and communication networks have fascinated scientists for decades. Neurons branch to form synapses, blood vessels split to supply tissues, and plant roots spread through soil. Since the mid-20th century, many researchers believed that evolution favours networks that minimize total length or volume.
+“There is a longstanding hypothesis, going back to Cecil Murray from the 1940s, that many biological networks are optimized for their length and volume,” Albert-László Barabási of Northeastern University explains. “That is, biological networks, like the brain and the vascular systems, are built to achieve their goals with the minimal material needs.” Until recently, however, it had been difficult to characterize the complicated nature of biological networks.
+Now, advances in imaging have given Barabási and colleagues a detailed 3D picture of real physical networks, from individual neurons to entire vascular systems. With these new data in hand, the researchers found that previous theories are unable to describe real networks in quantitative terms.
+To remedy this, the team defined the problem in terms of physical networks, systems whose nodes and links have finite thickness and occupy space. Rather than treating them as abstract graphs made of idealized edges, the team models them as geometrical objects embedded in 3D space.
+To do this, the researchers turned to an unexpected mathematical tool. “Our work relies on the framework of covariant closed string field theory, developed by Barton Zwiebach and others in the 1980s,” says team member Xiangyi Meng at Rensselaer Polytechnic Institute. This framework provides a correspondence between network-like graphs and smooth surfaces.
+Unlike string theory, their approach is entirely classical. “These surfaces, obtained in the absence of quantum fluctuations, are precisely the minimal surfaces we seek,” Meng says. No quantum mechanics, supersymmetry, or exotic string-theory ingredients are required. “Those aspects were introduced mainly to make string theory quantum and thus do not apply to our current context.”
+Using this framework, the team analysed a wide range of biological systems. “We studied human and fruit fly neurons, blood vessels, trees, corals, and plants like Arabidopsis,” says Meng. Across all these cases, a consistent pattern emerged: the geometry of the networks is better predicted by minimizing surface area rather than total length.
+One of the most striking outcomes of the surface-minimization framework is its ability to explain structural features that previous models cannot. Traditional length-based theories typically predict simple Y-shaped bifurcations, where one branch splits into two. Real networks, however, often display far richer geometries.
+“While traditional models are limited to simple bifurcations, our framework predicts the existence of higher-order junctions and ‘orthogonal sprouts’,” explains Meng.
+These include three- or four-way splits and perpendicular, dead-end offshoots. Under a surface-based principle, such features arise naturally and allow neurons to form synapses using less membrane material overall and enable plant roots to probe their environment more effectively.
+Ginestra Bianconi of the UK’s Queen Mary University of London says that the key result of the new study is the demonstration that “physical networks such as the brain or vascular networks are not wired according to a principle of minimization of edge length, but rather that their geometry follows a principle of surface minimization.”
+Bianconi, who was not involved in the study, also highlights the interdisciplinary leap of invoking ideas from string theory: “This is a beautiful demonstration of how basic research works”.
+The team emphasizes that their work is not immediately technological. “This is fundamental research, but we know that such research may one day lead to practical applications,” Barabási says. In the near term, he expects the strongest impact in neuroscience and vascular biology, where understanding wiring and morphology is essential.
+ +Bianconi agrees that important questions remain. “The next step would be to understand whether this new principle can help us understand brain function or have an impact on our understanding of brain diseases,” she says. Surface optimization could, for example, offer new ways to interpret structural changes observed in neurological disorders.
+Looking further ahead, the framework may influence the design of engineered systems. “Physical networks are also relevant for new materials systems, like metamaterials, who are also aiming to achieve functions at minimal cost,” Barabási notes. Meng points to network materials as a particularly promising area, where surface-based optimization could inspire new architectures with tailored mechanical or transport properties.
+The research is described in Nature.
+The post String-theory concept boosts understanding of biological networks appeared first on Physics World.
+]]>The post The secret life of TiO₂ in foams appeared first on Physics World.
+]]>In this study, researchers deposited TiO₂ thin films onto carbon foams using magnetron sputtering and applied different bias voltages to control ion energy, which in turn affects coating density, crystal structure, thickness, and adhesion. They analysed both the outer surface and the interior of the foam using microscopy, particle‑transport simulations, and X‑ray techniques.
+They found that the TiO₂ coating on the outer surface is dense, correctly composed, and crystalline (mainly anatase with a small amount of rutile) – ideal for catalytic and energy applications. They also discovered that although fewer particles reach deep inside the foam, those that do retain the same energy, meaning particle quantity decreases with depth but particle energy does not. Because devices like batteries and supercapacitors rely on uniform coatings, variations in thickness or structure inside the foam can lead to poorer performance and faster degradation.
+Overall, this research provides a much clearer understanding of how TiO₂ coatings grow inside complex 3D foams, showing how thickness, density, and crystal structure evolve with depth and how bias voltage can be used to tune these properties. By revealing how plasma particles move through the foam and validating models that predict coating behaviour, it enables the design of more reliable, higher‑performing foam‑based devices for energy and catalytic applications.
+Loris Chavée et al 2026 Prog. Energy 8 015002
++
Advances in thermal conductivity for energy applications: a review by Qiye Zheng et al. (2021)
+The post The secret life of TiO₂ in foams appeared first on Physics World.
+]]>The post Laser processed thin NiO powder coating for durable anode-free batteries appeared first on Physics World.
+]]>In this research, the scientists coated the copper foil with NiO powder and used a CO₂ laser (λ = 10.6 μm) in a rapid scanning mode to heat and transform it. The laser‑treated NiO becomes porous and strongly adherent to the copper, helping lithium spread out more evenly. The process is fast, energy‑efficient, and can be done in air. As a result, lithium ions diffuse or move more easily across the surface, reducing dendrite formation. The exchange current density also doubled compared to bare copper, indicating better charge‑transfer behaviour. Overall, battery performance improved dramatically. The modified cells lasted 400 cycles at room temperature and 700 cycles at 40°C, compared with only 150 cycles for uncoated copper.
+This simple, rapid, and scalable technique offers a powerful way to improve anode‑free lithium metal batteries, one of the most promising next‑generation battery technologies.
+Microgradient patterned NiO coating on copper current collector for anode-free lithium metal battery
+Supriya Kadam et al 2025 Prog. Energy 7 045003
++
Lithium aluminum alloy anodes in Li-ion rechargeable batteries: past developments, recent progress, and future prospects by Tianye Zheng and Steven T Boles (2023)
+The post Laser processed thin NiO powder coating for durable anode-free batteries appeared first on Physics World.
+]]>The post Planning a sustainable water future in the United States appeared first on Physics World.
+]]>
In this work, the researchers show how desalination of brackish groundwater can be made genuinely sustainable and economically viable for addressing the United States’ looming water shortages. A key part of the solution is zero‑liquid‑discharge, which avoids brine disposal by extracting more freshwater and recovering salts such as sodium, calcium, and magnesium for reuse. Crucially, the study demonstrates that when desalination is powered by low‑cost solar and wind energy, the overall process becomes far more affordable. By 2040, solar photovoltaics paired with optimised battery storage are projected to produce electricity at lower cost than the grid in the states facing the largest water deficits, making renewable‑powered desalination a competitive option.
+The researchers also show that advanced technologies, such as high‑recovery reverse osmosis and crystallisation, can achieve zero‑liquid‑discharge without increasing costs, because the extra water and salt recovery offsets the expense of brine management. Their modelling indicates that a full renewable‑powered zero‑liquid‑discharge pathway can produce freshwater at an affordable cost, while reducing environmental impacts and avoiding brine disposal altogether. Taken together, this work outlines a realistic, sustainable pathway for large‑scale desalination in the United States, offering a credible strategy for securing future water supplies in increasingly water‑stressed regions.
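As a rough illustration of why cheap renewable electricity matters here, consider the energy cost per cubic metre of product water. Every number in the sketch below is an assumption chosen for illustration, not a value from the study:

    # Energy cost of desalinated water under different electricity prices.
    # Inputs are illustrative assumptions, not the paper's modelled values.
    specific_energy = 1.5            # assumed kWh per m^3 for brackish-water RO
    prices = {
        "grid": 0.12,                # assumed $/kWh
        "solar + storage (2040)": 0.04,  # assumed $/kWh
    }

    for source, price in prices.items():
        cost = specific_energy * price
        print(f"{source:>24}: ${cost:.3f} per m^3 (energy share only)")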
+
Zhuoran Zhang et al 2025 Prog. Energy 7 045002
++
Review of solar-enabled desalination and implications for zero-liquid-discharge applications by Vasilis Fthenakis et al. (2024)
++
The post Planning a sustainable water future in the United States appeared first on Physics World.
+]]>The post Could silicon become the bedrock of quantum computers? appeared first on Physics World.
+]]>Dubbed the 14|15 platform due to its elemental constituents, it combines a crystalline silicon substrate with qubits made from phosphorus atoms. By relying on only two types of atoms, team co-leader Michelle Simmons says the device “avoids the interfaces and complexities that plague so many multi-material platforms” while enabling “high-quality qubits with lower noise, simplicity of design and device stability”.
+Quantum computers take registers of qubits, which store quantum information, and apply basic operations to them sequentially to execute algorithms. One of the primary challenges they face is scalability – that is, sustaining reliable, or high-fidelity, operations on an increasing number of qubits. Many of today’s platforms use only a small number of qubits, for which operations can be individually tuned for optimal performance. However, as the amount of hardware, complexity and noise increases, this hands-on approach becomes debilitating.
+Silicon quantum processors may offer a solution. Writing in Nature, Simmons, Ludwik Kranz, and their team at Silicon Quantum Computing (a spinout from the University of New South Wales in Sydney) describe a system that uses the nuclei of phosphorus atoms as its primary qubit. Each nucleus behaves a little like a bar magnet with an orientation (north/south or up/down) that represents a 0 or 1.
+ +These so-called spin qubits are particularly desirable because they exhibit relatively long coherence times, meaning information can be preserved for long enough to apply the numerous operations of an algorithm. Using monolithic, high-purity silicon as the substrate further benefits coherence since it reduces undesirable charge and magnetic noise arising from impurities and interfaces.
+To make their quantum processor, the team deposited phosphorus atoms in small registers a few nanometres across. Within each register, the phosphorus nuclei do not interact enough to generate the entangled states required for a quantum computation. The team remedy this by loading each cluster of phosphorus atoms with an electron that is shared between the atoms. The result is that so-called hyperfine interactions – wherein each nuclear spin interacts with the shared electron like interacting bar magnets – arise and provide the coupling necessary to entangle nuclear spins within each register.
+By combining these interactions with control of individual nuclear spins, the researchers showed that they can generate Bell states (maximally entangled two-qubit states) between pairs of nuclei within a register with error rates as low as 0.5% – the lowest to date for semiconductor platforms.
+The team’s next step was to connect multiple processors – a step that exponentially increases their combined capacity. To understand how, consider two quantum processors, one with n qubits and the other m qubits. Isolated from one another, they can collectively represent at most 2^n + 2^m states. Once they are entangled, however, they can represent 2^(n+m) states.
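A quick back-of-the-envelope check makes the scaling plain (the register sizes here echo the four- and five-atom registers the team used):

    # Counting basis states for two registers, isolated versus entangled.
    n, m = 4, 5                     # qubits per register (illustrative)

    isolated = 2**n + 2**m          # two separate state spaces: 48 states
    entangled = 2**(n + m)          # one joint state space: 512 states

    print(f"isolated:  {isolated} basis states")
    print(f"entangled: {entangled} basis states")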
+Simmons says that silicon quantum processors offer an inherent advantage in scaling, too. Generating numerous registers on a single chip and using “naturally occurring” qubits, she notes, reduces their need for extraneous confinement gates and electronics as they scale.
+ +The researchers showcased these scaling capabilities by entangling a register of four phosphorus atoms with a register of five, separated by 13 nm. The entanglement of these registers is mediated by the electron-exchange interaction, a phenomenon arising from the combination of Pauli’s exclusion principle and Coulomb repulsion when electrons are confined in a small region. By leveraging this and all other interactions and control in their toolkit, the researchers generate entanglement of eight data qubits across the two registers.
+Retaining such high-quality qubits and individual control of them despite their high density demonstrates the scaling potential of the platform. Future avenues of exploration include increasing the size of 2D arrays of registers to increase the number of qubits, but Simmons says the rest is “top secret”, adding “the world will know soon enough”.
+The post Could silicon become the bedrock of quantum computers? appeared first on Physics World.
+]]>The post Is our embrace of AI naïve and could it lead to an environmental disaster? appeared first on Physics World.
+]]>Despite signing the letter, Sam Altman of OpenAI, the firm behind ChatGPT, has stated that the company’s explicit ambition is to create artificial general intelligence (AGI) within the next few years, to “win the AI-race”. AGI is predicted to surpass human cognitive capabilities for almost all tasks, but the real danger is if or when AGI is used to generate more powerful versions of itself. Sometimes called “superintelligence”, this would be impossible to control. Companies do not want any regulation of AI and their business model is for AGI to replace most employees at all levels. This is how firms are expected to benefit from AI, since wages are most companies’ biggest expense.
+ +AI, to me, is not about saving the world, but about a handful of people wanting to make enormous amounts of money from it. No-one knows what internal mechanism makes even today’s AI work – just as one cannot find out what you think from how the neurons in your brain are firing. If we don’t even understand today’s AI models, how are we going to understand – and control – the more powerful models that already exist or are planned in the near future?
+AI has some practical benefits but too often is put to mostly meaningless, sometimes downright harmful, uses such as cheating your way through school or creating disinformation and fake videos online. What’s more, an online search with the help of AI requires at least 10 times as much energy as a search without AI. AI already uses 5% of all electricity in the US and by 2028 this figure is expected to reach 15%, which would be over a quarter of all US households’ electricity consumption. AI data servers are also about 50% more carbon intensive than the rest of the US’s electricity supply.
+Those energy needs are why some tech companies are building AI data centres – often under confidential, opaque agreements – very quickly for fear of losing market share. Indeed, the vast majority of those centres are powered by fossil-fuel energy sources – completely contrary to the Paris Agreement to limit global warming. We must wisely allocate Earth’s strictly limited resources, with what is wasted on AI instead going towards vital things.
+To solve the climate crisis, there is definitely no need for AI. All the solutions have already been known for decades: phasing out fossil fuels, reversing deforestation, reducing energy and resource consumption, regulating global trade, reforming the economic system away from its dependence on growth. The problem is that the solutions are not implemented because of short-term selfish profiteering, which AI only exacerbates.
+AI, like all other technologies, is not a magic wand and, as Hinton says, potentially has many negative consequences. It is not, as the enthusiasts seem to think, a magical free resource that provides output without input (and waste). I believe we must rethink our naïve, uncritical, overly fast, total embrace of AI. Universities are known for wise reflection, but worryingly they seem to be hurrying to jump on the AI bandwagon. The problem is that the bandwagon may be going in the wrong direction or crash and burn entirely.
+ +Why then should universities and organizations send their precious money to greedy, reckless and almost totalitarian tech billionaires? If we are going to use AI, shouldn’t we create our own AI tools that we can hopefully control better? Today, more money and power is transferred to a few AI companies that transcend national borders, which is also a threat to democracy. Democracy only works if citizens are well educated, committed, knowledgeable and have influence.
+AI is like using a hammer to crack a nut. Sometimes a hammer may be needed but most of the time it is not and is instead downright harmful. Happy-go-lucky people at universities, companies and throughout society are playing with fire without knowing about the true consequences now, let alone in 10 years’ time. Our mapped-out path towards AGI is like a zebra on the savannah creating an artificial lion that begins to self-replicate, becoming bigger, stronger, more dangerous and more unpredictable with each generation.
+Wise reflection today on our relationship with AI is more important than ever.
+The post Is our embrace of AI naïve and could it lead to an environmental disaster? appeared first on Physics World.
+]]>The post New sensor uses topological material to detect helium leaks appeared first on Physics World.
+]]>Helium is employed in a wide range of fields, including aerospace, semiconductor manufacturing and medical applications as well as physics research. Because it is odourless, colourless, and inert, it is essentially invisible to traditional leak-detection equipment such as adsorption-based sensors. Specialist helium detectors are available, but they are bulky, expensive and highly sensitive to operating conditions.
+The new device created by Li Fan and colleagues at Nanjing consists of nine cylinders arranged in three sub-triangles with tubes in between the cylinders. The corners of the sub-triangles touch and the tubes allow air to enter the device. The resulting two-dimensional system has a so-called “kagome” structure and is an example of a topological material – that is, one that contains special, topologically protected, states that remain stable even if the bulk structure contains minor imperfections or defects. In this system, the protected states are localized at the corners.
+ +To test their setup, the researchers placed speakers under the corners that send sound waves into the structure and make the gas within it vibrate at a certain frequency (the resonance frequency). When they replaced the air in the device with helium, the sound waves travelled faster, changing the vibration frequency. Measuring this shift in frequency enabled the researchers to calculate the concentration of helium in the device.
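The size of the effect is easy to estimate from the ideal-gas expression for the speed of sound, since the resonance frequency of a fixed geometry scales with that speed. The base frequency and gas parameters below are illustrative assumptions, not the device’s actual values:

    import math

    R_GAS, T = 8.314, 293.0                      # J/(mol K); room temperature

    def sound_speed(gamma, molar_mass_kg):
        # Ideal-gas speed of sound: v = sqrt(gamma * R * T / M)
        return math.sqrt(gamma * R_GAS * T / molar_mass_kg)

    v_air = sound_speed(1.40, 0.0290)            # ~343 m/s
    v_he = sound_speed(5.0 / 3.0, 0.0040)        # ~1007 m/s

    f_air = 4000.0                               # assumed resonance in air (Hz)
    f_he = f_air * v_he / v_air                  # same mode with helium filling
    print(f"resonance: {f_air:.0f} Hz in air -> {f_he:.0f} Hz in helium")

Intermediate helium concentrations shift the resonance part-way between these two limits, which is what allows a measured frequency to be converted into a concentration.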
+Fan explains that the device works because the interface/corner states are impacted by the properties of the gas within it. This mechanism has many advantages over traditional gas sensors. First, it does not rely on chemical reactions, making it ideal for detecting inert gases like helium. Second, the sensor is not affected by external conditions and can therefore work at extremely low temperatures – something that is challenging for conventional sensors that contain sensitive materials. Third, its sensitivity to the presence of helium does not change, meaning it does not need to be recalibrated during operation. Finally, it detects frequency changes quickly and rapidly returns to its baseline once helium levels decrease.
+As well as detecting helium, Fan says the device can also pinpoint the direction a gas leak is coming from. This is because when helium begins to fill the device, the corner closest to the source of the gas is impacted first. Each corner thus acts as an independent sensing point, giving the device a spatial sensing capability that most traditional detectors lack.
+Detecting helium leaks is important in fields such as semiconductor manufacturing, where the gas is used for cooling, and in medical imaging systems that operate at liquid helium temperatures. “We think our work opens an avenue for inert gas detection using a simple device and is an example of a practical application for two-dimensional acoustic topological materials,” says Fan.
+ +While the new sensor was fabricated to detect helium, the same mechanism could also be employed to detect other gases such as hydrogen, he adds.
+Spurred on by these promising preliminary results, which they report in Applied Physics Letters, the researchers plan to extend their fabrication technique to create three-dimensional acoustic topological structures. “These could be used to orientate the corner points so that helium can be detected in 3D space,” says Fan. “Ultimately, we are trying to integrate our system into a portable structure that can be deployed in real-world environments without complex supporting equipment,” he tells Physics World.
+The post New sensor uses topological material to detect helium leaks appeared first on Physics World.
+]]>The post Encrypted qubits can be cloned and stored in multiple locations appeared first on Physics World.
+]]>Heisenberg’s uncertainty principle – which states that it is impossible to measure conjugate variables of a quantum object with less than a combined minimum uncertainty – is one of the central tenets of quantum mechanics. The no-cloning theorem – that it is impossible to create identical clones of unknown quantum states – flows directly from this. Achim Kempf of the University of Waterloo explains, “If you had [clones] you could take half your copies and perform one type of measurement, and the other half of your copies and perform an incompatible measurement, and then you could beat the uncertainty principle.”
+No-cloning poses a challenge to those trying to create a quantum internet. On today’s Internet, storage of information on remote servers is common, and multiple copies of this information are usually stored in different locations to preserve data in case of disruption. Users of a quantum cloud server would presumably desire the same degree of information security, but the no-cloning theorem would apparently forbid this.
+In the new work, Kempf and his colleague Koji Yamaguchi, now at Japan’s Kyushu University, show that this is not the case. Their encryption protocol begins with the generation of a set of pairs of entangled qubits. When a qubit, called A, is encrypted, it interacts with one qubit (called a signal qubit) from each pair in turn. In the process of interaction, the signal qubits record information about the state of A, which has been altered by previous interactions. As each signal qubit is entangled with a noise qubit, the state of the noise qubits is also changed.
+Another central tenet of quantum mechanics, however, is that quantum entanglement does not allow for information exchange. “The noise qubits don’t know anything about the state of A either classically or quantum mechanically,” says Kempf. “The noise qubits’ role is to serve as a record of noise…We use the noise that is in the signal qubit to encrypt the clone of A. You drown the information in noise, but the noise qubit has a record of exactly what noise has been added because [the signal qubits and noise qubits] are maximally entangled.”
+Therefore, a user with all of the noise qubits knows nothing about the signal, but knows all of the noise that was added to it. Possession of just one of the signal qubits then allows them to recover the unencrypted qubit. This does not violate the uncertainty principle, however, because decrypting one copy of A involves making a measurement of the noise qubits: “At the end of [the measurement], the noise qubits are no longer what they were before, and they can no longer be used for the decryption of another encrypted clone,” explains Kempf.
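The logic of drowning the signal in recorded noise has a loose classical analogue in the one-time pad, sketched below. This is only an analogy: the quantum protocol’s security rests on entanglement, and measuring the noise qubits consumes them, a feature with no classical counterpart.

    import secrets

    # Loose *classical* analogy only: a one-time pad. The key plays the role
    # of the noise record - useless on its own, but able to decrypt a copy.
    # Unlike the quantum protocol, nothing here forbids reusing the key.
    message = b"qubit state"
    noise_record = secrets.token_bytes(len(message))

    encrypted = bytes(b ^ k for b, k in zip(message, noise_record))
    copies = [encrypted] * 3                 # store clones in three places

    recovered = bytes(b ^ k for b, k in zip(copies[0], noise_record))
    assert recovered == message              # one copy + noise record suffice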
+Kempf says that, working with IBM, they have demonstrated hundreds of steps of iterative quantum cloning (quantum cloning of quantum clones) on a Heron 2 processor successfully and showed that the researchers could even clone entangled qubits and recover the entanglement after decryption. “We’ll put that on the arXiv this month,” he says.
+ +The research is described in Physical Review Letters and Barry Sanders at Canada’s University of Calgary is impressed by both the elegance and the generality of the result. He notes it could have significance for topics as distant as information loss from black holes: “It’s not a flash in the pan,” he says; “If I’m doing something that is related to no-cloning, I would look back and say ‘Gee, how do I interpret what I’m doing in this context?’: It’s a paper I won’t forget.”
+Seth Lloyd of MIT agrees: “It turns out that there’s still low-hanging fruit out there in the theory of quantum information, which hasn’t been around long,” he says. “It turns out nobody ever thought to look at this before: Achim is a very imaginative guy and it’s no surprise that he did.” Both Lloyd and Sanders agree that quantum cloud storage remains hypothetical, but Lloyd says “I think it’s a very cool and unexpected result and, while it’s unclear what the implications are towards practical uses, I suspect that people will find some very nice applications in the near future.”
+The post Encrypted qubits can be cloned and stored in multiple locations appeared first on Physics World.
+]]>In this episode of Physics World Stories, host Andrew Glester explores the fascinating hunt for pristine comets – icy bodies that preserve material from the solar system’s beginnings and even earlier. Unlike more familiar comets that repeatedly swing close to the Sun and transform, these frozen relics act as time capsules, offering unique insights into our cosmic history.
-

The first guest is Tracy Becker, deputy principal investigator for the Ultraviolet Spectrograph on NASA’s Europa Clipper mission. Becker describes how the Jupiter-bound spacecraft recently turned its gaze to 3I/ATLAS, an interstellar visitor that appeared last July. Mission scientists quickly reacted to this unique opportunity, which also enabled them to test the mission’s instruments before it arrives at the icy world of Europa.
@@ -84,7 +863,7 @@ xmlns:rawvoice="https://blubrry.com/developer/rawvoice-rss/"The post Fuel cell catalyst requirements for heavy-duty vehicle applications appeared first on Physics World.
]]>
+
Heavy-duty vehicles (HDVs) powered by hydrogen-based proton-exchange membrane (PEM) fuel cells offer a cleaner alternative to diesel-powered internal combustion engines for decarbonizing long-haul transportation sectors. The development path of sub-components for HDV fuel-cell applications is guided by the total cost of ownership (TCO) analysis of the truck.
TCO analysis suggests that, because trucks typically operate over very high mileages (~a million miles), the cost of the hydrogen fuel consumed over the lifetime of the HDV dominates over the fuel-cell stack capital expense (CapEx). Commercial HDV applications consume more hydrogen and demand higher durability, meaning that TCO is largely governed by the fuel-cell efficiency and the durability of its catalysts.
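A back-of-the-envelope comparison shows why the fuel bill dominates. Every number below is an assumption chosen only to illustrate the ratio, not an industry figure:

    # Toy lifetime-cost comparison for a fuel-cell HDV (illustrative inputs).
    lifetime_miles = 1_000_000     # typical HDV lifetime mileage (from the text)
    miles_per_kg_h2 = 8.0          # assumed fuel economy
    usd_per_kg_h2 = 6.0            # assumed delivered hydrogen price
    stack_capex_usd = 30_000       # assumed fuel-cell stack cost

    fuel_cost = lifetime_miles / miles_per_kg_h2 * usd_per_kg_h2
    print(f"lifetime fuel cost: ${fuel_cost:,.0f}")       # $750,000 here
    print(f"stack CapEx:        ${stack_capex_usd:,.0f}")
    print(f"fuel-to-CapEx ratio: {fuel_cost / stack_capex_usd:.0f}x")

Under these assumptions the hydrogen bill exceeds the stack cost by more than an order of magnitude, which is why catalyst efficiency and durability dominate the development targets.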
This article is written to bridge the gap between the industrial requirements and academic activity for advanced cathode catalysts with an emphasis on durability. From a materials perspective, the underlying nature of the carbon support, Pt-alloy crystal structure, stability of the alloying element, cathode ionomer volume fraction, and catalyst–ionomer interface play a critical role in improving performance and durability.
We provide our perspective on four major approaches, namely, mesoporous carbon supports, ordered PtCo intermetallic alloys, thrifting ionomer volume fraction, and shell-protection strategies that are currently being pursued. While each approach has its merits and demerits, their key developmental needs for future are highlighted.
-

Nagappan Ramaswamy joined the Department of Chemical Engineering at IIT Bombay as a faculty member in January 2025. He earned his PhD in 2011 from Northeastern University, Boston, specialising in fuel cell electrocatalysis.
He then spent 13 years working in industrial R&D – two years at Nissan North America in Michigan, USA, focusing on lithium-ion batteries, followed by 11 years at General Motors in Michigan focusing on low-temperature fuel cells and electrolyser technologies. While at GM, he led two multi-million-dollar research projects funded by the US Department of Energy focused on the development of proton-exchange membrane fuel cells for automotive applications.
At IIT Bombay, his primary research interests include low-temperature electrochemical energy-conversion and storage devices such as fuel cells, electrolysers and redox-flow batteries involving materials development, stack design and diagnostics.
@@ -4835,900 +5614,5 @@ ZAP-X represents the second cranial radiosurgery revolution, setting new standarThe post International Quantum Year competition for science journalists begins appeared first on Physics World.
-]]>The two publications invite journalists to submit story ideas on any aspect of quantum science and technology. At least two selected pitches will receive paid assignments and be published in one of the magazines.
- -Interviews with physicists and career profiles – either in academia or industry – are especially encouraged, but the editors will also consider news stories, podcasts, visual media and other creative storytelling formats that illuminate the quantum world for diverse audiences.
-Participants should submit a brief pitch (150–300 words recommended), along with a short journalist bio and a few representative clips, if available. Editors from Physics World and Physics Magazine will review all submissions and announce the winning pitches after the conference. Pitches should be submitted to physics@aps.org by 8 December 2025, with the subject line “2025WCSJ Quantum Pitch”.
-Whether you’re drawn to quantum materials, computing, sensing or the people shaping the field, this is an opportunity to feature fresh voices and ideas in two leading physics publications.
-This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
-Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.
-Find out more on our quantum channel.
--
The post International Quantum Year competition for science journalists begins appeared first on Physics World.
-]]>The post New cylindrical metamaterials could act as shock absorbers for sensitive equipment appeared first on Physics World.
-]]>McInerney and colleagues’ tube-like design is made from a lattice of beams arranged in such a way that low-energy vibrational modes called floppy modes become localized to one side. “This provides good properties for isolating vibrations because energy input into the system on the floppy side does not propagate to the other side,” McInerney says.
- -The key to this desirable behaviour, he explains, is the arrangement of the beams that form the lattice structure. Using a pattern first proposed by the 19th-century physicist James Clerk Maxwell, the beams are organized into repeating sub-units to form stable, two-dimensional structures known as topological Maxwell lattices.
-Previous versions of these lattices could not support their own weight. Instead, they were attached to rigid external mounts, making it impractical to integrate them into devices. The new design, in contrast, is made by folding a flat Maxwell lattice into a cylindrical tube that is self-supporting. The tube features a connected inner and outer layer – a kagome bilayer – and its radius can be precisely engineered to give it the topological behaviour desired.
-The researchers, who detail their work in Physical Review Applied, first tested their structure numerically by attaching a virtual version to a mechanically sensitive sample and a source of low-energy vibrations. As expected, the tube diverted the vibrations away from the sample and towards the other end of the tube.
-Next, they developed a simple spring-and-mass model to understand the tube’s geometry by considering it as a simple monolayer. This modelling indicated that the polarization of the tube should be similar to the polarization of the monolayer. They then added rigid connectors to the tube’s ends and used a finite-element method to calculate the frequency-dependent patterns of vibrations propagating across the structure. They also determined the effective stiffness of the lattice as they applied loads parallel and perpendicular to it.
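In the same spirit as that simplified spring-and-mass picture, the toy one-dimensional chain below (not the authors’ model; the stiffness values are arbitrary) shows the basic isolation mechanism: driving one end at a frequency inside the chain’s band gap produces a response that dies away along the chain instead of propagating.

    import numpy as np

    # 1D chain of unit masses joined by springs of alternating stiffness.
    N = 30
    k1, k2 = 1.0, 3.0                            # assumed spring constants
    springs = [k1 if i % 2 == 0 else k2 for i in range(N - 1)]

    K = np.zeros((N, N))                         # stiffness matrix
    for i, k in enumerate(springs):
        K[i, i] += k; K[i + 1, i + 1] += k
        K[i, i + 1] -= k; K[i + 1, i] -= k

    # Steady-state forced response: (K - omega^2 I) x = F, with light damping.
    omega2 = 4.0                                 # inside the band gap (2*k1, 2*k2)
    F = np.zeros(N); F[0] = 1.0                  # drive the first mass only
    x = np.linalg.solve(K - (omega2 + 1e-3j) * np.eye(N), F.astype(complex))

    print(f"|x| at driven end: {abs(x[0]):.3e}")
    print(f"|x| at far end:    {abs(x[-1]):.3e}")   # many orders smaller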
- -The researchers are targeting vibration-isolation applications that would benefit from a passive support structure, especially in cases where the performance of alternative passive mechanisms, such as viscoelastomers, is temperature-limited. “Our tubes do not necessarily need to replace other vibration isolation mechanisms,” McInerney explains. “Rather, they can enhance the capabilities of these by having the load-bearing structure assist with isolation.”
-The team’s first and most important task, McInerney adds, will be to explore the implications of physically mounting the kagome tube on its vibration isolation structures. “The numerical study in our paper uses idealized mounting conditions so that the input and output are perfectly in phase with the tube vibrations,” he says. “Accounting for the potential impedance mismatch between the mounts and the tube will enable us to experimentally validate our work and provide realistic design scenarios.”
-The post New cylindrical metamaterials could act as shock absorbers for sensitive equipment appeared first on Physics World.
-]]>The post Breakfast physics, delving into quantum 2.0, the science of sound, an update to everything: micro reviews of recent books appeared first on Physics World.
-]]>Why do Cheerios tend to stick together while floating in a bowl of milk? Why does a runner’s ponytail swing side to side? These might not be the most pressing questions in physics, but getting to the answers is both fun and provides insights into important scientific concepts. These are just two examples of everyday physics that Physics World news editor Michael Banks explores in his book Physics Around the Clock, which begins with the physics (and chemistry) of your morning coffee and ends with a formula for predicting the winner of those cookery competitions that are mainstays of evening television. Hamish Johnston
-- -
Quantum 2.0: the Past, Present and Future of Quantum Physics
-By Paul Davies
You might wonder why the world needs yet another book about quantum mechanics, but for physicists there’s no better guide than Paul Davies. Based for the last two decades at Arizona State University in the US, in Quantum 2.0 Davies tackles the basics of quantum physics – along with its mysteries, applications and philosophical implications – with great clarity and insight. The book ends with truly strange topics such as quantum Cheshire cats and delayed-choice quantum erasers – see if you prefer his descriptions to those we’ve attempted in Physics World this year. Matin Durrani
--
Can You Get Music on the Moon? the Amazing Science of Sound and Space
-By Sheila Kanani, illustrated by Liz Kay
Why do dogs bark but wolves howl? How do stars “sing”? Why does thunder rumble? This delightful, fact-filled children’s book answers these questions and many more, taking readers on an adventure through sound and space. Written by planetary scientist Sheila Kanani and illustrated by Liz Kay, Can you get Music on the Moon? reveals not only how sound is produced but why it can make us feel certain things. Each of the 100 or so pages brims with charming illustrations that illuminate the many ways that sound is all around us. Michael Banks
--
A Short History of Nearly Everything 2.0
-By Bill Bryson
Alongside books such as Stephen Hawking’s A Brief History of Time and Carl Sagan’s Cosmos, British-American author Bill Bryson’s A Short History of Nearly Everything is one of the bestselling popular-science books of the last 50 years. First published in 2003, the book became a fan favourite of readers across the world and across disciplines as Bryson wove together a clear and humorous narrative of our universe. Now, 22 years later, he has released an updated and revised volume – A Short History of Nearly Everything 2.0 – that covers major updates in science from the past two decades. This includes the discovery of the Higgs boson and the latest on dark-matter research. The new edition is still imbued with all the wit and wisdom of the original, making it the perfect Christmas present for scientists and anyone else curious about the world around us. Tushna Commissariat
-The post Breakfast physics, delving into quantum 2.0, the science of sound, an update to everything: micro reviews of recent books appeared first on Physics World.
-]]>The post Quantum 2.0: Paul Davies on the next revolution in physics appeared first on Physics World.
-]]>He explores how emerging quantum technologies are beginning to merge with artificial intelligence, raising new ethical and philosophical questions. Could quantum AI help tackle climate change or address issues like hunger? And how far should we go in outsourcing planetary management to machines that may well prioritize their own survival?
-Davies also turns his gaze to the arts, imagining a future where quantum ideas inspire music, theatre and performance. From jazz improvised by quantum algorithms to plays whose endings depend on quantum outcomes, creativity itself could enter a new superposition.
-Hosted by Andrew Glester, this episode blends cutting-edge science and imagination in trademark Paul Davies style.
-This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
-Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.
-Find out more on our quantum channel.
--
-
The post Quantum 2.0: Paul Davies on the next revolution in physics appeared first on Physics World.
-]]>The post Flexible electrodes for the future of light detection appeared first on Physics World.
-]]>Since the discovery of graphene’s remarkable electrical properties, there has been growing interest in using graphene and other two-dimensional (2D) materials to advance photodetection technologies. When light interacts with these materials, it excites electrons that must travel to a nearby contact electrode to generate an electrical signal. The ease with which this occurs depends on the work functions of the materials involved, specifically, the difference between them, known as the Schottky barrier height. Selecting an optimal combination of 2D material and electrode can minimize this barrier, enhancing the photodetector’s sensitivity and speed. Unfortunately, traditional electrode materials have fixed work functions, which limits 2D photodetector technology.
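In the simplest (Schottky–Mott) picture, the barrier height is just the electrode’s work function minus the semiconductor’s electron affinity, which is why a tunable electrode is so useful. The sketch below uses a placeholder electron affinity; only the 3.2–5.1 eV electrode tuning range is taken from the study:

    # Schottky-Mott estimate: barrier ~ electrode work function minus the
    # semiconductor's electron affinity. The affinity is a placeholder;
    # the 3.2-5.1 eV electrode range is the study's tunable window.
    electron_affinity = 4.3                  # assumed, eV, for some 2D material

    for label, work_function in [
        ("fixed metal electrode", 5.1),
        ("PEDOT:PSS tuned low",   3.2),
        ("PEDOT:PSS tuned high",  5.1),
    ]:
        barrier = work_function - electron_affinity
        print(f"{label:>22}: barrier ~ {barrier:+.1f} eV")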
-PEDOT:PSS is a widely used electrode material in photodetectors due to its low cost, flexibility, and transparency. In this study, the researchers have developed PEDOT:PSS electrodes with tunable work functions ranging from 5.1 to 3.2 eV, making them compatible with a variety of 2D materials and ideal for optimizing device performance in metal-semiconductor-metal architectures. In addition, their thorough investigation demonstrates that the produced photodetectors performed excellently, with a significant forward current flow (rectification ratio ~10⁵), a strong conversion of light to electrical output (responsivity up to 1.8 A/W), and an exceptionally high I_light/I_dark ratio of 10⁸. Furthermore, the detectors were highly sensitive with low noise, had very fast response times (as fast as 3.2 μs), and thanks to the transparency of PEDOT:PSS, showed extended sensitivity into the near-infrared region.
-This study demonstrates a tunable, transparent polymer electrode that enhances the performance and versatility of 2D photodetectors, offering a promising path toward flexible, self-powered, and wearable optoelectronic systems, and paving the way for next-generation intelligent interactive technologies.
-Youchen Chen et al 2025 Rep. Prog. Phys. 88 068003
--
Two-dimensional material/group-III nitride hetero-structures and devices by Tingting Lin, Yi Zeng, Xinyu Liao, Jing Li, Changjian Zhou and Wenliang Wang (2025)
-The post Flexible electrodes for the future of light detection appeared first on Physics World.
-]]>The post Quantum cryptography in practice appeared first on Physics World.
-]]>Unlike traditional methods that rely on classical cryptographic techniques, QCKA leverages the principles of quantum mechanics, particularly multipartite entanglement, to ensure security.
-A key aspect of QCKA is creating and distributing entangled quantum states among the parties. These entangled states have unique properties that make it impossible for an eavesdropper to intercept the key without being detected.
-Researchers measure the efficiency and performance of the key agreement protocol using a metric known as the key rate.
-One problem with state-of-the-art QCKA schemes is that this key rate decreases exponentially with the number of users.
-Previous solutions to this problem, based on single-photon interference, have come at the cost of requiring global phase locking. This makes them impractical to put in place experimentally.
-However, the authors of this new study have been able to circumvent this requirement by adopting an asynchronous pairing strategy. Put simply, this means that measurements taken by different parties in different places do not need to happen at exactly the same time.
-Their solution effectively removes the need for global phase locking while still maintaining the favourable scaling of the key rate as in other protocols based on single-photon interference.
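The scaling difference is easy to picture with a toy model: if all N users’ photons must survive transmission, the rate falls as the per-user transmittance raised to the power N, whereas single-photon-interference schemes keep the rate roughly first order in the transmittance. The numbers below are purely illustrative:

    # Toy key-rate scaling with number of users N (illustrative only).
    eta = 0.1                                # assumed per-user transmittance

    for n_users in (2, 4, 8):
        all_photons = eta ** n_users         # exponential decay with N
        single_photon = eta                  # single-photon interference
        print(f"N={n_users}: multi-photon ~ {all_photons:.1e}, "
              f"single-photon ~ {single_photon:.1e}")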
-The new scheme represents an important step towards realising QCKA at long distances by allowing for much more practical experimental configurations.
-
Yu-Shuo Lu et al., 2025 Rep. Prog. Phys. 88 067901
--
The post Quantum cryptography in practice appeared first on Physics World.
-]]>The post Scientists realize superconductivity in traditional semiconducting material appeared first on Physics World.
-]]>
The ability to induce superconductivity in materials that are inherently semiconducting has been a longstanding research goal. Improving the conductivity of semiconductor materials could help develop quantum technologies with a high speed and energy efficiency, including superconducting quantum bits (qubits) and cryogenic CMOS control circuitry. However, this task has proved challenging in traditional semiconductors – such as silicon or germanium – as it is difficult to maintain the optimal superconductive atomic structure.
-In a new study, published in Nature Nanotechnology, researchers have used molecular beam epitaxy (MBE) to grow gallium-hyperdoped germanium films that retain their superconductivity. When asked about the motivation for this latest work, Peter Jacobson from the University of Queensland tells Physics World about his collaboration with Javad Shabani from New York University.
-“I had been working on superconducting circuits when I met Javad and discovered the new materials their team was making,” he explains. “We are all trying to understand how to control materials and tune interfaces in ways that could improve quantum devices.”
-Germanium is a group IV element, so its properties bridge those of both metals and insulators. Superconductivity can be induced in germanium by manipulating its atomic structure to introduce more electrons into the atomic lattice. These extra electrons interact with the germanium lattice to create electron pairs that move without resistance, or in other words, they become superconducting.
- -Hyperdoping germanium (at concentrations well above the solid solubility limit) with gallium induces a superconducting state. However, this material is traditionally unstable due to the presence of structural defects, dopant clustering and poor thickness control. There have also been many questions raised as to whether these materials are intrinsically superconducting, or whether it is actually gallium clusters and unintended phases that are solely responsible for the superconductivity of gallium-doped germanium.
-Considering these issues and looking for a potential new approach, Jacobson notes that X-ray absorption measurements at the Australian Synchrotron were “the first real sign” that Shabani’s team had grown something special. “The gallium signal was exceptionally clean, and early modelling showed that the data lined up almost perfectly with a purely substitutional picture,” he explains. “That was a genuine surprise. Once we confirmed and extended those results, it became clear that we could probe the mechanism of superconductivity in these films without the usual complications from disorder or spurious phases.”
-In a new approach, Jacobson, Shabani and colleagues used MBE to grow the crystals instead of relying on ion implantation techniques, allowing the germanium to be hyperdoped with gallium. Using MBE forces the gallium atoms to replace germanium atoms within the crystal lattice at levels much higher than previously seen. The process also provided better control over parasitic heating during film growth, allowing the researchers to achieve the structural precision required to understand and control the superconductivity of these germanium:gallium (Ge:Ga) materials, which were found to become superconducting at 3.5 K with a carrier concentration of 4.15 × 10²¹ holes/cm³. The critical gallium dopant threshold to achieve this was 17.9%.
-Using synchrotron-based X-ray absorption, the team found that the gallium dopants were substitutionally incorporated into the germanium lattice and induced a tetragonal distortion to the unit cell. Density functional theory calculations showed that this causes a shift in the Fermi level into the valence band and flattens electronic bands. This suggests that the structural order of gallium in the germanium lattice creates a narrow band that facilitates superconductivity in germanium, and that this superconductivity arises intrinsically in the germanium, rather than being governed by defects and gallium clusters.
- -The researchers tested trilayer heterostructures – Ge:Ga/Si/Ge:Ga and Ge:Ga/Ge/Ge:Ga – as proof-of-principle designs for vertical Josephson junction device architectures. In the future, they hope to develop these into fully fledged Josephson junction devices.
-Commenting on the team’s future plans for this research, Jacobson concludes: “I’m very keen to examine this material with low-temperature scanning tunnelling microscopy (STM) to directly measure the superconducting gap, because STM adds atomic-scale insights that complement our other measurements and will help clarify what sets hyperdoped germanium apart”.
-The post Scientists realize superconductivity in traditional semiconducting material appeared first on Physics World.
-]]>The post Better coffee, easier parking and more: the fascinating physics of daily life appeared first on Physics World.
-]]>As well as the rich physics of coffee, we chat about strategies for finding the best parking spot and the efficient boarding of aeroplanes. If you have ever wondered why a runner’s ponytail swings from side-to-side when they reach a certain speed – we have the answer for you.
-Other daily mysteries that we explore include how a hard steel razor blade can be dulled by cutting relatively soft hairs and why quasiparticles called “jamitons” are helping physicists understand the spontaneous appearance of traffic jams. And a warning for squeamish listeners, we do talk about the amazing virus-spreading capabilities of a flushing toilet.
--
This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March, 2026, in Denver, Colorado, and online.
-The post Better coffee, easier parking and more: the fascinating physics of daily life appeared first on Physics World.
-]]>The post Cosmic dawn: the search for the primordial hydrogen signal appeared first on Physics World.
-]]>Bull is referring to the vital but baffling period in the early universe – from 380,000 years to one billion years after the Big Bang – when its structure went from simple to complex. To lift the veil on this epoch, experiments around the world – from Australia to the Arctic – are racing to find a specific but elusive signal from the earliest hydrogen atoms. This signal could confirm or disprove scientists’ theories of how the universe evolved and the physics that governs it.
-Hydrogen is the most abundant element in the universe. As neutral hydrogen atoms change states, they can emit or absorb photons. This spectral transition, which can be stimulated by radiation, produces an emission or absorption radio wave signal with a wavelength of 21 cm. To find out what happened during that early universe, astronomers are searching for these 21 cm photons that were emitted by primordial hydrogen atoms.
-But despite more teams joining the hunt every year, no-one has yet had a confirmed detection of this radiation. So who will win the race to find this signal and how is the hunt being carried out?
-Let’s first return to about 380,000 years after the Big Bang, when the universe had expanded and cooled to below 3000 K. At this stage, neutral atoms, including atomic hydrogen, could form. Thanks to the absence of free electrons, ordinary matter particles could decouple from light, allowing it to travel freely across the universe. This ancient radiation that permeates the sky is known as the cosmic microwave background (CMB).
- -But after that we don’t know much about what happened for the next few hundred million years. Meanwhile, the oldest known galaxy MoM-z14 – which existed about 280 million years after the Big Bang – was observed in April 2025 by the James Webb Space Telescope. So there is currently a gap of just under 280 million years in our observations of the early universe. “It’s one of the last blank spots,” says Anastasia Fialkov, an astrophysicist at the Institute of Astronomy of the University of Cambridge.
-This “blank spot” is a bridge between the early, simple universe and today’s complex structured cosmos. During this early epoch, the universe went from being filled with a thick cloud of neutral hydrogen, to being diversely populated with stars, black holes and everything in between. It covers the end of the cosmic dark ages, the cosmic dawn, and the epoch of reionization – and is arguably one of the most exciting periods in our universe’s evolution.
-During the cosmic dark ages, after the CMB flooded the universe, the only “ordinary” matter (made up of protons, neutrons and electrons) was neutral hydrogen (75% by mass) and neutral helium (25%), and there were no stellar structures to provide light. It is thought that gravity then magnified any slight fluctuations in density, causing some of this primordial gas to clump and eventually form the first stars and galaxies – a time called the cosmic dawn. Next came the epoch of reionization, when ultraviolet and X-ray emissions from those first celestial objects heated and ionized the hydrogen atoms, turning the neutral gas into a charged plasma of electrons and protons.
-The 21 cm signal astronomers are searching for was produced when the spectral transition was excited by collisions in the hydrogen gas during the dark ages and then by the first photons from the first stars during the cosmic dawn. However, the intensity of the 21 cm signal can only be measured against the CMB, which acts as a steady background source of 21 cm photons.
-When the hydrogen was colder than the background radiation, there were few collisions, and the atoms would have absorbed slightly more 21 cm photons from the CMB than they emitted themselves. The 21 cm signal would appear as a deficit, or absorption signal, against the CMB. But when the neutral gas was hotter than the CMB, the atoms would emit more photons than they absorbed, causing the 21 cm signal to be seen as a brighter emission against the CMB. These absorption and emission rates depend on the density and temperature of the gas, and the timing and intensity of radiation from the first cosmic sources. Essentially, the 21 cm signal became imprinted with how those early sources transformed the young universe.
-One way scientists are trying to observe this imprint is to measure the average – or “global” – signal across the sky, looking at how it shifts from absorption to emission compared to the CMB. Normally, a 21 cm radio wave signal has a frequency of about 1420 MHz. But this ancient signal, according to theory, has been emitted and absorbed at different intensities throughout this cosmic “blank spot”, depending on the universe’s evolutionary processes at the time. The expanding universe has also stretched and distorted the signal as it travelled to Earth. Theories predict that it would now be in the 1 to 200 MHz frequency range – with lower frequencies corresponding to older eras – and would have a wavelength of metres rather than centimetres.
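To put numbers on this stretching, the shift is described by the standard redshift relation f_obs = f_rest/(1 + z). Below is a minimal Python sketch of that relation (our illustration, not code from any of the experiments); the band edges are the figures quoted in this article.

```python
# Minimal sketch: how cosmic expansion maps the 1420 MHz rest-frame hydrogen
# line into the low-frequency bands quoted above, via f_obs = f_rest/(1 + z).
F_REST_MHZ = 1420.4  # rest-frame frequency of the 21 cm hyperfine line

def observed_frequency(z: float) -> float:
    """Frequency (MHz) at which a 21 cm photon emitted at redshift z is seen today."""
    return F_REST_MHZ / (1.0 + z)

def redshift_at(f_obs_mhz: float) -> float:
    """Redshift at which the 21 cm line lands at a given observed frequency."""
    return F_REST_MHZ / f_obs_mhz - 1.0

# The 78 MHz EDGES dip corresponds to roughly z = 17, deep in the cosmic dawn,
# while the REACH band (50-170 MHz) spans roughly z = 27 down to z = 7.
for f in (78.0, 50.0, 170.0):
    print(f"{f:5.0f} MHz -> z = {redshift_at(f):4.1f}")
```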
-Importantly, the shape of the global 21 cm signal over time could confirm the lambda-cold dark matter (ΛCDM) model, which is the most widely accepted theory of the cosmos; or it could upend it. Many astronomers have dedicated their careers to finding this radiation, but it is challenging for a number of reasons.
-Unfortunately, the signal is incredibly faint. Its brightness temperature, which is measured as the change in the CMB’s black body temperature (2.7 K), will only be in the region of 0.1 K.
-
Figure 1: (a) A simulation of the sky-averaged (global) signal as a function of time (horizontal) and space (vertical). (b) A typical model of the global 21 cm line with the main cosmic events highlighted. Each experiment searching for the global 21 cm signal focuses on a particular frequency band. For example, the Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) is looking at the 50–170 MHz range (blue).
--
There is also no single source of this emission, so, like the CMB, it permeates the universe. “If it was the only signal in the sky, we would have found it by now,” says Eloy de Lera Acedo, head of Cavendish Radio Astronomy and Cosmology at the University of Cambridge. But the universe is full of contamination, with the Milky Way being a major culprit. Scientists are searching for 0.1 K in an environment “that’s a million times brighter”, he explains.
-And even before this signal reaches the radio-noisy Earth, it has to travel through the atmosphere, which further distorts and contaminates it. “It’s a very difficult measurement,” says Rigel Cappallo, a research scientist at the MIT Haystack Observatory. “It takes a really, really well calibrated instrument that you understand really well, plus really good modelling.”
-In 2018 the Experiment to Detect the Global EoR Signature (EDGES) – a collaboration between Arizona State University and MIT Haystack Observatory – hit the headlines when it claimed to have detected the global 21 cm signal (Nature 555 67).
-The EDGES instrument is a dipole antenna, which resembles a ping-pong table with a gap in the middle (see photo at top of article for the 2024 set-up). It is mounted on a large metal groundsheet, which is about 30 × 30 m. Its ground-breaking observation was made at a remote site in western Australia, far from radio frequency interference.
-But in the intervening seven years, no-one else has been able to replicate the EDGES results.
-The spectrum dip that EDGES detected was very different from what theorists had expected. “There is a whole family of models that are predicted by the different cosmological scenarios,” explains Ravi Subrahmanyan, a research scientist at Australia’s national science agency CSIRO. “When we take measurements, we compare them with the models, so that we can rule those models in or out.”
-In general, the current models predict a very specific envelope of signal possibilities (see figure 1). First, they anticipate an absorption dip in brightness temperature of around 0.1 to 0.2 K, caused by the temperature difference between the cold hydrogen gas (in an expanding universe) and the warmer CMB. Then, a speedy rise and photon emission is predicted as the gas starts to warm when the first stars form, and the signal should spike dramatically when the first X-ray binary stars fire up and heat up the surrounding gas. The signal is then expected to fade as the epoch of reionization begins, because ionized particles cannot undergo the spectral transition. With models, scientists theorize when this happened, how many stars there were, and how the cosmos unfurled.
-Figure 2: The 21 cm signals predicted by current cosmology models (coloured lines) and the detection by the EDGES experiment (dashed black line).
--
“It’s just one line, but it packs in so many physical phenomena,” says Fialkov, referring to the shape of the 21 cm signal’s brightness temperature over time. The timing of the dip, its gradient and magnitude all represent different milestones in cosmic history, which affect how it evolved.
-The EDGES team, however, reported a dip of more than double the predicted size, at about 78 MHz (see figure 2). While the frequency was consistent with predictions, the very wide and deep dip of the signal took the community by surprise.
-“It would be a revolution in physics, because that signal will call for very, very exotic physics to explain it,” says de Lera Acedo. “Of course, the first thing we need to do is to make sure that that is actually the signal.”
-The EDGES claim has galvanized the cosmology community. “It set a cat among the pigeons,” says Bull. “People realized that, actually, there’s some very exciting science to be done here.” Some groups are trying to replicate the EDGES observation, while others are trying new approaches to detect the signal that the models promise.
-The Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) – a collaboration between the University of Cambridge and Stellenbosch University in South Africa – focuses on the 50–170 MHz frequency range. Sitting on the dry and empty plains of South Africa’s Northern Cape, it is targeting the EDGES observation (Nature Astronomy 6 984).
-
In this radio-quiet environment, REACH has set up two antennas: one looks like EDGES’ dipole ping-pong table, while the other is a spiral cone. They sit on top of a giant metallic mesh – the ground plate – in the shape of a many-pointed star, which aims to minimize reflections from the ground.
-Hunting for this signal “requires precision cosmology and engineering”, says de Lera Acedo, the principal investigator on REACH. Reflections from the ground or mesh, calibration errors and signals from the soil are the kryptonite of cosmic dawn measurements. “You need to reduce your systemic noise, do better analysis, better calibration, better cleaning [to remove other sources from observations],” he says.
-Another radio telescope, dubbed the Shaped Antenna measurement of the background Radio Spectrum (SARAS) – which was established in the late 2000s by the Raman Research Institute (RRI) in Bengaluru, India – has undergone a number of transformations to reduce noise and limit other sources of radiation. Over time, it has morphed from a dipole on the ground to a metallic cone floating on a raft. It is looking at 40 to 200 MHz (Exp. Astron. 51 193).
-After the EDGES claim, SARAS pivoted its attention to verifying the detection, explains Saurabh Singh, a research scientist at the RRI. “Initially, we were not able to get down to the required sensitivity to be able to say anything about their detection,” he explains. “That’s why we started floating our radiometer on water.” Buoying the experiment reduces ground contamination and creates a more predictable surface to include in calculations.
-
Using data from their floating radiometer, in 2022 Singh and colleagues disfavoured EDGES’ claim (Nature Astronomy 6 607), but for many groups the detection still remains a target for observations.
-While SARAS has yet to detect a cosmic-dawn signal of its own, Singh says that non-detection is also an important element of finding the global 21 cm signal. “Non-detection gives us an opportunity to rule out a lot of these models, and that has helped us to reject a lot of properties of these stars and galaxies,” he says.
-Raul Monsalve Jara – a cosmologist at the University of California, Berkeley – has been part of the EDGES collaboration since 2012, but decided to also explore other ways to detect the signal. “My view is that we need several experiments doing different things and taking different approaches,” he says.
-The Mapper of the IGM Spin Temperature (MIST) experiment, of which Monsalve is co-principal investigator, is a collaboration between Chilean, Canadian, Australian and American researchers. These instruments are looking at 25 to 105 MHz (MNRAS 530 4125). “Our approach was to simplify the instrument, get rid of the metal ground plate, and to take small, portable instruments to remote locations,” he explains. These locations have to fulfil very specific requirements – everything around the instrument, from mountains to the soil, can impact the instrument’s performance. “If the soil itself is irregular, that will be very difficult to characterize and its impact will be difficult to remove [from observations],” Monsalve says.
-
-So far, the MIST instrument, which is also a dipole ping-pong table, has visited a desert in California, another in Nevada, and even the Arctic. The instrument is portable and easy to set up, and each time the researchers spend a few weeks at the site collecting data, Monsalve explains. The team is planning more observations in Chile. “If you suspect that your environment could be doing something to your measurements, then you need to be able to move around,” continues Monsalve. “And we are contributing to the field by doing that.”
-Aaron Parsons, also from the University of California, Berkeley, decided that the best way to detect this elusive signal would be to try and eliminate the ground entirely – by suspending a rotating antenna over a giant canyon with 100 m empty space in every direction.
-His Electromagnetically Isolated Global Signal Estimation Platform (EIGSEP) includes an antenna hanging four storeys above the ground, attached to a Kevlar cable strung across a canyon in Utah. It’s observing at 50 to 250 MHz. “It continuously rotates around and twists every which way,” Parsons explains. The team hopes this will allow it to calibrate the instrument very accurately. Two antennas on the ground cross-correlate observations. EIGSEP began making observations last year.
-More experiments are expected to come online in the next year. The Remote HI eNvironment Observer (RHINO), an initiative of the University of Manchester, will have a horn-shaped receiver made of a metal mesh that is usually used to construct skyscrapers. Horn shapes are particularly good for calibration, allowing for very precise measurements. The most famous horn-shaped antenna is Bell Laboratories’ Holmdel Horn Antenna in the US, with which Arno Penzias and Robert Wilson accidentally discovered the CMB in 1965.
-Initially, RHINO will be based at Jodrell Bank Observatory in the UK, but like other experiments, it could travel to other remote locations to hunt for the 21 cm signal.
-Similarly, Subrahmanyan – who established the SARAS experiment in India and is now with CSIRO in Australia – is working to design a new radiometer from scratch. The instrument, which will focus on 40–160 MHz, is called Global Imprints from Nascent Atoms to Now (GINAN). He says that it will feature a recently patented self-calibrating antenna. “It gives a much more authentic measurement of the sky signal as measured by the antenna,” he explains.
-Meanwhile, the EDGES collaboration has not been idle. MIT Haystack Observatory’s Cappallo is the project manager for EDGES, which is currently in its third iteration. It is still the size of a desk, but its top now looks like a box, with closed sides and its electronics tucked inside, and it sits on an even larger metal ground plate. The team has now made observations from islands in the Canadian archipelago and in Alaska’s Aleutian island chain (see photo at top of article).
-“The 2018 EDGES result is not going to be accepted by the community until somebody completely independently verifies it,” Cappallo explains. “But just for our own sanity and also to try to improve on what we can do, we want to see it from as many places as possible and as many conditions as possible.” The EDGES team has replicated its results using the same data analysis pipeline, but no-one else has been able to reproduce the unusual signal.
-All the astronomers interviewed welcomed the introduction of new experiments. “I think it’s good to have a rich field of people trying to do this experiment because nobody is going to trust any one measurement,” says Parsons. “We need to build consensus here.”
-Some astronomers have decided to avoid the struggles of trying to detect the global 21 cm signal from Earth – instead, they have their sights set on the Moon. Earth’s atmosphere is one of the reasons why the 21 cm signal is so difficult to measure. The ionosphere, a charged region of the atmosphere, distorts and contaminates this incredibly faint signal. On the far side of the Moon, any antenna would also be shielded from the cacophony of radio-frequency interference from Earth.
-“This is why some experiments are going to the Moon,” says Parsons, adding that he is involved in NASA’s LuSEE-Night experiment. LuSEE-Night, or the Lunar Surface Electromagnetics Experiment, aims to land a low-frequency experiment on the Moon next year.
-In July, at the National Astronomical Meeting in Durham, the University of Cambridge’s de Lera Acedo presented a proposal to put a miniature radiometer into lunar orbit. Dubbed “Cosmocube”, it will be a nanosatellite that will orbit the Moon searching for this 21 cm signal.
-
“It is just in the making,” says de Lera Acedo, adding that it will not be in operation for at least a decade. “But it is the next step.”
-Meanwhile, groups here on Earth are in a race to detect this elusive signal. The instruments are getting more sensitive, the modelling is improving, and the unknowns are reducing. “If we do the experiments right, we will find the signal,” Monsalve believes. The big question is who, of the many groups with their hats in the ring, is doing the experiment “right”.
-The post Cosmic dawn: the search for the primordial hydrogen signal appeared first on Physics World.
-]]>The post Ten-ion system brings us a step closer to large-scale qubit registers appeared first on Physics World.
-]]>
Researchers in Austria have entangled matter-based qubits with photonic qubits in a ten-ion system. The technique is scalable to larger ion-qubit registers, paving the way for the creation of larger and more complex quantum networks.
-
Quantum networks consist of matter-based nodes that store and process quantum information and are linked through photons (quanta of light). Already, Ben Lanyon’s group at the University of Innsbruck has made advances in this direction by entangling two ions in different systems. Now, in a new paper published in Physical Review Letters, they describe how they have developed and demonstrated a new method to entangle a string of ten ions with photons. In the future, this approach could enable the entanglement of sets of ions in different locations through light, rather than one ion at a time.
-To achieve this, Lanyon and colleagues trapped a chain of 10 calcium ions in a linear trap placed within an optical cavity. By changing the trapping voltages, the researchers moved each ion, one by one, into the cavity. Once inside, the ion was placed in the “sweet spot”, where the ion’s interaction with the cavity is the strongest. There, the ion emitted a single photon when exposed to a 393 nm Raman laser beam. This beam was tightly focused on one ion, guaranteeing that the emitted photon – collected in a single-mode optical fibre – comes from one ion at a time. This process was carried out ten times, once per ion, to obtain a train of ten photons.
- -By using quantum state tomography, the researchers reconstructed the density matrix, which describes the correlation between the states of ions (i) and photons (j). To do so, they measured every ion and photon state in three different bases, resulting in nine Pauli-basis configurations of quantum measurements. From the density matrix, the concurrence (a measure of entanglement) between the ion (i) and photon (j) was found to be positive only when i = j, and equal to zero otherwise. This implies that each ion is uniquely entangled with the photon it produced, and unentangled with the photons produced by the other ions.
-From the density matrix, they also calculated the fidelity with the Bell state (a state of maximum entanglement), yielding an average of 92%. As Marco Canteri points out, “this fidelity characterizes the quality of entanglement between the ion-photon pair for i=j”.
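For readers curious how these two figures of merit are defined, here is a minimal NumPy sketch of the textbook formulas – the Wootters concurrence and the fidelity with the Bell state – applied to a toy two-qubit density matrix. It illustrates the definitions only and is not the Innsbruck team’s analysis code.

```python
import numpy as np

sigma_y = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sigma_y, sigma_y)  # two-qubit spin-flip operator

def concurrence(rho: np.ndarray) -> float:
    """Wootters concurrence: 0 for a separable state, 1 for a Bell state."""
    rho_tilde = YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def bell_fidelity(rho: np.ndarray) -> float:
    """Fidelity <Phi+|rho|Phi+> with the maximally entangled Bell state."""
    phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
    return float(np.real(phi_plus @ rho @ phi_plus))

# Toy check on a slightly depolarized Bell state (not the experimental data):
p = 0.9
bell = np.outer([1, 0, 0, 1], [1, 0, 0, 1]) / 2.0
rho = p * bell + (1 - p) * np.eye(4) / 4.0
print(f"concurrence = {concurrence(rho):.3f}, fidelity = {bell_fidelity(rho):.3f}")
```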
- -This work developed and demonstrated a technique whereby matter-based qubits and photonic qubits can be entangled, one at a time, in ion strings. Now, the group aims to “demonstrate universal quantum logic within the photon-interfaced 10-ion register and, building up towards entangling two remote 10-ion processors through the exchange of photons between them,” explains team member Victor Krutyanskiy. If this method effectively scales to larger systems, more complex quantum networks could be built. This would lead to applications in quantum communication and quantum sensing.
-The post Ten-ion system brings us a step closer to large-scale qubit registers appeared first on Physics World.
-]]>The post Non-invasive wearable device measures blood flow to the brain appeared first on Physics World.
-]]>Emerging as an alternative, modalities based on optical transcranial measurement are cost-effective and easy to use. In particular, speckle contrast optical spectroscopy (SCOS) – an offshoot of laser speckle contrast imaging, which uses laser light speckles to visualize blood vessels – can measure cerebral blood flow (CBF) with high temporal resolution, typically above 30 Hz, and cerebral blood volume (CBV) through optical signal attenuation.
-Researchers at the California Institute of Technology (Caltech) and the Keck School of Medicine’s USC Neurorestoration Center have designed a lightweight SCOS system that accurately measures blood flow to the brain, distinguishing it from blood flow to the scalp. Co-senior author Charles Liu of the Keck School of Medicine and team describe the system and their initial experimentation with it in APL Bioengineering.
-
The SCOS system consists of a 3D-printed head mount designed for secure placement over the temple region. It holds a single 830 nm laser illumination fibre and seven detector fibres positioned at seven different source-to-detector (S–D) distances (between 0.6 and 2.6 cm) to simultaneously capture blood flow dynamics across layers of the scalp, skull and brain. Fibres with shorter S–D distances acquire shallower optical data from the scalp, while those with greater distances obtain deeper and broader data. The seven channels are synchronized to exhibit identical oscillation frequencies corresponding to the heart rate and cardiac cycle.
-When the SCOS system directs the laser light onto a sample, multiple random scattering events occur before the light exits the sample, creating speckles. These speckles, which materialize on rapid timescales, are the result of interference of light travelling along different trajectories. Movement within the sample (of red blood cells, for instance) causes dynamic changes in the speckle field. These changes are captured by a multi-million-pixel camera with a frame rate above 30 frames/s and quantified by calculating the speckle contrast value for each image.
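As a rough illustration of that final step (our sketch, with an arbitrary window size, not the Caltech/USC pipeline), the speckle contrast K = σ/μ can be computed over small pixel windows like this:

```python
import numpy as np

def speckle_contrast_map(frame: np.ndarray, win: int = 7) -> np.ndarray:
    """Local speckle contrast K = std/mean over win x win pixel blocks.

    Faster-moving scatterers (e.g. red blood cells) blur the speckle during
    the exposure and lower K; slower flow leaves the contrast higher.
    """
    h, w = frame.shape
    hh, ww = h // win, w // win
    # Crop to a whole number of blocks, then take per-block statistics.
    blocks = frame[: hh * win, : ww * win].reshape(hh, win, ww, win)
    mean = blocks.mean(axis=(1, 3))
    std = blocks.std(axis=(1, 3))
    return std / np.maximum(mean, 1e-12)

# Toy usage on a synthetic frame with speckle-like (exponential) statistics:
rng = np.random.default_rng(0)
frame = rng.exponential(scale=100.0, size=(256, 256))
K = speckle_contrast_map(frame)
print(f"mean contrast = {K.mean():.2f}")  # ~1 for fully developed static speckle
```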
-The researchers used the SCOS system to perform CBF and CBV measurements in 20 healthy volunteers. To separate surface blood dynamics from brain signals, the researchers gently pressed on the superficial temporal artery (a terminal branch of the external carotid artery that supplies blood to the face and scalp) to block blood flow to the scalp.
- -In tests on the volunteers, when temporal artery blood flow was occluded for 8 s, scalp-sensitive channels exhibited significant decreases in blood flow while brain-sensitive channels showed minimal change, enabling signals from the internal carotid artery that supplies blood to the brain to be clearly distinguished. Additionally, the team found that positioning the detector 2.3 cm or more away from the source allowed for optimal brain blood flow measurement while minimizing interference from the scalp.
-“Combined with the simultaneous measurements at seven S–D separations, this approach enables the first quantitative experimental assessment of how scalp and brain signal contributions vary with depth in SCOS-based CBF measurements and, more broadly, in optical measurements,” they write. “This work also provides crucial insights into the optimal device S–D distance configuration for preferentially probing brain signal over scalp signal, with a practical and subject-friendly alternative for evaluating depth sensitivity, and complements more advanced, hardware-intensive strategies such as time-domain gating.”
-The researchers are now working to improve the signal-to-noise ratio of the system. They plan to introduce a compact, portable laser and develop a custom-designed extended camera that spans over 3 cm in one dimension, enabling simultaneous and continuous measurement of blood dynamics across S–D distances from 0.5 to 3.5 cm. These design advancements will enhance spatial resolution and enable deeper brain measurements.
- -“This crucial step will help transition the system into a compact, wearable form suitable for clinical use,” comments Liu. “Importantly, the measurements described in this publication were achieved in human subjects in a very similar manner to how the final device will be used, greatly reducing barriers to clinical application.”
-“I believe this study will advance the engineering of SCOS systems and bring us closer to a wearable, clinically practical device for monitoring brain blood flow,” adds co-author Simon Mahler, now at Stevens Institute of Technology. “I am particularly excited about the next stage of this project: developing a wearable SCOS system that can simultaneously measure both scalp and brain blood flow, which will unlock many fascinating new experiments.”
-The post Non-invasive wearable device measures blood flow to the brain appeared first on Physics World.
-]]>The post The future of quantum physics and technology debated at the Royal Institution appeared first on Physics World.
-]]>In the 100 years since Werner Heisenberg first developed quantum mechanics on the island of Helgoland in June 1925, it has proved to be an incredibly powerful, successful and logically consistent theory. Our understanding of the subatomic world is no longer the “lamentable hodgepodge of hypotheses, principles, theorems and computational recipes”, as the Israeli physicist and philosopher Max Jammer memorably once described it.
-In fact, quantum mechanics has not just transformed our understanding of the natural world; it has immense practical ramifications too, with so-called “quantum 1.0” technologies – lasers, semiconductors and electronics – underpinning our modern world. But as was clear from the UK National Quantum Technologies Showcase in London last week, organized by Innovate UK, the “quantum 2.0” revolution is now in full swing.
- -The day-long event, which is now in its 10th year, featured over 100 exhibitors, including many companies that are already using fundamental quantum concepts such as entanglement and superposition to support the burgeoning fields of quantum computing, quantum sensing and quantum communication. The show was attended by more than 3000 delegates, some of whom almost had to be ushered out of the door at closing time, so keen were they to keep talking.
- -Last week also saw a two-day conference at the historic Royal Institution (RI) in central London that was a centrepiece of IYQ in the UK and Ireland. Entitled Quantum Science and Technology: the First 100 Years; Our Quantum Future and attended by over 300 people, it was organized by the History of Physics and the Business Innovation and Growth groups of the Institute of Physics (IOP), which publishes Physics World.
-The first day, focusing on the foundations of quantum mechanics, ended with a panel discussion – chaired by my colleague Tushna Commissariat and Daisy Shearer from the UK’s National Quantum Computing Centre – with physicists Fay Dowker (Imperial College), Jim Al-Khalili (University of Surrey) and Peter Knight. They talked about whether the quantum wavefunction provides a complete description of physical reality, prompting much discussion with the audience. As Al-Khalili wryly noted, if entanglement has emerged as the fundamental feature of quantum reality, then “decoherence is her annoying and ever-present little brother”.
-Knight, meanwhile, who is a powerful figure in quantum-policy circles, went as far as to say that the limit of decoherence – and indeed the boundary between the classical and quantum worlds – is not a fixed, yet-to-be-revealed point. Instead, he mused, it will be determined by how much money, ingenuity and time physicists have at their disposal.
- -On the second day of the IOP conference at the RI, I chaired a discussion that brought together four future leaders of the subject: Mehul Malik (Heriot-Watt University) and Sarah Malik (University College London) along with industry insiders Nicole Gillett (Riverlane) and Muhammad Hamza Waseem (Quantinuum).
-As well as outlining the technical challenges in their fields, the speakers all stressed the importance of developing a “skills pipeline” so that the quantum sector has enough talented people to meet its needs. Also vital will be the need to communicate the mysteries and potential of quantum technology – not just to the public but to industrialists, government officials and venture capitalists. By many measures, the UK is at the forefront of quantum tech – and it is a lead it should not let slip.
-
The week ended with Al-Khalili giving a public lecture, also at the Royal Institution, entitled “A new quantum world: ‘spooky’ physics to tech revolution”. It formed part of the RI’s famous Friday night “discourses”, which this year celebrate their 200th anniversary. Al-Khalili, who also presents A Life Scientific on BBC Radio 4, is now the only person ever to have given three RI discourses.
-After the lecture, which was sold out, he took part in a panel discussion with Knight and Elizabeth Cunningham, a former vice-president for membership at the IOP. Al-Khalili was later presented with a special bottle of “Glentanglement” whisky made by Glasgow-based Fraunhofer UK for the Scottish Quantum Technology cluster.
-The post The future of quantum physics and technology debated at the Royal Institution appeared first on Physics World.
-]]>The post Neural networks discover unstable singularities in fluid systems appeared first on Physics World.
-]]>The Navier–Stokes partial differential equation was developed in the 19th century by Claude-Louis Navier and George Gabriel Stokes. It has proved its worth for modelling incompressible fluids in scenarios including water flow in pipes; airflow around aeroplanes; blood moving in veins; and magnetohydrodynamics in plasmas.
-No-one has yet proved, however, whether smooth, non-singular solutions to the equation always exist in three dimensions. “In the real world, there is no singularity…there is no energy going to infinity,” says fluid dynamics expert Pedram Hassanzadeh of the University of Chicago. “So if you have an equation that has a singularity, it tells you that there is some physics that is missing.” In 2000 the Clay Mathematics Institute in Denver, Colorado, listed this proof as one of seven key unsolved problems in mathematics, offering a reward of $1m for an answer.
-Researchers have traditionally tackled the problem analytically, but in recent decades high-level computational simulations have been used to assist in the search. In a 2023 paper, mathematician Tristan Buckmaster of New York University and colleagues used a special type of machine learning algorithm called a physics-informed neural network to address the question.
-“The main difference is…you represent [the solution] in a highly non-linear way in terms of a neural network,” explains Buckmaster. This allows it to occupy a lower-dimensional space with fewer free parameters, and therefore to be optimized more efficiently. Using this approach, the researchers successfully obtained the first stable singularity in the Euler equation, an analogue of the Navier–Stokes equation that does not include viscosity.
-A stable singularity will still occur if the initial conditions of the fluid are changed slightly – although the time taken for it to form may be altered. An unstable singularity, however, may never occur if the initial conditions are perturbed even infinitesimally. Some researchers have hypothesized that any singularities in the Navier–Stokes equation must be unstable, but finding unstable singularities in a computer model is extraordinarily difficult.
-“Before our result there hadn’t been an unstable singularity for an incompressible fluid equation found numerically,” says geophysicist Ching-Yao Lai of California’s Stanford University.
-In the new work the authors of the original paper and others teamed up with researchers at Google DeepMind to search for unstable singularities in a bounded 3D version of the Euler equation using a physics-informed neural network. “Unlike conventional neural networks that learn from vast datasets, we trained our models to match equations that model the laws of physics,” writes Yongji Wang of New York University and Stanford on DeepMind’s blog. “The network’s output is constantly checked against what the physical equations expect, and it learns by minimizing its ‘residual’, the amount by which its solution fails to satisfy the equations.”
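To see what minimizing a residual means in practice, here is a deliberately simplified sketch. It uses the 1D heat equation u_t = u_xx as a stand-in for the far harder Euler equations, and finite differences in place of a network’s automatic derivatives; a physics-informed network drives exactly this kind of residual towards zero during training.

```python
import numpy as np

def residual(u: np.ndarray, dx: float, dt: float) -> np.ndarray:
    """|u_t - u_xx| at interior grid points: near zero for a true solution."""
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dt)                    # d/dt
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2  # d2/dx2
    return np.abs(u_t - u_xx)

x = np.linspace(0.0, np.pi, 101)
t = np.linspace(0.0, 1.0, 101)
dx, dt = x[1] - x[0], t[1] - t[0]
T, X = np.meshgrid(t, x, indexing="ij")

exact = np.exp(-T) * np.sin(X)        # satisfies u_t = u_xx exactly
wrong = np.exp(-0.5 * T) * np.sin(X)  # a candidate that does not

# The exact solution leaves only finite-difference truncation error behind.
print(f"max residual, exact solution:  {residual(exact, dx, dt).max():.1e}")
print(f"max residual, wrong candidate: {residual(wrong, dx, dt).max():.1e}")
```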
-After an exhaustive search, at a precision orders of magnitude higher than that of a normal deep-learning protocol, the researchers discovered new families of singularities in the 3D Euler equation. They also found singularities in the related incompressible porous media equation used to model fluid flows in soil or rock; and in the Boussinesq equation that models atmospheric flows.
- -The researchers also gleaned insights into the strength of the singularities. This could be important as stronger singularities might be less readily smoothed out by viscosity when moving from the Euler equation to the Navier-Stokes equation. The researchers are now seeking to model more open systems to study the problem in a more realistic space.
-Hassanzadeh, who was not involved in the work, believes that it is significant – although the results are not unexpected. “If the Euler equation tells you that ‘Hey, there is a singularity,’ it just tells you that there is physics that is missing and that physics becomes very important around that singularity,” he explains. “In the case of Euler we know that you get the singularity because, at the very smallest scales, the effects of viscosity become important…Finding a singularity in the Euler equation is a big achievement, but it doesn’t answer the big question of whether Navier-Stokes is a representation of the real world, because for us Navier-Stokes represents everything.”
-He says the extension to studying the full Navier–Stokes equation will be challenging but that “they are working with the best AI people in the world at DeepMind,” and concludes “I’m sure it’s something they’re thinking about”.
-The work is available on the arXiv pre-print server.
-The post Neural networks discover unstable singularities in fluid systems appeared first on Physics World.
-]]>The post NASA’s Goddard Space Flight Center hit by significant downsizing appeared first on Physics World.
-]]>Based in Greenbelt, Maryland, the GSFC has almost 10 000 scientists and engineers, about 7000 of whom are directly employed by NASA contractors. Responsible for many of NASA’s most important uncrewed missions, telescopes, and probes, the centre is currently working on the Nancy Grace Roman Space Telescope, which is scheduled to launch in 2027, as well as the Dragonfly mission that is due to head for Saturn’s largest moon Titan in 2028.
- -The ability to meet those schedules has now been put in doubt by the Trump administration’s proposed budget for financial year 2026, which started in October. It calls for NASA to receive almost $19bn – far less than the $25bn it has received for the past two years. If passed, the budget would see Goddard lose more than 42% of its staff.
-Congress, which passes the final budget, is not planning to cut NASA so deeply as it prepares its 2026 budget proposal. But on 24 September, Goddard managers began what they told employees was “a series of moves…that will reduce our footprint into fewer buildings”. The shift is intended to “bring down overall operating costs while maintaining the critical facilities we need for our core capabilities of the future”.
-While this is part of a 20-year “master plan” for the GSFC that NASA’s leadership approved in 2019, the management’s memo stated that “all planned moves will take place over the next several months and be completed by March 2026”. A report in September by Democratic members of the Senate Committee on Commerce, Science, and Transportation, which is responsible for NASA, asserts that the cuts are “in clear violation of the [US] constitution [without] regard for the impacts on NASA’s science missions and workforce”.
-On 3 November, the Goddard Engineers, Scientists and Technicians Association, a union representing NASA workers, reported that the GSFC had already closed over a third of its buildings, including some 100 labs. This had been done, it says, “with extreme haste and with no transparent strategy or benefit to NASA or the nation”. The union adds that the “closures are being justified as cost-saving but no details are being provided and any short-term savings are unlikely to offset a full account of moving costs and the reduced ability to complete NASA missions”.
-Zoe Lofgren, the lead Democrat on the House of Representatives Science Committee, has demanded of Sean Duffy, NASA’s acting administrator, that the agency “must now halt” any laboratory, facility and building closure and relocation activities at Goddard. In a letter to Duffy dated 10 November, she also calls for the “relocation, disposal, excessing, or repurposing of any specialized equipment or mission-related activities, hardware and systems” to also end immediately.
-Lofgren now wants NASA to carry out a “full accounting of the damage inflicted on Goddard thus far” by 18 November. Owing to the government shutdown, no GSFC or NASA official was available to respond to Physics World’s requests for comment.
-Meanwhile, the Trump administration has renominated billionaire entrepreneur Jared Isaacman as NASA’s administrator. Trump had originally nominated Isaacman, who had flown on a private SpaceX mission and carried out a spacewalk, on the recommendation of SpaceX founder Elon Musk. But the administration withdrew the nomination in May following concerns among some Republicans that Isaacman had funded the Democratic Party.
-The post NASA’s Goddard Space Flight Center hit by significant downsizing appeared first on Physics World.
-]]>The post Designing better semiconductor chips: NP hard problems and forever chemicals appeared first on Physics World.
-]]>In this episode of the Physics World Weekly podcast, Margaret Harris reports from the Heidelberg Laureate Forum where she spoke to two researchers who are focused on some of these design challenges.
-Up first is Mariam Elgamal, who’s doing a PhD at Harvard University on the development of environmentally sustainable computing systems. She explains why sustainability goes well beyond energy efficiency and must consider the manufacturing process and the chemicals used therein.
-Harris also chats with Andrew Gunter, who is doing a PhD at the University of British Columbia on circuit design for computer chips. He talks about the maths-related problems that must be solved in order to translate a desired functionality into a chip that can be fabricated.
--
The post Designing better semiconductor chips: NP hard problems and forever chemicals appeared first on Physics World.
-]]>The post High-resolution PET scanner visualizes mouse brain structures with unprecedented detail appeared first on Physics World.
-]]>Submillimetre-resolution PET has been demonstrated by several research groups. Indeed, the QST team previously built a PET scanner with 0.55 mm resolution – sufficient to visualize the thalamus and hypothalamus in the mouse brain. But identification of smaller structures such as the amygdala and cerebellar nuclei has remained a challenge.
-“Sub-0.5 mm resolution is important to visualize mouse brain structures with high quantification accuracy,” explains first author Han Gyu Kang. “Moreover, this research work will change our perspective about the fundamental limit of PET resolution, which had been regarded to be around 0.5 mm due to the positron range of [the radioisotope] fluorine-18”.
-With Monte Carlo simulations revealing that sub-0.5 mm resolution could be achievable with optimal detector parameters and system geometry, Kang and colleagues performed a series of modifications to their submillimetre-resolution PET (SR-PET) to create the new high-resolution PET (HR-PET) scanner.
-The HR-PET, described in IEEE Transactions on Medical Imaging, is based around two 48 mm-diameter detector rings with an axial coverage of 23.4 mm. Each ring contains 16 depth-of-interaction (DOI) detectors (essential to minimize parallax error in a small-diameter ring) made from three layers of LYSO crystal arrays stacked in a staggered configuration, with the outer layer coupled to a silicon photomultiplier (SiPM) array.
- -Compared with their previous design, the researchers reduced the detector ring diameter from 52.5 to 48 mm, which served to improve geometrical efficiency and minimize the noncollinearity effect. They also reduced the crystal pitch from 1.0 to 0.8 mm and the SiPM pitch from 3.2 to 2.4 mm, improving the spatial resolution and crystal decoding accuracy, respectively.
-Other changes included optimizing the crystal thicknesses to 3, 3 and 5 mm for the first, second and third arrays, as well as use of a narrow energy window (440–560 keV) to reduce the scatter fraction and inter-crystal scattering events. “The optimized staggered three-layer crystal array design is also a key factor to enhance the spatial resolution by improving the spatial sampling accuracy and DOI resolution compared with the previous SR-PET,” Kang points out.
-Performance tests showed that the HR-PET scanner had a system-level energy resolution of 18.6% and a coincidence timing resolution of 8.5 ns. Imaging a NEMA 22Na point source revealed a peak sensitivity at the axial centre of 0.65% for the 440–560 keV energy window and a radial resolution of 0.67±0.06 mm from the centre to 10 mm radial offset (using 2D filtered-back-projection reconstruction) – a 33% improvement over that achieved by the SR-PET.
-To further evaluate the performance of the HR-PET, the researchers imaged a rod-based resolution phantom. Images reconstructed using a 3D ordered-subset-expectation-maximization (OSEM) algorithm clearly resolved all of the rods. This included the smallest rods with diameters of 0.5 and 0.45 mm, with average valley-to-peak ratios of 0.533 and 0.655, respectively – a 40% improvement over the SR-PET.
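For readers unfamiliar with the metric, the sketch below shows one common way a valley-to-peak ratio is read off a line profile through two adjacent rods: the profile value in the gap between the rods divided by the mean of the two peaks, with lower values meaning better-resolved rods. The profile here is synthetic and purely illustrative, not the QST team’s data, and the exact definition used in the paper may differ in detail.

```python
import numpy as np

def valley_to_peak(profile: np.ndarray) -> float:
    """Valley between the two strongest peaks, divided by the mean peak height."""
    interior = profile[1:-1]
    is_peak = (interior > profile[:-2]) & (interior > profile[2:])
    peak_idx = np.where(is_peak)[0] + 1
    # Keep the two highest peaks, ordered by position along the profile.
    top2 = np.sort(peak_idx[np.argsort(profile[peak_idx])[-2:]])
    valley = profile[top2[0] : top2[1] + 1].min()
    return valley / profile[top2].mean()

# Two rods 1 mm apart (centre to centre), blurred by an illustrative 0.3 mm PSF.
x = np.linspace(-2.0, 2.0, 401)  # mm
sigma = 0.30                     # mm
profile = (np.exp(-((x + 0.5) ** 2) / (2 * sigma**2))
           + np.exp(-((x - 0.5) ** 2) / (2 * sigma**2)))
print(f"valley-to-peak ratio = {valley_to_peak(profile):.2f}")
```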
-The researchers then used the HR-PET for in vivo mouse brain imaging. They injected 18F-FITM, a tracer used to image the central nervous system, into an awake mouse and performed a 30 min PET scan (with the animal anaesthetized) 42 min after injection. For comparison, they scanned the same mouse for 30 min with a preclinical Inveon PET scanner.
-
After OSEM reconstruction, strong tracer uptake in the thalamus, hypothalamus, cerebellar cortex and cerebellar nuclei was clearly visible in the coronal HR-PET images. A zoomed image distinguished the cerebellar nuclei and flocculus, while sagittal and axial images visualized the cortex and striatum. Images from the Inveon, however, could barely resolve these brain structures.
-The team also imaged the animal’s glucose metabolism using the tracer 18F-FDG. A 30 min HR-PET scan clearly delineated glucose transporter expression in the cortex, thalamus, hypothalamus and cerebellar nuclei. Here again, the Inveon could hardly identify these small structures.
- -The researchers note that the 18F-FITM and 18F-FDG PET images matched well with the anatomy seen in a preclinical CT scan. “To the best of our knowledge, this is the first separate identification of the hypothalamus, amygdala and cerebellar nuclei of mouse brain,” they write.
-Future plans for the HR-PET scanner, says Kang, include using it for research on neurodegenerative disorders, with tracers that bind to amyloid beta or tau protein. “In addition, we plan to extend the axial coverage over 50 mm to explore the whole body of mice with sub-0.5 mm resolution, especially for oncological research,” he says. “Finally, we would like to achieve sub-0.3 mm PET resolution with more optimized PET detector and system designs.”
-The post High-resolution PET scanner visualizes mouse brain structures with unprecedented detail appeared first on Physics World.
-]]>The post New experiments on static electricity cast doubt on previous studies in the field appeared first on Physics World.
-]]>Static electricity is also known as contact electrification because it occurs when charge is transferred from one object to another by touch. The most common laboratory example involves rubbing a balloon on someone’s head to make their hair stand on end. However, static electricity is also associated with many other activities, including coffee grinding, pollen transport and perhaps even the formation of rocky planets.
-One of the most useful ways of studying contact electrification is to move a metal tip slowly over the surface of a sample without touching it, recording a voltage all the while. These so-called scanning Kelvin methods produce an “image” of voltages created by the transferred charge. At the macroscale, around 100 μm to 10 cm, the main method is termed scanning Kelvin probe microscopy (SKPM). At the nanoscale, around 10 nm to 100 μm, a related but distinct variant known as Kelvin probe force microscopy (KPFM) is used instead.
- -In previous fundamental physics studies using these techniques, the main challenges have been to make sense of the stationary patterns of charge left behind after contact electrification, and to investigate how these patterns evolve over space and time. In the latest work, the ISTA team chose to ask a slightly different question: when are the dynamics of charge transfer too fast for measured stationary patterns to yield meaningful information?
-To find out, ISTA PhD student Felix Pertl built a special setup that could measure a sample’s surface charge with KPFM; transfer it below a linear actuator so that it could exchange charge when it contacted another material; and then transfer it underneath the KPFM again to image the resulting change in the surface charge.
-“In a typical set-up, the sample transfer, moving the AFM to the right place and reinitiation and recalibration of the KPFM parameters can easily take as long as tens of minutes,” Pertl explains. “In our system, this happens in as little as around 30 s. As all aspects of the system are completely automated, we can repeat this process, and quickly, many times.”
-
-This speed-up is important because static electricity dissipates relatively rapidly. In fact, the researchers found that the transferred charge disappeared from the sample’s surface in less time than most KPFM scans take. Their data also revealed that the deposited charge was, in effect, uniformly distributed across the surface and that its dissipation depended on the material’s electrical conductivity. Additional mathematical modelling and subsequent experiments confirmed that the more insulating a material is, the slower it dissipates charge.
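That conductivity dependence is qualitatively captured by the textbook Maxwell charge-relaxation time τ = ε0εr/σ, after which deposited charge has largely leaked away. The sketch below uses illustrative material parameters, not values measured in the study, to show how many orders of magnitude this timescale can span.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def relaxation_time(eps_r: float, sigma: float) -> float:
    """Maxwell charge-relaxation time tau = eps0*eps_r/sigma, in seconds.

    Deposited charge decays roughly as q(t) = q0*exp(-t/tau), so a lower
    bulk conductivity sigma means a longer-lived surface charge pattern.
    """
    return EPS0 * eps_r / sigma

# Illustrative (made-up) material parameters: (relative permittivity, S/m).
materials = {
    "slightly conductive polymer": (3.0, 1e-10),
    "very good insulator":         (2.1, 1e-16),
}
for name, (eps_r, sigma) in materials.items():
    print(f"{name}: tau ~ {relaxation_time(eps_r, sigma):.0e} s")
```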
-Pertl says that these results call into question the validity of some previous static electricity studies that used KPFM to study charge transfer. “The most influential paper in our field to date reported surface charge heterogeneity using KPFM,” he tells Physics World. At first, the ISTA team’s goal was to understand the origin of this heterogeneity. But when their own experiments showed an essentially homogenous distribution of surface charge, the researchers had to change tack.
-“The biggest challenge in our work was realizing – and then accepting – that we could not reproduce the results from this previous study,” Pertl says. “Convincing both my principal investigator and myself that our data revealed a very different physical mechanism required patience, persistence and trust in our experimental approach.”
-The discrepancy, he adds, implies that the surface heterogeneity previously observed was likely not a feature of static electricity, as was claimed. Instead, he says, it was probably “an artefact of the inability to image the charge before it had left the sample surface”.
-Studies of contact electrification go back a long way. Philippe Molinié of France’s GeePs Laboratory, who was not involved in this work, notes that the first experiments were performed by the English scientist William Gilbert as far back as the sixteenth century. As well as coining the term “electricity” (from the Greek “elektron”, meaning amber), Gilbert was also the first to establish that magnets maintain their attraction over time, while the forces produced by contact-charged insulators slowly decrease.
-“Four centuries later, many mysteries remain unsolved in the contact electrification phenomenon,” Molinié observes. He adds that the surfaces of insulating materials are highly complex and usually strongly disordered, which affects their ability to transfer charge at the molecular scale. “The dynamics of the charge neutralization, as Pertl and colleagues underline, is also part of the process and is much more complex than could be described by a simple resistance-capacitor model,” Molinié says.
- -Although the ISTA team studied these phenomena with sophisticated Kelvin probe microscopy rather than the rudimentary tools available to Gilbert, it is, Molinié says, “striking that the competition between charge transfer and charge screening that comes from the conductivity of an insulator, first observed by Gilbert, is still at the very heart of the scientific interrogations that this interesting new work addresses.”
-The Austrian researchers, who detail their work in Phys. Rev. Lett., say they hope their experiments will “encourage a more critical interpretation” of KPFM data in the future, with a new focus on the role of sample grounding and bulk conductivity in shaping observed charge patterns. “We hope it inspires KPFM users to reconsider how they design and analyse experiments, which could lead to more accurate insights into charge behaviour in insulators,” Pertl says.
-“We are now planning to deliberately engineer surface charge heterogeneity into our samples,” he reveals. “By tuning specific surface properties, we aim to control the sign and spatial distribution of charge on defined regions of these.”
-The post New experiments on static electricity cast doubt on previous studies in the field appeared first on Physics World.
-]]>The post SEMICON Europa 2025 presents cutting-edge technology for semiconductor R&D and production appeared first on Physics World.
-]]>The TechARENA portion of the event will cover a wide range of technology-related issues including new materials, future computing paradigms and the development of hi-tech skills in the European workforce. There will also be an Executive Forum, which will feature leaders in industry and government and will cover topics including silicon geopolitics and the use of artificial intelligence in semiconductor manufacturing.
-SEMICON Europa will be held at the Messe München, where it will feature a huge exhibition with over 500 exhibitors from around the world. The exhibition is spread out over three halls and here are some of the companies and product innovations to look out for on the show floor.
-As the boundaries between electronic and photonic technologies continue to blur, the semiconductor industry faces a growing challenge: how to test and align increasingly complex electro-photonic chip architectures efficiently, precisely, and at scale. At SEMICON Europa 2025, SmarAct will address this challenge head-on with its latest innovation – Fast Scan Align. This is a high-speed and high-precision alignment solution that redefines the limits of testing and packaging for integrated photonics.
-
In the emerging era of heterogeneous integration, electronic and photonic components must be aligned and interconnected with sub-micrometre accuracy. Traditional positioning systems often struggle to deliver both speed and precision, especially when dealing with the delicate coupling between optical and electrical domains. SmarAct’s Fast Scan Align solution bridges this gap by combining modular motion platforms, real-time feedback control, and advanced metrology into one integrated system.
-At its core, Fast Scan Align leverages SmarAct’s electromagnetic and piezo-driven positioning stages, which are capable of nanometre-resolution motion in multiple degrees of freedom. Fast Scan Align’s modular architecture allows users to configure systems tailored to their application – from wafer-level testing to fibre-to-chip alignment with active optical coupling. Integrated sensors and intelligent algorithms enable scanning and alignment routines that drastically reduce setup time while improving repeatability and process stability.
-Fast Scan Align’s compact modules allow various measurement techniques to be integrated in ways that were not previously possible. This has become decisive as the level of integration of complex electro-photonic chips continues to increase.
-Beyond wafer-level testing and packaging, positioning wafers with extreme precision is more crucial than ever for the highly integrated chips of the future. SmarAct’s PICOSCALE interferometer addresses this challenge by delivering picometre-level displacement measurements directly at the point of interest.
-When combined with SmarAct’s precision wafer stages, the PICOSCALE interferometer ensures highly accurate motion tracking and closed-loop control during dynamic alignment processes. This synergy between motion and metrology gives users unprecedented insight into the mechanical and optical behaviour of their devices – which is a critical advantage for high-yield testing of photonic and optoelectronic wafers.
-Visitors to SEMICON Europa will also experience how all of SmarAct’s products – from motion and metrology components to modular systems and up to turn-key solutions – integrate seamlessly, offering intuitive operation, full automation capability, and compatibility with laboratory and production environments alike.
-For more information visit SmarAct at booth B1.860 or explore more of SmarAct’s solutions in the semiconductor and photonics industry.
-Thyracont Vacuum Instruments will be showcasing its precision vacuum metrology systems in exhibition hall C1. Made in Germany, the company’s broad portfolio combines diverse measurement technologies – including piezo, Pirani, capacitive, cold cathode, and hot cathode – to deliver reliable results across a pressure range from 2000 to 3e-11 mbar.
-
Front-and-centre at SEMICON Europa will be Thyracont’s new series of VD800 compact vacuum meters. These instruments provide precise, on-site pressure monitoring in industrial and research environments. Featuring a direct pressure display and real-time pressure graphs, the VD800 series is ideal for service and maintenance tasks, laboratory applications, and test setups.
-The VD800 series combines high accuracy with a highly intuitive user interface. This delivers real-time measurement values; pressure diagrams; and minimum and maximum pressure – all at a glance. The VD800’s 4+1 membrane keypad ensures quick access to all functions. USB-C and optional Bluetooth LE connectivity deliver seamless data readout and export. The VD800’s large internal data logger can store over 10 million measured values with their RTC data, with each measurement series saved as a separate file.
-Data sampling rates can be set from 20 ms to 60 s to achieve dynamic pressure tracking or long-term measurements. Leak rates can be measured directly by monitoring the rise in pressure in the vacuum system. Intelligent energy management gives the meters extended battery life and longer operation times. Battery charging is done conveniently via USB-C.
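For context, such a rate-of-rise test follows the standard relation Q = V·Δp/Δt, with the pumped system valved off. The short sketch below uses made-up example numbers, not Thyracont specifications.

```python
def leak_rate(volume_l: float, dp_mbar: float, dt_s: float) -> float:
    """Leak rate in mbar*L/s from a pressure rise dp over a time interval dt."""
    return volume_l * dp_mbar / dt_s

# Example: a 50-litre chamber rising from 1.0e-3 to 5.0e-3 mbar in 10 minutes.
q = leak_rate(volume_l=50.0, dp_mbar=5.0e-3 - 1.0e-3, dt_s=600.0)
print(f"leak rate ~ {q:.1e} mbar*L/s")  # ~3.3e-04 mbar*L/s
```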
-The vacuum meters are available in several different sensor configurations, making them adaptable to a wide range of different uses. Model VD810 integrates a piezo ceramic sensor for making gas-type-independent measurements for rough vacuum applications. This sensor is insensitive to contamination, making it suitable for rough industrial environments. The VD810 measures absolute pressure from 2000 to 1 mbar and relative pressure from −1060 to +1200 mbar.
-Model VD850 integrates a piezo/Pirani combination sensor, which delivers high resolution and accuracy in the rough and fine vacuum ranges. Optimized temperature compensation ensures stable measurements in the absolute pressure range from 1200 to 5e-5 mbar and in the relative pressure range from −1060 to +340 mbar.
-The model VD800 is a standalone meter designed for use with Thyracont’s USB-C vacuum transducers, which are available in two models. The VSRUSB USB-C transducer is a piezo/Pirani combination sensor that measures absolute pressure in the 2000 to 5.0e-5 mbar range. The other is the VSCUSB USB-C transducer, which measures absolute pressures from 2000 down to 1 mbar and has a relative pressure range from -1060 to +1200 mbar. A USB-C cable connects the transducer to the VD800 for quick and easy data retrieval. The USB-C transducers are ideal for hard-to-reach areas of vacuum systems. The transducers can be activated while a process is running, enabling continuous monitoring and improved service diagnostics.
-With its blend of precision, flexibility, and ease of use, the Thyracont VD800 series defines the next generation of compact vacuum meters. The devices’ intuitive interface, extensive data capabilities, and modern connectivity make them an indispensable tool for laboratories, service engineers, and industrial operators alike.
-To experience the future of vacuum metrology in Munich, visit Thyracont at SEMICON Europa hall C1, booth 752. There you will discover how the VD800 series can optimize your pressure monitoring workflows.
-The post SEMICON Europa 2025 presents cutting-edge technology for semiconductor R&D and production appeared first on Physics World.
-]]>The post Physicists discuss the future of machine learning and artificial intelligence appeared first on Physics World.
-]]>
IOP Publishing’s Machine Learning series is the world’s first open-access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences.
-Part of the series is Machine Learning: Science and Technology, launched in 2019, which bridges applications and advances in machine learning across the sciences. Machine Learning: Earth is dedicated to the application of ML and AI across all areas of Earth, environmental and climate sciences; Machine Learning: Health covers healthcare, medical, biological, clinical and health sciences; and Machine Learning: Engineering focuses on applying AI and non-traditional machine learning to the most complex engineering challenges.
- -Here, the editors-in-chief (EiC) of the four journals discuss the growing importance of machine learning and their plans for the future.
-Kyle Cranmer is a particle physicist and data scientist at the University of Wisconsin-Madison and is EiC of Machine Learning: Science and Technology (MLST). Pierre Gentine is a geophysicist at Columbia University and is EiC of Machine Learning: Earth. Jimeng Sun is a biophysicist at the University of Illinois at Urbana-Champaign and is EiC of Machine Learning: Health. Mechanical engineer Jay Lee is from the University of Maryland and is EiC of Machine Learning: Engineering.
-Kyle Cranmer (KC): It is due to a convergence of multiple factors. The initial success of deep learning was driven largely by benchmark datasets, advances in computing with graphics processing units, and some clever algorithmic tricks. Since then, we’ve seen a huge investment in powerful, easy-to-use tools that have dramatically lowered the barrier to entry and driven extraordinary progress.
-Pierre Gentine (PG): Machine learning has been transforming many fields of physics, as it can accelerate physics simulations, better handle diverse sources of data (multimodality) and help us make better predictions.
-Jimeng Sun (JS): Over the past decade, we have seen machine learning models consistently reach — and in some cases surpass — human-level performance on real-world tasks. This is not just in benchmark datasets, but in areas that directly impact operational efficiency and accuracy, such as medical imaging interpretation, clinical documentation, and speech recognition. Once ML proved it could perform reliably at human levels, many domains recognized its potential to transform labour-intensive processes.
-Jay Lee (JL): Traditionally, ML growth is based on the development of three elements: algorithms, big data, and computing. The past decade’s growth in ML research is due to the perfect storm of abundant data, powerful computing, open tools, commercial incentives, and groundbreaking discoveries—all occurring in a highly interconnected global ecosystem.
-KC: The advances in generative AI and self-supervised learning are very exciting. By generative AI, I don’t mean large language models — though those are exciting too — but probabilistic ML models that can be useful in a huge number of scientific applications. The advances in self-supervised learning also allow us to imagine potential uses of ML beyond well-understood supervised learning tasks.
-PG: I am very interested in the use of ML for climate simulations and fluid dynamics simulations.
-JS: The emergence of agentic systems in healthcare — AI systems that can reason, plan, and interact with humans to accomplish complex goals. A compelling example is in clinical trial workflow optimization. An agentic AI could help coordinate protocol development, automatically identify eligible patients, monitor recruitment progress, and even suggest adaptive changes to trial design based on interim data. This isn’t about replacing human judgment — it’s about creating intelligent collaborators that amplify expertise, improve efficiency, and ultimately accelerate the path from research to patient benefit.
-JL: One exciting area is generative and multimodal ML — integrating text, images, video, and more — which is transforming human–AI interaction, robotics, and autonomous systems. Equally exciting is applying ML to nontraditional domains like semiconductor fabs, smart grids, and electric vehicles, where complex engineering systems demand new kinds of intelligence.
-KC: The need for a venue to propagate advances in AI/ML in the sciences is clear. The large AI conferences are under stress, and their review system is designed to be a filter not a mechanism to ensure quality, improve clarity and disseminate progress. The large AI conferences also aren’t very welcoming to user-inspired research, often casting that work as purely applied. Similarly, innovation in AI/ML often takes a back seat in physics journals, which slows the propagation of those ideas to other fields. My vision for MLST is to fill this gap and nurture the community that embraces AI/ML research inspired by the physical sciences.
-PG: I hope we can demonstrate that machine learning is more than a nice tool but that it can play a fundamental role in physics and Earth sciences, especially when it comes to better simulating and understanding the world.
-JS: I see Machine Learning: Health becoming the premier venue for rigorous ML–health research — a place where technical novelty and genuine clinical impact go hand in hand. We want to publish work that not only advances algorithms but also demonstrates clear value in improving health outcomes and healthcare delivery. Equally important, we aim to champion open and reproducible science. That means encouraging authors to share code, data, and benchmarks whenever possible, and setting high standards for transparency in methods and reporting. By doing so, we can accelerate the pace of discovery, foster trust in AI systems, and ensure that our field’s breakthroughs are accessible to — and verifiable by — the global community.
-JL: Machine Learning: Engineering envisions becoming the global platform where ML meets engineering. By fostering collaboration, ensuring rigour and interpretability, and focusing on real-world impact, we aim to redefine how AI addresses humanity’s most complex engineering challenges.
-The post Physicists discuss the future of machine learning and artificial intelligence appeared first on Physics World.
-]]>The post Playing games by the quantum rulebook expends less energy appeared first on Physics World.
-]]>Game theory is the field of mathematics that aims to formally understand the payoff or gains that a person or other entity (usually called an agent) will get from following a certain strategy. Concepts from game theory are often applied to studies of quantum information, especially when trying to understand whether agents who can use the laws of quantum physics can achieve a better payoff in the game.
-In the latest work, which is published in Physical Review Letters, Jayne Thompson, Mile Gu and colleagues approached the problem from a different direction. Rather than focusing on differences in payoffs, they asked how much energy must be dissipated to achieve identical payoffs for games played under the laws of classical versus quantum physics. In doing so, they were guided by Landauer’s principle, an important concept in thermodynamics and information theory which states that there is a minimum energy cost to erase a piece of information.
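-For reference, the Landauer bound sets the minimum heat that must be dissipated to erase a single bit of information at temperature T, where k_B is Boltzmann’s constant:
```latex
% Landauer bound: minimum heat dissipated per erased bit
Q_{\min} = k_{\mathrm{B}} T \ln 2 \approx 2.9\times10^{-21}\ \mathrm{J}
\quad \text{at } T = 300\ \mathrm{K}
```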
-This Landauer minimum is known to hold for both classical and quantum systems. However, in practice systems will spend more than the minimum energy erasing memory to make space for new information, and this energy will be dissipated as heat. What the NTU team showed is that this extra heat dissipation can be reduced in the quantum system compared to the classical one.
-To understand why, consider that when a classical agent creates a strategy, it must plan for all possible future contingencies. This means it stores possibilities that never occur, wasting resources. Thompson explains this with a simple analogy. Suppose you are packing to go on a day out. Because you are not sure what the weather is going to be, you must pack items to cover all possible weather outcomes. If it’s sunny, you’d like sunglasses. If it rains, you’ll need your umbrella. But if you only end up using one of these items, you’ll have wasted space in your bag.
-“It turns out that the same principle applies to information,” explains Thompson. “Depending on future outcomes, some stored information may turn out to be unnecessary – yet an agent must still maintain it to stay ready for any contingency.”
-For a classical system, this can be a very wasteful process. Quantum systems, however, can use superposition to store past information more efficiently. When systems in a quantum superposition are measured, they probabilistically reveal an outcome associated with only one of the states in the superposition. Hence, while superposition can be used to store both pasts, upon measurement all excess information is automatically erased “almost as if they had never stored this information at all,” Thompson explains.
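-A toy calculation gives a feel for the bookkeeping involved. The sketch below is not the authors’ model – just a minimal numpy illustration of a general fact the argument relies on: a memory that stores two equally likely pasts as perfectly distinguishable classical records carries a full bit of entropy, while storing them as non-orthogonal quantum states carries less, leaving less information that must eventually be erased.
```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]              # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# Classical memory: two equally likely, perfectly distinguishable records
rho_classical = np.diag([0.5, 0.5])

# Quantum memory: the same two pasts stored as non-orthogonal pure states
# |0> and cos(theta)|0> + sin(theta)|1>  (theta is a free toy parameter)
theta = np.pi / 6
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(theta), np.sin(theta)])
rho_quantum = 0.5 * np.outer(psi0, psi0) + 0.5 * np.outer(psi1, psi1)

print(f"classical memory: {von_neumann_entropy(rho_classical):.3f} bits")  # 1.000
print(f"quantum memory:   {von_neumann_entropy(rho_quantum):.3f} bits")    # ~0.354
```
-Here theta controls how much the two memory states overlap; setting theta = π/2 makes them orthogonal and recovers the classical full bit.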
-The upshot is that because information erasure has close ties to energy dissipation, this gives quantum systems an energetic advantage. “This is a fantastic result focusing on the physical aspect that many other approaches neglect,” says Vlatko Vedral, a physicist at the University of Oxford, UK who was not involved in the research.
-Gu and Thompson say their result could have implications for the large language models (LLMs) behind popular AI tools such as ChatGPT, as it suggests there might be theoretical advantages, from an energy consumption point of view, in using quantum computers to run them.
-Another, more foundational question they hope to understand regarding LLMs is the inherent asymmetry in their behaviour. “It is likely a lot more difficult for an LLM to write a book from back cover to front cover, as opposed to in the more conventional temporal order,” Thompson notes. When considered from an information-theoretic point of view, the two tasks are equivalent, making this asymmetry somewhat surprising.
- -In Thompson and Gu’s view, taking waste into consideration could shed light on this asymmetry. “It is likely we have to waste more information to go in one direction over the other,” Thompson says, “and we have some tools here which could be used to analyse this”.
-For Vedral, the result also has philosophical implications. If quantum agents are more optimal, he says, it “surely is telling us that the most coherent picture of the universe is the one where the agents are also quantum and not just the underlying processes that they observe”.
-The post Playing games by the quantum rulebook expends less energy appeared first on Physics World.
-]]>The post Teaching machines to understand complexity appeared first on Physics World.
-]]>This research develops a novel machine learning approach for complex systems that allows the user to extract a few collective descriptors of the system, referred to as inherent structural variables. The researchers used an autoencoder (a type of machine learning tool) to examine snapshots of how atoms are arranged in a system at any moment (called instantaneous atomic configurations). Each snapshot is then matched to a more stable version of that structure (an inherent structure), which represents the system’s underlying shape or pattern after thermal noise is removed. These inherent structural variables enable the analysis of structural transitions both in and out of equilibrium and the computation of high-resolution free-energy landscapes. These are detailed maps that show how a system’s energy changes as its structure or configuration changes, helping researchers understand stability, transitions, and dynamics in complex systems.
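-To make the autoencoder idea concrete, here is a minimal sketch in PyTorch. It is a generic illustration rather than the architecture used in the paper: it assumes each atomic configuration has already been reduced to a fixed-length descriptor vector, and it compresses that vector down to two latent coordinates playing the role of collective structural variables.
```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Compress a per-snapshot descriptor vector to a 2D latent space."""
    def __init__(self, n_features=64, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_latent),          # the "structural variables"
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32), nn.ReLU(),
            nn.Linear(32, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Toy training loop on random stand-in data (real inputs would be
# descriptor vectors of atomic configurations, one row per snapshot)
model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
snapshots = torch.randn(1000, 64)

for epoch in range(200):
    recon, z = model(snapshots)
    loss = loss_fn(recon, snapshots)          # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()
```
-Histogramming the latent coordinates over a long trajectory then gives an estimate of the free-energy landscape via the standard relation F(z) ∝ −kBT ln P(z), which is the usual route from sampled configurations to a free-energy map.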
-The model is versatile, and the authors demonstrate how it can be applied to metal nanoclusters and protein structures. In the case of Au147 nanoclusters (well-organised structures made up of 147 gold atoms), the inherent structural variables reveal three main types of stable structures that the gold nanocluster can adopt: fcc (face-centred cubic), Dh (decahedral), and Ih (icosahedral). These structures represent different stable states that a nanocluster can switch between, and on the high-resolution free-energy landscape they appear as valleys. Moving from one valley to another isn’t easy: there are narrow paths or barriers between them, known as kinetic bottlenecks.
-The researchers validated their machine learning model using Markov state models, which are mathematical tools that help analyse how a system moves between different states over time, and electron microscopy, which images atomic structures and can confirm that the predicted structures exist in the gold nanoclusters. The approach also captures non-equilibrium melting and freezing processes, offering insights into polymorph selection and metastable states. Scalability is demonstrated up to Au309 clusters.
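-The Markov state model side of that validation is simple to sketch. Given a trajectory that has been assigned to discrete structural states – here hypothetical labels 0 = fcc, 1 = Dh, 2 = Ih – one counts transitions at a chosen lag time and row-normalizes to obtain a transition matrix. A minimal version (not the paper’s actual pipeline):
```python
import numpy as np

def transition_matrix(labels, n_states, lag=1):
    """Row-normalized transition counts at a given lag time."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-lag], labels[lag:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy trajectory of structure labels (0=fcc, 1=Dh, 2=Ih in this sketch)
rng = np.random.default_rng(0)
traj = rng.integers(0, 3, size=10_000)
T = transition_matrix(traj, n_states=3, lag=10)
print(T)   # rows sum to 1; slow interconversion shows up as large diagonals
```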
-The generality of the method is further demonstrated by applying it to the bradykinin peptide, a completely different type of system, identifying distinct structural motifs and transitions. Applying the method to a biological molecule provides further evidence that the machine learning approach is a flexible, powerful technique for studying many kinds of complex systems. This work contributes to machine learning strategies, as well as experimental and theoretical studies of complex systems, with potential applications across liquids, glasses, colloids, and biomolecules.
-Inherent structural descriptors via machine learning
-Emanuele Telari et al 2025 Rep. Prog. Phys. 88 068002
--
Complex systems in the spotlight: next steps after the 2021 Nobel Prize in Physics by Ginestra Bianconi et al (2023)
-The post Teaching machines to understand complexity appeared first on Physics World.
-]]>The post Using AI to find new particles at the LHC appeared first on Physics World.
-]]>One of the main aims of experimental particle physics at the moment is therefore to search for signs of new physical phenomena beyond the Standard Model.
-Finding something new like this would point us towards a better theoretical model of particle physics: one that can explain things that the Standard Model isn’t able to.
-These searches often involve looking for rare or unexpected signals in high-energy particle collisions such as those at CERN’s Large Hadron Collider (LHC).
-In a new paper, the CMS collaboration used a novel analysis method to search for new particles produced in proton-proton collisions at the LHC.
-These particles would decay into two jets, but with unusual internal structure not typical of known particles like quarks or gluons.
-The researchers used advanced machine learning techniques to identify jets with different substructures, applying various anomaly detection methods to maximise sensitivity to unknown signals.
-Unlike traditional strategies, anomaly detection methods allow the AI models to identify anomalous patterns in the data without being provided specific simulated examples, giving them increased sensitivity to a wider range of potential new particles.
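-The CMS analysis uses several dedicated machine-learning methods; as a generic illustration of the underlying idea only, the sketch below trains a stock anomaly detector on (assumed signal-poor) data alone and flags the most atypical jets for further scrutiny – no simulated signal examples required. The feature vectors here are purely synthetic stand-ins, not CMS data or CMS code.
```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Stand-in jet-substructure features (e.g. masses, subjettiness ratios);
# purely synthetic here -- the real analysis uses its own inputs and methods
rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(50_000, 8))
candidates = rng.normal(0.5, 1.5, size=(1_000, 8))   # a few odd jets

# Train on background-dominated data only: no signal simulation needed
clf = IsolationForest(random_state=0).fit(background)
scores = clf.score_samples(candidates)     # lower = more anomalous

# Keep the most anomalous 1% of jets for further scrutiny
threshold = np.quantile(clf.score_samples(background), 0.01)
print(f"{np.mean(scores < threshold):.1%} of candidates flagged")
```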
-This time, they didn’t find any significant deviations from expected background values. Although no new particles were found, the results enabled the team to put several new theoretical models to the test for the first time. They were also able to set upper bounds on the production rates of several hypothetical particles.
-Most importantly, the study demonstrates that machine learning can significantly enhance the sensitivity of searches for new physics, offering a powerful tool for future discoveries at the LHC.
-The CMS Collaboration, 2025 Rep. Prog. Phys. 88 067802
--
The post Using AI to find new particles at the LHC appeared first on Physics World.
-]]>The post Researchers pin down the true cost of precision in quantum clocks appeared first on Physics World.
-]]>The clock the team used to demonstrate this principle consists of a pair of quantum dots coupled by a thin tunnelling barrier. In this double quantum dot system, a “tick” occurs whenever an electron tunnels from one side of the system to the other, through both dots. Applying a bias voltage gives ticks a preferred direction.
-This might not seem like the most obvious kind of clock. Indeed, collaboration member Florian Meier describes it as “quite bad” as an actual timekeeping device. However, Ares points out that although the tunnelling process is random (stochastic), the period between ticks does have a mean and a standard deviation. Hence, given enough ticks, the number of ticks recorded will tell you something about how much time has passed.
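-That statistical argument is easy to check numerically. In the toy sketch below (a stand-in, not the experiment), tick intervals are drawn from an exponential distribution, as for a single random tunnelling event, and the relative spread of the total elapsed time falls off as 1/√N with the number of ticks counted.
```python
import numpy as np

rng = np.random.default_rng(42)
mean_interval = 1.0   # arbitrary units; toy stand-in for the mean tick period

for n_ticks in (10, 100, 1000):
    # Repeat the experiment "wait for n_ticks ticks" many times over
    elapsed = rng.exponential(mean_interval, size=(10_000, n_ticks)).sum(axis=1)
    print(f"N = {n_ticks:5d}: relative spread = {elapsed.std() / elapsed.mean():.3f}")
    # Random ticks give a spread of 1/sqrt(N): ~0.316, 0.100, 0.032
```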
-In any case, Meier adds, they were not setting out to build the most accurate clock. Instead, they wanted to build a playground to explore basic principles of energy dissipation and clock precision, and for that, it works really well. “The really cool thing I like about what they did was that with that particular setup, you can really pinpoint the entropy dissipation of the measurement somehow in this quantum dot,” says Meier, a PhD student at the Technical University in Vienna, Austria. “I think that’s really unique in the field.”
-To measure the entropy of each quantum tick, the researchers measured the voltage drop (and associated heat dissipation) for each electron tunnelling through the double quantum dot. Vivek Wadhia, a DPhil student in Ares’s lab at the University of Oxford, UK who performed many of the measurements, points out that this single unit of charge does not equate to very much entropy. However, measuring the entropy of the tunnelling electron was not the full story.
-
Because the ticks of the quantum clock were, in effect, continuously monitored by the environment, the coherence time for each quantum tunnelling event was very short. However, because the time on this clock could not be observed directly by humans – unlike, say, the hands of a mechanical clock – the researchers needed another way to measure and record each tick.
-For this, they turned to the electronics they were using in the lab and compared the power in versus the power out on a macroscopic scale. “That’s the cost of our measurement, right?” says Wadhia, adding that this cost includes both the measuring and recording of each tick. He stresses that they were not trying to find the most thermodynamically efficient measurement technique: “The idea was to understand how the readout compares to the clockwork.”
-This classical entropy associated with measuring and recording each tick turns out to be nine orders of magnitude larger than the quantum entropy of a tick – more than enough for the system to operate as a clock with some level of precision. “The interesting thing is that such simple systems sometimes reveal how you can maybe improve precision at a very low cost thermodynamically,” Meier says.
- -As a next step, Ares plans to explore different arrangements of quantum dots, using Meier’s previous theoretical work to improve the clock’s precision. “We know that, for example, clocks in nature are not that energy intensive,” Ares tells Physics World. “So clearly, for biology, it is possible to run a lot of processes with stochastic clocks.”
-The research is reported in Physical Review Letters.
-The post Researchers pin down the true cost of precision in quantum clocks appeared first on Physics World.
-]]>The post The forgotten pioneers of computational physics appeared first on Physics World.
-]]>These people, many of whom were women, were the first scientific programmers and computational scientists. Skilled in the complicated operation of early computing devices, they often had degrees in maths or science, and were an integral part of research efforts. And yet, their fundamental contributions are mostly forgotten.
-This was in part because of their gender – it was an age when sexism was rife, and it was standard for women to be fired from their job after getting married. However, there is another important factor that is often overlooked, even in today’s scientific community – people in technical roles are often underappreciated and underacknowledged, even though they are the ones who make research possible.
-Originally, a “computer” was a human being who did calculations by hand or with the help of a mechanical calculator. It is thought that the world’s first computational lab was set up in 1937 at Columbia University. But it wasn’t until the Second World War that the demand for computation really exploded, with the need for artillery calculations, new technologies and code breaking.
-
In the US, the development of the atomic bomb during the Manhattan Project (established in 1943) required huge computational efforts, so it wasn’t long before the New Mexico site had a hand-computing group. Called the T-5 group of the Theoretical Division, it initially consisted of about 20 people. Most were women, including the spouses of other scientific staff. Among them was Mary Frankel, a mathematician married to physicist Stan Frankel; mathematician Augusta “Mici” Teller who was married to Edward Teller, the “father of the hydrogen bomb”; and Jean Bacher, the wife of physicist Robert Bacher.
-As the war continued, the T-5 group expanded to include civilian recruits from the nearby towns and members of the Women’s Army Corps. Its staff worked around the clock, using printed mathematical tables and desk calculators in four-hour shifts – but that was not enough to keep up with the computational needs for bomb development. In the early spring of 1944, IBM punch-card machines were brought in to supplement the human power. They became so effective that the machines were soon being used for all large calculations, 24 hours a day, in three shifts.
-The computational group continued to grow, and among the new recruits were Naomi Livesay and Eleonor Ewing. Livesay held an advanced degree in mathematics and had done a course in operating and programming IBM electric calculating machines, making her an ideal candidate for the T-5 division. She in turn recruited Ewing, a fellow mathematician who was a former colleague. The two young women supervised the running of the IBM machines around the clock.
-The frantic pace of the T-5 group continued until the end of the war in September 1945. The development of the atomic bomb required an immense computational effort, which was made possible through hand and punch-card calculations.
-Shortly after the war ended, the first fully electronic, general-purpose computer – the Electronic Numerical Integrator and Computer (ENIAC) – became operational at the University of Pennsylvania, following two years of development. The project had been led by physicist John Mauchly and electrical engineer J Presper Eckert. The machine was operated and coded by six women – mathematicians Betty Jean Jennings (later Bartik); Kathleen, or Kay, McNulty (later Mauchly, then Antonelli); Frances Bilas (Spence); Marlyn Wescoff (Meltzer) and Ruth Lichterman (Teitelbaum); as well as Betty Snyder (Holberton) who had studied journalism.
-
-Polymath John von Neumann also got involved when looking for more computing power for projects at the new Los Alamos Laboratory, established in New Mexico in 1947. In fact, although the ENIAC was originally designed to solve ballistic trajectory problems, the first problem to be run on it was “the Los Alamos problem” – a thermonuclear feasibility calculation for Teller’s group studying the H-bomb.
-As in the Manhattan Project, several husband-and-wife teams worked on the ENIAC, the most famous being von Neumann and his wife Klara Dán, and mathematicians Adele and Herman Goldstine. Dán von Neumann in particular worked closely with Nicholas Metropolis, who alongside mathematician Stanislaw Ulam had coined the term Monte Carlo to describe numerical methods based on random sampling. Indeed, between 1948 and 1949 Dán von Neumann and Metropolis ran the first series of Monte Carlo simulations on an electronic computer.
-Work began on a new machine at Los Alamos in 1948 – the Mathematical Analyzer Numerical Integrator and Automatic Computer (MANIAC) – which ran its first large-scale hydrodynamic calculation in March 1952. Many of its users were physicists, and its operators and coders included mathematicians Mary Tsingou (later Tsingou-Menzel), Marjorie Jones (Devaney) and Elaine Felix (Alei); plus Verna Ellingson (later Gardiner) and Lois Cook (Leurgans).
-The Los Alamos scientists tried all sorts of problems on the MANIAC, including a chess-playing program – the first documented case of a machine defeating a human at the game. However, two of these projects stand out because they had profound implications on computational science.
-In 1953 the Tellers, together with Metropolis and physicists Arianna and Marshall Rosenbluth, published the seminal article “Equation of state calculations by fast computing machines” (J. Chem. Phys. 21 1087). The work introduced the ideas behind the “Metropolis (later renamed Metropolis–Hastings) algorithm”, which is a Monte Carlo method that is based on the concept of “importance sampling”. (While Metropolis was involved in the development of Monte Carlo methods, it appears that he did not contribute directly to the article, but provided access to the MANIAC nightshift.) This is the progenitor of the Markov Chain Monte Carlo methods, which are widely used today throughout science and engineering.
-Marshall later recalled how the research came about when he and Arianna had proposed using the MANIAC to study how solids melt (AIP Conf. Proc. 690 22).
-
Edward Teller meanwhile had the idea of using statistical mechanics and taking ensemble averages instead of following detailed kinematics for each individual disk, and Mici helped with programming during the initial stages. However, the Rosenbluths did most of the work, with Arianna translating and programming the concepts into an algorithm.
-The 1953 article is remarkable, not only because it led to the Metropolis algorithm, but also as one of the earliest examples of using a digital computer to simulate a physical system. The main innovation of this work was in developing “importance sampling”. Instead of sampling from random configurations, it samples with a bias toward physically important configurations which contribute more towards the integral.
-Furthermore, the article also introduced another computational trick, known as “periodic boundary conditions” (PBCs): a set of conditions which are often used to approximate an infinitely large system by using a small part known as a “unit cell”. Both importance sampling and PBCs went on to become workhorse methods in computational physics.
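-Both tricks fit in a few lines of modern code. The sketch below is a minimal Metropolis simulation of two-dimensional hard disks with periodic boundary conditions, in the spirit of – though enormously simpler than – the 1953 calculation. For hard disks the Metropolis acceptance rule reduces to accepting any trial move that creates no overlap.
```python
import numpy as np

rng = np.random.default_rng(0)
L, sigma, n_side = 10.0, 0.8, 6          # box size, disk diameter, 6x6 start
pos = (np.stack(np.meshgrid(np.arange(n_side), np.arange(n_side)), -1)
         .reshape(-1, 2) + 0.5) * (L / n_side)   # square-lattice start

def overlaps(i, trial):
    """Check disk i at 'trial' against all others, minimum-image convention."""
    d = np.delete(pos, i, axis=0) - trial
    d -= L * np.round(d / L)             # periodic boundary conditions
    return np.any(np.sum(d * d, axis=1) < sigma**2)

accepted = 0
for step in range(100_000):
    i = rng.integers(len(pos))
    trial = (pos[i] + rng.uniform(-0.3, 0.3, 2)) % L   # wrap into the box
    # Metropolis rule for hard disks: accept iff the move creates no overlap
    if not overlaps(i, trial):
        pos[i] = trial
        accepted += 1
print(f"acceptance rate: {accepted / 100_000:.2f}")
```
-Because any move that would create an overlap (infinite energy) is rejected, the chain spends its time only in physically allowed configurations – importance sampling in its simplest form.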
-In the summer of 1953, physicist Enrico Fermi, Ulam, Tsingou and physicist John Pasta also made a significant breakthrough using the MANIAC. They ran a “numerical experiment” as part of a series meant to illustrate possible uses of electronic computers in studying various physical phenomena.
-The team modelled a 1D chain of oscillators with a small nonlinearity to see if it would behave as hypothesized, reaching an equilibrium with the energy redistributed equally across the modes (doi.org/10.2172/4376203). However, their work showed that this was not guaranteed for small perturbations – a non-trivial and non-intuitive observation that would not have been apparent without the simulations. It is the first example of a physics discovery made not by theoretical or experimental means, but through a computational approach. It would later lead to the discovery of solitons and integrable models, the development of chaos theory, and a deeper understanding of ergodic limits.
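-The numerical experiment is easy to reproduce today. The sketch below is a minimal modern reconstruction, not the original code: it integrates an α-FPUT chain of 32 oscillators with velocity Verlet, puts all the initial energy in the lowest normal mode, and prints how the energy in the first few modes evolves rather than spreading equally as originally hypothesized.
```python
import numpy as np

N, alpha, dt = 32, 0.25, 0.05
x = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))   # all energy in mode 1
v = np.zeros(N)

def accel(x):
    """Fixed-end chain with quadratic (alpha) nonlinearity."""
    y = np.concatenate(([0.0], x, [0.0]))            # fixed boundary ends
    dl, dr = y[1:-1] - y[:-2], y[2:] - y[1:-1]       # left/right extensions
    return (dr - dl) + alpha * (dr**2 - dl**2)

def mode_energy(x, v, k):
    """Energy in normal mode k of the linearized chain."""
    s = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * k * np.arange(1, N + 1) / (N + 1))
    q, qdot = s @ x, s @ v
    omega = 2.0 * np.sin(np.pi * k / (2 * (N + 1)))
    return 0.5 * (qdot**2 + (omega * q) ** 2)

for step in range(200_000):                          # velocity Verlet
    a = accel(x)
    x += v * dt + 0.5 * a * dt**2
    a_new = accel(x)
    v += 0.5 * (a + a_new) * dt
    if step % 50_000 == 0:
        e = [mode_energy(x, v, k) for k in (1, 2, 3)]
        print(f"t={step*dt:8.1f}  E1={e[0]:.4f}  E2={e[1]:.4f}  E3={e[2]:.4f}")
```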
-Although the paper says the work was done by all four scientists, Tsingou’s role was forgotten, and the results became known as the Fermi–Pasta–Ulam problem. It was not until 2008, when French physicist Thierry Dauxois advocated for giving her credit in a Physics Today article, that Tsingou’s contribution was properly acknowledged. Today the finding is called the Fermi–Pasta–Ulam–Tsingou problem.
-The year 1953 also saw IBM’s first commercial, fully electronic computer – an IBM 701 – arrive at Los Alamos. Soon the theoretical division had two of these machines, which, alongside the MANIAC, gave the scientists unprecedented computing power. Among those to take advantage of the new devices were Martha Evans (about whom very little is known) and theoretical physicist Francis Harlow, who began to tackle the largely unexplored subject of computational fluid dynamics.
-The idea was to use a mesh of cells through which the fluid, represented as particles, would move. This computational method made it possible to solve complex hydrodynamics problems (involving large distortions and compressions of the fluid) in 2D and 3D. Indeed, the method proved so effective that it became a standard tool in plasma physics where it has been applied to every conceivable topic from astrophysical plasmas to fusion energy.
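-Production particle-in-cell codes are elaborate, but the basic bookkeeping is simple. The sketch below is a schematic one-dimensional illustration of the deposit–average–move cycle only – not Harlow’s scheme, which also solved the fluid equations on the grid. Particles carrying mass and momentum are deposited onto cells, cell-averaged velocities are computed, and each particle then moves with its cell’s velocity.
```python
import numpy as np

L, n_cells, dt = 1.0, 20, 0.01
rng = np.random.default_rng(3)
x = rng.uniform(0, L, 2000)                      # particle positions
u = np.where(x < 0.5, 1.0, -1.0)                 # two opposing streams
mass = np.full_like(x, 1.0 / x.size)

for step in range(100):
    cell = np.minimum((x / L * n_cells).astype(int), n_cells - 1)
    # Deposit: accumulate mass and momentum onto the grid of cells
    rho = np.bincount(cell, weights=mass, minlength=n_cells)
    mom = np.bincount(cell, weights=mass * u, minlength=n_cells)
    u_grid = np.divide(mom, rho, out=np.zeros(n_cells), where=rho > 0)
    # Move: each particle advects with its cell-averaged velocity
    u = u_grid[cell]
    x = (x + u * dt) % L                         # periodic domain
```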
-The resulting internal Los Alamos report – The Particle-in-cell Method for Hydrodynamic Calculations, published in 1955 – showed Evans as first author and acknowledged eight people (including Evans) for the machine calculations. However, while Harlow is remembered as one of the pioneers of computational fluid dynamics, Evans was forgotten.
-In an age where women had very limited access to the frontlines of research, the computational war effort brought many female researchers and technical staff in. As their contributions come more into the light, it becomes clearer that their role was not a simple clerical one.
-
There is a view that the coders’ work was “the vital link between the physicist’s concepts (about which the coders more often than not didn’t have a clue) and their translation into a set of instructions that the computer was able to perform, in a language about which, more often than not, the physicists didn’t have a clue either”, as physicists Giovanni Battimelli and Giovanni Ciccotti wrote in 2018 (Eur. Phys. J. H 43 303). But the examples we have seen show that some of the coders had a solid grasp of the physics, and some of the physicists had a good understanding of the machine operation. Rather than a skilled–non-skilled/men–women separation, the division of labour was blurred. Indeed, it was more of an effective collaboration between physicists, mathematicians and engineers.
-Even in the early days of the T-5 division before electronic computers existed, Livesay and Ewing, for example, attended maths lectures from von Neumann, and introduced him to punch-card operations. As has been documented in books including Their Day in the Sun by Ruth Howes and Caroline Herzenberg, they also took part in the weekly colloquia held by J Robert Oppenheimer and other project leaders. This shows they should not be dismissed as mere human calculators and machine operators who supposedly “didn’t have a clue” about physics.
-Verna Ellingson (Gardiner) is another forgotten coder who worked at Los Alamos. While little information about her can be found, she appears as the last author on a 1955 paper (Science 122 465) written with Metropolis and physicist Joseph Hoffman – “Study of tumor cell populations by Monte Carlo methods”. The next year she was first author of “On certain sequences of integers defined by sieves” with mathematical physicist Roger Lazarus, Metropolis and Ulam (Mathematics Magazine 29 117). She also worked with physicist George Gamow on attempts to discover the code for DNA selection of amino acids, which just shows the breadth of projects she was involved in.
-Evans not only worked with Harlow but took part in a 1959 conference on self-organizing systems, where she queried AI pioneer Frank Rosenblatt on his ideas about human and machine learning. Her attendance at such a meeting, in an age when women were not common attendees, implies we should not view her as “just a coder”.
-With their many and wide-ranging contributions, it is more than likely that Evans, Gardiner, Tsingou and many others were full-fledged researchers, and were perhaps even the first computational scientists. “These women were doing work that modern computational physicists in the [Los Alamos] lab’s XCP [Weapons Computational Physics] Division do,” says Nicholas Lewis, a historian at Los Alamos. “They needed a deep understanding of both the physics being studied, and of how to map the problem to the particular architecture of the machine being used.”
-
In the 1950s there was no computational physics or computer science, therefore it’s unsurprising that the practitioners of these disciplines went by different names, and their identity has evolved over the decades since.
-Originally a “computer” was a person doing calculations by hand or with the help of a mechanical calculator.
-A “coder” was a person who translated mathematical concepts into a set of instructions in machine language. John von Neumann and Herman Goldstine distinguished between “coding” and “planning”, with the former being the lower-level work of turning flow diagrams into machine language (and doing the physical configuration) while the latter did the mathematical analysis of the problem.
-Meanwhile, an “operator” would physically handle the computer (replacing punch cards, doing the rewiring, etc). In the late-1940s coders were also operators.
-As historians note in the book ENIAC in Action, this was an age where “It was hard to devise the mathematical treatment without a good knowledge of the processes of mechanical computation…It was also hard to operate the ENIAC without understanding something about the mathematical task it was undertaking.”
-For the ENIAC a “programmer” was not a person but “a unit combining different sequences in a coherent computation”. The term would later shift and eventually overlap with the meaning of coder as a person’s job.
-Computer scientist Margaret Hamilton, who led the development of the on-board flight software for NASA’s Apollo program, coined the term “software engineering” to distinguish the practice of designing, developing, testing and maintaining software from the engineering tasks associated with the hardware.
-Using the term “programmer” for someone who coded computers peaked in popularity in the 1980s, but by the 2000s was replaced in favour of other job titles such as various flavours of “developer” or “software architect”.
-A “research software engineer” is a person who combines professional software engineering expertise with an intimate understanding of scientific research.
--
Credited or not, these pioneering women and their contributions have been mostly forgotten, and only in recent decades have their roles come to light again. But why were they obscured by history in the first place?
-Secrecy and sexism seem to be the main factors at play. For example, Livesay was not allowed to pursue a PhD in mathematics because she was a woman, and in the cases of the many married couples, the team contributions were attributed exclusively to the husband. The existence of the Manhattan Project was publicly announced in 1945, but documents that contain certain nuclear-weapons-related information remain classified today. Because these are likely to remain secret, we will never know the full extent of these pioneers’ contributions.
-But another often overlooked reason is the widespread underappreciation of the key role of computational scientists and research software engineers, a term that was only coined just over a decade ago. Even today, these non-traditional research roles end up being undervalued. A 2022 survey by the UK Software Sustainability Institute, for example, showed that only 59% of research software engineers were named as authors, with barely a quarter (24%) mentioned in the acknowledgements or main text, while a sixth (16%) were not mentioned at all.
-The separation between those who understand the physics and those who write the code and operate the hardware goes back to the early days of computing (see box above), but it wasn’t entirely accurate even then. People who implement complex scientific computations are not just coders or skilled operators of supercomputers, but truly multidisciplinary scientists who have a deep understanding of the scientific problems, mathematics, computational methods and hardware.
-Such people – whatever their gender – play a key role in advancing science and yet remain the unsung heroes of the discoveries their work enables. Perhaps what this story of the forgotten pioneers of computational physics tells us is that some views rooted in the 1950s are still influencing us today. It’s high time we moved on.
-The post The forgotten pioneers of computational physics appeared first on Physics World.
-]]>