In 1981, many of the world’s leading cosmologists gathered at the Pontifical Academy of Sciences, a vestige of the coupled lineages of science and theology located in an elegant villa in the gardens of the Vatican. Stephen Hawking chose the august setting to present what he would later regard as his most important idea: a proposal about how the universe could have arisen from nothing.

Before Hawking’s talk, all cosmological origin stories, scientific or theological, had invited the rejoinder, “What happened before that?” The Big Bang theory, for instance—pioneered 50 years before Hawking’s lecture by the Belgian physicist and Catholic priest Georges Lemaître, who later served as president of the Vatican’s academy of sciences—rewinds the expansion of the universe back to a hot, dense bundle of energy. But where did the initial energy come from?

The Big Bang theory had other problems. Physicists understood that an expanding bundle of energy would grow into a crumpled mess rather than the huge, smooth cosmos that modern astronomers observe. In 1980, the year before Hawking’s talk, the cosmologist Alan Guth realized that the Big Bang’s problems could be fixed with an add-on: an initial, exponential growth spurt known as cosmic inflation, which would have rendered the universe huge, smooth, and flat before gravity had a chance to wreck it. Inflation quickly became the leading theory of our cosmic origins. Yet the issue of initial conditions remained: What was the source of the minuscule patch that allegedly ballooned into our cosmos, and of the potential energy that inflated it?

Hawking, in his brilliance, saw a way to end the interminable groping backward in time: He proposed that there’s no end, or beginning, at all. According to the record of the Vatican conference, the Cambridge physicist, then 39 and still able to speak with his own voice, told the crowd, “There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?”

The “no-boundary proposal,” which Hawking and his frequent collaborator, James Hartle, fully formulated in a 1983 paper, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expanded from a point of zero size. Hartle and Hawking derived a formula describing the whole shuttlecock—the so-called “wave function of the universe” that encompasses the entire past, present, and future at once—making moot all contemplation of seeds of creation, a creator, or any transition from a time before.

“Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to,” Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death. “It would be like asking what lies south of the South Pole.”

Stephen Hawking and James Hartle at a 2014 workshop near Hereford, England. Credit: Cathy Page

Hartle and Hawking’s proposal radically reconceptualized time. Each moment in the universe becomes a cross-section of the shuttlecock; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties—particularly its entropy, or disorder.
Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space. As Hartle, now 79 and a professor at the University of California, Santa Barbara, explained it by phone recently, “We didn’t have birds in the very early universe; we have birds later on … We didn’t have time in the early universe, but we have time later on.”

The no-boundary proposal has fascinated and inspired physicists for nearly four decades. “It’s a stunningly beautiful and provocative idea,” said Neil Turok, a cosmologist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a former collaborator of Hawking’s. The proposal represented a first guess at the quantum description of the cosmos—the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing, analyzed the theories’ various predictions and ways to test them, and interpreted their philosophical meaning. The no-boundary wave function, according to Hartle, “was in some ways the simplest possible proposal for that.”

But two years ago, a paper by Turok, Job Feldbrugge of the Perimeter Institute, and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question. The proposal is, of course, only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours. Hawking and Hartle argued that indeed it would—that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. “The trouble with Stephen and Jim’s approach is it was ambiguous,” Turok said—“deeply ambiguous.”

In their 2017 paper, published in Physical Review Letters, Turok and his coauthors approached Hartle and Hawking’s no-boundary proposal with new mathematical techniques that, in their view, make its predictions much more concrete than before. “We discovered that it just failed miserably,” Turok said. “It was just not possible quantum mechanically for a universe to start in the way they imagined.” The trio checked their math and queried their underlying assumptions before going public, but “unfortunately,” Turok said, “it just seemed to be inescapable that the Hartle-Hawking proposal was a disaster.”

The paper ignited a controversy. Other experts mounted a vigorous defense of the no-boundary idea and a rebuttal of Turok and colleagues’ reasoning. “We disagree with his technical arguments,” said Thomas Hertog, a physicist at the Catholic University of Leuven in Belgium who closely collaborated with Hawking for the last 20 years of the latter’s life. “But more fundamentally, we disagree also with his definition, his framework, his choice of principles. And that’s the more interesting discussion.”

After two years of sparring, the groups have traced their technical disagreement to differing beliefs about how nature works. The heated—yet friendly—debate has helped firm up the idea that most tickled Hawking’s fancy.
Even critics of his and Hartle’s specific formula, including Turok and Lehners, are crafting competing quantum-cosmological models that try to avoid the alleged pitfalls of the original while maintaining its boundless allure.

Garden of Cosmic Delights

Hartle and Hawking saw a lot of each other from the 1970s on, typically when they met in Cambridge for long periods of collaboration. The duo’s theoretical investigations of black holes and the mysterious singularities at their centers had turned them on to the question of our cosmic origin.

In 1915, Albert Einstein discovered that concentrations of matter or energy warp the fabric of space-time, causing gravity. In the 1960s, Hawking and the Oxford University physicist Roger Penrose proved that when space-time bends steeply enough, such as inside a black hole or perhaps during the Big Bang, it inevitably collapses, curving infinitely steeply toward a singularity, where Einstein’s equations break down and a new, quantum theory of gravity is needed. The Penrose-Hawking “singularity theorems” meant there was no way for space-time to begin smoothly, undramatically at a point.

Hawking and Hartle were thus led to ponder the possibility that the universe began as pure space, rather than dynamical space-time. And this led them to the shuttlecock geometry. They defined the no-boundary wave function describing such a universe using an approach invented by Hawking’s hero, the physicist Richard Feynman. In the 1940s, Feynman devised a scheme for calculating the most likely outcomes of quantum mechanical events. To predict, say, the likeliest outcomes of a particle collision, Feynman found that you could sum up all possible paths that the colliding particles could take, weighting straightforward paths more than convoluted ones in the sum. Calculating this “path integral” gives you the wave function: a probability distribution indicating the different possible states of the particles after the collision.

Likewise, Hartle and Hawking expressed the wave function of the universe—which describes its likely states—as the sum of all possible ways that it might have smoothly expanded from a point. The hope was that the sum of all possible “expansion histories,” smooth-bottomed universes of all different shapes and sizes, would yield a wave function that gives a high probability to a huge, smooth, flat universe like ours. If the weighted sum of all possible expansion histories yields some other kind of universe as the likeliest outcome, the no-boundary proposal fails.

The problem is that the path integral over all possible expansion histories is far too complicated to calculate exactly. Countless different shapes and sizes of universes are possible, and each can be a messy affair. “Murray Gell-Mann used to ask me,” Hartle said, referring to the late Nobel Prize-winning physicist, “if you know the wave function of the universe, why aren’t you rich?” Of course, to actually solve for the wave function using Feynman’s method, Hartle and Hawking had to drastically simplify the situation, ignoring even the specific particles that populate our world (which meant their formula was nowhere close to being able to predict the stock market). They considered the path integral over all possible toy universes in “minisuperspace,” defined as the set of all universes with a single energy field coursing through them: the energy that powered cosmic inflation.
(In Hartle and Hawking’s shuttlecock picture, that initial period of ballooning corresponds to the rapid increase in diameter near the bottom of the cork.)

Even the minisuperspace calculation is hard to solve exactly, but physicists know there are two possible expansion histories that potentially dominate the calculation. These rival universe shapes anchor the two sides of the current debate.

The rival solutions are the two “classical” expansion histories that a universe can have. Following an initial spurt of cosmic inflation from size zero, these universes steadily expand according to Einstein’s theory of gravity and space-time. Weirder expansion histories, like football-shaped universes or caterpillar-like ones, mostly cancel out in the quantum calculation.

One of the two classical solutions resembles our universe. On large scales, it’s smooth and randomly dappled with energy, due to quantum fluctuations during inflation. As in the real universe, density differences between regions form a bell curve around zero. If this possible solution does indeed dominate the wave function for minisuperspace, it becomes plausible to imagine that a far more detailed and exact version of the no-boundary wave function might serve as a viable cosmological model of the real universe.

The other potentially dominant universe shape is nothing like reality. As it widens, the energy infusing it varies more and more extremely, creating enormous density differences from one place to the next that gravity steadily worsens. Density variations form an inverted bell curve, where differences between regions approach not zero, but infinity. If this is the dominant term in the no-boundary wave function for minisuperspace, then the Hartle-Hawking proposal would seem to be wrong.

The two dominant expansion histories present a choice in how the path integral should be done. If the dominant histories are two locations on a map, megacities in the realm of all possible quantum mechanical universes, the question is which path we should take through the terrain. Which dominant expansion history, and there can only be one, should our “contour of integration” pick up? Researchers have forked down different paths.

In their 2017 paper, Turok, Feldbrugge and Lehners took a path through the garden of possible expansion histories that led to the second dominant solution. In their view, the only sensible contour is one that scans through real values (as opposed to imaginary values, which involve the square roots of negative numbers) for a variable called “lapse.” Lapse is essentially the height of each possible shuttlecock universe—the distance it takes to reach a certain diameter. Lacking a causal element, lapse is not quite our usual notion of time. Yet Turok and colleagues argue partly on the grounds of causality that only real values of lapse make physical sense. And summing over universes with real values of lapse leads to the wildly fluctuating, physically nonsensical solution.

“People place huge faith in Stephen’s intuition,” Turok said by phone. “For good reason—I mean, he probably had the best intuition of anyone on these topics. But he wasn’t always right.”

Imaginary Universes

Jonathan Halliwell, a physicist at Imperial College London, has studied the no-boundary proposal since he was Hawking’s student in the 1980s. He and Hartle analyzed the issue of the contour of integration in 1990.
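To make the “contour of integration” language a bit more concrete: in the research literature, the minisuperspace calculation is usually written schematically as an ordinary integral over the lapse N combined with a path integral over the universe’s size. The sketch below follows that standard form, with the action S and the integration measure left implicit; it is a schematic aid, not a formula quoted in this article.

\[ \Psi(a_1) \sim \int_{\mathcal{C}} dN \int_{a(0)=0}^{a(1)=a_1} \mathcal{D}a \; e^{\, i S[a,N]/\hbar} \]

The dispute described here comes down to which contour C in the (generally complex) plane of lapse values is the physically meaningful one: Turok and colleagues restrict the integral to real values of the lapse, while the proposal’s defenders allow contours that pass through a saddle point with imaginary lapse.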
In Halliwell and Hartle’s view, as well as Hertog’s, and apparently Hawking’s, the contour is not fundamental, but rather a mathematical tool that can be placed to greatest advantage. It’s similar to how the trajectory of a planet around the sun can be expressed mathematically as a series of angles, as a series of times, or in terms of any of several other convenient parameters. “You can do that parameterization in many different ways, but none of them are any more physical than another one,” Halliwell said.

He and his colleagues argue that, in the minisuperspace case, only contours that pick up the good expansion history make sense. Quantum mechanics requires probabilities to add to 1, or be “normalizable,” but the wildly fluctuating universe that Turok’s team landed on is not. That solution is nonsensical, plagued by infinities and disallowed by quantum laws—obvious signs, according to no-boundary’s defenders, to walk the other way.

Neil Turok has mounted a challenge to Hartle and Hawking’s “no-boundary” proposal and floated a competing quantum description of the universe. Credit: Gabriela Secara

It’s true that contours passing through the good solution sum up possible universes with imaginary values for their lapse variables. But apart from Turok and company, few people think that’s a problem. Imaginary numbers pervade quantum mechanics. To team Hartle-Hawking, the critics are invoking a false notion of causality in demanding that lapse be real. “That’s a principle which is not written in the stars, and which we profoundly disagree with,” Hertog said.

According to Hertog, Hawking seldom mentioned the path integral formulation of the no-boundary wave function in his later years, partly because of the ambiguity around the choice of contour. He regarded the normalizable expansion history, which the path integral had merely helped uncover, as the solution to a more fundamental equation about the universe posed in the 1960s by the physicists John Wheeler and Bryce DeWitt. Wheeler and DeWitt—after mulling over the issue during a layover at Raleigh-Durham International—argued that the wave function of the universe, whatever it is, cannot depend on time, since there is no external clock by which to measure it. And thus the amount of energy in the universe, when you add up the positive and negative contributions of matter and gravity, must stay at zero forever. The no-boundary wave function satisfies the Wheeler-DeWitt equation for minisuperspace.

In the final years of his life, to better understand the wave function more generally, Hawking and his collaborators started applying holography — a blockbuster new approach that treats space-time as a hologram. Hawking sought a holographic description of a shuttlecock-shaped universe, in which the geometry of the entire past would project off of the present.

That effort is continuing in Hawking’s absence. But Turok sees this shift in emphasis as changing the rules. In backing away from the path integral formulation, he says, proponents of the no-boundary idea have made it ill-defined. What they’re studying is no longer Hartle-Hawking, in his opinion—though Hartle himself disagrees.

For the past year, Turok and his Perimeter Institute colleagues Latham Boyle and Kieran Finn have been developing a new cosmological model that has much in common with the no-boundary proposal. But instead of one shuttlecock, it envisions two, arranged cork to cork in a sort of hourglass figure with time flowing in both directions.
While the model is not yet developed enough to make predictions, its charm lies in the way its lobes realize CPT symmetry, a seemingly fundamental mirror in nature that simultaneously reflects matter and antimatter, left and right, and forward and backward in time. One disadvantage is that the universe’s mirror-image lobes meet at a singularity, a pinch in space-time that requires the unknown quantum theory of gravity to understand. Boyle, Finn and Turok take a stab at the singularity, but such an attempt is inherently speculative.

There has also been a revival of interest in the “tunneling proposal,” an alternative way that the universe might have arisen from nothing, conceived in the ’80s independently by the Russian-American cosmologists Alexander Vilenkin and Andrei Linde. The proposal, which differs from the no-boundary wave function primarily by way of a minus sign, casts the birth of the universe as a quantum mechanical “tunneling” event, similar to when a particle pops up beyond a barrier in a quantum mechanical experiment.

Questions abound about how the various proposals intersect with anthropic reasoning and the infamous multiverse idea. The no-boundary wave function, for instance, favors empty universes, whereas significant matter and energy are needed to power hugeness and complexity. Hawking argued that the vast spread of possible universes permitted by the wave function must all be realized in some larger multiverse, within which only complex universes like ours will have inhabitants capable of making observations. (The recent debate concerns whether these complex, habitable universes will be smooth or wildly fluctuating.) An advantage of the tunneling proposal is that it favors matter- and energy-filled universes like ours without resorting to anthropic reasoning—though universes that tunnel into existence may have other problems.

No matter how things go, perhaps we’ll be left with some essence of the picture Hawking first painted at the Pontifical Academy of Sciences 38 years ago. Or perhaps, instead of a South Pole-like non-beginning, the universe emerged from a singularity after all, demanding a different kind of wave function altogether. Either way, the pursuit will continue. “If we are talking about a quantum mechanical theory, what else is there to find other than the wave function?” asked Juan Maldacena, an eminent theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey, who has mostly stayed out of the recent fray. The question of the wave function of the universe “is the right kind of question to ask,” said Maldacena, who, incidentally, is a member of the Pontifical Academy. “Whether we are finding the right wave function, or how we should think about the wave function—it’s less clear.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
Physics
The AMS-02 experiment aboard the ISS. Image: NASA

A team of physicists determined that enigmatic ‘antinuclei’ can travel across the universe without being absorbed by the interstellar medium. The finding suggests we may be able to identify antimatter that is produced by dark matter in deep space.

The physicists estimated the Milky Way’s so-called transparency to antihelium-3 nuclei—meaning, how permissive the galaxy’s interstellar medium is to antinuclei zipping through space.

“Our results show, for the first time on the basis of a direct absorption measurement, that antihelium-3 nuclei coming from as far as the centre of our Galaxy can reach near-Earth locations,” said ALICE physics coordinator Andrea Dainese, in a CERN release.

Antimatter is not merely the stuff of sci-fi novels. It is a real, naturally occurring mirror to ordinary matter. Antimatter particles have the same mass but the opposite charges of their ordinary counterparts. Where electrons have a negative charge, their antimatter analogues, positrons, have a positive charge. Protons’ antimatter partners are the more simply named antiprotons.

This principle can be scaled up to the atomic level: Every atom has a nucleus—a core of protons and neutrons glommed together—but there are also antinuclei, composed of antiprotons and antineutrons. We know these exist because they were discovered in an experiment in 1965, when physicists observed antideuterons (the antimatter version of the deuterium nucleus) in a lab.

The universe rocked into being 14 billion years ago, with a Big Bang that in theory should have created equal amounts of matter and antimatter. But look around you, or at the latest Webb telescope images: We live in a universe dominated by matter. An outstanding question in physics is what happened to all the antimatter.

The recent research team—a large, international collaboration of physicists—worked with the ALICE detector at CERN’s Large Hadron Collider, beneath the ground near St Genis-Pouilly, France, to try to get a step closer to spotting the mysterious stuff.

ALICE (A Large Ion Collider Experiment) is an 11,000-ton detector that investigates collisions between heavy ions and other particles, which allows physicists to probe some of the smallest, most primordial, and most exotic forms of matter in our universe.

In the recent experiment, the ALICE Collaboration attempted to measure the rate at which antihelium-3 nuclei (the antimatter counterparts of helium-3 nuclei) disappeared when they encountered ordinary matter. Their research is published in Nature Physics.

The study is not as much about the remarkable distances the antimatter particles can travel but “how many of the produced antihelium-3 would reach the detectors,” said study co-author Laura Šerkšnytė, a physicist at Technische Universität München and a member of the ALICE Collaboration, in an email to Gizmodo.

In other words, the team’s research is a helpful indicator that cosmic antinuclei detectors, like the AMS experiment aboard the International Space Station and the upcoming GAPS balloon experiment in Antarctica, will have a fair chance at finding the vexing particles.

There are a few candidates for natural antinuclei sources in the universe; one is high-energy cosmic ray collisions with atoms in the interstellar medium, the stuff that occupies the space between stars.
Another candidate—a core component of the recent study—is that a certain flavor of theorized dark matter particles called WIMPs (Weakly Interacting Massive Particles) emit antinuclei when they annihilate.

A third, more exotic idea is that antinuclei are given off by antistars, a theoretical object that—you guessed it—is a star composed entirely of antimatter.

Antinuclei from cosmic rays’ interactions with regular matter would have much higher energies associated with them than antinuclei born from dark matter annihilation events. There’s never been a confirmed detection of cosmic light antinuclei (‘cosmic,’ meaning they float through space, and ‘light,’ referring to their mass). Without detections of such antimatter particles in the wild, physicists’ best bet is in accelerators like the LHC.

The ALICE Collaboration separately modeled the Milky Way’s transparency to antinuclei that would emerge from dark matter WIMPs and from cosmic ray collisions. They found a 50% transparency for the dark matter model and a range of 25% to 90% transparency for the cosmic ray model.

By their measure, antihelium-3 nuclei could make it several kiloparsecs (tens of thousands of light-years) without being absorbed by ordinary matter in the interstellar medium.

“The idea of the paper was to show this transparency, and the fact that we can now use our measurement in all the future studies,” Šerkšnytė said.

The transparencies showed that “these antinuclei could actually be measured in principle,” Šerkšnytė added, noting that having these measurements gives future research teams a means of interpreting data from light antinuclei searches—in turn informing the search for dark matter.

So the findings are redeeming for antimatter nuclei detectors like AMS aboard the ISS and the GAPS balloon mission. AMS has so far collected data on 213 billion cosmic ray events and counting, troves upon troves of data to sift through for signs of antimatter. The second iteration of the experiment detected a few antihelium candidates in cosmic rays. Results from GAPS—expected to fly in late 2023—could independently confirm AMS’s antihelium detections.

You can think of the new research as the idiomatic horse, which needs to be put before the cart if you’re planning to get anywhere soon. If physicists want to move forward in their understanding of the antimatter universe—where it is and how we can find it—and learn more about dark matter, they need to be able to find some antinuclei.
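A rough way to see what a “transparency” figure encodes (a generic attenuation estimate, not the collaboration’s actual model): the chance that an antinucleus survives its journey falls off exponentially with the amount of interstellar material it crosses, weighted by the inelastic (absorption) cross section that ALICE measured at the LHC.

\[ T = P_{\mathrm{survive}} = \exp\!\left( -\int n(\ell)\, \sigma_{\mathrm{inel}}\, d\ell \right) \]

Here n is the number density of gas along the line of sight and σ_inel is the measured absorption cross section; a transparency of 50% corresponds to an “optical depth” of roughly ln 2 ≈ 0.7.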
Physics
Solar power gathered far away in space, seen here being transmitted wirelessly down to Earth to wherever it is needed. The European Space Agency plans to investigate key technologies needed to make Space-Based Solar Power a working reality through its SOLARIS initiative. One such technology – wireless power transmission – was recently demonstrated in Germany to an audience of decision-makers from business and government. Credit: Airbus
Solar power could be gathered far away in space and transmitted wirelessly down to Earth to wherever it is needed. The European Space Agency (ESA) plans to investigate key technologies needed to make Space-Based Solar Power a working reality through its SOLARIS initiative. Recently in Germany, one of these technologies, wireless power transmission, was demonstrated to an audience of decision-makers from business and government.
The demonstration took place at Airbus’ X-Works Innovation Factory in Munich. Microwave beaming was used to transmit green energy between two points representing ‘Space’ and ‘Earth’ over a distance of 36 meters.
The received power was used to light up a model city and produce green hydrogen by splitting water. It was even used to chill the world’s first wirelessly cooled 0% alcohol beer in a fridge before it was served to the watching audience.

To prepare Europe for future decision-making on Space-Based Solar Power, ESA has proposed a preparatory program for Europe, initially named SOLARIS, for the upcoming ESA Council at Ministerial Level in November 2022. Space-based solar power is a potential source of clean, affordable, continuous, abundant, and secure energy. This basic concept has been given fresh urgency by the need for new sources of clean and secure energy to aid Europe’s transition to a Net Zero carbon world by 2050. If Europe wants to benefit from this game-changing capability then we need to start investing now. Credit: ESA – European Space Agency
For a working version of a Space-Based Solar Power system, solar power satellites in geostationary orbit would harvest sunlight on a permanent 24/7 basis and then convert it into low-power density microwaves to safely beam down to receiver stations on Earth. The physics involved means that these satellites would have to be large, on the order of several kilometers in size, to generate the equivalent power of a typical nuclear power station. The same would be true for the collecting ‘rectennas’ down on Earth’s surface.
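As a sanity check on the “several kilometers” figure, here is a rough back-of-the-envelope sizing sketch in Python. The solar constant is a standard reference value; the end-to-end efficiency is an assumed placeholder rather than a number from ESA, so treat the result as an order-of-magnitude illustration only.

SOLAR_CONSTANT = 1360.0       # W/m^2, sunlight intensity above the atmosphere
END_TO_END_EFFICIENCY = 0.10  # assumed: PV conversion x microwave link x rectenna losses
TARGET_POWER = 1.0e9          # W, roughly one large nuclear power station

collector_area = TARGET_POWER / (SOLAR_CONSTANT * END_TO_END_EFFICIENCY)
side_length = collector_area ** 0.5

print(f"Collector area: {collector_area / 1e6:.1f} km^2")         # about 7.4 km^2
print(f"Side of a square collector: {side_length / 1e3:.1f} km")  # about 2.7 km

With these assumptions the collector works out to a few kilometers on a side, consistent with the scale quoted above.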
Technical advancements in areas such as in-space manufacturing and robotic assembly, low-cost high-efficiency photovoltaics, high-power electronics, and radio frequency beamforming would be required to achieve this vision. Further research would also be undertaken to confirm that the effects of low-power microwaves on human and animal health are benign, and that the beams are compatible with aircraft and satellites.
ESA’s SOLARIS – being proposed to Europe’s space ministers at the Agency’s Council at Ministerial Level on November 22-23 – will research these technologies, to allow Agency Member States to make an informed choice on future implementation of Space-Based Solar Power as a new source of clean, always-on ‘baseload’ power supplementing existing renewable power sources, helping Europe to attain Net Zero by mid-century.
In addition, any breakthroughs achieved in these areas will also benefit many other spaceflight endeavors as well as terrestrial applications.
Physics
Ice cubes float in water because they’re less dense than the liquid. But a newfound type of ice has a density nearly equal to what’s in your water glass, researchers report in the Feb. 3 Science. If you could plop this ice in your cup without it melting immediately, it would bob around, neither floating nor sinking.
The new ice is a special type called an amorphous ice. That means the water molecules within it aren’t arranged in a neat pattern, as in normal, crystalline ice. Other types of amorphous ice are already known, but they have densities either lower or higher than water’s density under standard conditions. Some scientists hope this newly made amorphous ice could help solve the scientific mysteries that swirl around water.
To generate the new ice, scientists used a surprisingly simple technique. Called ball milling, it involves shaking a container of ice and stainless steel balls, cooled to 77 kelvins (nearly –200° Celsius). The researchers were motivated by curiosity; they didn’t expect the technique to produce a new amorphous ice. “It was a sort of Friday-afternoon idea we had, to just give it a go and see what happens,” says physical chemist Christoph Salzmann of University College London.
An analysis of how X-rays scattered from the frosty stuff suggested they’d created an amorphous ice. And computer simulations that mimicked the effects of ball milling revealed that a disordered structure could be produced by layers of ice sliding past one another in random directions, in response to the forces exerted by the balls.
“You have to be open, as a scientist, for the unexpected,” says chemical physicist Anders Nilsson of Stockholm University, who was not involved with the research. The ball milling technique, he says, “was quite innovative to do.”
Since the material was made by mashing up normal ice, its relationship to liquid water is unknown. It’s unclear whether it can be produced directly, by cooling liquid water. Not all amorphous ices share this connection with their liquid state.
If the new ice does have this link to the liquid, the ice might help scientists better understand water’s quirks. Water is puzzling because it flouts the norms for liquids. For example, whereas most liquids become denser upon cooling, water gets denser as it gets closer to 4° C, but becomes less dense as it is cooled further.
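To make the anomaly concrete, here is a small illustration using approximate handbook densities; the numbers are standard reference values, not data from the new study.

water_density_g_per_cm3 = {0: 0.9998, 4: 1.0000, 10: 0.9997, 20: 0.9982, 30: 0.9957}

densest_temperature = max(water_density_g_per_cm3, key=water_density_g_per_cm3.get)
print(f"Liquid water is densest near {densest_temperature} degrees C")  # prints 4

Ordinary crystalline ice, by contrast, has a density of roughly 0.92 g/cm^3, which is why it floats; the newly reported amorphous ice sits close to 1.0 g/cm^3, so it would neither float nor sink.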
Many scientists suspect water’s weirdness is connected to its behavior as a supercooled liquid (SN: 9/28/20). Pure water can remain a liquid at temperatures well below freezing. Under such conditions, liquid water is thought to exist in two different phases, a high-density liquid and a low-density one, and that dual nature could explain water’s behavior under more typical conditions (SN: 11/19/20). But much remains uncertain about that idea.
Salzmann and colleagues suggest that the new ice could be a special form of water called a glass. Glasses can be made by cooling a liquid quickly enough that the molecules can’t rearrange into a crystal structure. The glass in a windowpane is an example of this kind of material, made by cooling molten silica sand, but other substances can form glasses, too.
If the new ice is a glass state of water, scientists would need to work out how it fits into that dual-liquid picture. And that could help scientists tease out what’s really going on at difficult-to-study supercooled conditions.
But some researchers are skeptical that the new material has any connection to the weird physics of liquid water. Physical chemist Thomas Loerting of the University of Innsbruck in Austria thinks that the ice is “closely related to very small, distorted ice crystals,” rather than the liquid form of water.
Still, earlier computer simulations have suggested that water could form glasses of a range of densities close to liquid water, says computational physicist Nicolas Giovambattista of Brooklyn College of the City University of New York. Those simulations produced structures similar to the ones seen in the computer simulation of ball milling ice, says Giovambattista, who was not involved with the new research. “It opens doors for new questions. It’s new, so what is it?”
Physics
Taken from the June 2022 issue of Physics World.

Ancient glass is not just of interest to historians and archaeologists – it may also hold the key to understanding the durability of vitrified nuclear waste. Rachel Brazil investigates.

Molten hot: To vitrify radioactive waste, it's incorporated into molten glass. (Courtesy: US Dept. of Energy / Science Photo Library)

The golden death mask of the pharaoh Tutankhamun is one of the most famous historical artefacts in the world. The shining visage of the young king dates back to around 1325 BCE and features blue strips that are sometimes described as lapis lazuli. Yet rather than being the semi-precious stone favoured in ancient Egypt, the striking decoration is in fact coloured glass.
A coveted and highly prized material deemed worthy of royalty, glass was once viewed on a par with gemstones, with examples of ancient glass going back even further than Tutankhamun. Indeed, samples excavated and analysed by archaeologists and scientists have enabled a better understanding of how and where glass production began. But surprisingly, ancient glass is also being studied by another group of scientists – those who are finding safe ways to store nuclear waste.
Next year the US will start to vitrify parts of its legacy nuclear waste currently housed in 177 underground storage tanks at the Hanford Site, a decommissioned facility in Washington state that produced plutonium for nuclear weapons during the Second World War and Cold War. But the idea to transform nuclear waste into glass, or vitrify it, was developed as far back as the 1970s, as a way to keep the radioactive elements locked away and prevent them from leaking out.
Nuclear waste is typically classified as being low, intermediate or high level, depending on its radioactivity. While some countries vitrify low and intermediate-level waste, the method is mostly used to immobilize high-level liquid waste, which contains fission products and transuranic elements with long half-lives that are generated in a reactor core. This type of waste requires active cooling and shielding because it is radioactive enough to significantly heat both itself and its surroundings.
Before the vitrification process, liquid waste is dried (or calcined) to form a powder. This is then incorporated into molten glass in huge smelters and poured into stainless steel canisters. Once the mixture has cooled and formed a solid glass, the containers are welded closed and readied for storage, which nowadays takes place in deep underground facilities. But the glass does not just provide a barrier, according to Clare Thorpe, a research fellow at the University of Sheffield, UK, who is studying the durability of vitrified nuclear waste. “It’s better than that. The waste becomes part of the glass.”
However, there have always been question marks over the long-term stability of these glasses. How, in other words, can we know if these materials will remain immobilized over thousands of years? To better understand these questions, nuclear-waste researchers are working with archaeologists, museum curators and geologists to identify glass analogues that might help us understand how vitrified nuclear waste will change with time.
Ingredient sweet spot
The most stable glasses are made from pure silicon dioxide (SiO2), but various additives – such as sodium carbonate (Na2CO3), boron trioxide (B2O3) and aluminium oxide (Al2O3) – are often incorporated to change the properties of the glass, such as viscosity and melting point. For example, borosilicate glass (containing B2O3) has a very low coefficient of thermal expansion, so does not crack under extreme temperatures. “The UK and other countries, including the US and France, have chosen to vitrify their waste in borosilicate glass before it’s stored,” explains Thorpe.
When elements such as those from additives or nuclear waste are included, they become part of the glass structure as either network formers or modifiers (figure 1). Network-forming ions act as a substitute for silicon, becoming an integral part of the highly cross-linked chemically bonded network (boron and aluminium do this for example). Meanwhile, modifiers interrupt the bonds between oxygen and the glass-forming elements by loosely bonding with the oxygen atoms and causing a “non-bridging” oxygen (sodium, potassium and calcium incorporate this way). The latter cause weaker overall bonding in the material, which can reduce the melting point, surface tension and viscosity of the glass overall.
1 Formers and modifiers When an additive is incorporated into a glass mixture, the ions either become part of the highly cross-linked network, replacing the silicon (black dots) as network formers (green dots), or act as glass modifiers (blue dots) that loosely bond with the oxygen (red dots) and disrupt the glass-forming bonds, creating “non-bridging” oxygens. (Adapted from Clinical Applications of Biomaterials 10.1007/978-3-319-56059-5_2)
“There’s a certain sweet spot where you get the right amount [of waste additives] to form a very durable glass,” explains Carolyn Pearce from the Pacific Northwest National Laboratory in the US, who is studying the kinetics of radionuclide stability in waste forms. “If you add in too much, you start pushing the system to form crystalline phases, which is problematic, because then you have multi-phase glass, which is not as durable as a homogeneous single-phase glass.”
Pearce says the waste at Hanford contains “virtually every element in the periodic table in some form or another” and is stored as a liquid, sludge or salt cakes, which makes it more difficult to predict the most stable glass composition. “There’s a lot of modelling that goes into designing the glass-forming elements that will be added. They’ll characterize what’s in the staging tank waiting to go in the facility, and then design the composition of the glass based on that chemistry.”
The use of vitrification for nuclear waste is supported by the stability of natural glasses that have been around for millennia, such as igneous glass, fulgurites (also known as “fossilized lightning”) and glass in meteorites. “In theory, radioactive elements should be released at the same rate as the glass itself dissolves, and we know that glass is highly durable, because we can see volcanic glasses that were made millions of years ago still sitting around today,” says Thorpe. But it isn’t easy to prove that vitrified waste will survive the 60,000 to millions of years necessary for radioactive waste to fully decay – iodine-129, for example, has a half-life of more than 15 million years.
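To get a feel for those timescales, here is a worked example with the textbook decay law (the half-life value of roughly 16 million years for iodine-129 is a standard reference figure, not one quoted by the researchers):

\[ \frac{N(t)}{N_0} = 2^{-t/t_{1/2}}, \qquad \frac{N(10^6\ \mathrm{yr})}{N_0} = 2^{-1/16} \approx 0.96 \]

In other words, after a full million years about 96% of the iodine-129 would still remain, which is why the glass is expected to stay intact over geological stretches of time.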
When glass is in contact with water or water vapour, it begins to very slowly deteriorate. First, the alkali metals (sodium or potassium) leach out. The glass networks then start to break down, releasing silicates (and also borates in the case of borosilicate glass) that subsequently form an amorphous gel layer on the glass surface. This becomes dense over time, creating an outer “passivation” layer that can also contain secondary crystallized phases – compounds that form from the surface recrystallization of material that has been released from the bulk glass. At this point, further corrosion is limited by the ability of elements to migrate through this coating.
But if conditions change, or certain mineral species are present, the passivation layer can break down too. “Studies have highlighted elements of concern that could be involved in something called rate resumption, which is where some of the secondary mineral precipitates – particularly iron and magnesium zeolites – have been implicated in the rate of glass dissolution speeding up,” explains Thorpe (figure 2).
2 Stages of corrosion Glass corrosion occurs in three stages. Stage 1 – The “initial rate regime” involves H3O+ ions diffusing into the glass, displacing network-modifying alkali ions. At the same time, hydrolysis of the glass network releases silicon and other network-forming ions if present. The “rate drop regime” begins once sufficient silica has been freed to form an amorphous gel layer on the glass surface. Stage 2 – The gel “passivation” layer – which densifies over time and can crystallize into secondary mineral phases – slows the rate of dissolution as the ions have to diffuse through it to corrode the glass. This “residual rate regime” can continue indefinitely. Stage 3 – Finally, “rate resumption” occurs if conditions change or certain mineral species are present, such as iron, and the passivation layer breaks down. (Adapted from original by Clare Thorpe)
One of the methods Thorpe and Pearce use to understand these mechanisms is accelerated testing of newly formed glass. “In the laboratory, to speed up the reaction we [flatten] the glass to increase the surface area, and we increase the temperature, typically up to 90 °C,” says Thorpe. “This is really effective for ranking glasses – saying this one’s more durable than this one – but not great for determining the actual dissolution rate in a complex natural environment.”
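One way to see why hotter tests run "faster" is a simple Arrhenius estimate. The activation energy used below is an assumed, typical value for silicate-glass dissolution, not a figure quoted by the researchers, so the result is illustrative only.

import math

R = 8.314  # J/(mol*K), gas constant

def rate_acceleration(e_act_kj_per_mol, t_test_c, t_field_c):
    # Arrhenius ratio of dissolution rates: accelerated lab test vs. field conditions.
    e_act = e_act_kj_per_mol * 1000.0
    t_test = t_test_c + 273.15
    t_field = t_field_c + 273.15
    return math.exp((e_act / R) * (1.0 / t_field - 1.0 / t_test))

# Assumed activation energy of 70 kJ/mol; a 90 C test compared with a 15 C disposal environment.
print(round(rate_acceleration(70, 90, 15)))  # a speed-up of roughly 400-fold

That factor is what lets a months-long laboratory experiment stand in, imperfectly, for centuries of slow corrosion in the field.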
Instead, researchers have turned to analogue glasses already in existence. “Borosilicate glasses have only been around for about 100 years. We have some data on how they behave long term, but nothing stretching out to the kinds of timescales that we need for thinking about radioactive waste storage,” says Thorpe. Natural glasses are not always a suitable comparison as they tend to be low in alkali elements, which are commonly found in nuclear-waste glasses and will impact their properties – so the other option has been archaeological glasses. While their compositions are not identical to waste glass, they do contain a variety of elements. “Just having these different chemistries really allows us to look at the role that this plays in terms of alteration,” says Pearce.
Glass from the past
Before discovering how to create glass, humans used natural glass for both its strength and beauty. One example is the pectoral, or brooch, found in the tomb of Tutankhamun. Placed on the chest of the mummy, it contains a piece of pale-yellow natural glass shaped into a scarab beetle at least 3300 years ago. The glass came from the Libyan desert, with recent research attributing its formation to a meteorite impact 29 million years ago. Scientists reached this conclusion because of the presence of zirconium silicate crystals within the glass, which come from the mineral reidite that is formed at high pressure (Geology 47 609).
“The earliest production of glass on a regular basis is around 1600 BCE,” says Andrew Shortland, an archaeological scientist at Cranfield University in the UK. “The most spectacular glass object of all, without doubt, is the death mask of Tutankhamun in the Cairo [Museum] catalogue.”
Over the last century archaeologists have disagreed over where glass was first manufactured on a large scale, with northern Syria and Egypt both being prime candidates. “I’d say that at the moment it is too close to call,” says Shortland. The glasses excavated are soda-lime silicate glasses – not too different to the glass we still use in our windows. These were produced using silicate minerals with a “flux” containing soda (Na2CO3), which lowers the melting point to an attainable smelting temperature, and lime (CaCO3) to make the glass harder and chemically more durable. “The silica in these early glasses comes from crushed quartz, which was used because it’s very clean, very low in iron, titanium and other things that colour the glass.”
The problem of glass corrosion is familiar to archaeological conservators who aim to stabilize glass when freshly excavated or stored in museums. “Moisture, obviously, is the worst thing for glass,” says Duygu Çamurcuoğlu, senior objects conservator at the British Museum in London. “If not looked after well, moisture will start attacking and dissolving the glass.” Çamurcuoğlu explains that the beautiful iridescent surface archaeological glasses display is often made up of nearly 90% silicate because other ions, particularly the alkali ions, will have been removed by corrosion.
Archaeological analogues
The key to using archaeological glasses as an analogue for vitrified nuclear waste is having a good knowledge of the environmental conditions the objects have experienced. Trouble is, that gets harder the older the glass is. “Something that’s 200 years old might actually be more useful,” explains Thorpe, “because we can pin down exactly the full climate records.” By comparing archaeological samples to vitrified waste, Thorpe and colleagues are able to validate some of the mechanisms they are seeing in their accelerated high-temperature testing, thereby confirming whether or not they have similar processes and minerals forming, and that there’s nothing they’ve overlooked.
Calculating corrosion: These and many more 256-year-old glass ingots were found in a shipwreck off the coast of Margate, UK. We have 200 years of records of the local water temperatures and salinity, which makes it easier to use as a comparison to nuclear waste glasses. (Courtesy: Dr Clare Thorpe)
In Shortland’s experience, the precise local environmental conditions can make a big difference to the length of time glass survives. He remembers using scanning-electron microscopy to analyse glass from the Late Bronze Age city of Nuzi, near Kirkuk in Iraq, originally excavated in the 1930s. “We noticed that some of the glass was perfectly preserved, had beautiful colour, and was robust, while other pieces were weathered and gone completely.” But, he explains, the samples were often found in the same houses in nearby rooms. “We were dealing with micro-environments.” A minor difference in the amount of moisture over 3000 years created very different weathering patterns, as they found (Archaeometry 60 764).
Of course, the sort of glass artefacts found in Nuzi or elsewhere are much too precious to be given to nuclear-waste scientists for testing, but there are many less-rare pieces of archaeological glass available. Thorpe is looking at several well-characterized archaeological sites where material may provide useful analogues, such as slag – the silicate-glass waste product formed during iron smelting. Slag blocks had been incorporated into a wall at the Black Bridge foundry, a site within the town of Hayle in Cornwall, UK, constructed around 1811 (Chem. Geol. 413 28). “They’re fairly analogous to some of the plutonium-contaminated material when they are vitrified,” she explains. “You can be sure that they’ve been exposed to either the air or the estuary that they’ve sat in for 250 years.” She has also investigated 265-year-old glass ingots from the Albion shipwreck off the coast of Margate, UK, where there are comprehensive records of water temperatures and salinity dating back 200 years.
Thorpe and others have also been considering the impact of metals on glass stability. “We’re very interested in the role of iron as it’s going to be present because of the canisters [holding the vitrified waste]. In the natural analogue sites, it’s present because a lot of the time the glass is in soil or, in the case of the slags, surrounded by iron-rich material.” The worry is that positive iron ions, leaching from the glass or surroundings, scavenge negatively charged silicates from the glass’s surface gel layer. This would precipitate out iron silicate minerals, potentially disrupting the passivation layer and triggering rate resumption. This effect has been seen in a number of laboratory studies (Environ. Sci. Technol. 47 750) but Thorpe wants to see it happening in the field at low temperatures because the thermodynamics are very different to accelerated testing. So far, they don’t have evidence that this is occurring with vitrified nuclear waste and are confident that with or without the presence of iron, these glasses are highly durable. But it is still important to understand the processes that might affect the rate at which corrosion happens.
A biological challenge
An analogue glass that Pearce and colleagues have been studying comes from the Broborg pre-Viking hillfort in Sweden, which was occupied around 1500 years ago. It contains vitrified walls that Pearce thinks were purposefully constructed, rather than being the results of accidental or violent destruction of the site. The granite walls were strengthened by melting amphibolite rocks that contain largely silicate minerals, to form a vitrified mortar surrounding the granite boulders. “We know exactly what’s happened to the glass in terms of what temperatures it’s been exposed to, and the amount of rainfall, through records in Sweden going back those 1500 years,” says Pearce.
Glass walls: An archaeological dig at the Broborg pre-Viking hillfort in Sweden has revealed walls fortified by melting silicate-containing rock to form a glass mortar around the granite. (Courtesy: Mia Englund, The Archaeologists)
Using electron microscopy to study the Broborg glass, the researchers were surprised to find the surface exposed to the environment covered in bacteria, fungi and lichens. Pearce’s team is now trying to understand the implications of such biological activity on the glass’s stability. The site contains several different glass compositions and they found that samples with more iron showed more evidence of microbial colonization (possibly due to the larger number of organisms able to metabolize iron) and more evidence of physical damage such as pitting.
While it seems as though certain organisms can thrive in these harsh conditions, and may even extract elements from the material, Pearce explains that it’s also possible that a biofilm provides a protective layer. “The bacteria like to live in relatively unchanging conditions, as all living organisms are engaged in homeostasis, and so they try to regulate the pH and the water content around them.” Her team is now trying to determine what role the biofilm plays and how that relates to the glass composition (npj Materials Degradation 5 61).
Living layer: Scanning electron microscopy of the naturally created glass used to fortify walls at the Broborg pre-Viking hillfort in Sweden reveals that the exposed surface of this glass is covered in micro-organisms, with more microbes where the iron content is higher. (Courtesy: Bruce Arey, PNNL)
The key problem faced by those looking to create the most stable nuclear-waste glasses is that of longevity. But archaeological conservators trying to stabilize deteriorating glass face a more urgent challenge, which is to remove moisture and therefore stop the glass from cracking and shattering. Archaeological glass can be consolidated with acrylic resin, applied on top of the iridescent corrosion layer. “It’s actually [part of] the glass itself, so it should be protected,” says Çamurcuoğlu.
Despite how long we’ve been using glass, there is still a long way to go in fully understanding how its structure and composition impacts its stability. “It amazes me that we still can’t guess the melting temperature of a glass from its composition entirely accurately. Very small amounts of additional elements can have huge effects – it really is a bit of a dark art,” muses Thorpe.
Her work at Sheffield will continue, with some projects handed down to her that have been running for over 50 years. The Ballidon Quarry in Derbyshire, UK, for example, hosts one of the longest running “glass burial” experiments in the world. The aim is to test the degradation of archaeological glasses under the sort of alkaline conditions that vitrified nuclear waste will experience, alongside waste encased in cement (J. Glass Stud. 14 149). The experiment is intended to run for 500 years. Whether the university itself will last that length of time remains to be seen, but as for the nuclear waste they are working to protect us from, it certainly will endure.
Physics
Towards ultrafast logic gates. (Courtesy: University of Rochester illustration / Michael Osadciw)

The first logic gate to operate at femtosecond timescales could help usher in an era of information processing at petahertz frequencies – a million times faster than today’s gigahertz-scale computers. The new gate, developed by researchers at the University of Rochester in the US and the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) in Germany, is an application of lightwave electronics – essentially, shuffling electrons around with light fields – and harnesses both real and virtual charge carriers.
In lightwave electronics, scientists use laser light to guide the motion of electrons in matter, then exploit this control to create electronic circuit elements. “Since light oscillates so fast (roughly a few hundred trillion times per second), using light could speed up electronics by a factor of roughly 10 000 as compared to computer chips,” says Tobias Boolakee, a laser physicist in Peter Hommelhoff’s group at the FAU and the first author of a study in Nature on the new gate. “With our present work, we have been able to propose the idea for a first light field-driven logic gate (the fundamental building block for any computer architecture) and also demonstrate its working principle experimentally.”
In the work, Boolakee and colleagues prepared tiny graphene-based wires connected to two gold electrodes and illuminated them with a laser pulse lasting a few tens of femtoseconds (1 fs = 10^-15 s). This laser pulse excites, or sets in motion, the electrons in graphene and causes them to propagate in a particular direction – so generating a net electrical current.
Virtual and real charge carriers
Researchers at the FAU and Rochester have been working on lightwave electronics for the past decade, and the latest work takes advantage of their recent discovery that exciting the gold-graphene junction excites two different kinds of electronic charge carriers: virtual and real. The virtual carriers are only set in a net directional motion while the laser pulse is on, the researchers explain, and as such are transient. The contribution of the virtual carriers to the net current must therefore be measured during light excitation.
The researchers performed this measurement by probing a net polarization induced by the virtual carriers in the gold electrodes attached to the graphene. The real charge carriers, for their part, continue propagating in the preferred direction even after the laser pulse is turned off, so their contribution to the net current can be measured after light excitation has ended.
According to the researchers, the results of the measurement were “striking”: by changing the shape of the laser pulse, they found they could generate currents in which only the real or only the virtual charge carriers play a role. Being able to control the two different types of charge carriers in this way allowed them to make a logic gate operating on the femtosecond timescale for the first time.
Logic gate operations
The basic idea of the new logic gate is to encode two binary signals (0 and 1, as is standard in computer logic) in the shape of two few-cycle laser pulses – that is, in their “carrier-envelope” phase, Hommelhoff explains. When these two laser pulses interact with the gold-graphene heterostructure, each one produces an ultrafast current pulse. Hence, from the two incoming laser pulses, the researchers can generate two current pulses that either add up or cancel each other out.
“A binary output signal (again 0 or 1) is obtained from the level of the resulting electric current measured at one of the gold electrodes,” Hommelhoff tells Physics World. “The timescale for the logic operations is fundamentally limited by the turn-on time of the two current pulses, which is intrinsically given by the underlying quantum-mechanical mechanisms driven by the frequency of the laser pulse.”
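As a rough illustration of that scheme, the toy model below encodes each input bit in a carrier-envelope phase of 0 or π, lets each pulse inject a current proportional to the cosine of that phase, and reads the output bit from whether the summed current exceeds a threshold. The phase values, the cosine mapping and the threshold are illustrative assumptions rather than details from the Nature paper; with them, the two current pulses add for equal inputs and cancel for unequal ones, giving XNOR-type logic.

```python
# Toy model of a light-field-driven logic gate: two bits are encoded in the
# carrier-envelope phases of two pulses, each pulse injects a current, and the
# output bit is read from the level of the summed current.  (Illustrative
# assumptions only; the experiment's signal chain is more involved.)
import math

def current_pulse(bit: int) -> float:
    """Current injected by one laser pulse, with the bit encoded in its CEP."""
    phase = 0.0 if bit == 0 else math.pi   # assumed carrier-envelope phase
    return math.cos(phase)                  # +1 or -1 in arbitrary units

def gate(bit_a: int, bit_b: int, threshold: float = 1.5) -> int:
    """Output bit read from the net current at the gold electrode."""
    net_current = current_pulse(bit_a) + current_pulse(bit_b)
    return int(abs(net_current) > threshold)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", gate(a, b))   # prints an XNOR truth table
```

Under these assumptions the gate behaves as an XNOR; other pulse shapes would map the same add-or-cancel mechanism onto different truth tables.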
With the parameters used in their experiment, the Rochester-FAU team anticipates an upper limit for the bandwidth of their logic gate at the driving optical frequency of 0.36 PHz, or equivalently, 2.8 fs. While the researchers are – at least for the moment – hesitant about direct applications for the new gate, they say the next step will be to prove that it can operate at much faster time scales than can conventional electronics. “We are quite positive that this is the case, but scaling up our system to more gates to form a complex logic will be much more of an issue: here we will need to find ways to keep the speeds high,” Boolakee says.
As for integrating these gates into actual devices, the team note that the system will need to be much smaller than it is now. This will mean resorting to nearfield optics schemes to circumvent the fact that the laser focus cannot be made much smaller than the wavelengths of the actual driving laser pulses (around 800 nm), which is much too large for electronics length scales.
“Finally, the laser pulses we used in this work need to be quite intense, which is another point that will make scaling up difficult,” Hommelhoff says. “In essence, much more fundamental as well as applied research is needed to turn this proof-of-principle demonstration into a new technology. But at least we have made the initial step: the demonstration of a new logic gate.”
|
Physics
|
For a long, long time, PCs have been chasing the idea of “no moving parts” as a platonic ideal for efficiency and reliability. And for just as long, active cooling has been an impediment to this goal: for high-powered electronics, you just can’t beat a fan and moving air for cooling stuff down. Or can you? Frore Systems’ AirJet is a radical solid-state approach to active cooling, and Gordon has the scoop at CES 2023. Frore’s founder and CEO Seshu Madhavapeddy was kind enough to give PCWorld the low-down on this emerging tech, which has the potential to upend the way high-powered laptops are built.
The “magic” of AirJet is a combination of exotic materials, geometry, and physics: the 2.8mm chip has cavities in the top full of vibrating membranes, which blast cool air across the heat spreader underneath, cooling down a CPU or other component. Despite the minuscule dimensions, the AirJet can send individual air particles whooshing over the heat spreader at up to 200 kilometers per hour. It seems impossible, but the results speak for themselves. AirJet’s CES demonstration showed its “Mini” chip pushing air across a conventional fan and pushing up a ping-pong ball in a way that can’t be denied, and indeed, can replicate the cooling power of old-fashioned spinning blades. In fact, the back pressure — the amount of force created by the moving air — generated by the AirJet is equal to a fan more than ten times its size. The AirJet is also silent and potentially dust-proof.
The potential here is enormous. One of the biggest limiting factors in the performance of thin laptops is the thermal profile: you simply can’t shove a desktop-class chip into a laptop and run it at full performance without making it thick as a brick (or setting it on fire). But with solid-state cooling at a fraction of the size of even the most advanced conventional active cooling systems, those limitations start to disappear. According to Madhavapeddy, two 1-watt AirJet units can account for 5 watts of cooling. Thus it could double the thermal limit of a fanless thin-and-light, from 10 watts to 20 watts with one larger “Pro” unit. The system has been scaled at up to 28 watts in a silent, fanless laptop. For integration with more conventional designs, the AirJet can also be installed with a vapor chamber to put it to the side of a processor instead of directly on top.
Frore expects the first devices with its cooling systems to debut by the end of the year. Check out the video for the full technical breakdown.
|
Physics
|
First of two parts
There’s just enough time left in 2014 to sneak in one more scientific anniversary, and it just might be the most noteworthy of them all. Fifty years ago last month, John Stewart Bell transformed forever the human race’s grasp on the mystery of quantum physics. He proved a theorem establishing the depth of quantum weirdness, deflating the hopes of Einstein and others that the sanity of traditional physics could be restored.
“Bell’s theorem has deeply influenced our perception and understanding of physics, and arguably ranks among the most profound scientific discoveries ever made,” Nicolas Brunner and colleagues write in a recent issue of Reviews of Modern Physics.
Before Bell, physicists’ grip on the quantum was severely limited. Weirdness was well established, but not very well explained. Heisenberg’s uncertainty principle had ruined Newton’s deterministic universe — the future could not be completely predicted from perfect knowledge of the present. Waves could be particles and particles could be waves. Cats could be alive and dead at the same time.
Einstein didn’t buy it, insisting that underlying the quantum fuzziness there must exist a solid reality, even if it was inaccessible to human eyes and equations. But try as he might — and he tried several times — Einstein could devise no experiment showing quantum physics to be in error. The best he could do was demonstrate how unbelievable quantum physics really was. In 1935 he pointed out (as had Erwin Schrödinger at about the same time) that quantum rules apparently defied “locality,” the notion that what happens far away cannot immediately affect what happens here. As Einstein described it, in a paper with collaborators Boris Podolsky and Nathan Rosen, quantum mechanics — the mathematical apparatus governing the subatomic realm — seemed incomplete.
If two particles of light interact and then fly far apart, quantum math describes them as still a single system. Measuring a property of one of the particles therefore instantly tells you what the result would be when someone measured the same property for the other particle. In the language now used to describe this situation, the particles are “entangled.” Typically, the property to be measured would be something like spin (the direction that a particle’s rotational axis points) or polarization (the orientation of the vibrations if you view the light as a wave). Depending on how you create the entangled particles, the spins or polarizations might turn out always to be opposite. That is, if one particle’s spin is measured to be pointing up, the other will surely point down.
At first glance, there seems to be a simple explanation for this mystery. It could be just like sending one of a pair of gloves far away. If the recipient sees a left-handed glove, the one you kept must be right-handed. But quantum physics is not like that. It’s more like sending away one of a pair of mittens, and the mitten becomes a glove, assuming a handedness when the recipient puts it on. The stay-at-home mitten would then suddenly become a glove with the opposite handedness. Or at least that is the standard view. Einstein sympathizers contended that maybe some unseen factors, “hidden variables,” controlled the outcome, forcing the mittens to have had a handedness all along. For nearly three decades, there seemed to be no way to resolve that dispute.
Both views of quantum physics would, everyone believed, predict exactly the same outcomes for any possible experiments. But Bell perceived the situation with more sophistication. In a paper published in November 1964, he worked out an ingenious mathematical theorem to show that a hidden-variables reality would produce different experimental results. Bell’s insight incorporated the fact that quantum math predicts probabilities for outcomes, not definite outcomes. In real entanglement experiments (which at the time could just be imagined), many measurements would be made. If every day you send one of a pair of entangled particles to Alice in D.C. and the other to Bob in L.A., they both can choose to make any of several possible measurements. When they meet once a year in Dallas to compare results, they’ll find that the outcomes match more often than chance. In principle, that correlation could arise either from quantum weirdness or from hidden variables. But Bell showed that the two explanations predicted different degrees of correlation. In one case, for instance, math using hidden variables predicted that the measurements would match 33 percent of the time. Quantum math, with no hidden variables, predicted a match no more than 25 percent of the time. (If you want to see the more general logic worked out explicitly, you can find it in Brunner et al’s paper in Reviews of Modern Physics, preprint available at arXiv.org.) These differences, the “Bell inequalities,” gave experiments something definite to test. By the 1970s such experiments had begun, and in the 1980s Alain Aspect and colleagues in France showed definitively that Bell’s inequalities were violated in real experiments. That meant that local hidden variables could not be causing the mysterious connections in quantum entanglement. Einstein’s hope for a deeper reality did not pan out. “It is a fact that this way of thinking does not work,” Bell said at a physics meeting I attended in 1989. “Einstein’s view, we now know, is not tenable.” It’s not that the speed of light limit set by Einstein’s special relativity is violated. Entanglement does not, as is sometimes implied, involve instantaneous faster-than-light signaling. Measurement of one particle does not actually immediately determine the property of the other. It simply tells you what that property will be when measured. (I hope I have always been careful to phrase this by saying one measurement seems to affect the other.) It’s just that if you know the result of one measurement, you also know the result of the other, no matter which one is measured first. (And in some cases, which one comes first can depend on how fast you’re moving with respect to them, as considerations of special relativity come into play, as I mentioned at the end of an essay in Science News in 2008.) In any case, the deep impact of Bell’s theorem was not really about proving quantum weirdness. Its greater importance was to make the underlying foundations of quantum physics a topic worth pursuing. “What Bell’s Theorem really shows us is that the foundations of quantum theory is a bona fide field of physics, in which questions are to be resolved by rigorous argument and experiment, rather than remaining the subject of open-ended debate,” Matthew Leifer of the Perimeter Institute for Theoretical Physics in Canada writes in a recent paper. 
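Those 33 and 25 percent figures can be reproduced with the textbook three-setting polarization example; the sketch below is a hedged illustration of that standard argument (the choice of 0°, 120° and 240° analyzer settings is an assumption, since the post does not say which configuration the numbers refer to).

```python
# Hidden-variable vs quantum match rates for entangled photons measured with
# three analyzer settings 120 degrees apart (illustrative configuration).
import itertools, math

# Local hidden variables: each pair carries a predetermined outcome (+1/-1)
# for every setting, identical for both photons.  For two *different*
# settings, count how often the outcomes match.
def match_rate(assignment):
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    return sum(assignment[i] == assignment[j] for i, j in pairs) / len(pairs)

hv_minimum = min(match_rate(a) for a in itertools.product((+1, -1), repeat=3))

# Quantum mechanics: photons measured at angles theta apart match with
# probability cos^2(theta); here the settings differ by 120 degrees.
qm_rate = math.cos(math.radians(120.0)) ** 2

print(f"hidden variables need a match rate of at least {hv_minimum:.2f}")  # 0.33
print(f"quantum mechanics predicts a match rate of     {qm_rate:.2f}")     # 0.25
```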
That debate has made enormous progress in identifying and clarifying quantum phenomena, opening the way to new fields of study such as quantum information theory and new technologies for quantum communication and computation. Still, experts argue. Bell’s theorems admit some loopholes that may not all have been closed. Perhaps, for instance, hidden variables can still guide quantum particles if reality is not local. And an ongoing debate rages (at the quantum level) about whether the “quantum state” of a particle simply represents knowledge used to make predictions, or is in fact a real thing in itself. This epistemic vs. ontic argument warrants the advertised Part 2 of this post, coming soon. But don’t hope for a definitive resolution of that issue. That’s not the quantum way. After Bell’s 1989 talk, I chatted with him briefly, telling him I had enjoyed his book of collected papers on quantum topics. He remarked that I therefore knew more about it all than he did, as he had forgotten most of what he had written. In the quarter century since then, I’ve read hundreds more papers on quantum physics, at least a couple of dozen books, attended numerous quantum talks at conferences and interviewed many of the world’s leading quantum experts multiple times. So I can confidently say now that John Bell was wrong. I don’t know more than he did about quantum physics, because the more I find out about it the less I know. In fact, I’m pretty sure that I know nothing at all. And I’m beginning to suspect that nobody else knows anything, either. It may just be inherent in the very nature of the universe that we just can’t know. All anybody can do is suspect. Follow me on Twitter: @tom_siegfried
|
Physics
|
NEWS AND VIEWS 04 January 2023
Ferroelectricity has been found in a superconducting compound. Strong coupling between these two properties enables ferroelectric control of the superconductivity, which could prove useful for quantum devices.
Kenji Yasuda is in the Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA.
When some materials are cooled, their electrons flow without friction. This is a property known as superconductivity, which makes them lose their electrical resistance at ultralow temperatures. Superconductivity is particularly promising for the development of quantum devices, and the ability to switch between a superconductor and a normal metal adds an extra functionality to such devices — information storage. Writing in Nature, Jindal et al.¹ report that the superconductivity of ultrathin sheets of the semimetallic compound molybdenum ditelluride (MoTe2) coexists with another property, called ferroelectricity, that can be used to switch the material’s superconductivity on and off.
Nature 613, 33-34 (2023) doi: https://doi.org/10.1038/d41586-022-04491-w
References
1. Jindal, A. et al. Nature 613, 48–52 (2023).
2. Zhou, W. X. & Ariando, A. Jpn. J. Appl. Phys. 59, SI0802 (2020).
3. Fei, Z. et al. Nature 560, 336–339 (2018).
4. Qi, Y. et al. Nature Commun. 7, 11038 (2016).
5. Rhodes, D. A. et al. Nano Lett. 21, 2505–2511 (2021).
6. Novoselov, K. S., Mishchenko, A., Carvalho, A. & Castro Neto, A. H. Science 353, 6298 (2016).
7. Zhai, B., Li, B., Wen, Y., Wu, F. & He, J. Phys. Rev. B 106, L140505 (2022).
8. Li, L. & Wu, M. ACS Nano 11, 6382–6388 (2017).
9. Yasuda, K., Wang, X., Watanabe, K., Taniguchi, T. & Jarillo-Herrero, P. Science 372, 1458–1462 (2021).
10. Vizner Stern, M. et al. Science 372, 1462–1466 (2021).
11. Andrei, E. Y. et al. Nature Rev. Mater. 6, 201–206 (2021).
12. Yang, Q., Wu, M. & Li, J. J. Phys. Chem. Lett. 9, 7160–7164 (2018).
|
Physics
|
Scientists said the five-year, $60 million search finally got underway two months ago after a delay caused by the Covid-19 pandemic.
July 7, 2022, 9:26 PM UTC / Source: Associated Press
LEAD, S.D. — In a former gold mine a mile underground, inside a titanium tank filled with a rare liquified gas, scientists have begun the search for what so far has been unfindable: dark matter.
Scientists are pretty sure the invisible stuff makes up most of the universe’s mass and say we wouldn’t be here without it — but they don’t know what it is. The race to solve this enormous mystery has brought one team to the depths under Lead, South Dakota.
The question for scientists is basic, says Kevin Lesko, a physicist at Lawrence Berkeley National Laboratory. “What is this great place I live in? Right now, 95% of it is a mystery.”
The idea is that a mile of dirt and rock, a giant tank, a second tank and the purest titanium in the world will block nearly all the cosmic rays and particles that zip around — and through — all of us every day. But dark matter particles, scientists think, can avoid all those obstacles. They hope one will fly into the vat of liquid xenon in the inner tank and smash into a xenon nucleus like two balls in a game of pool, revealing its existence in a flash of light seen by a device called “the time projection chamber.”
Scientists announced Thursday that the five-year, $60 million search finally got underway two months ago after a delay caused by the COVID-19 pandemic. So far the device has found ... nothing. At least no dark matter.
That’s OK, they say. The equipment appears to be working to filter out most of the background radiation they hoped to block. “To search for this very rare type of interaction, job number one is to first get rid of all of the ordinary sources of radiation, which would overwhelm the experiment,” said University of Maryland physicist Carter Hall.
And if all their calculations and theories are right, they figure they’ll see only a couple fleeting signs of dark matter a year. The team of 250 scientists estimates they’ll get 20 times more data over the next couple of years.
By the time the experiment finishes, the chance of finding dark matter with this device is “probably less than 50% but more than 10%,” said Hugh Lippincott, a physicist and spokesman for the experiment in a Thursday news conference.
While that’s far from a sure thing, “you need a little enthusiasm,” Lawrence Berkeley’s Lesko said. “You don’t go into rare search physics without some hope of finding something.”
Two hulking Depression-era hoists run an elevator that brings scientists to what’s called the LUX-ZEPLIN experiment in the Sanford Underground Research Facility. A 10-minute descent ends in a tunnel with cool-to-the-touch walls lined with netting. But the old, musty mine soon leads to a high-tech lab where dirt and contamination is the enemy. Helmets are exchanged for new cleaner ones and a double layer of baby blue booties go over steel-toed safety boots.
The heart of the experiment is the giant tank called the cryostat, lead engineer Jeff Cherwinka said in a December 2019 tour before the device was closed and filled.
He described it as “like a thermos” made of “perhaps the purest titanium in the world” designed to keep the liquid xenon cold and keep background radiation at a minimum.
Xenon is special, explained experiment physics coordinator Aaron Manalaysay, because it allows researchers to see if a collision is with one of its electrons or with its nucleus. If something hits the nucleus, it is more likely to be the dark matter that everyone is looking for, he said.
These scientists tried a similar, smaller experiment here years ago. After coming up empty, they figured they had to go much bigger. Another large-scale experiment is underway in Italy run by a rival team, but no results have been announced so far.
The scientists are trying to understand why the universe is not what it seems.
One part of the mystery is dark matter, which has by far most of the mass in the cosmos. Astronomers know it’s there because when they measure the stars and other regular matter in galaxies, they find that there is not nearly enough gravity to hold these clusters together. If nothing else was out there, galaxies would be “quickly flying apart,” Manalaysay said.
“It is essentially impossible to understand our observation of history, of the evolutionary cosmos without dark matter,” Manalaysay said. Lippincott, a University of California, Santa Barbara, physicist, said “we would not be here without dark matter.”
So while there’s little doubt that dark matter exists, there’s lots of doubt about what it is. The leading theory is that it involves things called WIMPs — weakly interacting massive particles.
If that’s the case, LUX-ZEPLIN could be able to detect them. We want to find “where the WIMPs can be hiding,” Lippincott said.
|
Physics
|
It worked! Humanity has, for the first time, purposely moved a celestial object. As a test of a potential asteroid-deflection scheme, NASA’s DART spacecraft shortened the orbit of asteroid Dimorphos by 32 minutes — a far greater change than astronomers expected. The Double Asteroid Redirection Test, or DART, rammed into the tiny asteroid at about 22,500 kilometers per hour on September 26 (SN: 9/26/22). The goal was to move Dimorphos slightly closer to the larger asteroid it orbits, Didymos.
Neither Dimorphos nor Didymos pose any threat to Earth. DART’s mission was to help scientists figure out if a similar impact could nudge a potentially hazardous asteroid out of harm’s way before it hits our planet.
The experiment was a smashing success. Before the impact, Dimorphos orbited Didymos every 11 hours and 55 minutes. After, the orbit was 11 hours and 23 minutes, NASA announced October 11 in a news briefing.
A small spacecraft called LICIACube, short for Light Italian CubeSat for Imaging of Asteroids, detached from DART just before impact, then buzzed the two asteroids to get a closeup view of the cosmic smashup. Starting from about 700 kilometers away, it captured a bright plume of debris erupting from Dimorphos, evidence of the impact that shortened its orbit around Didymos. At closest approach, LICIACube was about 59 kilometers from the asteroids.
“For the first time ever, humanity has changed the orbit of a planetary body,” said NASA planetary science division director Lori Glaze.
Four telescopes in Chile and South Africa observed the asteroids every night after the impact. The telescopes can’t see the asteroids separately, but they can detect periodic changes in brightness as the asteroids eclipse each other. All four telescopes saw eclipses consistent with an 11-hour, 23-minute orbit. The result was confirmed by two planetary radar facilities, which bounced radio waves off the asteroids to measure their orbits directly, said Nancy Chabot, a planetary scientist at Johns Hopkins Applied Physics Laboratory in Laurel, Md.
The minimum change for the DART team to declare success was 73 seconds — a hurdle the mission overshot by more than 30 minutes. The team thinks the spectacular plume of debris that the impactor kicked up gave the mission extra oomph. The impact itself gave some momentum to the asteroid, but the debris flying off in the other direction pushed it even more — like a temporary rocket engine.
“This is a very exciting and promising result for planetary defense,” Chabot said. But the change in orbital period was just 4 percent. “It just gave it a small nudge,” she said. So knowing an asteroid is coming is crucial to future success. For something similar to work on an asteroid headed for Earth, “you’d want to do it years in advance,” Chabot said. An upcoming space telescope called Near-Earth Object Surveyor is one of many projects intended to give that early warning.
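As a quick back-of-envelope check of the figures reported above (not part of NASA's analysis), the quoted orbital periods translate into roughly a four percent change, far beyond the 73-second success threshold:

```python
# Sanity check of the DART orbit numbers quoted in the article.
before = 11 * 60 + 55          # orbital period before impact, minutes
after = 11 * 60 + 23           # orbital period after impact, minutes
change = before - after

print(f"period change: {change} minutes")                          # 32 minutes
print(f"fractional change: about {100 * change / before:.0f} %")   # ~4 %
print(f"success threshold: {73 / 60:.1f} minutes")                 # ~1.2 minutes
```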
|
Physics
|
Nuclear fusion powers the Sun, and if we could harness it here on Earth we would benefit from a clean and abundant source of energy. However, creating a fusion power plant remains a formidable technical challenge.
This episode of the Physics World Weekly podcast features an interview with Nick Hawker, who is co-founder and CEO of First Light Fusion. Recently, the UK-based company has achieved fusion in the lab using a new technique that involves firing a projectile at a deuterium target. Hawker talks about projectile fusion and how the company plans to develop the technology so that it could be used in a power plant.
Also this week, Physics World’s Margaret Harris explains how alien civilizations could be using quantum signals to communicate with us on Earth.
|
Physics
|
NASA will use a spacecraft later this month to test a planetary-defense method that could one day save Earth. The Double Asteroid Redirection Test spacecraft, otherwise known as DART, will be used as a battering ram to crash into an asteroid not far from Earth on Sept. 26. The mission is an international collaboration to protect the globe from future asteroid impacts.
"While the asteroid poses no threat to Earth, this is the world's first test of the kinetic impact technique, using a spacecraft to deflect an asteroid for planetary defense," NASA said Thursday.
In November 2021, a SpaceX Falcon 9 rocket launched with DART from the Vandenberg Air Force Base in California. Now, 10 months later, DART will catch up with the asteroid by executing three trajectory correction maneuvers over the next three weeks. Scientists say that each maneuver will reduce the margin of error for the spacecraft’s required trajectory to impact the asteroid known as Dimorphos.
NASA says that after the final maneuver on Sept. 25, approximately 24 hours before impact, the navigation team will know the position of Dimorphos within 2 kilometers. From there, DART will be on its own to autonomously guide itself to collision with the out-of-this-world space rock.
Getting a look at the asteroid
DART recently got its first look at Didymos, the double-asteroid system that includes its target, Dimorphos. An image taken from 20 million miles away, a composite of 243 images taken by the Didymos Reconnaissance and Asteroid Camera for Optical navigation (DRACO) on July 27, 2022, showed the Didymos system to be quite faint. Still, once a series of images were combined, astronomers could pinpoint Dimorphos' exact location.
"Seeing the DRACO images of Didymos for the first time, we can iron out the best settings for DRACO and fine-tune the software," said Julie Bellerose, the DART navigation lead at NASA's Jet Propulsion Laboratory. "In September, we'll refine where DART is aiming by getting a more precise determination of Didymos' location."
DART's mission objective
If DART hits Dimorphos at 15,000 mph as planned, it will test the kinetic impactor Earth defense theory. "The point of a kinetic impactor is you ram your spacecraft into the asteroid you're worried about, and then you change its orbit around the Sun by doing that," Johns Hopkins Applied Physics Laboratory planetary astronomer Andy Rivkin said. DART won't change the orbit of Didymos. It aims to change the speed of the moonlet, Dimorphos. Ground-based telescopes and data from the spacecraft will ultimately tell scientists if their plan worked.
Asteroids move around the sun at a speed of about 20 miles per second. Rivkin explained that if a kinetic impactor method were used to change its orbit, engineers would only want to alter that by a tiny amount, maybe an inch or two a second. That's why Didymos and its moonlet Dimorphos make a perfect practice target.
The tiny asteroid is orbiting Didymos and moves at about a foot per second, which is much easier to measure than 20 miles per second. If this works, the idea is to apply the same technique to larger asteroids. Until this mission, scientists could only simulate such an impact in a lab. DART will give them data to help solidify this defense plan.
|
Physics
|
The world’s most powerful heavy-ion accelerator, the Facility for Rare Isotope Beams at Michigan State University, exploits transistor-based power amplifiers to generate beam intensities of up to 400 kW.
Going straight: at the heart of the Facility for Rare Isotope Beams is a 500 m linear accelerator that can generate beams of heavy atomic nuclei travelling at half the speed of light (Courtesy: FRIB)
In May 2022 the long-awaited Facility for Rare Isotope Beams (FRIB) opened its doors to scientists who are eager to experiment with exotic atomic nuclei that in many cases have never before existed on Earth. The FRIB, built over the last eight years at Michigan State University in the US, is expected to shed new light on fundamental questions in nuclear physics, including how most of the elements in the universe are created in stars and supernova explosions, while also enabling important innovations in fields as diverse as medicine, materials discovery, and environmental science.
All of this will be made possible by the world’s most powerful heavy-ion accelerator, capable of propelling atomic nuclei of any stable element to half the speed of light. By colliding ions with energies of up to 200 MeV with a target, the FRIB promises to produce rare isotopes at a rate orders of magnitude higher than is possible at other similar facilities, while also providing scientists with access to isotopes that have not yet been synthesized or detected here on Earth.
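A quick consistency check, using nothing more than textbook relativistic kinematics, shows that roughly 200 MeV of kinetic energy per nucleon does indeed correspond to about half the speed of light. The sketch below assumes a rest energy of 931.5 MeV per nucleon; it is not taken from FRIB documentation.

```python
# Relate kinetic energy per nucleon to beam speed (relativistic kinematics).
import math

MASS_NUCLEON_MEV = 931.5          # rest energy of one atomic mass unit, MeV
ke_per_nucleon = 200.0            # kinetic energy per nucleon, MeV

gamma = 1.0 + ke_per_nucleon / MASS_NUCLEON_MEV
beta = math.sqrt(1.0 - 1.0 / gamma**2)

print(f"beam speed = {beta:.2f} c")   # ~0.57 c, i.e. about half the speed of light
```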
The accelerating power of the facility is achieved with 324 superconducting radio-frequency cavities distributed around a 500 m linear structure. Inside these resonators a low-level RF signal is boosted by a series of power amplifiers from a signal strength of just a few milliwatts up to a maximum beam intensity of 400 kW – about 1000 times greater than the accelerator the FRIB is designed to replace, the National Superconducting Cyclotron Laboratory.
Unlike its predecessor, and most other existing large-scale facilities, the FRIB exploits solid-state power amplifiers to accelerate the beam. “Over the last few years there have been big improvements in transistor technology, which means that each chip can deliver more power,” says Marcus Lau of TRUMPF Hüttinger, a power-electronics specialist that designed and produced the power amplifiers for the FRIB. “At frequencies from around 80 MHz to those approaching the gigahertz regime, it is possible to obtain more than 1 kW from each transistor.”
That improvement in power output has made solid-state systems a viable alternative to traditional vacuum-tube technologies, such as klystrons, inductive output tubes and tetrodes. These established devices can generate a few megawatts of power from a single tube, but reliability can be an issue: their performance gradually deteriorates from the moment they are switched on, reducing the amount of electrical power that is converted into a microwave signal. They also introduce a single and unpredictable point-of-failure into the system, occasionally forcing a facility to shut down for repairs during the time allocated to scientific experiments. “Both technology suppliers and end users are moving away from tube technology,” says Lau. “Both the initial investment and the operating costs have increased, particularly for klystrons, and it’s becoming more difficult to access service and maintenance.”
In contrast, the power delivered by solid-state amplifiers remains constant throughout their lifetime. The availability of the accelerator for scientific experiments can also be maximized by building redundancy into the design, with TRUMPF Hüttinger exploiting a fully modular architecture that integrates multiple transistor modules into rack-mountable power-amplifier units. “We don’t operate the units at 100% of their capability. If one transistor fails, we can compensate with other amplifiers in the system while the accelerator is still running, ” explains Lau. “We know the mean time between failure for the devices, so we can calculate the redundancy that should be built into the system to enable the power amplifiers to operate throughout an experimental run.”
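The sort of redundancy estimate Lau describes can be sketched with a simple binomial model: given a mean time between failures (MTBF) per pallet and the length of an experimental run, one can ask how likely it is that enough pallets survive. The MTBF, run length and pallet counts below are illustrative assumptions, not TRUMPF Hüttinger figures.

```python
# Minimal redundancy estimate: probability that enough amplifier pallets
# survive a run, assuming independent exponential failures (illustrative only).
from math import comb, exp

def availability(n_installed, n_required, mtbf_hours, run_hours):
    """Probability that at least n_required of n_installed pallets survive."""
    p_survive = exp(-run_hours / mtbf_hours)        # exponential failure model
    return sum(comb(n_installed, k) * p_survive**k * (1 - p_survive)**(n_installed - k)
               for k in range(n_required, n_installed + 1))

# e.g. 12 pallets installed where 10 are needed, 100,000 h MTBF, 5,000 h run
print(f"{availability(12, 10, 100_000, 5_000):.4f}")   # ~0.98
```

Adding more spare pallets, or swapping units during scheduled downtime as described above, pushes this availability figure arbitrarily close to one.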
In the worst-case scenario, an entire power-amplifier unit can even be replaced while the facility is still operating – a process called “hot swapping”. Each of the transistor modules, or “pallets”, are also continuously monitored during operation to enable predictive maintenance. “It’s really an advantage to see the status of the different pallets,” comments Lau. “If we see any deviations, for example in the power emission, we can choose to exchange an amplifier unit during scheduled downtime rather than having it fail during the next operating period.”
Power play: solid-state amplifier units supplied by TRUMPF Hüttinger provide the FRIB with a flexible and reliable solution that facilitates uninterrupted experimental time for scientific users (Courtesy: FRIB)
The engineers at TRUMPF Hüttinger have been developing the power-amplifier system for the FRIB for a number of years. The contract was originally awarded to HBH Microwave, a company founded in 1999 to develop microwave systems for radar and communications, but also capable of developing transistor-based amplifier technology for particle accelerators, and it was acquired by TRUMPF Hüttinger in January 2020. “Historically TRUMPF Hüttinger has focused on producing power generators for driving CO2 lasers and other electronic products,” explains Lau. “The merger with HBH Microwave allowed us to extend our product portfolio into high-frequency power electronics, and gives our ongoing development of power amplifiers for particle accelerators the backing and commitment of a major technology supplier.”
The modular design of the TRUMPF Hüttinger system offers maximum flexibility for large facilities like the FRIB, since it allows different amplifier units to be combined together to meet the power and frequency demands of each type of resonator. For the FRIB the amplifiers also need to be protected from energy that is reflected back into the electronic units, which in some extreme cases can reach four times the power emitted by a single pallet. To meet this requirement the company’s engineers designed a circulator circuit to isolate the transistors from the reflected signal, but they found it challenging to make the circulator work properly at the FRIB’s lowest operating frequency of 80.5 MHz. “At this frequency small changes in temperature affected the performance of the circulator,” explains Lau. “Within the project we developed software to track the optimized performance of the circuit, which made it possible to adapt its operation at low frequencies just by applying a voltage.”
To check that the transistors could withstand the intense reflected energy, each amplifier unit was subjected to rigorous burn-in tests before they were sent for installation at the FRIB. A small rack-based set-up was built to allow different power-amplifier units to be combined together, and then a phase shifter was used to replicate the reflection conditions inside the accelerator. For the “worst-case phase”, equivalent to the most reflected power, the emission from the power amplifiers was tested continuously for at least 24 hours.
Close communication with the project team was essential throughout the project, all the way from the initial design proposals through to final installation of the power-amplifier units. “We have bi-weekly meetings for each of our ongoing projects, which allows us to get feedback from the customer and to talk about any particular issues,” says Lau. “There is always something we haven’t thought about, and that needs to be adapted, even if it’s something simple such as whether the cooling water is supplied from the top or the bottom of the system.”
That ongoing dialogue enabled the FRIB project team to install and configure several hundreds of amplifier units for powering the different types of resonator deployed in the accelerator. “They were engaged and familiar with the system, plus we had sent them the first evaluation units,” adds Lau. “The intense exchange we had with the FRIB engineers helped the whole project to stay on time and within budget.”
TRUMPF Hüttinger is now applying its know-how in high-frequency power electronics to several other large-scale projects, driven in part by ongoing upgrades at synchrotron light sources around the world. Longer term, Lau sees an opportunity for developing turnkey solutions for medical applications that rely on accelerator-based systems, including the growing trend towards proton and ion-beam therapy. “It’s an expanding field and it’s definitely moving towards transistor technology,” he says.
Such innovative approaches generally don’t happen in isolation, which has prompted TRUMPF Hüttinger to organize a conference to explore the use of different accelerator systems for applications in science, industry and medicine. Taking place on 15–16 November 2022 in Freiburg, Germany, the Power Amplifiers for Particle Accelerators (PA2) conference aims to provide a forum for both end users and a range of industry partners to discuss emerging technology solutions as well as future directions for research and innovation. “We want to bring industry together with the scientific community,” says Lau. “In my experience it is always important not to stay in your own field but to talk to other experts to seed new ideas. We hope it will allow a fruitful exchange.”
|
Physics
|
Liang Wu is an assistant professor in the Department of Physics and Astronomy at the School of Arts & Sciences.
Already used in computers and MRI machines, superconductors—materials that can transmit electricity without resistance—hold promise for the development of even more advanced technologies, like hover trains and quantum computing. Yet, how superconductivity works in many materials remains a mystery that limits its applications. A new study published in Nature Physics sheds light on the superconductivity of AV3Sb5, a recently discovered family of kagome metals. The research was led by Liang Wu of the School of Arts & Sciences and conducted by Yishuai Xu, a postdoc in Wu's lab, and graduate students Zhuoliang Ni and Qinwen Deng, in collaboration with researchers from the Weizmann Institute of Science and University of California, Santa Barbara.
Since their discovery, superconductors with the chemical formula AV3Sb5—where A refers to cesium, rubidium, or potassium—have generated immense interest for their exotic properties. The compounds feature a kagome lattice, an unusual atomic arrangement that resembles and takes its name from a Japanese basket-weave pattern of interlaced, corner-sharing triangles. Kagome lattice materials have fascinated researchers for decades because they provide a window into quantum phenomena such as geometrical frustration, topology, and strong correlations.
While previous research on AV3Sb5 has discovered the coexistence of two different cooperative electronic states—the charge-density wave order and superconductivity—the nature of the symmetry breaking that accompanies these states has been unclear. In physics, symmetry refers to a physical or mathematical feature of a system that remains unchanged under certain transformations. When a material transitions from a normal, high-temperature state to an exotic, low-temperature state like superconductivity, it undergoes symmetry breaking. Wu, whose lab develops and uses time-resolved and nonlinear optical techniques to study quantum materials, set out to clarify the nature of symmetry-breaking when AV3Sb5 enters the charge-density wave phase.
AV3Sb5 exhibits what researchers call a "cascade" of symmetry-broken phases. In other words, as the system cools down, it begins to enter a symmetry breaking state, with lower and lower temperatures leading to additional broken symmetries. "In order to use superconductors for applications, we need to understand them," Wu says. "Because superconductivity develops at even lower temperatures, we need to understand the charge-density wave phase first." In its normal state, AV3Sb5 consists of a hexagonal crystal structure, composed of kagome lattices of vanadium (V) atoms coordinated by antimony (Sb) stacked on top of one another, with sheets of cesium, rubidium, or potassium in between each V-Sb layer. The structure is six-fold rotationally symmetric; when rotated by 60 degrees, it stays the same.
To find out whether AV3Sb5 retains its six-fold symmetry in the charge-density wave phase, the researchers performed scanning birefringence measurements on all three members of the AV3Sb5 family. Birefringence, or double refraction, refers to an optical property exhibited by materials with crystallographically distinct axes, a principal axis and a non-equivalent axis. When light enters the material along the non-equivalent axis, it splits in two, with each ray polarized and traveling at different speeds.
"In a kagome plane, the linear optical response should be the same along any direction, but they're not in AV3Sb5 because between the two kagome layers there's a relative shift," Wu says, explaining that the birefringence measurements revealed the difference between two orthogonal directions in the plane and a phase shift between the two layers that reduces the six-fold rotational symmetry of the materials to two-fold when they enter the charge-density wave state. "This was not clear to the physics community before."
Distinct axes are not the only explanation for the rotation of the light polarization plane. When linearly polarized light encounters a magnetic surface, its polarization plane also rotates, a phenomenon known as the magneto-optical Kerr effect. After separating out the property of birefringence by sending light along the principal axis in samples of AV3Sb5, the researchers used a second optical technique to measure the onset of the Kerr effect. For all three metals, the experiments reveal that the Kerr effect begins in the charge-density wave state. This finding indicates that the formation of charge-density waves breaks another symmetry, time-reversal symmetry. The simplest way to break time-reversal symmetry—which holds that the laws of physics remain the same whether time runs forward or backwards—is to use a permanent magnet, like those we put on a refrigerator, Wu says.
However, the Kerr effect is only observable at low temperatures with high resolution, indicating that the kagome metals are not substantially magnetic. "With these quantum materials," Wu says, he and his collaborators theorize that time-reversal symmetry is "not broken by a permanent magnet but by a circulating loop current." To confirm the nature of time-reversal symmetry breaking in the charge-density wave state, the researchers performed a third experiment in which they measured the circular dichroism, or the unequal reflectivity of left-handed and right-handed circularly polarized light, of the charge density wave phase. "We still need further work, but this finding really supports the possibility of circulating loop currents," the existence of which would suggest the unconventional nature of superconductivity in the metals, says Wu.
In 2018 Congress passed the National Quantum Initiative Act, with the goal of advancing research on quantum materials and the development of quantum technology. Quantum materials include those with topological properties and those with correlation, like the kagome metals AV3Sb5. While Wu's previous research centered on the former category and antiferromagnets, he says that the scanning optics technique that he'd developed for these studies presented a "ready and versatile tool" for studying symmetry breaking in new kagome metals.
"All superconductors are interesting because they could potentially be used as the basis for quantum computers, but before using these new superconductors for quantum computing, we need to understand the nature of the superconductivity," Wu says.
More information: Yishuai Xu et al, Three-state nematicity and magneto-optical Kerr effect in the charge density waves in kagome superconductors, Nature Physics (2022). DOI: 10.1038/s41567-022-01805-7
|
Physics
|
Born in the cradle of deep space, shooting at nearly the speed of light and harnessing energy up to a million times greater than anything achieved by the world's most powerful particle accelerator, cosmic rays are atom fragments that relentlessly rain down on Earth. They get caught in our atmosphere and mess up our satellites. They threaten the health of astronauts living in orbit, even when sparse in number. What kind of extreme cosmic factory could make such a thing, you ask? Unclear. In fact, this question has plagued scientists for over a century. But on Thursday in the journal Science, astrophysicists announced they might have uncovered an important clue for putting together a cosmic ray origin story. The short version is that they think cosmic rays come from blazars, or galaxies holding enormous black holes with energetic jets that point toward Earth -- streams so intense they're mightier even than the entire surrounding galactic region. It's the kind of phenomena one might expect fierce particles to come from."This of course means we are sitting right in the particle beam being spewed at us by the black hole," Francis Halzen, a University of Wisconsin-Madison professor of physics and lead scientist for the IceCube Neutrino Observatory, who wasn't involved in the new study, said in a statement.Here's the long version. A secret neutrino codeBasically, the new study's team used the art of deduction to figure out where these strange atom bits come from. First, they tracked down a sort of cosmic ray offshoot called a neutrino.Known as "ghost particles," neutrinos are a massive enigma in themselves. They're so evasive they interact with hardly anything, yet forcefully blast throughout the cosmos. As they travel, neutrinos don't touch even the tiniest building blocks of life -- atoms -- which means trillions of them are actually zipping through your atoms right now. You just can't tell. Specifically to cosmic rays, however, neutrinos are thought to begin somewhere along the puzzling particles' lifespan. Their legacies are connected, so to speak.Thus, the research team realized that if we can understand where astrophysical neutrinos come from, we'll have a solid idea of where cosmic rays might originate as well. Think of neutrinos as little shadowy messengers, telling us where their cosmic ray parents are. Fascinatingly, these sort of "particle messengers" are giving rise to a whole new field of astronomy called multi-messenger astronomy. Rather than rely only on light to decode the universe -- the driving force behind NASA's exceptional James Webb Space Telescope, for instance -- scientists can call on elusive particles, and even gravitational waves, to dissect the ins and outs of space-borne phenomena. "It's like feeling, hearing and seeing at the same time. You'll get a much better understanding," Marco Ajello, associate professor of Physics and Astronomy at Clemson University and author of the study, said in a statement. "The same is true in astrophysics because the insight you have from multiple detections of different messengers is much more detailed than you can get from only light."Searching from within the South PoleSo, focusing on multi-messenger astronomy, to get to the bottom of things, the scientists first analyzed what they call the "largest available neutrino data set" optimized for the search, collected from the IceCube Neutrino Observatory, a science base buried deep within the South Pole. 
In 2017, this observatory detected a neutrino that was later traced to a frightening blazar called TXS 0506+056. But there was still debate over whether these blazars really are natural particle accelerators that manufacture cosmic rays. Other experts, for example, believe cosmic rays are blips of stardust crashing through space, the product of violent supernovas illuminating the universe.
This debate ought to be put to rest, per the new study's team, which cross-checked IceCube's findings with a catalog of blazars -- PeVatron blazars, to be exact, which speed up particles to at least 10^15 electron-volts -- and obtained strong proof that the two are entangled. "In this work," the study authors wrote, "we show that blazars are unambiguously associated with high-energy astrophysical neutrinos at an unprecedented level of confidence."
"We had a hint back then (in 2017), and now we have evidence," Ajello said. "The results provide, for the first time, incontrovertible observational evidence that the sub-sample of PeVatron blazars are extragalactic neutrino sources and thus cosmic ray accelerators," study co-author Sara Buson from Julius-Maximilians-Universität in Germany, said in a statement.
Importantly, Buson also notes these results came from looking at only the "most promising" sets of IceCube neutrino data -- which means digging deeper into the background sets could offer even stronger evidence and pave the way for more discoveries going forward. As Ajello puts it, this new neutrino clue "places us a step forward in solving the century-old mystery of the origin of cosmic rays."
|
Physics
|
What happened
For the first time, scientists have entangled atoms for use as networked quantum sensors, specifically, atomic clocks and accelerometers.
The research team’s experimental setup yielded ultraprecise measurements of time and acceleration. Compared to a similar setup that does not draw on quantum entanglement, their time measurements were 3.5 times more precise, and acceleration measurements exhibited 1.2 times greater precision.
The result, published in Nature, is supported by Q-NEXT, a U.S. Department of Energy (DOE) National Quantum Information Science Research Center led by DOE’s Argonne National Laboratory. The research was conducted by scientists at Stanford University, Cornell University and DOE’s Brookhaven National Laboratory.
“The impact of using entanglement in this configuration was that it produced better sensor network performance than would have been available if quantum entanglement were not used as a resource,” said Mark Kasevich, lead author of the paper, a member of Q-NEXT, the William R. Kenan, Jr. professor in the Stanford School of Humanities and Sciences and professor of physics and of applied physics. “For atomic clocks and accelerometers, ours is a pioneering demonstration.”
What is quantum entanglement? How does it apply to sensors?
Entanglement, a special property of nature at the quantum level, is a correlation between two or more objects. When two atoms are entangled, one can measure the properties of both atoms by observing only one. This is true no matter how much distance — even if it’s light-years — separates the entangled atoms.
A helpful everyday analogy: A red marble and a blue marble are placed in a box. If you draw a red marble from the box, you know, without having to look at the other one, that it’s blue. The color of the marbles is correlated, or entangled.
In the quantum realm, entanglement is subtler. An atom can take on multiple states (colors) at once. If our marbles were like atoms, each marble would be both red and blue at the same time. Neither is fully red or blue while it sits in the box. The quantum marble “decides” its color only at the moment of revelation. And once you draw one marble of “decided” color, you know the color of its entangled partner.
To take a measurement of one member of an entangled pair is effectively to take a simultaneous reading of both.
Taking this further: Two entangled clocks are practically equivalent to a single clock with two displays. Time measurements taken using entangled clocks can be more precise than measurements from two separate, synchronized clocks.
Why it matters
Greater sensitivity in atomic clocks and accelerometers would lead to more precise timekeeping and navigation systems, such as those used in global positioning systems, in defense and in broadcast communications. Ultraprecise clocks are also used in finance and trading.
“GPS tells me where I am to about a meter right now,” Kasevich said. “But what if I wanted to know where I was to within 10 centimeters? That’s what the impact of better clocks would be.”
A note on ultraprecise clocks
One can mark the passage of time by counting the number of pulses in an electromagnetic wave, just as you would count the ticks of a clock. If you know that a particular wave pulses 6 billion times per second, you know that, once you count 6 billion crests of the wave, one second has passed. So knowing the exact frequency of a microwave gives one a precise way to track time.
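The following sketch works through that counting picture with illustrative numbers (the 6 GHz figure echoes the example above; the 10⁻¹³ fractional error is an assumed value, not one quoted by the researchers):

```python
# Counting wave crests to keep time, and what a small frequency error costs.
frequency_hz = 6e9            # a microwave that pulses 6 billion times per second
cycles_counted = 6e9          # count this many crests ...
elapsed = cycles_counted / frequency_hz
print(elapsed, "second")      # ... and one second has passed

# A fractional frequency error translates directly into a timing error:
fractional_error = 1e-13      # assumed, for illustration
print(fractional_error * 86_400, "seconds lost or gained per day")   # ~8.6 ns
```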
How it works
The entanglement: Rubidium atoms, trapped inside a cavity, are separated into two groups of about 100,000 atoms each. The groups sit between two mirrors. Light is made to bounce back and forth between the mirrors, tracing its way through the groups of atoms with every shot. The ricocheting light entangles them.
The sensing: A microwave ripples through the two groups of atoms. The atoms that happen to resonate with the microwave’s particular frequency respond by changing to a different state, like the wine glass that vibrates when a soprano hits just the right note.
Similarly, when a particular acceleration is applied to the atomic groups, some fraction of the atoms in each group responds by changing state.
The measurement: The two entangled atomic groups behave like two faces of a single clock, or two readings of one accelerometer.
The research team measured the number of atoms that changed state — the ones that vibrated like a wine glass — in each group.
Then they used the numbers to calculate the difference in the microwave frequencies applied to the two groups, and therefore the difference in the groups’ readings of time or acceleration.
Increased precision: The Kasevich team found that entanglement improves the precision in the frequency or acceleration difference read by the displays.
In their setup, the measurement of time in two locations was 3.5 times more precise when the clocks were entangled than if they were operating independently. For acceleration, the measurement was 1.2 times more precise with entanglement.
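One way to put those factors in context, assuming the sensors are limited by noise that averages down as the square root of measurement time (a common assumption that the article does not state): a 3.5-fold gain in precision is equivalent to averaging roughly 12 times longer with unentangled atoms, and a 1.2-fold gain to averaging roughly 1.4 times longer. A minimal sketch of that arithmetic:

```python
# Assumption (not from the article): precision scales as 1/sqrt(averaging time),
# so a precision gain of G is equivalent to averaging G**2 times longer without it.
def equivalent_averaging_factor(precision_gain):
    return precision_gain ** 2

print(equivalent_averaging_factor(3.5))   # ~12.25 for the entangled-clock comparison
print(equivalent_averaging_factor(1.2))   # ~1.44 for the accelerometer comparison
```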
Impact
“If you want to know how long something takes, you might look at one clock as a starting point and then run to another room to look at another clock, the end point,” Kasevich said. “Our method exploits the entanglement principle to make that comparison as precise as possible.”
The researchers also successfully networked four groups of atoms in four separate locations using this configuration.
In the team’s experiment, the two groups of atoms were separated by about 20 micrometers, close to the average width of a human hair.
Their work means that time or acceleration can be compared, with unprecedented sensitivity, between four separate, albeit close-together, locations.
“In the future, we want to push them out to longer distances. The world wants clocks whose time can be compared. It’s the same with accelerometers. There are sensing configurations where you want to be able to read out the difference in the acceleration of one group with respect to another. We were able to show how to do that,” Kasevich said.
“This is a tour de force result from Mark and his team,” said Q-NEXT Deputy Director JoAnne Hewett, who is also the SLAC National Accelerator Laboratory associate director of fundamental physics and chief research officer as well as a Stanford professor of particle physics and astrophysics. “This means we can harness entanglement to develop sensors that are far more powerful than those we use today. We are another step closer to wielding quantum phenomena to improve our everyday lives.”
This work was supported by the DOE's Office of Science National Quantum Information Science Research Centers as part of the Q-NEXT center.
About Q-NEXT
Q-NEXT is a U.S. Department of Energy National Quantum Information Science Research Center led by Argonne National Laboratory. Q-NEXT brings together world-class researchers from national laboratories, universities and U.S. technology companies with the goal of developing the science and technology to control and distribute quantum information. Q-NEXT collaborators and institutions will create two national foundries for quantum materials and devices, develop networks of sensors and secure communications systems, establish simulation and network test beds, and train the next-generation quantum-ready workforce to ensure continued U.S. scientific and economic leadership in this rapidly advancing field. For more information, visit https://q-next.org/. Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.
The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
|
Physics
|
Anomaly localization: Lung scans from patients with COVID-19 infection (rows 1–3) and healthy controls (rows 4–5). From left to right: original CT image; attention maps generated by four example latent distributions (warmer colours indicate regions of anomalies); aggregated attention map; segmentation mask; and ground truth infection mask. (Courtesy: Q Zhou et al Comput. Methods Programs Biomed. 10.1016/j.cmpb.2022.106883 ©2022, with permission from Elsevier)
An artificial neural network that can identify regions of COVID-19 lung infection in CT scans, despite being trained only on images of healthy patients, has been developed by researchers in the UK and China. The tool can also be used to generate pseudo data to retrain and enhance other segmentation models.
One of the most significant health crises of the last century, the COVID-19 pandemic is estimated to have infected more than 250 million individuals worldwide since it first emerged back in December 2019. As we have all learnt over the last two years, the successful suppression of coronavirus transmission is dependent on effective testing and quarantine of infected individuals.
CT scanning is an important diagnostic tool — and one that has the potential to serve both as an alternative to reverse transcription polymerase chain reaction (RT-PCR) tests where those are in short supply and as a way to screen for PCR false negatives.
The problem with such uses of CT scans, however, is that they are dependent on having trained radiologists to interpret them — and increasing demand in outbreak hotspots would add more strain to local medical services. One solution lies in the application of deep learning-based artificial intelligence to analyse the images, which could help clinicians screen for COVID-19 faster and also lower the expertise level needed to do so.
While various studies have shown the potential for deep learning to aid with classification of the disease on CT images, few have examined its capacity to automate the localization of disease regions. A challenge with using deep learning to complete this task, however, comes in the scarcity of annotated datasets on which to train neural networks.
In their study, computer scientist Yu-Dong Zhang of the University of Leicester and colleagues propose a new weakly supervised deep-learning framework dubbed the "Weak Variational Autoencoder for Localisation and Enhancement", or "WVALE" for short. The framework includes a neural network model named "WVAE", which uses a gradient-based approach for anomaly localization. WVAE works by transforming and then recovering the original data in order to learn about their latent features, which then allows it to identify anomalous regions.
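To make the general strategy concrete, below is a minimal, hypothetical sketch of anomaly localization by training only on healthy data. It uses per-pixel reconstruction error from a tiny autoencoder as a simpler stand-in for the gradient-based attention maps of the actual WVAE model, and the data here are random placeholders rather than CT scans:

```python
# A minimal sketch (not the authors' WVALE/WVAE code): anomaly localization by
# reconstruction error, using a tiny convolutional autoencoder trained only on
# "healthy" images. Pixels the model reconstructs poorly are flagged as anomalous.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_map(model, scan):
    """Per-pixel squared reconstruction error; high values suggest anomalous regions."""
    model.eval()
    with torch.no_grad():
        recon = model(scan)
    return (scan - recon) ** 2

# Toy usage: train on healthy scans only (random stand-in data here), then score a new slice.
model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
healthy = torch.rand(8, 1, 64, 64)          # stand-in for healthy CT slices
for _ in range(5):                          # a few illustrative training steps
    loss = nn.functional.mse_loss(model(healthy), healthy)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

suspect = torch.rand(1, 1, 64, 64)          # stand-in for a patient slice
heatmap = anomaly_map(model, suspect)       # shape (1, 1, 64, 64)
```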
The team’s approach is “weakly supervised” because they used only healthy control images to train their WVAE model — rather than a mixture of CT scans from healthy and coronavirus-infected patients. The study’s main dataset comprised CT scans from 66 patients diagnosed with COVID-19 at the Fourth People’s Hospital in Huai’an, China and 66 healthy medical examiners as controls.
The researchers found that the WVAE model is capable of producing high-quality attention maps from the CT scans, with fine borders around infected lung regions, and segmentation results comparable to those produced by many conventional, supervised segmentation models — outperforming a range of existing weakly supervised anomaly localization methods. Furthermore, they were able to go on to use pseudo data generated by this model to retrain and enhance other segmentation models. "Our study provides a proof-of-concept for weakly supervised segmentation and an alternative approach to alleviate the lack of annotation, while its independence from classification and segmentation frameworks makes it easily integratable with existing systems," says Zhang.
With their initial study complete, the researchers are now looking to explore bringing different methods to bear on their framework.
“While the gradient-based method provides good anomaly localization performance in our case, it’s a post-hoc method — which attempts to interpret models after they’re trained — and WVAE is still a black box,” Zhang explains. “We plan to look at exploring methods that can perhaps make WVAE inherently interpretable and exploit that interpretability for tasks like anomaly localization and generating pseudo segmentation data.”
The study is described in Computer Methods and Programs in Biomedicine.
|
Physics
|
Everyone has the same comment about Batman: He's cool because he's just a normal dude, but he's also a superhero. It's true, he doesn't have superpowers. However, what he does have is a combination of skills and equipment.
In the movie The Batman, we get to see him use one of his "toys"—his ascender gun. (It's also known as a grapple gun or a grappling gun.) Batman uses it to launch something like a grappling hook, which is connected to a cable. Once it attaches to a high point, an electric motor within the gun winds the cable, pulling Batman up.
In this scene, Batman is in a building with a bunch of Gotham police officers who have detained him. He doesn't think that's such a good idea. After breaking free, he runs to an interior stairwell and shoots the ascender cable up near the stairwell's top, then activates the motor to pull him up. Spoiler: He escapes. (But you probably knew that.)
Now for the physics calculations: What kind of battery or power source would his ascender need, and how much power would it use? Let's start with a bit of background on energy.
Energy for the Ascent
One of the key ideas for understanding energy is to define a system of interest, which is the collection of objects we want to study. (Of course, the most complete system is the entire universe, but it's not very practical to deal with the whole thing all at once. Instead, we want to isolate only the objects that we are interested in.)
Let's use the following system: Batman plus the ascender (and its battery) and the Earth. (You might think the Earth is a weird thing to add to the system, but just hold on. We'll get there.)
Once we have a system, we can use one of the most important concepts in physics: the work-energy principle. This says that the work done on a system is equal to the change in the system's energy. But what the heck is energy?
That's actually a hard question, but here is my best answer: Energy is not a real thing, but rather a way to keep track of different interactions. Energy also comes in different forms. For example, kinetic energy is associated with the motion of objects, and potential energy is the kind that depends on the position of objects.
So work is a way to add or take away energy from a system. In terms of forces, we define work as the following:
W = F Δr cos(θ) (Illustration: Rhett Allain)
In this equation, F is the applied force and Δr is the distance over which the force pulls (or pushes) an object. However, only the component of the force in the direction of the displacement matters—that's what the cos(θ) term is for, where θ is the angle between the force and the displacement.
Honestly, the best way to understand energy is with an example, and Batman using an ascender is a perfect situation to demonstrate the work-energy principle. So let's consider the work and changes in energy as Batman zooms up to the top floor of the building. The first thing we need to do is define our system of interest. This is actually a pretty important step—by defining the system, we can figure out which interactions we can represent as "work" and which ones as "energies."
I'm going to start with a force diagram showing Batman ascending the cable at a constant speed. (Although the Earth is part of our system of interest, I'm just going to show Batman, since the Earth is rather large.)
(Force diagram illustration: Rhett Allain)
Here you can see there are two forces that act on Batman: the downward-pulling gravitational force (mg), and the upward-pulling force from the tension in the cable (T).
Even though these two forces pull on Batman over some distance, neither one does any work on the system. The tension in the cable doesn't do any work because the cable doesn't actually move; Batman and the ascender move instead. Since the cable doesn't move, its displacement (Δr) is zero, so the work is also zero.
We could consider the work done by the gravitational force—but we won't. Forces are an interaction between two objects, in this case, Batman and the Earth. Since the Earth is also part of our system, we can't consider work done by this "internal" force. Instead of work done by this force, we will use a potential energy. You can think of a potential energy as energy stored in a system. In this case, we will call this "gravitational potential energy." It looks like this:
U_grav = mgy (Illustration: Rhett Allain)
Here, m is the mass of the object (Batman plus his stuff), g is the gravitational field (9.8 newtons per kilogram), and y is the vertical position. The awesome thing is that you can measure the position from any reference point you want, since really it's just the change in Batman's position that is going to matter.
But now we have a problem. Our work-energy equation looks like this:
W = ΔKE + ΔU_grav (Illustration: Rhett Allain)
We already said the work was zero—but the change in kinetic energy is also zero if Batman is moving up at a constant speed (since the speed doesn't change). The change in gravitational potential energy will be some positive value, since he is moving up and his y value is increasing. But that means the left side of the equation is zero and the right side is zero plus some positive value. You don't have to be a "math person" to see that something is missing.
That missing energy is the chemical potential energy in the battery. That battery energy decreases (or gets used up) as the gravitational potential energy increases (since Batman's y position increases). So we actually have this equation:
W = ΔKE + ΔU_grav + ΔE_chem (Illustration: Rhett Allain)
How much energy is needed for a battery-powered Bat-ascender? You just need to know the mass of Batman (m), the gravitational field (g), and the change in height (Δy).
Let's make some estimates. I have no idea about the mass of Batman's suit and stuff, so I'm just going with 100 kilograms. For the change in height, it's sort of difficult to see, but it's clearly more than five floors and probably fewer than 15. Let's say 10 floors. Since a story is about 4.3 meters, this puts the change in height (Δy) at 43 meters. Putting it all into the equation above, the decrease in battery energy would be about 42,000 joules.
But what does that even mean? Let's start with what a joule is: a basic unit of energy named after James Prescott Joule. (Remember, if you do cool stuff, they might name something after you too.) If you pick up a physics textbook from the floor and put it on a table, that would require about 10 joules of energy.
How big a battery would you need to pull off this stairwell escape? Well, that depends on the kind of battery you use. The iPhone 13 uses a lithium-ion battery with a capacity of 3,227 mAh, or milliamp-hours. If you know the battery voltage (it's 3.7 volts), then you can convert this energy capacity to joules. Guess what? The energy in an iPhone 13 battery is 43,000 joules. That's enough for Batman to escape—and maybe have a little extra left over so he can post his epic feat on his TikTok channel.
But what if he used AA batteries?
Different brands of batteries have different amounts of energy, but Batman would only get the best AA batteries, with an energy of around 10,000 joules. So he would only need four or five of these batteries to get him to the top of the building.
I know that seems surprising, and it is—because there's more to this calculation.
Estimation of Power
We can't only consider the energy needed to get Batman to the top of the stairwell. We also have to factor in how long this move takes. We have a quantity to describe how fast the energy of a system changes—it's called the power.
P = ΔE/Δt (Illustration: Rhett Allain)
If the change in energy (ΔE) is in joules and the change in time (Δt) is in seconds, this will give a power in units of watts. Since I already know the change in energy (that's the 42,000 joules), I just need to find the time it takes for Batman to travel the 10 floors in the video.
As is usual for a movie, the film cuts between shots and switches camera angles, so the timing is not completely clear. But as a rough estimate, I'm going to say that it took him 6 seconds to go from the ground floor to 10 stories up. That puts the power required from the battery at 7,164 watts.
Is that a lot of power? I guess so. If you wanted to compare this to the power of a car, you'd have to convert it to horsepower, and you'd get a pretty low figure: just 10 horsepower. Most cars have at least a 150-horsepower engine.
However, this is pretty good for a person. A human in top shape could produce about 800 to 1,000 watts of power for 6 seconds, according to experimental data from NASA. But not 7,000 watts. That's just not going to happen, even if that person is the Batman.
There's another problem with this ascender and its battery. Although an iPhone battery could have enough stored energy to power the ascender for a 10-storey climb, it couldn't deliver that energy fast enough to do it in 6 seconds. Suppose you have a 3.8-volt battery with a power output of 7,000 watts. This would require an electric current of over 1,800 amperes. That's a super high current. In fact, it's so high that it would heat up (or melt) the wires going to the ascender motor, which would require even more power.
A normal iPhone battery can't do this. You would need a much larger battery with a higher voltage so that you could reduce the current. But bigger batteries are heavier. And that would mean you would need a larger ascender.
Just to be clear, electric-powered ascenders are possible—check out this DIY ascender gun from Hacksmith, which is pretty awesome and looks like it works. In fact, something like this is probably a more reasonable version of what it would take to pull a human up a stairwell. But notice that this version is both larger and slower than the ascender in the movie clip. The ascender gun from Hacksmith uses batteries small enough to fit in two pants pockets—but it also has electric motors that are much bigger than Batman's gun.
That might be fine for mere mortals, but Batman wants his ascender gun to be small and concealed. (That way no one knows what he can or can't do.) So in the end, The Batman has to cheat with special effects. I'm totally fine with that, though, because it still makes for a great physics problem.
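Putting the article's estimates in one place, here is a minimal sketch that simply reproduces the arithmetic above; the mass, height, time and battery figures are the rough guesses used in the text, not measured values:

```python
# Rough estimates from the text (not measured values).
m = 100.0        # mass of Batman plus gear, kg
g = 9.8          # gravitational field, N/kg
dy = 43.0        # height gained, m (about 10 stories)
dt = 6.0         # time for the ascent, s

energy_j = m * g * dy                  # ~42,000 J of battery energy needed
power_w = energy_j / dt                # ~7,000 W average power

# iPhone-13-class battery, per the text: 3,227 mAh at 3.7 V.
battery_j = 3.227 * 3.7 * 3600         # ~43,000 J of stored energy

# Current needed to deliver that power from a single small cell.
voltage = 3.8                          # volts, as in the text's estimate
current_a = power_w / voltage          # ~1,800 A, far beyond what such a cell can supply

print(energy_j, power_w, battery_j, current_a)
```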
|
Physics
|
By Matt Williams
The Multiverse Theory, which states that there may be multiple or even an infinite number of Universes, is a time-honored concept in cosmology and theoretical physics. While the term goes back to the late 19th century, the scientific basis of this theory arose from quantum physics and the study of cosmological forces like black holes, singularities, and problems arising out of the Big Bang Theory. Together, the research team sought to determine how the accelerated expansion of the cosmos could have affected the rate of star and galaxy formation in our Universe. This accelerated rate of expansion, which is an integral part of the Lambda-Cold Dark Matter (Lambda-CDM) model of cosmology, arose out of problems posed by Einstein's Theory of General Relativity.
As a consequence of Einstein's field equations, physicists understood that the Universe would either be in a state of expansion or contraction since the Big Bang. In 1917, Einstein responded by proposing the "Cosmological Constant" (represented by Lambda), which was a force that "held back" the effects of gravity and thus ensured that the Universe was static and unchanging.
Shortly thereafter, Einstein retracted this proposal when Edwin Hubble revealed (based on redshift measurements of other galaxies) that the Universe was indeed in a state of expansion. Einstein apparently went as far as to declare the Cosmological Constant "the biggest blunder" of his career as a result. However, research into cosmological expansion during the late 1990s caused his theory to be reevaluated. In short, ongoing studies of the large-scale Universe revealed that during the past 5 billion years, cosmic expansion has accelerated. As such, astronomers began to hypothesize the existence of a mysterious, invisible force that was driving this acceleration. Popularly known as "Dark Energy", this force is also referred to as the Cosmological Constant (CC) since it counteracts the effects of gravity.
Since that time, astrophysicists and cosmologists have sought to understand how Dark Energy could have affected cosmic evolution. This is an issue since our current cosmological models predict that there must be more Dark Energy in our Universe than has been observed. However, accounting for larger amounts of Dark Energy would cause such a rapid expansion that it would dilute matter before any stars, planets or life could form. For the first study, Salcido and the team therefore sought to determine how the presence of more Dark Energy could affect the rate of star formation in our Universe. To do this, they conducted hydrodynamical simulations using the EAGLE (Evolution and Assembly of GaLaxies and their Environments) project – one of the most realistic simulations of the observed Universe.
Using these simulations, the team considered the effects that Dark Energy (at its observed value) would have on star formation over the past 13.8 billion years, and an additional 13.8 billion years into the future. From this, the team developed a simple analytic model that indicated that Dark Energy – despite the difference in the rate of cosmic expansion – would have a negligible impact on star formation in the Universe.
Timeline of the Big Bang and the expansion of the Universe. - Image Credit: NASA
They further showed that the impact of Lambda only becomes significant when the Universe has already produced most of its stellar mass, and that it decreases the total density of star formation by only about 15%.
As Salcido explained in a Durham University press release: "For many physicists, the unexplained but seemingly special amount of dark energy in our Universe is a frustrating puzzle. Our simulations show that even if there was much more dark energy or even very little in the Universe then it would only have a minimal effect on star and planet formation, raising the prospect that life could exist throughout the Multiverse."
For the second study, the team used the same simulation from the EAGLE collaboration to investigate the effect of varying degrees of the CC on the formation of galaxies and stars. This consisted of simulating Universes that had Lambda values ranging from 0 to 300 times the current value observed in our Universe. However, since the Universe's rate of star formation peaked at around 3.5 billion years before the onset of accelerating expansion (ca. 8.5 billion years ago and 5.3 billion years after the Big Bang), increases in the CC had only a small effect on the rate of star formation.
Taken together, these simulations indicated that in a Multiverse, where the laws of physics may differ widely, the effects of more dark energy and the faster cosmic expansion it drives would not have a significant impact on the rates of star or galaxy formation. This, in turn, indicates that other Universes in the Multiverse would be just about as habitable as our own, at least in theory. As Dr. Barnes explained: "The Multiverse was previously thought to explain the observed value of dark energy as a lottery – we have a lucky ticket and live in the Universe that forms beautiful galaxies which permit life as we know it. Our work shows that our ticket seems a little too lucky, so to speak. It's more special than it needs to be for life. This is a problem for the Multiverse; a puzzle remains."
However, the team's studies also cast doubt on the ability of Multiverse Theory to explain the observed value of Dark Energy in our Universe. According to their research, if we do live in a Multiverse, we should expect to observe as much as 50 times more Dark Energy than we actually do. Although their results do not rule out the possibility of the Multiverse, the tiny amount of Dark Energy we've observed would be better explained by the presence of an as-yet undiscovered law of nature. As Professor Richard Bower, a member of Durham University's Institute for Computational Cosmology and a co-author on the paper, explained:
"The formation of stars in a universe is a battle between the attraction of gravity, and the repulsion of dark energy. We have found in our simulations that Universes with much more dark energy than ours can happily form stars. So why such a paltry amount of dark energy in our Universe? I think we should be looking for a new law of physics to explain this strange property of our Universe, and the Multiverse theory does little to rescue physicists' discomfort."
These studies are timely since they come on the heels of Stephen Hawking's final theory, which cast doubt on the existence of the Multiverse and proposed a finite and reasonably smooth Universe instead. Basically, all three studies indicate that the debate about whether or not we live in a Multiverse and the role of Dark Energy in cosmic evolution is far from over. But we can look forward to next-generation missions providing some helpful clues in the future. These include the James Webb Space Telescope (JWST), the Wide Field Infrared Survey Telescope (WFIRST), and ground-based observatories like the Square Kilometer Array (SKA).
In addition to studying exoplanets and objects in our Solar System, these missions will be dedicated to studying how the first stars and galaxies formed and determining the role played by Dark Energy. What's more, all of these missions are expected to be gathering their first light sometime in the 2020s. So stay tuned, because more information – with cosmological implications – will be arriving in just a few years' time!
Source: Universe Today - Further Reading: Durham University
|
Physics
|
Louis Minion reviews Nano Comes to Life: How Nanotechnology is Transforming Medicine and the Future of Biology by Sonia Contera
The future is nano: the development of nanoscience means biology and medicine can be approached from an engineering perspective. (Courtesy: Shutterstock / FrimuFilms)
Part showcase, part manifesto, Sonia Contera's Nano Comes to Life makes the ambitious attempt to convey the wonder of recent advances in biology and nanoscience while at the same time also arguing for a new approach to biological and medical research.
Contera – a biological physicist at the University of Oxford – covers huge ground, describing with clarity a range of pioneering experiments, including building nanoscale robots and engines from self-assembled DNA strands, and the incremental but fascinating work towards artificially grown organs.
But throughout this interesting survey of nanoscience in biology, Contera weaves a complex argument for the future of biology and medicine. For me, it is here the book truly excels. In arguing for the importance of physics and engineering in biology, the author critiques the way in which the biomedical industry has typically carried out research, instead arguing that we need an approach to biology that respects its properties at all scales, not just the molecular.
As evidence for this, Contera introduces the reader to experiments on the mechanical properties of cells, showing that the right tweak or pinch can cause a cell to completely change its properties. Particularly fascinating is the description of work showing that stem cells can be made to behave more like brain cells when grown on a softer surface and more like bone when on a harder one. As Contera puts it, “Biology does not distinguish between the domains of science; it uses them all.”
However, we are being held back, she argues, by institutional inertia. Pharmaceutical companies have devised highly automated high-throughput methods of testing new drugs but still rely on a chemistry-centred approach – hardly better than blindfolded dart throwing. With new knowledge from biophysics and biomedical engineering, we can instead approach medicine from multiple scales – making nanoparticles tagged with antibodies and emitters to simultaneously treat and image, or using advances in 3D-printed tissues to investigate new processes for “organs on a chip”. These new ideas, which view biology from an engineering perspective, allow us to see ourselves as machines that are part of the physical and mechanical world around us.
Despite the complex ideas conveyed in Nano Comes to Life, I never found myself bogged down in technical details. Indeed, some of the unfamiliar concepts are illustrated with high-speed atomic force microscope images from Contera's own lab, giving us a nanoscale peek into how scientists know what is going on at such small scales.
Nano Comes to Life is aimed at the general reader as well as scientists, emphasizing and encouraging the democratization of science and its relationship to human culture. Ending on an inspiring note, Contera encourages us to throw off our fear of technology and use science to make a fairer and more prosperous future. 2022 Princeton University Press 240pp £14.99pb
|
Physics
|
These new systems will be available primarily for R&D purposes to a wide range of European users, no matter where in Europe they are located: the scientific communities, as well as industry and the public sector. The selected proposals ensure a diversity in the quantum technologies and architectures to give European users access to many different quantum technologies.
This quantum computer infrastructure will support the development of a wide range of applications with industrial, scientific and societal relevance for Europe, adding new capabilities to the European supercomputer infrastructure. Across industry and scientific communities, there is an array of important computing tasks that classical supercomputers struggle to solve. Examples of such complex problems include the optimisation of traffic flows and fundamental numerical problems in chemistry and physics for the development of new drugs and materials. This is where quantum computing can help and open the door to new approaches to solve these hard-to-compute problems. The integration of quantum computing capabilities in HPC applications will enable scientific discoveries, R&D and new opportunities for industrial innovations.
The quantum computers will be integrated in already existing supercomputers and the selected hosting entities will operate the systems on behalf of the EuroHPC JU. The quantum computers will be co-funded by the EuroHPC JU budget stemming from the Digital Europe Programme (DEP) and by contributions from the relevant EuroHPC JU participating states. The JU will co-fund up to 50% of the total cost of the quantum computers with a planned total investment of more than EUR 100 million. The exact funding arrangements for each system will be reflected in hosting agreements that will be signed soon.
More information
The hosting entities have been selected as a result of a call for expression of interest for the hosting and operation of European quantum computers integrated in HPC supercomputers, launched in March 2022.
In December 2021, the EuroHPC JU launched its first quantum computing initiative with its R&I project HPCQS. The project aims to integrate two quantum simulators, each controlling about 100+ quantum bits (qubits), in two already existing supercomputers: the supercomputer Joliot Curie of GENCI, the French national supercomputing organisation, located in France; and the JUWELS supercomputer of the Jülich Supercomputing Centre, located in Germany.
HPCQS will become an incubator for hybrid quantum-supercomputing that is unique in the world. Hybrid computing, blending the best of quantum and classical HPC technologies, will unleash new innovative potential and prepare Europe for the post-exascale era.
Related links
The Czech Republic will host the European LUMI-Q quantum computer, press release from IT4Innovations, one of the selected hosting entities.
EU deploys first quantum technology in six sites across Europe, press release from the European Commission
Background
The EuroHPC JU is a legal and funding entity created in 2018 to enable the EU and EuroHPC participating countries to coordinate their efforts and pool their resources with the objective of making Europe a world leader in supercomputing.
The mission of the EuroHPC JU is:
to develop, deploy, extend and maintain in the EU a federated, secure hyperconnected supercomputing, quantum computing, service and data infrastructure ecosystem;
to support the development and uptake of demand-oriented and user-driven innovative and competitive supercomputing and quantum computing systems based on a supply chain that will ensure the availability of components, technologies and knowledge;
and, to widen the use of that supercomputing and quantum computing infrastructure to a large number of public and private users.
To date the EuroHPC JU has already procured eight supercomputers, located across Europe: LUMI in Finland (which ranks number 3 in the world), Vega in Slovenia, MeluXina in Luxembourg, Discoverer in Bulgaria, Karolina in Czechia, LEONARDO in Italy, MareNostrum5 in Spain, and Deucalion in Portugal.
|
Physics
|
We don't have to worry too much about our Sun. It can burn our skin, and it can emit potent doses of charged material—in events called solar storms—that can damage electrical systems. But the Sun is alone up there, making things simpler and more predictable.
Other stars are locked in relationships with one another as binary pairs. A new study found a binary pair of stars that are so close to each other they orbit every 51 minutes, the shortest orbit ever seen in a binary system. Their proximity to one another spells trouble.
Binary systems whose stars are this close together are called cataclysmic variables. In cataclysmic variables, the primary star is a white dwarf; in this pair, the other star is a Sun-like star, but older. White dwarfs are tiny for stars, about the size of the Earth, but they're incredibly dense. The white dwarf's powerful gravity draws material away from its companion, the donor star. The material forms an accretion ring around the white dwarf. This process creates bright flashes at irregular or variable times as the disk heats and material falls into the white dwarf.
The stars in a cataclysmic variable (CV) must be close together for the white dwarf “vampire star” to draw material from the donor star. Astronomers know of more than 1,000 CVs, and only a dozen of those have orbits shorter than 75 minutes. But the authors of this study found the closest orbit yet. This pair of stars needs only 51 minutes to complete an orbit. This is rare, and the binary pair is evidence of a missing link in astrophysics.
The study is “A dense 0.1-solar-mass star in a 51-minute-orbital-period eclipsing binary,” and it’s published in the journal Nature. The lead author is Kevin Burdge from the Department of Physics at MIT. The stars in this study are about 3,000 light-years away in the direction of the Hercules constellation.
These stars are at the end of a long story. They’ve been companions for about 8 billion years, though they’ve aged differently. One is a white dwarf, the stellar remnant of a main sequence star that went through its red giant phase and is now just a hyperdense, fusionless core of matter. Its companion is a Sun-like star on its way to becoming a red giant and eventually a white dwarf. But the existing white dwarf is disrupting that pathway by slowly consuming it.
The larger donor star is about the same temperature as our Sun. But it’s lost so much of its mass that it’s tiny; only a tenth of the diameter of the Sun, or about the size of Jupiter. “This one star looked like the Sun, but the Sun can’t fit into an orbit shorter than eight hours — what’s up here?” Burdge said in a press release. The white dwarf is even smaller; its diameter is about 1.5 times Earth’s, while its densely packed matter means it’s about 56% as massive as the Sun. A bizarre object.
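Using the rough figures quoted here and in the study title (a 0.1-solar-mass donor about a tenth the Sun's diameter, and a white dwarf of about 56% of the Sun's mass and 1.5 Earth diameters), a back-of-the-envelope density check looks like the sketch below; the solar and Earth values are standard reference numbers, and the results are only order-of-magnitude estimates:

```python
import math

M_SUN = 1.989e30       # kg
R_SUN = 6.957e8        # m
R_EARTH = 6.371e6      # m

def mean_density(mass_kg, radius_m):
    """Mean density of a sphere of the given mass and radius, in kg per cubic metre."""
    return mass_kg / ((4.0 / 3.0) * math.pi * radius_m ** 3)

sun = mean_density(M_SUN, R_SUN)                      # ~1,400 kg/m^3
donor = mean_density(0.1 * M_SUN, 0.1 * R_SUN)        # donor star, from the quoted figures
white_dwarf = mean_density(0.56 * M_SUN, 1.5 * R_EARTH)

print(donor / sun)       # ~100, consistent with "100 times denser" than the Sun
print(white_dwarf)       # ~3e8 kg/m^3, roughly hundreds of tonnes per litre
```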
This is an illustration of a class of double stars called a cataclysmic variable. The system consists of a white dwarf star – a dense, burned-out star that has collapsed to the size of Earth and a companion that is a typical star, similar to but smaller than the Sun. Image Credit: STScI
Astronomers have discovered other eclipsing binaries, but none this close together. Not only are the pair extremely close to one another, but they eclipse each other from our line of sight. This gave the researchers multiple opportunities to observe the eclipses and to take precise measurements for both stars. This binary pair is named ZTF J1813+4251. ZTF stands for Zwicky Transient Facility, a notable public-private partnership engaged in an optical study of the northern sky looking for transient phenomena like variables. But the name isn’t that important. Instead, it’s the particular stage the pair are in that makes scientists sit up and take note.
The researchers found that the vampire star has been stripping the hydrogen away from the donor star, and now it's starting to cannibalize helium. "This is a rare case where we caught one of these systems in the act of switching from hydrogen to helium accretion," said lead author Burdge. Observing a binary star switching from hydrogen to helium accretion is essential because the switch is a missing link in astrophysics. Astronomers know of a population of CVs called helium CVs, but there was no clear evidence of how the stars in these CVs switched from hydrogen to helium. Before this study, the evolution from hydrogen accretion to helium accretion in helium CVs was unclear. Astronomers had never observed a star making the transition. But the observations of ZTF J1813+4251 have changed that. Observations showed that the donor star is about the same temperature as the Sun but 100 times denser. That density means the star has a helium-rich composition and the white dwarf companion is accreting helium rather than hydrogen.
Scientists predicted decades ago that binary stars could shrink until their orbits are ultrashort and become cataclysmic variables. As the white dwarf consumes the Sun-like star’s hydrogen, denser helium is left behind. The Sun-like star burns out, and a helium core is left behind. The heavy helium core is enough to keep the dead star in a tight orbit. The repeated observations of the stars eclipsing one another were just the beginning. With the more precise data that the researchers gathered, they performed more accurate simulations to see what might become of the pair. Those simulation results answered long-standing questions about cataclysmic variables and their shrinking orbits.
The simulations show that in about 70 million years, the pair will grow even closer until their orbital period is only 18 minutes. At that point, it’ll be a helium CV binary. That transition is “… a previously missing link between helium CV binaries and hydrogen-rich CVs,” the authors write.
In the images below, the orange dotted line, red dotted line, and blue dotted line represent different evolutionary trajectories depending on when the donor star began to lose mass to the WD companion in its lifetime. Orange is when it started at 97% of its main sequence lifetime, red is at 95%, and blue is at 94%. The black star on the red line is where ZTF J1813+4251 is. (The purple line represents the evolutionary track for another likely transitional CV named El Psc and is shown for comparison.) The team's simulation laid out an evolutionary pathway for the binary star in (a). As the stars become closer, mass loss accelerates, and the donor star's temperature rises as it tries to respond to losing mass. Then the temperature declines as the last hydrogen is fused. As the orbital period shrinks and the donor star loses mass, it expands, and its temperature drops adiabatically due to expansion. At that point, the binary star is a helium CV. (b) shows how the binary star will reach a period minimum of about 18 minutes in about 75 million years. After that, the pair will spend the next 300 million years growing apart until their period is about 30 minutes. (The y axis shows 100-million-year increments, not labelled.) (c) shows the evolution of the donor star's mass related to the orbital period, reaching just a few hundredths of a solar mass as the tracks evolve out to longer orbital periods as helium CVs. (d) shows how the donor loses its hydrogen on its way to becoming a helium CV. The star loses all of its hydrogen at about the minimum orbital period.
Gravitational waves play a role in this study, too. Burdge's specialty is astrophysical sources of gravitational and electromagnetic radiation. Gravitational waves were first measured in 2015, though they'd been predicted long before that, and they're an important area of study in astronomy. "Gravitational waves are allowing us to study the universe in a totally new way," said Burdge.
At such close proximity, this binary pair should emit gravitational waves. They have to be very close together to emit the waves, but not too close; at about 10,000 km separation, they will merge and explode, ending the emission of gravitational waves. "People predicted these objects should transition to ultrashort orbits, and it was debated for a long time whether they could get short enough to emit detectable gravitational waves. This discovery puts that to rest," Burdge said in a press release.
Burdge and his colleagues worked hard to find this binary pair. They scoured data from ZTF, looking for variables that repeatedly flashed in less than an hour, a signal that the stars eclipse each other and that the orbital period is short. First, they used an algorithm to search through ZTF data on over one billion stars. That algorithm produced about one million stars that flashed about every hour. Burdge then looked through that selection, seeking interesting signals. Eventually, he zeroed in on ZTF J1813+4251. "This thing popped up, where I saw an eclipse happening every 51 minutes, and I said, OK, this is definitely a binary," Burdge said.
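For a sense of the signal frequencies involved, the dominant gravitational-wave frequency of a circular binary is twice its orbital frequency, a standard result rather than a detail given in the article. A minimal sketch using the orbital periods quoted here:

```python
def gw_frequency_hz(orbital_period_minutes):
    """Dominant gravitational-wave frequency of a circular binary: twice the orbital frequency."""
    return 2.0 / (orbital_period_minutes * 60.0)

print(gw_frequency_hz(51))   # ~0.65 mHz at the current 51-minute orbit
print(gw_frequency_hz(18))   # ~1.9 mHz at the predicted 18-minute minimum
```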
“This is a special system,” Burdge said. “We got doubly lucky to find a system that answers a big open question and is one of the most beautifully behaved cataclysmic variables known.”
More:
Press Release: Astronomers find a “cataclysmic” pair of stars with the shortest orbit yetPublished Research: A dense 0.1-solar-mass star in a 51-minute-orbital-period eclipsing binaryUniverse Today: Astronomers Have a New Way to Find Exoplanets in Cataclysmic Binary Systems
|
Physics
|
The world's most powerful telescope has made its first observations of a planet beyond our solar system, heralding a new era of astronomy in which distant worlds can be scanned for signs of life.
The observations, from Nasa's James Webb Space Telescope, give new insights into the formation of the planet, a hot gas giant called Wasp-39b that is 700 light years away in the Virgo constellation. They also provide the first clearcut evidence for carbon dioxide in the atmosphere of a planet orbiting a distant star.
"We want to know how unique we are and what the chance is of life elsewhere in the universe," said Dr Vivien Parmentier, associate professor in physics at Oxford University and a member of the collaboration behind the work. "CO2 detection is typically one of the things we're going to be looking for. This shows we have the capability, which is extremely exciting for all of us."
A central aim of James Webb is to analyse the atmospheres of distant planets and search for biosignature gases that could indicate the presence of life on the planet below. Wasp-39b itself is not viewed as a promising candidate for life. The vast gas planet is about 1.3 times the size of Jupiter, with an average temperature of around 900C. It is so close to its host star – about one-eighth the distance between the sun and Mercury – that it only takes around four Earth days to make a complete circuit. Its proximity to the star means it is likely to be tidally locked, with one side constantly facing towards its star and the other side shrouded in unending darkness.
The planet was discovered in 2011, after astronomers spotted subtle, periodic dimming of light from its host star, caused by the planet passing in front. The latest work goes further by measuring starlight that is being filtered through the planet's atmosphere. Because different gases absorb different wavelengths of light, analysing the rainbow of starlight can indicate exactly which gases are present. Previous results from the Hubble and Spitzer telescopes had given hints of the presence of carbon dioxide, but the latest observations, due to be published in the journal Nature, give the first conclusive evidence.
Wasp-39b's vast size and cloudless atmosphere made it an ideal first target. Astronomers now plan to apply the same techniques to analyse the atmospheres of smaller, rocky planets that are viewed as potentially habitable, such as those in the Trappist-1 star system. They will be looking for Earth-like atmospheres, dominated by nitrogen, carbon dioxide and water vapour, and an overall balance of gases that hints at a contribution from biological processes.
"We're looking for a combination of gases that we can't easily explain with our understanding of chemistry, which could indicate that something is producing it," said Dr Jo Barstow, an astronomer at The Open University and a member of the JWST collaboration behind the paper.
Observing planetary atmospheres will also help astronomers distinguish between small, rocky planets that are more Earth-like and those closer to Venus, which is sometimes referred to as Earth's evil twin due to its blazing 470°C surface temperature and its dense, toxic atmosphere. "It was probably a bit luck of the draw that Venus ended up so inhospitable and Earth ended up with life," said Barstow. "It might've been a very small tipping point that drove them in such different directions."
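As a hedged illustration of the transit method described above: the fraction of starlight blocked during a transit is roughly the square of the planet-to-star radius ratio, and the planet looks slightly larger at wavelengths its atmosphere absorbs. The sketch below assumes a roughly Sun-sized host star, which is an assumption made for illustration rather than a figure from the article:

```python
# Illustrative transit-depth arithmetic, not the JWST analysis itself.
R_JUPITER = 7.1492e7      # m
R_SUN = 6.957e8           # m

def transit_depth(planet_radius_m, star_radius_m):
    """Fraction of starlight blocked when the planet crosses the star's disc."""
    return (planet_radius_m / star_radius_m) ** 2

r_planet = 1.3 * R_JUPITER            # Wasp-39b's approximate size, per the article
r_star = 1.0 * R_SUN                  # assumed Sun-like host (not a figure from the article)

depth = transit_depth(r_planet, r_star)
# A gas that absorbs at some wavelength makes the planet look slightly larger there,
# deepening the transit by a tiny, wavelength-dependent amount.
depth_with_absorbing_gas = transit_depth(1.01 * r_planet, r_star)

print(depth)                             # ~1.8% of the starlight blocked
print(depth_with_absorbing_gas - depth)  # the small extra dip that reveals the gas
```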
|
Physics
|
If you thought entangling qubits using the Fibonacci sequence was confusing, you'd better hold onto something. A team of physicists recently found that quantum systems can imitate wormholes, theorized shortcuts in spacetime, in that the systems allow the instantaneous transit of information between remote locations.
The research team thinks their findings could have implications for probing quantum gravity—the catch-all term for efforts to reconcile quantum mechanics with gravity, which doesn't affect quantum particles the way it does classical objects. The research was published this week in Nature.
"The relationship between quantum entanglement, spacetime, and quantum gravity is one of the most important questions in fundamental physics and an active area of theoretical research," said Maria Spiropulu, a physicist at the California Institute of Technology and the paper's lead author, in an institute release. "We are excited to take this small step toward testing these ideas on quantum hardware and will keep going."
Let's slow down momentarily. To be clear, the researchers did not literally send quantum information through a rupture in spacetime, which in theory could connect two separate regions of spacetime. (Imagine folding a piece of paper and stabbing a pencil through the two layers. The paper is spacetime, and you now have a portal between two very distant areas.)
An idea floating around in theoretical physics is that wormholes are equivalent to quantum entanglement, which Einstein famously referred to as "spooky action at a distance." That means that the measured properties of entangled quantum particles, such as their spins, remain correlated even at great distances. Because quantum particles have this unique connection, they're a great test-bed for teleportation.
In 2017, a different team demonstrated that the way theorized wormholes in spacetime are described gravitationally is equivalent to the transmission of quantum information. The recent team has been looking at the issue themselves for a few years. They wanted to show that, not only is the relationship equivalent, but that transmitted information can be described gravitationally or vis-a-vis quantum entanglement. To do this, the researchers used the Sycamore quantum processor at Google.
"We performed a kind of quantum teleportation equivalent to a traversable wormhole in the gravity picture," said Alexander Zlokapa, a graduate student at MIT and a part of the team, in the release. "To do this, we had to simplify the quantum system to the smallest example that preserves gravitational characteristics so we could implement it on the Sycamore quantum processor at Google."
The team put a qubit (a quantum bit) into a special quantum system and observed information exiting from another system. The information they had put into one quantum system had traveled through the quantum equivalent of a wormhole to exit from the other system, according to their paper. The teleportation of the quantum information was simultaneously what was expected from a quantum physical perspective and from the gravitational understanding of how an object would travel through a wormhole, the researchers said.
The team plans to build more complex quantum systems to test how this quantum information transmission might change in a more complex experimental set-up.
It's been 87 years since Einstein and his colleagues described wormholes—perhaps physicists will have cracked this multi-dimensional egg before the idea turns 100.
More: Google Scientists Are Using Quantum Computers to Study Wormholes
|
Physics
|
Travelling wave: new computational research has revealed the role of friction in how dominoes topple. (Courtesy: iStock/Soulmemoria)
Inspired by a video on YouTube, two researchers have uncovered new insights into the physics of toppling dominoes. Through an extensive set of simulations, David Cantor at Canada's Polytechnique Montréal, together with Kajetan Wojtacki at the Polish Academy of Sciences, showed that the speed of a wave of falling dominoes is affected by two types of friction, as well as the spacing between the dominoes.
In 2017, Destin Sandlin, host of the YouTube channel SmarterEveryDay, posted a video called “Dominoes – HARDCORE Mode” – where he used a high-speed camera to film a chain of toppling dominoes. He noticed that the speed of the resulting wavefront – which propagates as each domino falls and strikes its neighbour – was affected by the friction between the dominoes and the surface they were placed on.
On smooth, low-friction hardwood, each domino appeared to backslide as it fell – in contrast to high-friction felt, where the bottom of each domino largely stayed in place. On low-friction hardwood, a domino struck its neighbour further down, slightly lowering the speed of the wavefront. Yet under the limitations of his experiment, Sandlin soon realized that the problem was far more complex than he had anticipated – leading him to admit: "this has broken me. I do not understand dominoes".
Toppling simulations
In a new study, Cantor and Wojtacki delved further into the problem by simulating the toppling of 200 evenly spaced dominoes. Across 1210 simulations, they examined a wide range of spacings between dominoes, while also varying surface friction, and the friction between neighbouring dominoes.
The duo discovered that when dominoes are spaced apart by half their thickness, increasing the friction between them causes the wave to slow down, since the dominoes absorb more of its energy. In contrast, increasing domino-surface friction increased the wave speed in some cases, for the same reasons that Sandlin highlighted in his video.
Yet for spacings between 1.5 and 5 times the dominoes' thickness, the simulations showed that domino-surface friction had little effect on the wave speed. This suggested that each domino gains more kinetic energy as it falls, making its neighbour less likely to backslide, regardless of surface friction.
Backsliding to a halt
For spacings larger than three times the dominoes' thickness, Cantor and Wojtacki showed that the wave could become unstable when domino-domino friction is high and domino-surface friction is low. This combination would cause dominoes to backslide too far to reach their neighbours, bringing the wave to a stop. One other interesting result is that once the coefficient of domino-domino friction reaches 0.4, any further increase in friction does not seem to affect the wave's propagation speed. This could be because the motion was no longer significantly affected by the dominoes sliding against each other. This same saturation effect is commonly found in related systems such as the friction between grains in a pile of sand – where friction places an upper limit on the steepness of the pile.
Based on these results, Cantor and Wojtacki constructed a law to predict wave propagation speeds. This incorporates domino spacing, as well as both types of friction. This law is in close agreement with past experiments – but more work is needed to fully uncover the physical mechanisms responsible.
The research is reported in Physical Review Applied.
|
Physics
|
“Exascale” sounds like a science-fiction term, but it has a simple and very nonfictional definition: while a human brain can perform about one simple mathematical operation per second, an exascale computer can do at least one quintillion calculations in the time it takes to say, “One Mississippi.”
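Taken at face value, that comparison works out as in the minimal sketch below, using the article's one-operation-per-second human and a one-quintillion-operations-per-second machine:

```python
exa_ops_per_second = 1e18        # one quintillion operations per second
human_ops_per_second = 1.0       # the article's rough figure for a person

seconds_for_human = exa_ops_per_second / human_ops_per_second
years_for_human = seconds_for_human / (60 * 60 * 24 * 365.25)

print(years_for_human)    # ~3.2e10: about 32 billion years to match one exascale second
```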
In 2022 the world’s first declared exascale computer, Frontier, came online at Oak Ridge National Laboratory—and it’s 2.5 times faster than the second-fastest-ranked computer in the world. It will soon have better competition (or peers), though, from incoming examachines such as El Capitan, housed at Lawrence Livermore National Laboratory, and Aurora, which will reside at Argonne National Laboratory.
It’s no coincidence that all of these machines find themselves at facilities whose names end with the words “national laboratory.” The new computers are projects of the Department of Energy and its National Nuclear Security Administration (NNSA). The DOE oversees these labs and a network of others across the country. NNSA is tasked with keeping watch over the nuclear weapons stockpile, and some of exascale computing’s raison d’être is to run calculations that help maintain that arsenal. But the supercomputers also exist to solve intractable problems in pure science.
When scientists are finished commissioning Frontier, which will be dedicated to such fundamental research, they hope to illuminate core truths in various fields—such as learning about how energy is produced, how elements are made and how the dark parts of the universe spur its evolution—all through almost-true-to-life simulations in ways that wouldn’t have been possible even with the nothing-to-sniff-at supercomputers of a few years ago.
“In principle, the community could have developed and deployed an exascale supercomputer much sooner, but it would not have been usable, useful and affordable by our standards,” says Douglas Kothe, associate laboratory director of computing and computational sciences at Oak Ridge. Obstacles such as huge-scale parallel processing, exaenergy consumption, reliability, memory and storage—along with a lack of software to start running on such supercomputers—stood in the way of those standards. Years of focused work with the high-performance computing industry lowered those barriers to finally satisfy scientists.
Frontier can process seven times faster and hold four times more information in memory than its predecessors. It is made up of nearly 10,000 CPUs, or central processing units—which perform instructions for the computer and are generally made of integrated circuits—and almost 38,000 GPUs, or graphics processing units. GPUs were created to quickly and smoothly display visual content in gaming. But they have been reappropriated for scientific computing, in part because they’re good at processing information in parallel.
Inside Frontier, the two kinds of processors are linked. The GPUs do repetitive algebraic math in parallel. “That frees the CPUs to direct tasks faster and more efficiently,” Kothe says. “You could say it’s a match made in supercomputing heaven.” By breaking scientific problems into a billion or more tiny pieces, Frontier allows its processors to each eat their own small bite of the problem. Then, Kothe says, “it reassembles the results into the final answer. You could compare each CPU to a crew chief in a factory and the GPUs to workers on the front line.”
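As a loose illustration of that divide-and-reassemble pattern — not Frontier's actual software stack, which relies on MPI and GPU kernels rather than Python — a minimal sketch of "crew chief splits the job, workers each take a bite, results are reassembled" might look like this:

```python
from multiprocessing import Pool

def partial_sum_of_squares(bounds):
    # Each worker does simple, repetitive arithmetic on its own piece of the problem.
    lo, hi = bounds
    return sum(x * x for x in range(lo, hi))

if __name__ == "__main__":
    N, n_workers = 10_000_000, 8
    step = N // n_workers
    # The "crew chief" splits the problem into independent pieces, one per worker.
    pieces = [(i * step, N if i == n_workers - 1 else (i + 1) * step)
              for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partial_results = pool.map(partial_sum_of_squares, pieces)
    # Reassemble the partial answers into the final result.
    print(sum(partial_results))
```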
The 9,472 different nodes in the supercomputer—each essentially its own not-so-super computer—are also all connected in such a way that they can pass information quickly from one place to another. Importantly, though, Frontier doesn’t just run faster than machines of yore: it also has more memory and so can run bigger simulations and hold tons of information in the same place it’s processing those data. That’s like keeping all the acrylics with you while you’re trying to do a paint-by-numbers project rather than having to go retrieve each color as needed from the other side of the table.
With that kind of power, Frontier—and the beasts that will follow—can teach humans things about the world that might have remained opaque before. In meteorology, it could make hurricane forecasts less fuzzy and frustrating. In chemistry, it could experiment with different molecular configurations to see which might make great superconductors or pharmaceutical compounds. And in medicine, it has already analyzed all of the genetic mutations of SARS-CoV-2, the virus that causes COVID—cutting the time that calculation takes from a week to a day—to understand how those tweaks affect the virus’s contagiousness. That saved time allows scientists to perform ultrafast iterations, altering their ideas and conducting new digital experiments in quick succession.
With this level of computing power, scientists don’t have to make the same approximations they did before, Kothe says. With older computers, he would often have to say, “I’m going to assume this term is inconsequential, that term is inconsequential. Maybe I don’t need that equation.” In physics terms, that’s called making a “spherical cow”: taking a complex phenomenon, like a bovine, and turning it into something highly simplified, like a ball. With exascale computers, scientists hope to avoid cutting those kinds of corners and simulate a cow as, well, essentially a cow: something that more closely approaches a representation of reality.
Frontier’s upgraded hardware is the main factor behind that improvement. But hardware alone doesn’t do scientists that much good if they don’t have software that can harness the machine’s new oomph. That’s why an initiative called the Exascale Computing Project (ECP)—which brings together the Department of Energy and its National Nuclear Security Administration, along with industry partners—has sponsored 24 initial science-coding projects alongside the supercomputers’ development.
Those software initiatives can’t just take old code—meant to simulate, say, the emergence of sudden severe weather—plop it onto Frontier and say, “It made an okay forecast at lightning speed instead of almost lightning speed!” To get a more accurate result, they need an amped-up and optimized set of codes. “We're not going to cheat here and get the same not-so-great answers faster,” says Kothe, who is also ECP’s director.
But getting greater answers isn’t easy, says Salman Habib, who’s in charge of an early science project called ExaSky. “Supercomputers are essentially brute-force tools,” he says. “So you have to use them in intelligent ways. And that's where the fun comes in, where you scratch your head and say, ‘How can I actually use this possibly blunt instrument to do what I really want to do?’” Habib, director of the computational science division at Argonne, wants to probe the mysterious makeup of the universe and the formation and evolution of its structures. The simulations model dark matter and dark energy’s effects and include initial conditions that investigate how the universe expanded right after the big bang.
Large-scale astronomical surveys—for instance, the Dark Energy Spectroscopic Instrument in Arizona—have helped illuminate those shady corners of the cosmos, showing how galaxies formed and shaped and spread themselves as the universe expands. But data from these telescopes can’t, on their own, explain the why of what they see.
Theory and modeling approaches like ExaSky might be able to do so, though. If a theorist suspects that dark energy exhibits a certain behavior or that our conception of gravity is off, they can tweak the simulation to include those concepts. It will then spit out a digital cosmos, and astronomers can see the ways it matches, or doesn’t match, what their telescopes’ sensors pick up. “The role of a computer is to be a virtual universe for theorists and modelers,” Habib says.
ExaSky extends algorithms and software written for lesser supercomputers, but simulations haven’t yet led to giant breakthroughs about the nature of the universe’s dark components. The work scientists have done so far offers “an interesting combination of being able to model it but not really understand it,” Habib says. With exascale computers, though, astronomers such as Habib can simulate a larger volume of space, using more cowlike physics, in higher definition. Understanding, perhaps, is on the way.
Another early Frontier project called ExaStar, led by Daniel Kasen of Lawrence Berkeley National Laboratory, will investigate a different cosmic mystery. This endeavor will simulate supernovae—the end-of-life explosions of massive stars that, in their extremity, produce heavy elements. Scientists have a rough idea of how supernovae play out, but no one actually knows the whole-cow version of these explosions or how heavy elements get made within them.
In the past, most supernova simulations simplified the situation by assuming stars were spherically symmetric or by using simplified physics. With exascale computers, scientists can make more detailed three-dimensional models. And rather than just running the code for one explosion, they can do whole suites, including different kinds of stars and different physics ideas, exploring which parameters produce what astronomers actually see in the sky.
“Supernovae and stellar explosions are fascinating events in their own right,” Kasen says. “But they’re also key players in the story of the universe.” They provided the elements that make up Earth and us —and the telescopes that look beyond us. Although their extreme reactions can’t quite be replicated in physical experiments, digital trials are both possible and less destructive.
A third early project is examining phenomena that are closer to home: nuclear reactors and their reactions. The ExaSMR project will use exascale computing to figure out what’s going on beneath the shielding of “small modular reactors,” a type of facility that nuclear-power proponents hope will become more common. In earlier days supercomputers could only model one component of a reactor at a time. Later they could model the whole machine but only at one point in time—getting, say, an accurate picture of when it first turns on. “Now we're modeling the evolution of a reactor from the time that it starts up over the course of an entire fuel cycle,” says Steven Hamilton of Oak Ridge, who’s co-leading the effort.
Hamilton’s team will investigate how neutrons move around and affect the chain reaction of nuclear fission, as well as how heat from fission moves through the system. Figuring out how the heat flows with both spatial and chronological detail wouldn’t have been possible at all before because the computer didn’t have enough memory to do the math for the whole simulation at once. “The next focus for us is looking at a wider class of reactor designs” to improve their efficiency and safety, Hamilton says.
Of course, nuclear power has always been the flip side of that other use of nuclear reactions: weapons. At Lawrence Livermore, Teresa Bailey leads a team of 150 people, many of whom are busy preparing the codes that simulate weapons to run on El Capitan. Bailey is associate program director for computational physics at Lawrence Livermore, and she oversees parts of the Advanced Simulation and Computing project—the national security side of things. Teams from the NNSA labs—supported by ECP and the Advanced Technology Development and Mitigation program, a more weapons-oriented effort—worked on R&D that helps with modernizing the weapons codes.
Ask any scientist whether computers like Frontier, El Capitan and Aurora are finally good enough, and you’ll never get a yes. Researchers would always take more and better analytical power. And there’s extrinsic pressure to keep pushing computing forward: not just for bragging rights, although those are cool, but because better simulations could lead to new drug discoveries, new advanced materials or new Nobel Prizes that keep the country on top.
All those factors have scientists already talking about the “post-exascale” future—what comes after they can do one quintillion math problems in one second. That future might involve quantum computers or augmenting exascale systems with more artificial intelligence. Or maybe it’s something else entirely. Maybe, in fact, someone should run a simulation to predict the most likely outcome or the most efficient path forward.
|
Physics
|
Scientists at Lawrence Livermore National Laboratory passed a major fusion milestone in December, igniting a fusion reaction that for a fleeting moment produced more energy than was used to trigger it.The achievement is the high-water mark for fusion research, a field that produced thermonuclear weapons more than 70 years ago but still no reactor that could generate electrical power. The scientific and engineering challenges of controlled fusion are formidable.But what does the experiment at LLNL's National Ignition Facility, aka NIF, mean for science and for the dream of a new energy source that'll power our homes and cars without releasing any of the carbon dioxide?In short, it's a big deal and fine to applaud, but it doesn't mean a green energy revolution is imminent. It'll still be years before fusion power progress bears fruit — likely a decade or so — and it's still not clear if fusion will ever be cheap enough to radically transform our power grid. Continuing today's investments in solar and wind is critical to combating climate change. Here's a look at what's happened and what's still to come.What is fusion?Fusion occurs when two lighter elements like hydrogen or helium merge into a single, heavier element. This nuclear reaction releases a lot of energy, as exhibited by the biggest fusion furnace around, the sun.It's harder to get fusion to occur on Earth, though, because atomic nuclei are positively charged and therefore repel each other. The sun's enormous mass produces tremendous pressure that overcomes that repulsion, but on Earth, other forces are required.There are two general approaches to fusion: inertial and magnetic confinement. Inertial confinement usually uses lasers to zap a pellet with a lot of power, triggering an explosion that compresses the fusion fuel. That's the method NIF uses.The other approach uses magnetic fields. It's more widespread among companies trying to commercialize fusion energy.What did the experiment at NIF accomplish?It crossed a critical threshold for fusion where the energy that the fusion reaction generated — 3.15 million joules — exceeded the 2.05 megajoules the lasers pumped out to trigger the reaction. Fusion researchers denote the ratio of output energy to input energy with the letter Q, and this is the first time a fusion reaction surpassed Q = 1.Fusion reactors will have to reach a threshold of Q = 10 before energy generation is practical. That's what everybody is aiming for, including another massive government-funded project called ITER in France. And fusion reactors will have to reach Q = 10 much more frequently than NIF can.In some ways, it's an academic milestone, one fusion experiments have nudged toward for decades. But given fusion's reputation for not ever getting there, it's an important proof of what's possible. Think a little bit more carefully before you repeat that oft-quoted snarky remark that fusion is the energy source of the future and always will be.What does the NIF experiment mean for green power?Not a huge amount, for a few reasons. For one thing, most commercial fusion energy projects are using various forms of magnetic confinement, not NIF's laser-based approach, so the engineering challenges are different. For another, NIF is a gargantuan, $3.5 billion national lab project funded to research nuclear weapons, not a project designed to produce reliable energy for the grid at the most competitive cost."Don't expect future fusion plants to look anything like NIF," said Princeton physicist Wilson Ricks in a tweet. 
Huge inefficiencies in NIF's lasers and in the conversion of fusion heat to electrical power mean its design is inherently impractical. In comparison, "magnetic confinement fusion holds some real promise," he tweeted.Lowering fusion's cost is critical to its success since it'll have to compete against zero-carbon alternatives like today's fission-based nuclear reactors that can generate a steady supply of power and renewables like wind and solar that are cheaper but intermittent."Fusion's first competitor is fission," Ricks and other researchers at the Princeton Plasma Physics Laboratory concluded in an October research paper, not yet peer reviewed, that assesses fusion's prospects on the electrical grid. They expect that if fusion's high costs can come down enough, it could replace the need for future fission plants, and if lowered further, could compete against the combination of solar and energy storage.NIF is a big, complicated site. If fusion power plants can be built in cheaper, smaller units that are more like something coming off a factory line, production costs should decrease. That's thanks to a phenomenon called Wright's Law, the experience curve or the learning curve, which has steadily lowered costs for solar and wind. The bigger and more customized a fusion plant is, the less costs will drop and the less competitive fusion will be.Are there at least some less direct benefits from NIF's results?Yes. Scientists could benefit somewhat from the NIF experiment by updating fusion physics models to account for the fact that it's supplying its own heat instead of relying on external sources, said Andrew Holland, chief executive of the Fusion Industry Association, an advocacy group for the industry.And the attention could help, too, especially given longrunning skepticism about fusion energy. TAE Technologies CEO Michl Binderbauer called NIF's result "a huge stepping stone into the dawn of the fusion age," and said it's an important illustration that fusion energy really is plausible.Investors have noticed, too. Downloads of the Fusion Industry Association's annual report, which details the $4.8 billion in venture capital investments in fusion energy startups, increased tenfold since the NIF achievement was announced, Holland said. Many of those requesting it are from investment firms, he added.How does fusion work at NIF?NIF triggers fusion using 192 powerful infrared lasers with a combined energy level of 4 megajoules — about the same as a two-ton truck traveling at 100mph. That's converted first into 2 megajoules of ultraviolet light, then into X-rays that strike a peppercorn sized pellet of fusion fuel.The intense X-rays cause the outer layer of the pellet to blow off explosively, compressing the pellet interior and triggering fusion. The heat from that fusion sustains the reaction until it runs out of fuel or becomes lopsided and falters.The National Ignition Facility at Lawrence Livermore National Laboratory is the size of three football fields. Lawrence Livermore National Laboratory Nuclei? Hydrogen? Catch me up on atomic physics, pleaseSure! Here's a quick refresher.Everything on Earth is made of tiny atoms, each consisting of a central nucleus and a cloud of negatively charged electrons. The nucleus is made of neutrons and positively charged protons. The more protons in the nucleus, the heavier the element is.Hydrogen usually has one proton and one electron. 
An unusual variety called deuterium has a neutron, too, and using nuclear reactors or fusion reactors, you can make a third variety called tritium with two neutrons.Chemical reactions, like iron rusting or wood burning, occur when those positive and electrical charges cause atoms to interact. In comparison, nuclear reactions occur when the nuclei of atoms split apart or join together. Here on Earth, it's harder to marshal the required forces to get nuclear reactions to take place, which is why it's easier to make a steam engine than a nuclear bomb.When you heat atoms up enough, they get so energetic that the electrons are stripped loose. The resulting cloud of negatively charged electrons and positively charged nuclei is called a plasma, a more exotic state of matter than the solids, liquids and gases that we're used to at room temperature here on Earth.The sun is made of plasma, and fusion reactors need it, too, to get those hydrogen nuclei to bounce around energetically enough. A convenient property of plasmas is that their electrically charged particles can be manipulated with magnetic fields. That's crucial to many fusion reactor designs.What do you use for fusion fuel?NIF and most other fusion projects use the two heavy versions of hydrogen, deuterium and tritium, called DT fuel. But there are other options, including hydrogen-boron and deuterium-helium-3, a form of helium with only one neutron instead of the more common two.To get deuterium and tritium to fuse, you need to heat a plasma up to a whopping temperature of about 100 million degrees Celsius (180 million degrees Fahrenheit). Other reactions are even higher, for example about a billion degrees for hydrogen-boron fusion.Deuterium can be filtered out of ordinary water, but tritium, which decays away radioactively over a few years, is harder to come by. It can be manufactured in nuclear reactors and, in principle, in future fusion reactors, too. Managing tritium is complex, though, because it's used to boost nuclear weapon explosions and thus is carefully controlled.How do you turn that fusion reaction into power?The deuterium-tritium fusion reaction produces fast-moving solo neutrons. Their kinetic energy can be captured in a "blanket" of liquid that surrounds the fusion reactor chamber and heats up as the neutrons collide.That heat is then transferred to water that boils and powers conventional steam turbines. That technology is well understood, but nobody has yet connected it to a fusion reactor. Indeed the first generation of fusion power reactors being built today are designed to exceed Q=1, but not to capture power. That'll wait for the pilot plants that are expected to arrive in the next wave of development.How is fusion different from fission?Fission, which powers today's nuclear reactors, is the opposite of fusion. In fission, heavy elements like uranium split apart into lighter elements, releasing energy in the process.Humans have been able to achieve fusion for decades with thermonuclear weapons. These designs slam material like uranium or plutonium together to trigger a fission explosion, and that provides the tremendous energy needed to initiate the secondary and more powerful fusion reaction.In bombs, the process occurs in a fraction of a second, but for energy production, fusion must be controlled and sustained.Do fusion reactors create radioactive waste?Yes, generally, but it's not nearly as troublesome as with fission reactors. 
For one thing, most of the radioactive emissions are short-lived alpha particles — helium nuclei with a pair of protons and a pair of neutrons — that are easily blocked. The fast-moving neutrons can collide with other materials and create other radioactive materials.Fusion reactors' neutron output generally will degrade components, requiring periodic replacement that could require downtime lasting perhaps a few months every few years. It's vastly easier to handle than the high-level nuclear waste of fission power plants, though.Hydrogen-boron fusion is harder to achieve than deuterium-tritium fusion, but part of its appeal is that it doesn't produce any neutrons and attendant radioactive materials. The most prominent company pursuing this approach is TAE Technologies.What are the safety risks of fusion power?Fusion power plants don't have the meltdown risks that have caused problems with fission reactors like the Fukushima and Chernobyl sites. When a fusion reaction goes awry, it just fizzles out.But there still are significant operational issues that you'll see at major industrial sites, including a lot of electrical power and high-pressure steam. In other words, the big problems are more like those you'd find at an industrial site than at one of today's fission nuclear power plants.So there are real advantages to fusion. NIF helps show that there's a future for fusion energy. But there's still a very long way to go.
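As a quick numerical footnote to the Q discussion above, here is a minimal sketch; the two energy figures are the ones reported for the December shot, and the Q = 10 practicality threshold is also quoted above.

```python
# Fusion gain Q is the ratio of fusion energy released to laser energy delivered.
laser_energy_mj = 2.05    # megajoules delivered by NIF's lasers
fusion_energy_mj = 3.15   # megajoules released by the fusion reaction

q = fusion_energy_mj / laser_energy_mj
print(f"Q = {q:.2f}")                       # ~1.54, above the break-even threshold of Q = 1
print(f"Shortfall vs Q = 10: {10 / q:.1f}x")  # roughly a factor of 6.5 still to go for practical power
```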
|
Physics
|
In the much-missed student quiz show Blockbusters, teenagers would ask host Bob Holness for a letter from a hexagonal grid. How we laughed when a contestant asked for a P!Holness would reply with a question in the following style: What P is…For example: What P is an area of cutting-edge mathematical research and also a process in the making of an espresso?The answer is the subject of today’s puzzle: percolation.Percolation is an important research area that emerged from statistical physics and is concerned with how fluids flow through porous materials. French mathematician Hugo Duminil-Copin won the Fields Medal, maths’ most high-profile prize, earlier this month for his work in this area.Today’s perplexing percolation poser concerns the following Blockbusters-style hexagonal grid:The grid above shows a 10x10 hexagonal tiling of a rhombus (i.e. a diamond shape), plus an outer row that demarcates the boundary of the rhombus. The boundary row on the top right and the bottom left are coloured blue, while the boundary row on the top left and the bottom right are white.If we colour each hexagon in the rhombus either blue or white, one of two things can happen. Either there is a path of blue hexagons that connects the blue boundaries, such as here:Or there is no path of blue hexagons that connects the blue boundaries, such as here:There are 100 hexagons in the rhombus. Since each of these hexagons can be either white or blue, the total number of possible configurations of white and blue hexagons in the rhombus is 2 x 2 x … x 2 one hundred times, or 2¹⁰⁰, which is about 1,000,000,000,000,000,000,000,000,000,000.In how many of these configurations is there a path of blue hexagons that connects the blue boundaries?The answer requires a simple insight. Indeed, it is the insight on which the quiz show Blockbusters relied.I’ll be back at 5pm UK with the answer. Meanwhile, NO SPOILERS.The hexagonal grid is a basic model in percolation theory: the path between boundaries represents the ability of a fluid to pass across the grid. I’ll explain more about the relevance and importance of this model in the 5pm post that reveals the answer to the puzzle.For clarification: a path of hexagons means a sequence of adjacent hexagons that are the same colour.Thanks to Ariel Yadin, of Ben-Gurion University in Israel, for suggesting this puzzle.I set a puzzle here every two weeks on a Monday. I’m always on the look-out for great puzzles. If you would like to suggest one, email me.Here’s a final question: What T is the latest title in the children’s book series Football School that I write with Ben Lyttleton, which is full of hilarious questions about the world’s most popular sport? Yes, it’s The Greatest Ever Quiz Book, out now!I give school talks about maths and puzzles (online and in person). If your school is interested please get in touch.
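For readers who want to tinker before the 5pm reveal (and without spoiling the answer), here is a minimal sketch that checks whether a given colouring of an n×n rhombus of hexagons contains a blue path between two opposite boundaries. The axial-coordinate adjacency used here — each cell (r, c) touching six neighbours — is one standard convention for a hexagonal rhombus and is my assumption, not something specified in the puzzle text; likewise the choice of which two sides count as the blue boundaries.

```python
from collections import deque

# Hexagonal rhombus in axial coordinates: cell (r, c) with 0 <= r, c < n.
# Assumed six-neighbour adjacency for a hex tiling:
NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]

def blue_crossing(grid):
    """Return True if the blue hexagons (True cells) connect the c = 0 side
    of the rhombus to the c = n-1 side."""
    n = len(grid)
    queue = deque((r, 0) for r in range(n) if grid[r][0])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if c == n - 1:
            return True
        for dr, dc in NEIGHBOURS:
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

# Example: a 4x4 colouring with a blue path snaking across the rhombus.
example = [
    [True,  True,  False, False],
    [False, True,  True,  False],
    [False, False, True,  True ],
    [False, False, False, True ],
]
print(blue_crossing(example))  # True
```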
|
Physics
|
In this episode of the Physics World Weekly podcast, collision expert Michael Hall explains how Newtonian physics is used to piece together what happened in motor vehicle accidents, sometimes revealing insurance fraud. Hall is a physicist and head of research at GBB – a company in Preston, UK, that provides impartial scientific, forensic and engineering advice on traffic collisions. He has also written an article for Physics World about the physics of car crashes. It is called “Using Newton’s laws to weed out bogus car-crash claims”.
Also in this episode, Physics World’s Margaret Harris reports back from the 2022 Institute of Physics Business Awards gala, where seven companies were honoured for their innovations. These include two firms that make technologies for diagnosing cancer and a company that has just won a £60m contract to build a quantum computer.
|
Physics
|
Illustration of a chip comprising two entangled quantum light sources. Credit: Peter Lodahl
In a new breakthrough, researchers at the University of Copenhagen, in collaboration with Ruhr University Bochum, have solved a problem that has caused quantum researchers headaches for years. The researchers can now control two quantum light sources rather than one. Trivial as it may seem to those uninitiated in quantum, this colossal breakthrough allows researchers to create a phenomenon known as quantum mechanical entanglement. This, in turn, opens new doors for companies and others to exploit the technology commercially. Going from one to two is a minor feat in most contexts. But in the world of quantum physics, doing so is crucial. For years, researchers around the world have strived to develop stable quantum light sources and achieve the phenomenon known as quantum mechanical entanglement—a phenomenon, with nearly sci-fi-like properties, where two light sources can affect each other instantly and potentially across large geographic distances.
Entanglement is the very basis of quantum networks and central to the development of an efficient quantum computer.
Today, researchers from the Niels Bohr Institute published a new result in the journal Science, in which they succeeded in doing just that. According to Professor Peter Lodahl, one of the researchers behind the result, it is a crucial step in the effort to take the development of quantum technology to the next level and to "quantize" society's computers, encryption and the internet.
"We can now control two quantum light sources and connect them to each other. It might not sound like much, but it's a major advancement and builds upon the past 20 years of work. By doing so, we've revealed the key to scaling up the technology, which is crucial for the most ground-breaking of quantum hardware applications," says Professor Peter Lodahl, who has conducted research the area since 2001.
The magic all happens in a so-called nanochip—which is not much larger than the diameter of a human hair—that the researchers also developed in recent years.
Quantum sources overtake the world's most powerful computer
Peter Lodahl's group is working with a type of quantum technology that uses light particles, called photons, as micro transporters to move quantum information about.
While Lodahl's group is a leader in this discipline of quantum physics, they have only been able to control one light source at a time until now. This is because light sources are extraordinarily sensitive to outside "noise", making them very difficult to copy. In their new result, the research group succeeded in creating two identical quantum light sources rather than just one.
Part of the team behind the invention. From left: Peter Lodahl, Anders Sørensen, Vasiliki Angelopoulou, Ying Wang, Alexey Tiranov, Cornelis van Diepen. Credit: Ola J. Joensen, NBI
"Entanglement means that by controlling one light source, you immediately affect the other. This makes it possible to create a whole network of entangled quantum light sources, all of which interact with one another, and which you can get to perform quantum bit operations in the same way as bits in a regular computer, only much more powerfully," explains postdoc Alexey Tiranov, the article's lead author.
This is because a quantum bit can be both a 1 and 0 at the same time, which results in processing power that is unattainable using today's computer technology. According to Professor Lodahl, just 100 photons emitted from a single quantum light source will contain more information than the world's largest supercomputer can process.
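A rough way to see the scale Lodahl is alluding to — assuming each photonic quantum bit contributes a factor of two to the size of the state space, which is the standard counting for qubits rather than a figure taken from the paper — is a one-line estimate:

```python
# Number of complex amplitudes needed to describe n entangled quantum bits classically.
n_qubits = 100
amplitudes = 2 ** n_qubits
print(f"{amplitudes:.2e}")  # about 1.27e+30 - far beyond any classical computer's memory
```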
By using 20-30 entangled quantum light sources, there is the potential to build a universal error-corrected quantum computer—the ultimate "holy grail" for quantum technology, that large IT companies are now pumping many billions into.
Other actors will build upon the research
According to Lodahl, the biggest challenge has been to go from controlling one to two quantum light sources. Among other things, this has made it necessary for researchers to develop extremely quiet nanochips and have precise control over each light source.
With the new research breakthrough, the fundamental quantum physics research is now in place. Now it is time for other actors to take the researchers' work and use it in their quests to deploy quantum physics in a range of technologies including computers, the internet and encryption.
"It is too expensive for a university to build a setup where we control 15-20 quantum light sources. So, now that we have contributed to understanding the fundamental quantum physics and taken the first step along the way, scaling up further is very much a technological task," says Professor Lodahl. More information: Alexey Tiranov et al, Collective super- and subradiant dynamics between distant optical quantum emitters, Science (2023). DOI: 10.1126/science.ade9324. www.science.org/doi/10.1126/science.ade9324 Citation: Quantum physicists determine how to control two quantum light sources rather than one (2023, January 26) retrieved 27 January 2023 from https://phys.org/news/2023-01-quantum-physicists-sources.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
|
Physics
|
Physics World is delighted to announce its top 10 Breakthroughs of the Year for 2022, which span everything from quantum and medical physics to astronomy and condensed matter. The overall Physics World Breakthrough of the Year will be revealed on Wednesday 14 December.
The 10 Breakthroughs were selected by a panel of Physics World editors, who sifted through hundreds of research updates published on the website this year across all fields of physics. In addition to having been reported in Physics World in 2022, selections must meet the following criteria:
Significant advance in knowledge or understanding
Importance of work for scientific progress and/or development of real-world applications
Of general interest to Physics World readers
The Top 10 Breakthroughs for 2022 are listed below in no particular order. Come back next week to find out which one has bagged the overall Physics World Breakthrough of the Year award.
Ushering in a new era for ultracold chemistry
Cooling light: the experimental set-up used by John Doyle and colleagues. (Courtesy: John Doyle)
To Bo Zhao, Jian-Wei Pan and colleagues at the University of Science and Technology of China (USTC) and the Chinese Academy of Sciences in Beijing; and independently to John Doyle and colleagues at Harvard University in the US, for creating the first ultracold polyatomic molecules.
Although physicists have been cooling atoms to a fraction above absolute zero for more than 30 years, and the first ultracold diatomic molecules appeared in the mid-2000s, the goal of making ultracold molecules containing three or more atoms had proved elusive.
Using different and complementary techniques, the USTC and Harvard teams produced samples of triatomic sodium-potassium molecules at 220 nK and sodium hydroxide at 110 µK, respectively. Their achievement paves the way for new research in both physics and chemistry, with studies of ultracold chemical reactions, novel forms of quantum simulation, and tests of fundamental science all closer to being realized thanks to these multi-atom molecular platforms.
Observing the tetraneutron
To Meytal Duer at the Institute for Nuclear Physics at Germany’s Technical University of Darmstadt and the rest of the SAMURAI Collaboration for observing the tetraneutron and showing that uncharged nuclear matter exists, if only for a very short time.
Comprising four neutrons, the tetraneutron was spotted at the RIKEN Nishina Centre’s Radioactive Ion Beam Factory in Japan. The tetraneutrons were created by firing helium-8 nuclei at a target of liquid hydrogen. The collisions can split a helium-8 nucleus into an alpha particle (two protons and two neutrons) and a tetraneutron.
By detecting the recoiling alpha particles and hydrogen nuclei, the team worked out that the four neutrons existed in an unbound tetraneutron state for just 10⁻²² s. The statistical significance of the observation is greater than 5σ, putting it over the threshold for a discovery in particle physics. The team now plans to study the individual neutrons within tetraneutrons and look for new particles containing six and eight neutrons.
Super-efficient electricity generation
To Alina LaPotin, Asegun Henry and colleagues at the Massachusetts Institute of Technology and the National Renewable Energy Laboratory, US, for constructing a thermophotovoltaic (TPV) cell with an efficiency of more than 40%.
The new TPV cell is the first solid-state heat engine of any kind to convert infrared light into electrical energy more efficiently than a turbine-based generator, and it can operate with a broad range of possible heat sources. These include thermal energy storage systems, solar radiation (via an intermediate radiation absorber) and waste heat as well as nuclear reactions or combustion. The device could therefore become an important component of a cleaner, greener electricity grid, and a complement to visible-light solar photovoltaic cells.
The fastest possible optoelectronic switch
To Marcus Ossiander, Martin Schultze and colleagues at the Max Planck Institute for Quantum Optics and LMU Munich in Germany; the Vienna University of Technology and the Graz University of Technology in Austria; and the CNR NANOTEC Institute of Nanotechnology in Italy, for defining and exploring the “speed limits” of optoelectronic switching in a physical device.
The team used laser pulses lasting just one femtosecond (10⁻¹⁵ s) to switch a sample of a dielectric material from an insulating to a conducting state at the speed needed to realize a switch that operates 1000 trillion times a second (one petahertz). Although the apartment-sized apparatus required to drive this super-fast switch means it will not appear in practical devices any time soon, the results imply a fundamental limit for classical signal processing and suggest that petahertz solid-state optoelectronics is, in principle, feasible.
Opening a new window on the universe
Spectacular vistas: the Carina Nebula as seen by the JWST. (Courtesy: NASA, ESA, CSA and STScI)
To NASA, the Canadian Space Agency and the European Space Agency for the deployment and first images from the James Webb Space Telescope (JWST).
Following years of delays and cost hikes, the $10bn JWST finally launched on 25 December 2021. For many space probes, launch is the most dangerous part of the mission, but the JWST also had to survive a series of hazardous deep-space unpacking manoeuvres, which involved unfolding its 6.5 m primary mirror as well as unfurling its tennis-court-sized sunshield.
Prior to launch, engineers identified 344 “single-point” failures that could have hampered the observatory’s mission or, worse, made it unusable. Remarkably, no issues were encountered and, following the commissioning of the JWST’s science instruments, the observatory soon began taking data and capturing spectacular images of the cosmos.
The first JWST picture was announced by US president Joe Biden at a special event at the White House and many dazzling images have since been released. The observatory is expected to operate well into the 2030s and is already on course to revolutionize astronomy.
First-in-human FLASH proton therapy
To Emily Daugherty from the University of Cincinnati in the US and collaborators working on the FAST-01 trial for performing the first clinical trial of FLASH radiotherapy and the first-in-human use of FLASH proton therapy.
FLASH radiotherapy is an emerging treatment technique in which radiation is delivered at ultrahigh dose rates, an approach that is thought to spare healthy tissue while still effectively killing cancer cells. Using protons to deliver the ultrahigh-dose-rate radiation will allow treatment of tumours located deep inside the body.
The trial included 10 patients with painful bone metastases in their arms and legs, who received a single proton treatment delivered at 40 Gy/s or greater – some 1000 times the dose rate of conventional photon radiotherapy. The team demonstrated the feasibility of the clinical workflow and showed that FLASH proton therapy was as effective as conventional radiotherapy for pain relief, without causing unexpected side effects.
Perfecting light transmission and absorption
To a team led by Stefan Rotter of Austria’s Technical University of Vienna and Matthieu Davy of the University of Rennes in France for creating an anti-reflection structure that enables perfect transmission through complex media; along with a collaboration headed up by Rotter and Ori Katz from the Hebrew University of Jerusalem in Israel, for developing an “anti-laser” that enables any material to absorb all light from a wide range of angles.
In the first investigation, the researchers designed an anti-reflection layer that’s mathematically optimized to match the way waves would reflect from the front surface of an object. Placing this structure in front of a randomly disordered medium completely eliminates reflections and makes the object translucent to all incoming light waves.
In the second study, the team developed a coherent perfect absorber, based around a set of mirrors and lenses, that traps incoming light inside a cavity. Due to precisely calculated interference effects, the incident beam interferes with the beam reflected back between the mirrors, so that the reflected beam is almost completely extinguished.
Cubic boron arsenide is a champion semiconductor
Champion semiconductor: ball-and-stick representation of cubic boron arsenide. (Courtesy: Christine Daniloff/MIT)
To independent teams led by Gang Chen at the Massachusetts Institute of Technology in the US and Xinfeng Liu of the National Center for Nanoscience and Technology in Beijing, China for showing that cubic boron arsenide is one of the best semiconductors known to science.
The two groups did experiments that revealed that small, pure regions of the material have a much higher thermal conductivity and hole mobility than semiconductors such as silicon, which forms the basis of modern electronics. Silicon’s low hole mobility limits the speed at which silicon devices operate, while its low thermal conductivity causes electronic devices to overheat.
Cubic boron arsenide, in contrast, had long been predicted to outperform silicon on these measures, but researchers had struggled to create large enough single-crystal samples of the material to measure its properties. Now, however, both teams have overcome this challenge, bringing the practical use of cubic boron arsenide one step closer.
Changing an asteroid’s orbit
To NASA and the Johns Hopkins Applied Physics Laboratory in the US for the first demonstration of “kinetic impact” by successfully changing the orbit of an asteroid.
Launched in November 2021, the Double Asteroid Redirection Test (DART) craft was the first-ever mission to investigate kinetic impact of an asteroid. Its target was a binary near-Earth asteroid system consisting of a 160-metre-diameter body called Dimorphos that orbits a larger 780-metre-diameter asteroid called Didymos.
Following an 11-million-kilometre journey to the asteroid system, in October DART successfully impacted Dimorphos while travelling at about 6 km/s. Days later, NASA confirmed that DART had successfully altered Dimorphos’ orbit by 32 minutes – shortening the orbit from 11 hours and 55 minutes to 11 hours and 23 minutes.
This change was some 25 times greater than the 73 seconds that NASA had defined as a minimum successful orbit period change. The results will also be used to assess how best to apply the kinetic impact technique for defending our planet.
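A quick arithmetic check of those figures, using only the orbital periods quoted above:

```python
# Orbital period of Dimorphos before and after the DART impact, in minutes.
before_min = 11 * 60 + 55
after_min = 11 * 60 + 23
change_seconds = (before_min - after_min) * 60

print(change_seconds)        # 1920 s, i.e. the 32-minute change
print(change_seconds / 73)   # ~26: roughly the "25 times" margin over NASA's 73 s success threshold
```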
Detecting an Aharonov–Bohm effect for gravity
To Chris Overstreet, Peter Asenbaum, Mark Kasevich and colleagues at Stanford University in the US for detecting an Aharonov–Bohm effect for gravity.
First predicted in 1949, the original Aharonov–Bohm effect is a quantum phenomenon whereby the wave function of a charged particle is affected by an electric or magnetic potential even when the particle is in a region of zero electric and magnetic fields. Since the 1960s, the effect has been observed by splitting a beam of electrons and sending the two beams on either side of a region containing a completely shielded magnetic field. When the beams are recombined at a detector, the Aharonov–Bohm effect is revealed as an interference between the beams.
Now, the Stanford physicists have observed a gravitational version of the effect using ultracold atoms. The team split the atoms into two groups that were separated by about 25 cm, with one group interacting gravitationally with a large mass. When recombined, the atoms displayed an interference that is consistent with an Aharonov–Bohm effect for gravity. The effect could be used to determine Newton’s gravitational constant to very high precision. Congratulations to all the teams who have been honoured – and stay tuned for the overall winner, which will be announced on Wednesday 14 December 2022.
|
Physics
|
Quarks are elementary particles and a fundamental constituent of matter. They are the smallest things we know of and are not made up of anything smaller or simpler. Quarks come in six different “flavors”: up, down, charm, strange, top, and bottom. They are never found alone in nature, but are always found in combination with other quarks to form protons and neutrons in the nucleus of an atom, and other subatomic particles like mesons. Quarks are believed to be the building blocks of protons and neutrons, which make up the majority of the matter in the universe.
Scientists are investigating how matter gets its mass by confining quarks.
A novel method for investigating quarks, the fundamental particles that make up the protons and neutrons in atomic nuclei, has been proposed. This innovative approach has never been attempted before and could provide answers to many fundamental questions in physics, particularly the origin of mass in matter.
The study of matter can seem a bit like opening a stack of Russian matryoshka dolls, each level down revealing another familiar, yet different, arrangement of components smaller and harder to explore than the one before. At our everyday scale, we have objects we can see and touch. Whether water in a glass or the glass itself, these are mostly arrangements of molecules too small to see. The tools of physics — microscopes, particle accelerators, and so forth — let us peer deeper to reveal that molecules are made from atoms. But it doesn’t stop there — atoms are made from a nucleus surrounded by electrons.
The nucleus in turn is an arrangement of nucleons (protons and neutrons), which gives the atom its properties and its mass. But it doesn’t end here either; the nucleons are further composed of less familiar things known as quarks and gluons. And it’s at this scale that limits to our knowledge of fundamental physics present a block. To explore quarks and gluons, they ideally need to be isolated from each other; at present, however, this seems to be impossible. When particle accelerators smash atoms and create showers of atomic debris, quarks and gluons bind again too quickly for researchers to explore them in detail. New research from the University of Tokyo’s Department of Physics suggests we could soon open up the next layer of the matryoshka doll. “To better understand our material world, we need to do experiments and to improve upon experiments, we need to explore new approaches to the way we do things,” said Professor Kenji Fukushima. “We have outlined a possible way to identify the mechanism responsible for quark confinement. This has been a longstanding problem in physics, and if realized, could unlock some deep mysteries about matter and the structure of the universe.”
The mass of subatomic quarks is incredibly small: Combined, the quarks in a nucleon make up less than 2% of the total mass, and gluons appear to be entirely massless. So, physicists suggest the majority of atomic mass actually comes from the way in which quarks and gluons are bound, rather than from the things themselves. They are bound by the so-called strong force, one of the four fundamental forces of nature alongside electromagnetism and gravity, and it’s believed the strong force itself endows a nucleon with mass. This is part of a theory known as quantum chromodynamics (QCD), where “chromo” comes from the Greek word for color, which is why you sometimes hear quarks referred to as being red, green, or blue, despite the fact they’re colorless.
“Rigorous proof that the strong force gives rise to mass remains out of reach,” said Fukushima. “The obstacle is that QCD describes things in such a way that makes theoretical calculations hard. Our achievement is to demonstrate that the strong force, within a special set of circumstances, can realize quark confinement. We did this by interpreting some observed parameters of quarks as a new variable we call the imaginary angular velocity. Though purely mathematical in nature, it can be converted back into real values of things we can control. This should lead to a means to realize an exotic state of rapidly rotating quark matter once we learn how to turn our idea into an experiment.”
Reference: “Perturbative Confinement in Thermal Yang-Mills Theories Induced by Imaginary Angular Velocity” by Shi Chen, Kenji Fukushima and Yusuke Shimada, 8 December 2022, Physical Review Letters.
DOI: 10.1103/PhysRevLett.129.242002
The study was funded by the Japan Society for the Promotion of Science.
|
Physics
|
In a Best-in-Physics presentation at the AAPM Annual Meeting, Arutselvan Natarajan showed how a long-lived antibody PET tracer could enable biology-guided radiation therapy for five consecutive days following a single tracer injection.
Development team: Stanford researchers (left to right) Hieu Nguyen, Guillem Pratx, Arutselvan Natarajan and Syamantak Khan. (Courtesy: Lindsey Vaughn)
Biology-guided radiation therapy (BgRT), in which PET images of the tumour target are used to guide beam delivery, has gained momentum over the last 10–15 years. BgRT, which is currently awaiting approval from the FDA for clinical use, is performed using the positron-emitting PET tracer 18F-FDG. But as Arutselvan Natarajan from Stanford University explained, this may not be the optimal approach.
Natarajan shared an example workflow for a BgRT treatment, pointing out that it typically requires administration of five to seven separate doses of the short-lived 18F-FDG (which has a half-life of about 110 min). “We want to develop an alternative approach, using one injection, a single dose of a long-lived isotope,” he explained. “A long-lived isotope would be very good to track, direct and deliver radiotherapy.”
To achieve this, Natarajan and colleagues combined the positron-emitting isotope 89Zr (which has a half-life of 78 h) with an antibody to create the PET tracer 89Zr-Panitumumab (89Zr-pan). Antibodies are particularly suited to this application as they exhibit high tumour specificity and uptake. Early studies in mice demonstrated that it is possible to track the 89Zr-pan PET signal for nine days following tracer injection.
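To see why the 78-hour half-life matters for multi-day tracking, here is a minimal sketch comparing the fraction of activity remaining from 89Zr with that of 18F (half-life about 110 minutes, as noted above). The day-by-day framing is mine, not the study's; the only inputs are the two half-lives quoted in the article.

```python
def fraction_remaining(hours_elapsed, half_life_hours):
    """Radioactive decay: fraction of the original activity left after a given time."""
    return 0.5 ** (hours_elapsed / half_life_hours)

ZR89_HALF_LIFE_H = 78.0       # zirconium-89
F18_HALF_LIFE_H = 110 / 60    # fluorine-18 (about 110 minutes)

for day in range(1, 10):
    hours = 24 * day
    zr = fraction_remaining(hours, ZR89_HALF_LIFE_H)
    f18 = fraction_remaining(hours, F18_HALF_LIFE_H)
    # 89Zr still has ~34% of its activity at day 5 and ~15% at day 9,
    # whereas 18F is effectively gone within the first day.
    print(f"day {day}: 89Zr {zr:.2f} remaining, 18F {f18:.1e} remaining")
```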
The researchers tested the use of 89Zr-pan for PET in mice with implanted tumours. They injected the animals with 0.2 mCi of the tracer two weeks after tumour induction, and then delivered 5 Gy doses to the tumour on days 1 to 6 after administration, performing sequential PET/CT and radiotherapy. Assessing the tumour volume revealed that in control mice, which did not receive radiotherapy, tumour growth was considerable. The irradiated mice, on the other hand, showed clear tumour shrinkage following treatment.
The researchers also analysed the PET images to determine the stability of the PET signal in tumours following radiotherapy. They observed a progressive reduction in PET signal after irradiation compared with the signal from non-irradiated animals, with tumour uptake 50% lower in treated mice than controls. Fortunately, this decrease was not enough to affect the ability of BgRT to track the tumours.
To determine whether 89Zr-pan could track tumours in human patients, despite the observed signal decrease, the researchers applied two criteria for clinical BgRT: an activity concentration (AC) above 5 kBq/ml, and a normalized target signal (NTS) above 2.7. Natarajan pointed out that extrapolation from mouse to human could be more complex than simple scaling. However, based on a coarse rescaling of the mouse data to obtain hypothetical 89Zr-pan uptake values in equivalent human tumours, computations showed that the AC threshold was achieved on days 2 to 6 following tracer administration, while the NTS level was met on days 1 to 9. As such, the team concluded that BgRT could be feasible in equivalent human-sized tumours for roughly five consecutive days following a single tracer injection.
“The 89Zr immunoPET tracer has potential to guide BgRT,” Natarajan concluded. “What is important is that compared with FDG PET, long-lived-isotope-based BgRT has potential to use a single dose to track or treat for up to nine days, based on preclinical trial results.”
|
Physics
|
The letters “www” are typically followed by a “dot” — but not in this experiment. Around 270 WWW events, trios of particles called W bosons, appeared in an experiment at the world’s largest particle collider, researchers report in the Aug. 5 Physical Review Letters. By measuring how often W boson triplets appear in such experiments, physicists can check their foundational theory of particle physics — the standard model — for any cracks. To produce the rare boson triplets, scientists smashed protons together at the ATLAS experiment at the Large Hadron Collider, or LHC, near Geneva. W bosons are particles that transmit the weak force, which is responsible for certain types of radioactive decay. The particles are mysterious: In April, researchers with the now-concluded CDF experiment at Fermilab in Batavia, Ill., reported that the W boson was more massive than predicted, hinting that something may be amiss with the standard model (SN: 4/7/22). In the new study, the probability of a WWW appearance was slightly higher than predicted by the standard model, the team found, though not enough for scientists to declare the theory flawed. “We need to accumulate more data to see how this evolves,” says ATLAS spokesperson and physicist Andreas Hoecker of CERN, the particle physics lab that is the home of the LHC. Those proton collisions, which reached an energy of 13 trillion electron volts, occurred before the LHC shut down for upgrades in 2018. In July, the LHC restarted at a higher energy of 13.6 trillion electron volts (SN: 4/22/22). New data could help nail down whether these threes of a kind really do misbehave. The WWW discovery is fitting — in 1989, computer scientist Tim Berners-Lee invented the World Wide Web while working at CERN.
|
Physics
|
Attracting talent: The House of Lords Science and Technology Committee have come up with four policy recommendations to tackle the UK’s skills shortage (Courtesy: iStock/Studio Pro) The UK must attract highly qualified workers from abroad if the country wants to have a flourishing industry and economy. That is one of four recommendations in a new report released by the House of Lords Science and Technology Committee. The conclusions were reached following an inquiry by the committee last year into science, technology, engineering and mathematics (STEM) skills in the UK.
Led by Julia King, a former chief executive of the Institute of Physics, which publishes Physics World, the inquiry sought to assess whether the UK’s workforce is sufficiently skilled to achieve the government’s ambition of becoming a “science and technology superpower” by 2030.
After hearing from representatives from a range of sectors, including pharmaceuticals and manufacturing, the committee has concluded that there is a widespread shortage of STEM skills, such as mathematics and coding. It also says that the government’s proposed solutions to tackle the shortage are “inadequate and piecemeal”.
To address the skills gap, the committee recommends four policies, the first being to encourage skilled workers from abroad to move to the UK. The report states that overseas talent is a “key” part of the solution and calls on the government to explore new types of visas, revise visa costs and make it easier for small companies to sponsor people from overseas.
The committee’s second recommendation is for a quantitative assessment of exactly which skills are missing in the UK, with routes for people to gain them through apprenticeships and – later in their careers – through modular courses below degree level.
Recruiting and retaining science teachers, particularly in high-demand subjects like physics and computing, is another priority, as is tackling the uncertainty of short-term postdoc work in academia. More should also be done to support PhD students to find careers in industry.
Economic focus
To become a science superpower, King says the UK would need a growing STEM culture, excellent teaching, a science-literate population as well as more young people aspiring to STEM jobs. Together with well-funded research in UK universities, this would then fuel a rapid growth in technology companies.
Markers of success for this strategy would include the UK becoming a preferred international research partner as well as a desirable work destination for world-class scientists. Companies would also choose to list on the UK stock market, rather than seeking financial support elsewhere. “The right skills are critical to the UK’s economic growth,” King told Physics World. “For example, there are many opportunities from the green economy, from retrofitting homes to developing new low-carbon heating technologies to zero-carbon aviation.”
King adds that companies in all areas and of all sizes are reporting skills shortages at technician, graduate and PhD level. “Investment in STEM skills is critical to drive the growth we need to restore the economy and to support critical services such as the NHS,” she says.
The findings from the Lords’ report are detailed in a letter to UK science minister George Freeman published in mid-December. The committee has requested a response from the UK government by 15 February.
|
Physics
|
On Oct. 9, an unimaginably powerful influx of X-rays and gamma rays infiltrated our solar system. It was likely the result of a massive explosion that happened 2.4 billion light-years away from Earth, and it has left the science community stunned.In the wake of the explosion, astrophysicists worldwide turned their telescopes toward the spectacular show, watching it unfold from a variety of cosmic vantage points -- and as they vigilantly studied the event's glimmering afterglow over the following week, they grew shocked by how utterly bright this gamma-ray burst seems to have been. Eventually, the spectacle's sheer intensity earned it a fitting (very millennial) name to accompany its robotic title of GRB221009A: B.O.A.T. -- the "brightest of all time.""This GRB is an extraordinarily rare event," Jillian Rastinejad, an astronomer at Northwestern University, said in a statement. "It was so bright that it triggered the Swift gamma-ray telescopes twice and fully saturated the detectors -- something I haven't seen in my time observing GRBs."
So, what could be the root of this record-breaking eruption? Well, scientists reasoned, perhaps something just as mind-bendingly extreme. As of now, the leading hypothesis is that this GRB was generated by the death of an ancient star as it transformed into a monstrous black hole.

[Image: Highlighted is a speck of light signifying where GRB221009A came from. Credit: International Gemini Observatory/NOIRLab/NSF/AURA/B. O'Connor/J. Rastinejad/W. Fong/T.A. Rector/J. Miller/M. Zamani/D. de Martin]

The idea here is that a huge supernova in the distant universe might have spurred the birth of a black hole, and as black holes are known to spew supreme particle-jets traveling at nearly the speed of light, maybe this one's jet spit its contents toward Earth. Perhaps Oct. 9 was the day we received evidence of the budding abyss.

[Image: An artist's illustration of what the 2.4 billion-light-year-away jet may look like if we could stand right in front of it. Credit: NASA/Swift/Cruz deWilde]

A 'once-in-a-century' opportunity

"We think this is a once-in-a-century opportunity to address some of the most fundamental questions regarding these explosions, from the formation of black holes to tests of dark matter models," Brendan O'Connor, an astrophysicist at the University of Maryland who helped initially observe the GRB, said in a statement.

Plus, if the burst really is connected to the genesis of an abyss like scientists imagine, it could provide us with valuable insight about how matter behaves while traveling near the speed of light, how stars collapse into unimaginably dense voids, and in a broader sense, what the conditions might be like in a galaxy other than our own -- the distant realm where B.O.A.T. was born.

[Image: Swift's X-Ray Telescope captured the afterglow of GRB 221009A about an hour after it was first detected. The bright rings form as a result of X-rays scattered from otherwise unobservable dust layers within our galaxy that lie in the direction of the burst. Credit: NASA/Swift/A. Beardmore (University of Leicester)]

However, it's worth mentioning that everyone involved with researching this GRB is being super careful before making a final declaration of cause. Teams are still observing the event's "afterglow" in order to pinpoint whether the dead-star-turned-black-hole theory stands strong. "Given that most other long GRBs result from a massive star collapsing, we have every reason to believe that we will find direct evidence of a supernova," Rastinejad said. "But that will take more work and time to confirm, and the universe could always surprise us."

GRBs can also be associated with other cosmic marvels. As an example, shorter ones, which last mere fractions of a second, tend to stem from neutron star collisions -- the crash of stellar bodies so dense a tablespoon of one is equal to something like the weight of Mount Everest. On the bright side, though, because this GRB is so bright and in its infancy, scientists expect to be able to monitor it for several months. After one month, Rastinejad expects evidence of the event to disappear behind the sun, but once it comes back out early next year, she says, "we will be excited to see the GRB as a messy 'toddler.' Then, we will be ready and waiting to capture it on camera."

All eyes are on B.O.A.T.

"The record-breaking nature of this GRB has reinvigorated the larger observational community in a big way," Rastinejad said. "Everyone -- even those who don't typically study GRBs -- has tried to point their detectors at it.
It is a beautiful and surreal thing to be a part of and to watch how this story unfolds." On one hand, NASA instruments on the International Space Station like the NICER X-Ray Telescope and a Japanese detector dubbed the Orbiting High-energy Monitor Alert Network are involved. Then you have two independent teams, one led by Rastinejad and the other by O'Connor, utilizing the ground-based Gemini South telescope in Chile. And that just scratches the surface of who's staring at the electrifying burst.
With all eyes on B.O.A.T., even if it turns out to be true that this ultra-bright GRB is the product of a star's collapse, there'd remain far more to learn from it. We'd have the "how," but some researchers are especially interested in understanding why the collapse would have spurred an event with this level of energy. Although explosive GRB eruptions are captured a couple of times per week, Wen-fai Fong, an astrophysicist at Northwestern University, emphasizes that "as long as we have been able to detect GRBs, there is no question that this GRB is the brightest that we have ever witnessed by a factor of 10 or more." It's also curious that such high-energy rays could survive a 2.4 billion-year-long journey to our planet in the first place. As the National Science Foundation's NOIRLab puts it, scientists are wondering how particles emitted by the burst could "defy our standard understanding of physics." To get to the bottom of all this, it's promising that scientists believe this burst is much closer to Earth than your average GRB. This means we can glean lots of details from it that otherwise might be too faint to see. And even though such proximity may also partially explain why it appears so luminescent to us, "it's also among the most energetic and luminous bursts ever seen regardless of distance, making it doubly exciting," Roberta Pillera, an astrophysicist at the Polytechnic University of Bari, Italy, who led initial communications about the burst, said in a statement. As NASA simply summarized, "another GRB this bright may not appear for decades."
|
Physics
|
By Christopher Wiebe - University of Winnipeg

If you think technologies from Star Trek seem far-fetched, think again. Many of the devices from the acclaimed television series are slowly becoming a reality. While we may not be teleporting people from starships to a planet’s surface anytime soon, we are getting closer to developing other tools essential for future space travel endeavours. I am a lifelong Star Trek fan, but I am also a researcher that specializes in creating new magnetic materials. The field of condensed-matter physics encompasses all new solid and liquid phases of matter, and its study has led to nearly every technological advance of the last century, from computers to cellphones to solar cells.

My approach to looking for new phenomena in materials comes from a chemistry perspective: How can we create materials that have new properties that can change our world, and eventually be used to explore “strange, new worlds”? I believe an understanding of so-called “quantum materials” in particular is essential to make science-fiction science fact.

Quantum materials

What makes a substance a quantum material? Quantum materials have unusual and fantastic properties that arise from enormous numbers of particles acting in a concerted way. Think of a conductor directing a symphony: without some order brought to the music, all you have is noise. The more musicians you have performing out of step, the more noise you will have. A quantum material has all of the constituent musicians — in this case, the electrons or atoms in a material, which amounts to billions upon billions of particles — acting in a certain way according to quantum rules, or the “sheet music,” if you will. Instead of noise from random electronic and atomic motions, with a conductor you get music — or in the case of new materials, a new property that emerges. The use of these new properties for devices is what is driving the technological revolutions that we are seeing today.

Magnetic fields and shields

So, how can these new materials be used in the spacecraft of tomorrow? One example might be the force-shields that protect ships in Star Trek. High magnetic fields could be used to protect bodies from incoming projectiles, especially if the projectiles have an electric charge. How do you create large magnetic fields? One way is to use a superconducting magnet. Superconductors have electrons that conduct electricity with no resistance to flow. One of the consequences of this is that large magnetic fields can be generated — the current supported by a superconductor that generates the magnetic field can be huge without destroying the superconductivity itself. These superconductors are used every day to create large magnetic fields in places such as hospitals for MRI (magnetic resonance imaging) devices to see inside the body. Advanced superconductors might have new applications as magnetic shields for spacecraft. Imagine your spaceship coated in a superconductor that can generate a large magnetic field with a flick of a switch to get the current flowing, creating a magnetic force shield. This is exactly what scientists at the European Organization for Nuclear Research, CERN, are investigating: a new magnetic shield for spacecraft — superconducting magnesium diboride, or MgB₂.

Superconductors on spaceships

A spaceship coated in superconducting magnets would generate a “magnetosphere” around the craft which could be used to deflect harmful projectiles.
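How strong would such a shield have to be? A rough back-of-the-envelope way to see it (not taken from the CERN work) is to compute the gyroradius of an incoming charged particle, the radius of the circle the field bends it into, r = p/(qB). The particle energy and field strengths below are illustrative assumptions, chosen only to give a sense of scale.

```python
# Back-of-the-envelope estimate (not from the CERN study): the gyroradius of a
# charged particle in a magnetic field, r = p / (qB), gives a feel for how
# strong a superconducting "force shield" would need to be. The 1 GeV proton
# and the field values below are illustrative assumptions, not design values.

PROTON_MASS_GEV = 0.938  # proton rest energy in GeV

def gyroradius_m(kinetic_energy_gev: float, b_tesla: float) -> float:
    """Gyroradius in metres for a singly charged proton of given kinetic energy.

    Uses the standard accelerator-physics shortcut r [m] = p [GeV/c] / (0.3 * B [T]).
    """
    total_energy = kinetic_energy_gev + PROTON_MASS_GEV
    momentum = (total_energy**2 - PROTON_MASS_GEV**2) ** 0.5  # in GeV/c
    return momentum / (0.3 * b_tesla)

if __name__ == "__main__":
    for b in (0.1, 1.0, 5.0):
        r = gyroradius_m(1.0, b)  # a 1 GeV cosmic-ray proton
        print(f"B = {b:.1f} T  ->  gyroradius ~ {r:.1f} m")
```

The point of the numbers is that fields of order a tesla, sustained over metres, are what it takes to turn away GeV-scale cosmic-ray protons, which is why superconducting coils rather than ordinary electromagnets are the natural candidates.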
While we don’t have to worry about Klingon torpedoes just yet, we do have to worry about harmful cosmic rays in outer space for future space travel. Cosmic rays are typically charged particles that can interfere with the electronics of a spacecraft, and more importantly, give astronauts a lethal dose of radiation during long space flights. Protecting future spacecraft from these rays is crucially important for the future of any space program, including trips to Mars in the next few decades. And who knows, with the superconducting magnet shields you might be able to escape a Romulan attack on the way.

Technical hurdles

There is a catch, however. Superconductors do not work at high temperatures and there is no room-temperature superconductor. Above a certain temperature called the “critical temperature,” the superconductor becomes “normal” and the electrons experience a resistance to flow again. For magnesium diboride, this occurs at a very cold temperature — around -248℃. This is actually fine for interstellar space where the background temperature is a much colder -270℃ or so but it is not conducive to spacecraft visiting other warmer planets. Scientists like me are searching for “room temperature” superconductors that would enable these shields to work at much higher temperatures. This would also enable new advances to society such as cheaper health care, for example, since one wouldn’t need low temperatures for MRI instruments to work. However, high temperature superconductivity has been a mystery for decades, and progress is in slow increments. As someone who works on the border between physics and chemistry, I believe that the answer will be found in the discovery of new materials. Historically, this is where progress has been made to raise the critical temperature to one above the liquid nitrogen boiling point of -196℃. These superconductors would be great to use as magnetic shield devices if you were exploring many areas of the galaxy. But they wouldn’t work on warmer planets such as Mars without significant amounts of cryogens to keep the magnets cold.

Quantum computers and societal revolution

Superconducting technology would also have a variety of other uses aboard starships. Quantum computers can perform operations orders of magnitude faster than conventional computers, and would undoubtedly be used on a modern starship. Need to send an encrypted message to Starfleet? If the Klingons have a quantum computer, they might be able to intercept and hack your message, so you had better make sure that you understand the technology. And superconducting electrical systems would naturally be used for the most efficient devices, from starship engines down to tricorders used in away missions. The emergence of room temperature superconductors would spark a transformation of our society that would rival the silicon age of modern electronics. Their discovery is an essential hurdle to cross for the next part of our evolution as a species to a new technological age. It would be highly logical to continue our search for a room temperature superconductor. If only we could make it so. Quantum materials offer strange new worlds of discovery and perhaps most exciting are the technologies we haven’t discovered yet — that will exploit quantum effects on a scale that humans can easily see.

Source: The Conversation
|
Physics
|
Ice, specifically water ice, is a very useful resource for planetary colonists and explorers alike as it can be used to grow crops and even form the basis of rocket fuel. The Martian north and south poles are both covered with a layer of frozen carbon dioxide, also known as dry ice, that is 1 meter and 8 meters thick, respectively. However, fortuitously, apart from the top layers, both Martian poles predominantly consist of water ice! But how long has it been there? In this article, Matt Williams explains what we know so far.

By Matt Williams

On Earth, the study of ice core samples is one of many methods scientists use to reconstruct the history of our past climate change. The same is true of Mars’ northern polar ice cap, which is made up of many layers of frozen water that have accumulated over eons. The study of these layers could provide scientists with a better understanding of how the Martian climate changed over time. This remains a challenge since the only way we are able to study the Martian polar ice caps right now is from orbit. Luckily, a team of researchers from UC Boulder was able to use data obtained by the High-Resolution Imaging Science Experiment (HiRISE) aboard the Mars Reconnaissance Orbiter (MRO) to chart how the northern polar ice cap evolved over the past few million years.

The research was conducted by Andrew Wilcoski and Paul Hayne, a Ph.D. student and assistant professor from the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado Boulder. The study that describes their findings recently appeared in the Journal of Geophysical Research (JGR), a publication maintained by the American Geophysical Union (AGU). For the sake of their study, Wilcoski and Hayne sought to determine the current state of the Martian North Polar Residual Cap (NPRC), which is vital to understanding the North Polar Layered Deposits (NPLD). Using the high-resolution images gathered by the HiRISE instrument, Wilcoski and Hayne examined the rough features of the NPRC – which includes ripples and ridges of varying size and shape.

They then modeled the growth and recession of the NPRC over time based on its interaction with solar radiation and how the rate of growth and loss is affected by the amount of atmospheric water vapor. What they found was that in addition to causing the formation of rough terrain (ripples and ridges) in an ice sheet, exposure to solar radiation will also cause ice to sublimate unevenly. Basically, Mars’ axial tilt, which is responsible for it experiencing seasonal changes similar to Earth, also causes one side of these features to sublimate (the Sun-facing side) while the other does not. This has the effect of exaggerating these features, leading to pronounced ridges and valleys that become more pronounced as time goes on. Overall, the model employed by Wilcoski and Hayne determined that the rough features observed by the MRO should measure 10 m (33 ft) in diameter and 1 m (3.3 ft) deep. Furthermore, their results demonstrated that as the features age, the spatial wavelength (the distance) between each ripple increases – from 10 to 50 m (164 ft). As they state in their study:

“Our results show that the size of mounds and depressions on the ice cap surface suggest that it took 1–10 thousand years to form these roughness features.
Our results also suggest that the formation of features on the surface may depend on when water vapor is present in the atmosphere over the course of a year (e.g., summer or winter).”

[Image: A composite image showing alternating layers of ice and sand around the northern polar region, taken by the MRO’s HiRISE camera. Image Credit: NASA/JPL/University of Arizona]

These results are consistent with the images taken by the HiRISE instrument of the Martian North Polar Residual Cap (NPRC). What they indicated is that the rough features observed around Mars’ northern polar ice formed within the last 1000 to 10,000 years, which provides scientists with a starting point for reconstructing the climate history of Mars. Such is the nature of the Red Planet. Today, scientists have a pretty good understanding of the nature of the Martian landscape and how it changes throughout the year. They also have an idea of what it used to look like billions of years ago, thanks to impeccably-preserved surface features that indicate the past presence of flowing and standing water (rivers, streams, and lakes). But the intervening period, when the climate transitioned from one to the other, is where much remains to be learned. In the coming years, robotic missions could be sent to Mars for the sake of studying the ice sheets directly and maybe even return samples to Earth. In the next decade, as astronauts begin to set foot on Mars, exploring the ice caps in person could also become possible.

Sources and further reading: JGR Planets; EOS (AGU)

Article provided by Universe Today / Edited by Universal-Sci (CC BY 4.0)
|
Physics
|
David Leigh dreams of building a small machine. Really small. Something minuscule. Or more like … molecule. “Chemists like me have been working on trying to turn molecules into machines for about 25 years now,” says Leigh, an organic chemist from the University of Manchester in the United Kingdom. “And of course, it's all baby steps. You're building on all those that went before you.”In 1936, English mathematician Alan Turing imagined an autonomous machine capable of carrying out any precisely coded algorithm. The hypothetical machine would read a strip of tape dotted with symbols that, when interpreted sequentially, would instruct the machine to act. It might transcribe, translate, or compute—turning code into a message, or a math problem into an answer. The Turing machine was a prophetic vision of modern computers. While your laptop doesn’t rely on tape to run programs, the philosophy behind it is the same. “That laid the foundation for modern computing,” says Leigh.Leigh now believes that tiny molecular versions of the Turing machine could assemble what we struggle to build in the organic realm, like new drugs and plastics with traits so enhanced and precise that they’re out of reach for current tools. And he’s confident that he can do it. “It's absolutely clear that it's possible,” he says, “because there already is this working example called biology.” Nature has given every life-form its version of the Turing machine: ribosomes, cellular structures that slide down sequences of mRNA to churn out proteins one amino acid at a time. No life on earth can function without them.A molecular machine would work like a ribosome, in that instructions would be encoded on one molecule, and another one would interpret them, or read them out. Or, you can think of it a bit like a tape recorder, in that information is encoded on one molecule that serves as a track, and is read by a second molecule that serves as the reader “head” that plays it back.A fully working machine doesn’t exist yet. Researchers like Leigh are building it piece by piece. His team designed a ring-like “ratchet” molecule in 2007 that was powered by light and could move forward along a molecular track. But that wasn’t what Leigh really wanted: Biological systems are powered by chemical fuels, not light.So five years ago, they discovered how to nudge these ratchet molecules along using trichloroacetic acid as a chemical fuel. The machines are in a liquid, and the team pulsed the acid into it. The surrounding liquid's pH changes as the acid decomposes, triggering the track molecule to usher the head molecule forward one step—and never backward. Think of it like an escalator, or a zip tie: The sawtoothed shape of the track restricts motion to only one direction.Video: David LeighNow, in a study recently published in Nature, Leigh’s team combined these innovations to demonstrate that a molecule-sized machine can read as it moves. They encoded blocks of information on one molecule (the tape) and designed another to slide down its length (the head). As the head moved along the tape, it would contort into a predictable shape each time it scanned a specific block of information. That allowed the team to interpret the information on the tape based on the changes to the shape of the head—to essentially read its code.Leigh’s team designed the molecular tape for this study to be more ambitious than the binary bits we’re used to in computing, which can be either a 0 or a 1. 
Instead, each block of information on the tape is written in three-way, or ternary, code, taking the values -1, 0, or +1.The reason they could opt for a more information-dense bit is because of the physics of the reading head. When the head sticks to a -1, it contorts in a predictable way. When it sticks to a section deemed +1, it contorts the opposite way. For 0, no contortion.Then, if you shine light at the molecular machine while it reads, each of the three contortions will twist that light in a unique way. The scientists were able to follow along with how the head was changing its shape by reading this light. They used a process called circular dichroism spectroscopy to determine the shape of the ratchet as it inched down the tape.Final result: They showed that the head reacts to what it reads. In other words, they found that you can use the fundamental processes of physics and chemistry to relay information at the molecular level. "This is the first proof of principle, showing that you can effectively do it," says Jean-François Lutz, a polymer chemist with France’s National Center for Scientific Research who was not involved in the research. “It has been conceptualized, but never really achieved.” “The way the molecular machines have been designed is really intricate, and really nice,” says Lee Cronin, a chemist at the University of Glasgow who was not involved in the study. (Cronin’s team has pioneered a different type of chemical computer, called the Chemputer, which reliably automates chemical reactions.) “If you could digitally control assembly at the molecular level, and make every single strand bespoke, then you can make amazing materials,” he continues. “But we're a little bit far away from that. And I'm anxious not to over-promise that.”Lutz, too, is careful not to overpromise. He points out that the “read” function is slow and the information that can be read is minimal. It’s also not yet possible to “write” information using a molecular computer, which is what would be required to actually fabricate new drugs or plastics.Leigh isn’t worried about speed. In the current experiment, it took several hours to move between blocks of information. He thinks it will ultimately go faster, because in nature, “ribosomes can read about 20 digits a second.” And to him, the minimalism of the information is also the point. It’s about packing information into as small a space as possible—perhaps for computing, data storage, or manufacturing—and retrieving it autonomously. He calls it “the ultimate miniaturization of technology.”That said, he does have ideas for growth. He imagines one day being able to use 5- or 7-way code, which would embed even more information into each block of tape.The next step forward will be getting his molecular machines to write. In the current paper, Leigh’s team proposes that the shape-shifting reader molecules may be able to catalyze different chemical reactions depending on their shape. (Read a +1, create molecule A. Read a 0, create molecule B.) You can imagine a vat full of such molecular readers, all programmed to print the same molecules, functioning as a sort of factory—perhaps to churn out super-polymers that cells could never make. “As synthetic scientists, we've got the whole of the periodic table of elements that we can use,” says Leigh. “It’s breaking free of ways that biology is restricted.”Leigh is especially tempted to manufacture new plastics this way. 
Plastics like polystyrene, polymethacrylate, and polypropylene are polymers, long chains of the same repeating unit, or monomer. Their physical properties are useful to us. But who knows what kind of super-materials could arise from mixing and matching monomers intentionally?Combining building blocks is a powerful concept in biology. For example, all the proteins in the world are based on some combination of only 20 amino acids. “Take spider silk—that's a protein, and it's five times tougher than steel,” says Leigh. “If you take exactly the same 20 amino acids but assemble them in a different sequence, you'll get myosin, which is the constituency of muscle and can generate a force, or you can make antibodies.”Lutz cautions that lofty ambitions for molecular machines are nothing new. “Dreaming in chemistry is always quite easy—making it happen is different,” he says.Still, incremental advances like Leigh’s are getting chemistry a little closer. “If they can scale it, it will be amazing,” says Cronin. “But they're a very long way from a Turing machine.”
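For readers who want to see what the ternary read-out described earlier amounts to in plain computational terms, here is a toy software analogue: a tape of -1/0/+1 symbols, a head whose contortion depends on the symbol underneath it, and a decoder that recovers each symbol from the sign of a simulated circular-dichroism signal. The symbol-to-signal mapping is an assumption made purely for illustration; the real read-out is slow chemistry, not a lookup table.

```python
# Toy model of the ternary tape read-out described above. Purely illustrative:
# the mapping from tape symbol to head "contortion" and CD signal sign is an
# assumed stand-in for the real (much slower) chemistry.

from typing import List

# How the head responds to each ternary symbol: +1 and -1 twist the head in
# opposite senses, 0 leaves it unchanged. The CD "signal" simply mirrors that.
CONTORTION = {-1: "left-handed twist", 0: "no twist", +1: "right-handed twist"}
CD_SIGN = {-1: -1.0, 0: 0.0, +1: +1.0}

def read_tape(tape: List[int]) -> List[int]:
    """Slide the head along the tape and recover each symbol from the CD sign."""
    decoded = []
    for position, symbol in enumerate(tape):
        signal = CD_SIGN[symbol]        # what the spectrometer would see
        recovered = int(round(signal))  # infer the symbol from the signal
        decoded.append(recovered)
        print(f"site {position}: head shows {CONTORTION[symbol]:>18}  ->  read {recovered:+d}")
    return decoded

if __name__ == "__main__":
    tape = [+1, 0, -1, -1, +1]  # an arbitrary five-symbol ternary message
    assert read_tape(tape) == tape
```

Each block carries log2(3), roughly 1.58 bits, rather than the single bit of binary storage, which is the information-density argument Leigh makes for moving beyond 0s and 1s.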
|
Physics
|
It’s 10 years to the day since evidence of the Higgs boson – the elusive particle associated with an invisible mass-giving field – was announced. But for Prof Daniela Bortoletto the memories are as fresh as ever.

“I just remember joy. I remember that everybody was so happy. And what surprised me [was] how everybody was interested, it seemed like the whole world was celebrating us,” she said.

Now, as the Large Hadron Collider (LHC) – the monster proton smasher at the European particle laboratory, Cern – gears up to start its third period of data collection on Tuesday, experts are hoping to unpick further secrets of the fundamental building blocks of the universe.

Bortoletto, now head of particle physics at the University of Oxford and part of the team that discovered the Higgs boson, said her main memory of the events a decade ago was the moment two weeks before the announcement when the researchers unblinded their analysis of the data and saw unambiguous signs of the boson.

“I still, thinking [about] that moment, get the butterflies in my stomach,” she said. “It was unbelievable. It’s really a unique moment in the life of the scientist.”

The media furore when the discovery was announced was enormous, with newspapers, radio and TV all focused on a particle as fleeting as it was important.

Dubbed the “God particle” and named after physicist Peter Higgs, the Higgs boson is the signature particle of the Higgs field – an invisible energy field that pervades the universe. In a nutshell, it is the interaction of fundamental particles with this field, interactions first thought to have occurred shortly after the big bang as the universe expanded and cooled, which gives them mass.

The existence of the Higgs boson was predicted by the standard model, a key theory that explains three out of the four fundamental forces of nature, but it was not until the seminal experiments at the LHC that scientists found the crucial evidence.

Thanks to the discovery of the Higgs boson, scientists can now explain a host of phenomena: from why electrons have mass and hence can create a cloud around a nucleus, giving rise to atoms; to why a neutron is more massive than a proton, and hence why the former decays but the latter is stable.

“The Higgs field explains why atoms exist, why we exist. And the fact that we can put it in a context that we think that we understand, I think, is pretty cool,” said Bortoletto.

But the story is far from over. Since the announcement in 2012 there have been further revelations – including insights into how the Higgs boson is born and decays and its interactions with heavy particles such as top and bottom quarks. And work continues apace.

Among other endeavours scientists are looking to probe interactions between the Higgs boson and muons – fundamental, negatively charged subatomic particles – and explore the coupling of the Higgs boson to itself.

“Understanding, for example, the Higgs self-coupling could [help us] understand the shape of the Higgs potential and understand better what happened at the beginning of the universe,” said Bortoletto.

Key to such work is the third run of the LHC, due to begin on Tuesday. This time the atom smasher will operate at 13.6 trillion electronvolts (TeV), up from 13 TeV, with Bortoletto revealing both the Atlas and CMS experiments are expected to double their datasets.

“More data and a little bit more energy opens new opportunities,” said Bortoletto.
She said scientists would be able to study the Higgs boson in more detail, and the work may also provide new insights into the mass of the W boson. Another fundamental particle, the W boson was at the heart of a sensation earlier this year when researchers at the Collider Detector at Fermilab in the US revealed their data suggested the particle has a far greater mass than predicted by the standard model.Bortoletto added that there was room for more seminal discoveries.“There is a lot of scope in the Higgs sector,” she said. “Again, we have a little bit more energy, we might discover something new, some new particle – we have a chance, every time we go higher in energy to discover maybe new physics.”
|
Physics
|
Treatment plan comparisons CT images overlaid with dose-weighted LET distributions for three planning strategies: the base clinical distribution (left), a three-beam orientation (centre) and an alternate-beam-angle setup (right). The clinical target volume is outlined in red and the brainstem in blue. (Courtesy: CC BY 4.0/J. Appl. Clin. Med. Phys. 10.1002/acm2.13782) Proton therapy can deliver highly conformal dose distributions to a tumour target while minimizing dose to tissues outside the target volume. Creating treatment plans that realize this strength is a top priority for dosimetrists and medical physicists.
Protons deposit dose in a fundamentally different way to X-rays, another type of external-beam radiation therapy. As a proton reaches the end of its trajectory, the rate at which its energy is transferred to tissue – its linear energy transfer (LET), expressed in keV/µm – increases.
The relative biological effectiveness (RBE) captures the biological implications of increasing LET, and a fixed RBE value of 1.1 is often applied for clinical proton treatments. But proton RBE is dependent on many other factors, including clinical endpoints, tissue type, fractionation scheme, patient-specific radiosensitivity, physical dose, and uncertainties in experimental measurements. As a result, using a fixed RBE value in proton therapy likely underestimates RBE in high-LET locations, which could result in an increased risk of radiation-induced toxicities.
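To make the fixed-versus-variable RBE distinction concrete, the sketch below weights a physical proton dose by the clinical constant of 1.1 and contrasts it with a generic variable model in which RBE rises with LET. The linear slope used in the variable model is a placeholder chosen only for illustration, not a published parameter.

```python
# Minimal sketch of fixed vs variable RBE weighting of a proton dose.
# The 1.1 constant is the clinical convention mentioned above; the slope in
# rbe_variable() is a made-up placeholder, NOT a published model parameter.

FIXED_RBE = 1.1

def dose_rbe_fixed(physical_dose_gy: float) -> float:
    """Biological dose in Gy(RBE) under the fixed-RBE convention."""
    return FIXED_RBE * physical_dose_gy

def rbe_variable(let_kev_um: float, slope: float = 0.04) -> float:
    """Toy variable-RBE model: RBE grows linearly with LET (slope is illustrative)."""
    return 1.0 + slope * let_kev_um

def dose_rbe_variable(physical_dose_gy: float, let_kev_um: float) -> float:
    return rbe_variable(let_kev_um) * physical_dose_gy

if __name__ == "__main__":
    for let in (2.0, 5.0, 10.0):  # keV/µm, from low to high LET
        d_fixed = dose_rbe_fixed(2.0)
        d_var = dose_rbe_variable(2.0, let)
        print(f"LET {let:4.1f} keV/µm: fixed {d_fixed:.2f} Gy(RBE), variable {d_var:.2f} Gy(RBE)")
```

Even with an invented slope, the pattern is the point: the two conventions agree at low LET and diverge at the end of range, exactly where the fixed value is most likely to understate the biological effect.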
Still, LET is strongly correlated with RBE and is a key factor for determining variable RBE in proton therapy. As such, researchers are investigating approaches for calculating and evaluating LET during treatment planning. These biological treatment planning tools are limited, however, and until they are developed and studied further, clinics must identify their own treatment planning practices to minimize LET outside of target volumes, says Austin Faught, a medical physicist at St Jude Children’s Research Hospital in Tennessee.
“How to influence the [LET distribution] is an active area of research, and there are some great methods under development,” Faught explains. “The problem that we face is that those are not readily available without custom software developed in-house or through special research versions of vendor-provided applications … [and there are] few studies providing quantitative guidance on what we should aim for.”
Treatment planning strategies
In a step toward LET-based plan evaluation and optimization for proton therapy, Faught and his team performed a survey of planning strategies that are commercially available to clinical teams for intensity-modulated proton therapy (IMPT). Their study, reported in the Journal of Applied Clinical Medical Physics, introduces some guidance for proton therapy treatment planners. “We wanted to look at some readily available treatment planning techniques and how they may affect LET,” Faught explains.
The researchers evaluated the differences in dose-weighted LET (LETd) between eight forward-based treatment planning approaches applied to a cylindrical water phantom and four paediatric brain tumour cases (Faught notes that radiation-induced toxicities are a focus area for the team). They compared these planning strategies to a plan using opposed lateral beams (for the phantom) or to the original clinical plan (for patients), using Monte Carlo secondary calculations to evaluate both the dose and LETd.
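The quantity being compared, dose-weighted LET, has a simple definition: within each voxel, the LET of every dose contribution is averaged using the dose itself as the weight. A minimal sketch of that book-keeping, with invented numbers standing in for Monte Carlo output, looks like this:

```python
# Dose-weighted LET (LETd) in a single voxel: LETd = sum(d_i * L_i) / sum(d_i),
# where d_i and L_i are the dose and LET of each contribution scored by the
# Monte Carlo engine. The numbers below are invented for illustration only.

def let_d(contributions):
    """contributions: iterable of (dose_gy, let_kev_um) pairs for one voxel."""
    weighted = sum(d * L for d, L in contributions)
    total_dose = sum(d for d, _ in contributions)
    return weighted / total_dose if total_dose > 0 else 0.0

if __name__ == "__main__":
    # e.g. a voxel near the end of range: a few low-LET contributions plus
    # one high-LET contribution from protons stopping nearby
    voxel = [(0.8, 2.5), (0.6, 3.0), (0.2, 9.0)]
    print(f"LETd = {let_d(voxel):.2f} keV/µm")
```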
The researchers found that treatment field geometry was the biggest contributor to the location of high-LET areas. To mitigate the potential impact of biologic uncertainties associated with high LETd, they suggest that treatment planners use large intersection angles between treatment beams and avoid beams that stop immediately proximal to critical structures.
“This is great news as it means careful selection of the number of treatment fields and their orientation with respect to nearby healthy tissues can be effective,” Faught says. “With some conscious, upfront thought, that’s something all treatment planners can take into consideration during the planning process.”
The researchers also found that using a range shifter significantly reduced the mean LETd in the clinical target volume. As a result, they recommend using range shifters and alternative strategies of spot-placement restrictions sparingly, and only when clinics can calculate the resulting LETd to evaluate against alternate planning strategies.
Because of the study’s small sample size, the researchers couldn’t establish a clear trend in LETd variations in the clinical cases. They did not evaluate the relationship between changes in LET and a change in the probability of tumour control or normal tissue complications. While the effects of each planning approach on high-LET regions were modest, Faught says it’s important to recognize that the team’s treatment planning strategies and recommendations are evidence-based and can readily be worked into clinical practice.
“I hope that one of the takeaways is that we, as a field, would benefit from commercial tools that allow for the calculation of LET within the treatment planning system. Even better, we would love to have ways to optimize with LET in mind. This study was a good bridge until those tools are more widely available,” Faught says.
|
Physics
|
Taken from the July 2022 issue of Physics World. Members of the Institute of Physics can enjoy the full issue via the Physics World app. Particle man Peter Higgs visits the CMS experiment at CERN in 2008. (Courtesy: CERN) As someone who was working at CERN at the time, the 2012 discovery of the Higgs boson is close to my heart. So when reading Elusive: How Peter Higgs Solved the Mystery of Mass I was keen to learn the life story of the scientist after whom the particle is named. Written by particle physicist Frank Close and released to coincide with the 10th anniversary of the discovery, Elusive is an anecdote-filled, meandering – and sometimes confusing – glimpse into the life and work of the theorist Peter Higgs, whose name is one of three associated with the Brout–Englert–Higgs (BEH) mechanism that gives elementary particles their mass.
Unfortunately, my enthusiasm was quickly dampened. Just a couple of chapters in, it began to feel as if the title refers to the structure of the book itself, which seemed harder to locate than the elementary particle in question. But while initially disjointed, I suggest you don’t let this put you off the book – it does get better.
I first had alarm bells ringing as I read the preface, where Close remarks that the Higgs boson was dubbed the God Particle “in media headlines”. However, it was physicist and Nobel laureate Leon Lederman who came up with the epithet for the title of his 1993 book – a fact that Close himself references later in the book. The fact that the media ran with the phrase is neither here nor there, but putting the “blame”, as it were, on the media seems slightly uncharitable. In the preface Close also only mentions the 2013 Nobel Prize for Physics going to Higgs, ignoring until later on in the book that it was jointly awarded to François Englert, which feels misleading and unfair on Englert. Even the title makes it seem as if Higgs was the only person involved in the scientific endeavour the book goes on to describe.
But I set this aside and continued. As a friend of Higgs, Close is uniquely placed to tell us the theorist’s story, with all the partiality one might expect from such a relationship. Close draws upon their private and public conversations, as well as referring to other books, scientific papers and primary sources. He begins by introducing us to Higgs’ family, including his grandparents. We are told of Higgs’ early education and how he attended Cotham Secondary School in Bristol, the same school that Paul Dirac once attended. It is not immediately clear, however, what relevance some of these snippets have. For example, Close describes Higgs’ conversion to socialism while coming from a traditionally conservative family, but the paragraph, inserted abruptly, does not seem to lead anywhere.
This unexpected dead-end is unfortunately not an exception. Close varies the attention that he gives different areas of physics in a way that can frustrate. Some ideas are introduced and then dropped almost immediately, while other statements are presented as fact without further discussion. Some terms are defined well after they are first introduced, and others have pages and pages of explanation devoted to them, with occasional (and needless) repetition of ideas and phrases – a proclivity raised in a Physics World review of Close’s previous book Trinity. We are told, for example, that Higgs’ father viewed Oxford and Cambridge as places that “were for the sons of the idle rich to waste their time and also that of their tutors”, and are then reminded of this exact sentiment with near-identical phrasing mere pages later.
Having said that, Close’s scientific narrative presents a more historically accurate description of the meandering path that led to Higgs’ ideas compared with other popular explanations of the significance of the BEH mechanism. Commonly, the tales begin with how the mechanism solves the problem of the W and Z boson masses under the unification of the electromagnetic and weak forces. Close chooses instead to introduce the reader to the crucial work of Jeffrey Goldstone and the problems arising from his ideas that Higgs and fellow theorists were trying to solve, as well as the importance of Philip Anderson’s 1962 paper that first introduced a mass-giving mechanism. Close also explains in welcome detail the link between the 1964 papers proposing the BEH mechanism and superconductivity, providing a rich history of 21st-century particle physics and its relationship with other domains of physics. Higgs is the protagonist of the story Close tells us but Elusive also explores the crucial roles played by many other principal actors on the particle-physics stage. Despite glossing over them in the preface, Close goes into detail about the work of Brout and Englert, and includes that of Gerald Guralnik, Carl Hagen and Tom Kibble. Close’s writing is peppered with colourful metaphors but unfortunately, some left me scratching my head. For example, when referring to theorists proposing the existence of new particles, he alludes to trails and peaks before then switching metaphors to cookery and gourmet banquets in the same paragraph. Elsewhere, we are told that the W and Z bosons are bears in a cave, a concept first introduced in pages 44–48 and then dropped in without ceremony some 80 pages later. Bizarrely (or perhaps intentionally?), he later refers to Carlo Rubbia, one of the driving forces behind the discovery of the W and Z bosons, as “a bear of a man”. None of this is to say, of course, that Close is not a compelling storyteller. There are parts of the book that lead you on with delight: “This particle carried zero charge, so he [Sheldon Glashow] named it Z, and like his native city New York, New York – so good they named it twice – he appended the traditional superscript 0 as well, making it Z0.” But I feel as though the book as a whole could have done with some more forceful editing. Some threads come together to form a unified tapestry, but the images they represent appear disjointed and occasionally without relation to anything else mentioned.
Indeed, on the editorial side of things, the most frustrating aspect of reading Elusive is to constantly gamble as to whether a note at the end of the book is worth looking at: sometimes they are references to papers, while others include a paragraph of contextualizing or expanding information. These notes would have served a better purpose as footnotes on the same pages they are referenced on.
The story of the Higgs boson is longer than the 48 years between the papers that predicted its existence and the announcement that it had finally been found – and it is a vastly more complex journey than is evident at first glance. Elusive is a timely and in-depth narrative, and although Close, as he might put it, has a mountain to climb, at least he is equipped with all of the ingredients needed for a scrumptious meal once at the top. 2022 Allen Lane 304pp £25hb
|
Physics
|
This episode of the Physics World Weekly podcast features an interview with Andrew Cheng, who is a lead scientist on the Double Asteroid Redirection Test (DART) space mission. In September 2022 the DART spacecraft smashed into an asteroid and was successful in changing the orbit of that near-Earth object.
DART was conceived and executed by NASA and an international team led by the Johns Hopkins Applied Physics Laboratory – and they are the winners of the Physics World 2022 Breakthrough of the Year Award.
Cheng is based at Johns Hopkins and he recalls the final moments in mission control before the impact, which he describes as “one of the greatest moments of my life”. He explains how the DART mission came together and talks about how we could defend Earth from asteroid impacts in the future.
This is the final Weekly podcast of 2022. Thanks for listening and we will be back on 5 January with the first episode of 2023.
|
Physics
|
Taken from the August 2022 issue of Physics World, where it appeared under the headline "Magnetic economy". Members of the Institute of Physics can enjoy the full issue via the Physics World app. James McKenzie realizes that we’re going to need lots of magnets if we want to turn the economy green Green future Electric car motors on an assembly line. (Courtesy: iStock/Aranga87) I was recently in Newcastle to attend PEMD2022 – the 11th international conference on power electronics, machines and drives. What struck me was not only the huge performance improvements that have been happening in electric motors and generators but just how far we still have to go to make transport fully carbon-free.
Global sales of electric cars (including fully battery powered, fuel cell and plug-in hybrids) doubled in 2021 to an all-time high of 6.6 million. They now account for 5–6% of vehicle sales, with more being sold each week than in the whole of 2012, according to the Global Electric Vehicle Outlook 2022 report.
Each new electric vehicle will need at least one high-power electric motor. Projections vary, but annual sales are expected to increase to 65 million electric vehicles by 2030 globally, according to market research firm IHS Markit. Annual sales of vehicles with internal combustion engines, in contrast, will decline from 68 million units in 2021 to 38 million by 2030.
What’s obvious is that each new electric vehicle will need at least one high-power electric motor. Almost all (about 85%) of these vehicles currently use motors with permanent magnets (PMs) as they are the most efficient (the record is 98.8%). A few use alternating current (AC) induction motors and generators, but they are 4–8% less efficient than PM motors, up to 60% heavier and up to 70% larger. Still, these non-PM motors and generators are perfect for, say, trucks, ships and wind-turbine generators. They are also easy to recycle as they can, in principle, be made of one material (say aluminium) and then melted down when they come to the end of their life. Some firms, like Tesla Motors, are even combining the PM and electromagnetic approaches in ever more complex designs to optimize performance and range. None of the advances in electric vehicles would, however, be possible without the huge advances in solid-state power electronics.
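To get a feel for what a 4–8% efficiency gap means in practice, here is a rough, deliberately simplified calculation; the pack size, consumption figure and motor efficiencies are my own illustrative assumptions rather than numbers from any specific vehicle.

```python
# Rough illustration (assumed numbers, not from the article) of what a 4-8%
# motor-efficiency gap means for an electric car's range per charge.

BATTERY_KWH = 60.0                 # assumed usable pack size
CONSUMPTION_KWH_PER_100KM = 16.0   # assumed at-the-wheels energy demand

def range_km(motor_efficiency: float) -> float:
    """Very crude range estimate: battery energy / (wheel demand / efficiency)."""
    energy_needed_per_100km = CONSUMPTION_KWH_PER_100KM / motor_efficiency
    return 100.0 * BATTERY_KWH / energy_needed_per_100km

if __name__ == "__main__":
    pm = range_km(0.97)         # a high-efficiency permanent-magnet motor
    induction = range_km(0.91)  # an induction motor roughly 6% less efficient
    print(f"PM motor:        ~{pm:.0f} km")
    print(f"Induction motor: ~{induction:.0f} km ({pm - induction:.0f} km less)")
```

On these assumptions the less efficient machine costs a couple of dozen kilometres of range per charge, which is one reason PM motors dominate passenger cars despite the rare-earth supply worries discussed below.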
Magnetic attraction
Magnets have come a long way since a shepherd in Magnesia in northern Greece noticed the nails in his shoe and the metal tip of his staff were stuck fast to a magnetic rock (or so legend has it). These “lodestones” were used for thousands of years in compasses to navigate but it was not until the early 1800s that Hans Christian Ørsted discovered that an electric current can influence a compass needle.
The first demonstration of a motor with rotary motion occurred in 1821 when Michael Faraday dipped a free-hanging wire into a pool of mercury, on which a PM was placed. The first DC electric motor that could turn machinery was developed by British scientist William Sturgeon in 1832. US inventors Thomas and Emily Davenport built the first practical battery-powered DC electric motor at about the same time.
These motors were used to run machine tools and a printing press. But as the battery power was so expensive, the motors were commercially unsuccessful, and the Davenports ended up bankrupt. Other inventors who tried to develop battery-powered DC motors struggled with the cost of the power source too. Eventually, in the 1880s, attention turned to AC motors, which took advantage of the fact that AC can be sent over long distances at high voltage.
The first AC “induction motor” was invented by the Italian physicist Galileo Ferraris in 1885, with the electric current to drive the motor obtained by electromagnetic induction from the magnetic field of the stator winding. The beauty of this device is that it can be made without any electrical connections to the rotor – a commercial opportunity seized upon by Nikola Tesla. Having independently invented his own induction motor in 1887, he patented the AC motor the following year.
For many years, though, PMs had fields no higher than naturally occurring magnetite (about 0.005 T). It wasn’t until the development of alnico (alloys of mostly aluminium, nickel and cobalt) in the 1930s that practically useful PM DC motors and generators became a possibility. In the 1950s low-cost, ferrite (ceramic) PMs appeared, followed in the 1960s by samarium and cobalt magnets, which were stronger again.
But the real game-changer occurred in the 1980s with the invention of neodymium PMs, which contain neodymium, iron and boron. These days, the N42 grade of neodymium PMs has a strength of some 1.3 T, although that’s not the only key metric when it comes to magnet and motor design: operating temperature is vital too.
That’s because the performance of PMs falls as they warm up and once they go above the “Curie point” (about 320 °C for neodymium magnets), they completely demagnetize – rendering the motor useless. Another important thing about all rare-earth magnets, including neodymium, cobalt and samarium, is that they have a high coercivity, meaning they don’t demagnetize easily when in operation. To make the highest coercivity and best temperature performance magnets you also need small amounts of other heavy rare earths such as dysprosium, terbium and praseodymium.

A question of supply
Trouble is, rare-earth elements are in short supply. It’s not because they are intrinsically rare, their name simply comes from their location in the periodic table. According to a report last year from Magnetics & Materials LLC, by 2030 the world will need 55,000 more tonnes of neodymium magnets than are likely to be available, with 40% of the total demand expected to come from electric vehicles and 11% from wind turbines.
China currently makes 90% of all the world’s neodymium magnets, which is why the US, the EU and others are all trying to develop their capabilities in the supply chain so as not to be disadvantaged. Prices of some rare-earth materials have skyrocketed, prompting a huge amount of research into new magnet compositions, recycling of existing magnets and advanced AC induction motors.
Whichever way you look at it, we’re going to need a lot of magnets if we are to green the economy.
|
Physics
|
U.S. scientists have achieved “ignition” — a fusion reaction that produced more energy than it took to create — a critical milestone for nuclear fusion and a step forward in the pursuit of a nearly limitless source of clean energy, Energy Department officials said Tuesday. Nuclear fusion, the process that powers the sun and other stars, occurs when two atoms’ nuclei collide under extreme temperatures, causing a reaction that can generate incredible amounts of energy with few environmental costs. Scientists have been chasing the promise of fusion since the dawn of the atomic age, but had yet to cross a threshold in which more energy was created by a fusion reaction than the energy needed to produce it. To exceed that milestone, researchers at the Lawrence Livermore National Laboratory’s National Ignition Facility last week fired the energy of 192 laser beams at a cylindrical target called a hohlraum, creating x-ray radiation. The radiation imploded a tiny, diamond capsule filled with two isotopes of hydrogen, deuterium and tritium, releasing energy. "We have taken the first tentative steps toward a clean energy source," said Jill Hruby, the Energy Department's National Nuclear Security Administration. The breakthrough will not immediately open the floodgates to clean power in American homes, but it is a powerful symbol that the fundamental scientific concepts underlying the promise of fusion are sound. “There are a lot of scientists who said, ‘I don’t believe any of you guys. You’ll never make it work,’” said Stephen Bodner, a former director of the laser fusion program at the U.S. Naval Research Laboratory, or NRL. “Livermore showed — lo and behold — you can do it.” Engineering challenges remain that could take years or decades to work out before the technology could fuel power plants and transfer energy to the U.S. electrical grid, experts say. While the Livermore team achieved what researchers call a scientific break-even or energy gain, it did not achieve an engineering break-even: The inefficient lasers used in the experiment required far more energy to operate than was produced by the reaction. Scientists must now find ways to reduce inefficiencies, burn a larger portion of available fuel during the reaction and harness the energy for use as electricity, said Troy Carter, a professor in UCLA’s department of physics and astronomy and the director of the Plasma Science and Technology Institute.“The laser technology at NIF is ‘90s technology,” he said. NIF’s lasers can perform only a few shots each day. A power plant would require about 10 shots per second. Researchers would also have to figure out how to mass produce perfect capsules. The National Ignition Facility program is not designed to make energy for electricity; instead, it’s part of the “stockpile stewardship” program for U.S. nuclear weapons, which allows scientists to verify their reliability without detonation. It’s unlikely the next big advances will come out of the NIF laboratory, experts say. “The NIF facility is doing world-leading research, but it was never designed to generate electricity; it's not a power plant,” said R. David Edelman, the chief policy and global affairs officer for TAE Technologies, a California-based fusion company.Instead, advances could come from other federal laboratories or from the private sector, where investors are seeding billions into a smattering of fusion projects. 
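To put the distinction between scientific and engineering break-even into numbers: scientific gain compares the fusion yield with the laser energy delivered to the target, while engineering gain compares it with the electricity drawn from the wall to fire those lasers. The figures below are approximate values widely reported for the December 2022 shot and should be read as illustrative rather than official.

```python
# The difference between "scientific" and "engineering" break-even, in numbers.
# The figures below are approximate values widely reported for the December
# 2022 NIF shot; treat them as illustrative rather than official.

laser_energy_on_target_mj = 2.05  # energy the 192 beams delivered to the hohlraum
fusion_yield_mj = 3.15            # energy released by the imploding capsule
wall_plug_energy_mj = 300.0       # rough electrical energy drawn to fire the lasers

scientific_gain = fusion_yield_mj / laser_energy_on_target_mj
engineering_gain = fusion_yield_mj / wall_plug_energy_mj

print(f"Scientific gain (target):     {scientific_gain:.2f} -> ignition, since it exceeds 1")
print(f"Engineering gain (wall plug): {engineering_gain:.3f} -> still far below 1")
```

Closing that roughly hundredfold gap on the wall-plug side is largely an engineering problem, which is where the national laboratories and private ventures discussed next come in.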
Some projects use similar technology to what NIF has demonstrated; others are pursuing a technology that uses magnets to confine plasma at extreme temperatures to force fusion reactions. Fusion companies reported more than $4.7 billion in total private investment commitments through the end of the year, according to the Fusion Industry Association. Fusion projects could see a boost in federal funding in the U.S., too. The Inflation Reduction Act provided millions in new funding for fusion projects and the White House this year convened the first fusion summit and developed a 10-year plan to commercialize fusion technology. Bodner said he’s always been cautious with predictions about nuclear fusion in the past, but became convinced in the last few years that a fusion program could succeed if the federal government shifted its focus from weaponry to energy production. “I think well within a decade, a well-run program would either succeed or fail. The technology is ready to go. What’s mainly needed is the money,” he said, adding the next step would be to build a more efficient, high-powered laser.With fusion, there is no risk of a nuclear meltdown and it produces only helium as waste. Fusion reactors use relatively little fuel and cannot be used for nuclear warfare. The core materials — deuterium and tritium — are theoretically plentiful. Deuterium is abundant and can be extracted from water, Carter said. Tritium can be extracted from lithium, which is also in ocean water. “Hundreds of thousands to millions of years of fuel for all energy we need on earth is in seawater,” Carter said. If scientists can work out more of the problems that have dogged fusion for seven decades, it would be a reliable source of power to work in concert with wind and solar power to boost the clean energy movement. “You can deploy it anywhere and it can be always available,” Carter said. “If we deploy it in time, I think it could fill a lot of the energy needs of the future.”Researchers view it as a complement for wind and solar power, which are scaling up quickly and falling in cost but remain dependent on particular weather conditions and are difficult to store with today’s technology. From a cost perspective, “I don’t think you’ll ever compete with wind and solar. They just sit there and turn out energy, there’s nothing simpler than that,” Bodner said. Paul Dabbar, the former under secretary of energy for science until 2021, compared the process of scaling the experimental reaction at a lab into a commercially viable source of electricity to the evolution of electric vehicles over the past several decades. “It takes a while, but it’s doable,” said Dabbar, now a visiting fellow at Columbia University’s Center on Global Energy Policy. “What we did 15 years ago has now turned into whole new industries. 
So the prospect of this not in two years, but in 10 to 15 years, similar to what happened with EVs and lithium-ion is absolutely possible.” Dabbar predicted that developing the experimental process into commercial power plants would require additional “hard science” from the federally-supported National Laboratories as well as major universities that conduct fusion research, plus start-ups and private-sector companies with experience in building power plants. As humans seek ways to power modern life without harming the planet, Dabbar said it was fitting that the “ultimate source” of clean energy would be the one we see every day in the sky: The sun, whose process of nuclear fusion scientists have spent decades trying to recreate. “Nature decided that fusing hydrogen atoms was the most efficient way to make energy. The universe is populated primarily with hydrogen,” Dabbar said. “So nature’s already made its decision on what’s best.”
|
Physics
|
Anna Mani was an Indian physicist and meteorologist who made many valuable contributions to the design of weather observation instruments, playing a vital role in making India self-reliant in measuring aspects of the weather. She was also an early advocate for harnessing solar and wind power as alternative energy sources, foreseeing the benefits they promised for her country. To honor her contribution to science, Google on Tuesday will dedicate its Doodle to Mani in celebration of her 104th birthday. Mani was born Aug. 23, 1918, in Peermade, a village in the Indian state of Kerala. The seventh of eight children, Mani enjoyed an upper-class upbringing in a time when men were being prepared for professional careers and women readied for domestic life. But Mani had other interests. A voracious reader, Mani is said to have read almost all the books in the local public library. For her eighth birthday, at her request, she was given a set of Encyclopedia Britannica instead of her family's traditional gift of diamond earrings. After earning a bachelor's degree in chemistry and physics in 1939, Mani went on to author five papers on the spectroscopy of diamonds and rubies but was denied a Ph.D. because she hadn't first earned a master's degree. In 1945, she won a scholarship to study in England, learning about meteorology and the instruments necessary to measure changes in the weather. Three years later, she returned to India to work for the India Meteorological Department, where she helped the country produce its own weather-monitoring instruments. By 1953, she was leading the division, simplifying the design and production of more than 100 weather instruments. In the 1950s, she established a network of solar radiation monitoring stations across India for future solar energy projects. She also created a workshop to make instruments that measured wind speed and solar radiation. Her interest in studying ozone -- the gas shielding life on Earth from harmful ultraviolet radiation -- led to the creation of the ozonesonde, a balloon-borne instrument for measuring ozone levels. Mani would go on to work for the Indian government, serving as deputy director general of the Indian Meteorological Department until her retirement in 1976. She died in 2001 at the age of 82.
|
Physics
|
[Figure: Plan comparisons. Radiotherapy treatment plan for a head-and-neck cancer patient, with the planning target volume (PTV) outlined in red. The graph shows the physical dose–volume histogram (DVH), the radiobiological DVH from EQD2VH and the point-dose calculation method, for the PTV and an organ-at-risk. Courtesy: CC BY 4.0/J. Appl. Clin. Med. Phys. 10.1002/acm2.13716]
Cyber attacks on hospitals can have a devastating impact, especially for radiology and radiotherapy departments that are particularly reliant upon technology to function. A case in point is the nationwide cyber attack on Ireland’s public health services in May 2021, which interrupted scheduled radiotherapy treatments for some cancer patients for up to 12 days.
Following this incident, medical physicists at University Hospital Galway and the National University of Ireland Galway began to develop an in-house tool to help create revised radiotherapy treatment plans after interruptions occur. The tool – named EQD2VH – calculates treatment compensation plans and enables visual comparison of all plan options, as well as individual analysis of each structure in a patient’s plan. The researchers describe the new software tool in the Journal of Applied Clinical Medical Physics.
Radiotherapy is most commonly delivered over several weeks in a series of small radiation doses (conventionally 2 Gy) called fractions. Unplanned treatment gaps – whether due to cyber attacks, machinery breakdowns or patient illness – can cause significant setbacks. During such gaps, cancer cells rapidly repopulate in tumour tissue, resulting in a decrease in the radiobiological dose to the planning target volume (PTV).
To address this problem, EQD2VH uses dose–volume histogram (DVH) information extracted from original patient plans to perform treatment gap calculations. Lead author Katie O’Shea, of the National University of Ireland Galway, and colleagues explain that the software converts the physical dose in each dose bin (the range of dose between data points in a DVH) into the biologically effective dose (BED). This accounts for both repopulation effects in the PTV and the effects of sub-lethal damage to unrepaired normal tissue in organs-at-risk (OARs).
After modifying the BED conversion to account for dose variations in each structure, using a variable-dose method, the tool converts the BED for each structure into the equivalent dose in 2 Gy fractions (EQD2). This normalizes each treatment to conventional fractionation and makes it possible to sum plans with different fractionation schemes together. The resultant EQD2-based DVH provides a 2D representation of the impact of treatment gap compensation strategies on both PTV and OAR dose distributions, as compared with the prescribed treatment plan.
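To make that conversion concrete, the per-bin calculation can be sketched in a few lines. This is only a minimal illustration of the standard linear-quadratic formulas described above, not the EQD2VH code itself: the α/β ratio, fraction number and dose bins are assumptions chosen for the example, and the repopulation correction applied to the PTV is omitted.
```python
# Minimal sketch of the per-bin DVH conversion (not the EQD2VH implementation).
# The alpha/beta ratio, fraction number and dose bins are illustrative assumptions;
# the PTV repopulation term is omitted.

def bed(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Biologically effective dose from the linear-quadratic model."""
    return total_dose_gy * (1.0 + dose_per_fraction_gy / alpha_beta_gy)

def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Equivalent dose in 2 Gy fractions: BED renormalised to d = 2 Gy."""
    return bed(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy) / (1.0 + 2.0 / alpha_beta_gy)

# Convert each dose bin of an organ-at-risk DVH (alpha/beta ~ 3 Gy) from a
# 30-fraction plan; the dose per fraction in a bin is that bin's total dose / 30.
n_fractions, alpha_beta_oar = 30, 3.0
dvh_bins_gy = [10.0, 30.0, 50.0, 66.0]
eqd2_bins = [eqd2(d, d / n_fractions, alpha_beta_oar) for d in dvh_bins_gy]
print([round(x, 1) for x in eqd2_bins])
```
Summing such EQD2 values across the original plan and a compensation plan is what allows schedules with different fractionation to be compared on a common scale.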
To evaluate EQD2VH as a clinical decision-making tool, the researchers selected five high-priority patients with rapidly growing tumours whose treatment gaps should not surpass two days. This included four patients with head-and-neck cancers undergoing intensity-modulated radiotherapy and one lung cancer patient undergoing 3D conformal radiotherapy, who had treatment gaps of 12 or 13 days. These cases enabled the team to evaluate the use of EQD2VH for patients with both conventional (2 Gy) and non-conventional (2.2 Gy) fractionation and different treatment gap times (from nine to 46 days into their therapy).
The revised treatment plans for each patient were based on their original plans with either the dose-per-fraction or the number of fractions changed. O’Shea explains that each patient’s revised plan and schedule used a combination of twice-daily fractionation, weekend treatments and increased dose to the target volume to reduce the effects of cell repopulation.
The plans limited treatment to six fractions per week and precluded twice-daily fractionation on consecutive days. If the prescribed treatment could not be completed in the required time frame, the researchers investigated plans using hypofractionation (delivery of increased dose per fraction). They were able to visually and quantitatively compare various revised plans with the patient’s original plan to determine which would deliver the best dose to the PTV with the least dose to OARs.
The researchers note that the 2D representation of each individual structure in EQD2VH provides a more in-depth analysis than the Royal College of Radiologists (RCR)-recommended 1D point-dose calculation method that’s currently used to manage radiotherapy gaps. A 1D representation of dose distribution within a volume does not account for OARs typically having a non-uniform dose distribution and could overestimate OAR dose. In addition, the EQD2VH tool can create plans for any length of treatment gap, whereas the RCR guidelines are based on a standard gap of four to five days.
Additional benefits of the new tool include the ability to monitor each OAR in the patient’s plan to minimize further dose increases that could cause more acute toxicities. Users can also calculate the impact of different treatment gap durations on a patient’s treatment. This capability can help determine whether to transfer a patient to a different clinic if the gap at the scheduled clinic is too long or whether the patient can safely wait for treatment to resume. EQD2VH can also account for changes in the overall treatment time and sublethal damage in normal tissue, which a commercial system may not be able to do. Most importantly, the tool does not need to be connected to a hospital network to function – it can be used even if a hospital’s servers are still crippled by a cyber attack.
“We are still evaluating EQD2VH as a decision-making tool,” says principal investigator Margaret Moore from University Hospital Galway. “It is part of a current project reviewing patients receiving multiple re-treatments for palliative regimes where the dose-per-fraction is non-standard and where there may be a choice of fractionation schemes to consider. Converting treatment dose from a number of treatments with differing fractionations to EQD2 allows the radiobiological dose to target tissues and OARs to be accumulated for an overall dose overview, which can assist with the decision-making for the choice of further treatment.”
|
Physics
|
The Mayneord-Phillips Educational Programme (MPEP), run by the Institute of Physics, the British Institute of Radiology, and the Institute of Physics and Engineering in Medicine, is a forum for early-career professionals to explore new developments in medical physics, enhance subject-knowledge and form long-term networks. The programme hosts approximately 30 early-career professionals and has been successfully running since 2012. The aim is to host the event every two years.
The topic of the 2022 school will be Advances in Solid State Detectors for Early Diagnosis, an area whose importance the current COVID pandemic has underlined by showing how vital it is for scientists to explore and develop new solutions. The School will provide an opportunity for NHS Clinical Scientists, post-graduate students, post-doctoral researchers, and engineers and innovators working on product solutions in industry to develop their interest and expertise in an exciting, nascent topic.
Continuous Professional Development (CPD) is fundamental in a field where new production cycles and technological advancements are being introduced rapidly, requiring previously-acquired knowledge to be renewed and updated. On a wider scale, scientists engaging in CPD activities on advancements in the field will provide a range of societal benefits including better healthcare practices, improved patient experiences and give reassurances to the general public that our scientists operate to high standards.
|
Physics
|
A train that’s faster than a plane What sounded like science fiction may become a reality. Traveling in a train at 1,000 km/h (620 mph), faster than a plane, will be possible in the near future. TRANSPOD’S ‘FLUXJET’ WILL REDEFINE PASSENGER + CARGO TRANSPORT TransPod, the Canadian startup building the world’s leading ultra-high-speed ground transportation system — the TransPod Line — has just unveiled ‘FluxJet’, an industry-defining innovation that will transform the way passengers and cargo are moved. Based on groundbreaking advances in propulsion and fossil-fuel-free energy systems, the fully electric vehicle is designed as a hybrid between aircraft and train. It features technological leaps in contactless power transmission and a new field of physics called veillance flux — enabling it to travel in a protected guideway at over 1000 km/h – faster than a jet and thrice as fast as a high-speed train. ‘This milestone is a major leap forward,’ says Ryan Janzen, co-founder and CTO at TransPod. ‘The FluxJet is at a nexus of scientific research, industrial development, and massive infrastructure to address passengers’ needs and reduce our dependence on fossil-fuel-heavy jets and highways.’ ENABLING AFFORDABLE AND ULTRA-HIGH-SPEED TRAVEL Preliminary construction work on the ‘FluxJet’, including the environmental impact assessment, has already begun. This ultra-high-speed vehicle will operate exclusively on the TransPod Line, a network system with stations in key locations and major cities and featuring high-frequency departures designed to enable fast, affordable, and safe travel. Most recently, TransPod announced the next phase of an $18B US infrastructure project to build the network system that will connect Calgary and Edmonton in Alberta, Canada. According to the startup, this project will create up to 140,000 jobs and add $19.2B to the region’s GDP throughout construction. In addition, once the TransPod Line is in operation, it will cost passengers approximately 44% less than a plane ticket to travel the corridor and help reduce CO2 emissions by 636,000 tonnes per year. ‘The FluxJet is a first for Canadian innovation and is the next great infrastructure project to be brought worldwide,’ continues Janzen. ‘The TransPod Line is being developed in collaboration with our partners in Europe, USA, and beyond, including universities, research centers, the aerospace industry, architecture, railway, and construction partners.’ At TransPod’s unveiling event in Toronto, a scaled-down prototype was featured in a live demonstration showing its flight capabilities. The almost 1-tonne vehicle engaged in take-off, travel, and landing procedures within its guideway.
|
Physics
|
For nearly 650 years, the fortress walls in the Chinese city of Xi’an have served as a formidable barrier around the central city. At 12 meters high and up to 18 meters thick, they are impervious to almost everything — except subatomic particles called muons.
Now, thanks to their penetrating abilities, muons may be key to ensuring that the walls that once protected the treasures of the first Ming Dynasty — and are now a national architectural treasure in their own right — stand for centuries more.
A refined detection method has provided the highest-resolution muon scans yet produced of any archaeological structure, researchers report in the Jan. 7 Journal of Applied Physics. The scans revealed interior density fluctuations as small as a meter across inside one section of the Xi’an ramparts. The fluctuations could be signs of dangerous flaws or “hidden structures archaeologically interesting for discovery and investigation,” says nuclear physicist Zhiyi Liu of Lanzhou University in China.
Muons are like electrons, only heavier. They rain down all over the planet, produced when charged particles called cosmic rays hit the atmosphere. Although muons can travel deep into earth and stone, they are scattered or absorbed depending on the material they encounter. Counting the ones that pass through makes them useful for studying volcano interiors, scanning pyramids for hidden chambers and even searching for contraband stashed in containers impervious to X-rays (SN: 4/22/22).
Though muons stream down continuously, their numbers are small enough that the researchers had to deploy six detectors for a week at a time to collect enough data for 3-D scans of the rampart.
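As a rough sense of how transmitted counts become a density map, consider the toy calculation below. It assumes a simple exponential attenuation along each sight-line; the real analysis relies on measured muon energy spectra, detector acceptance and scattering models, and the absorption length and count rates here are invented purely for illustration.
```python
import math

# Toy illustration (not the Lanzhou University analysis): inferring relative
# column density from transmitted muon counts, assuming exponential attenuation
# with opacity proportional to density times path length. The absorption length
# and the count numbers are made up for the example.

lam = 25.0                 # assumed effective absorption length, in metres of reference wall material
open_sky_rate = 600.0      # assumed muons per week per sight-line with no wall in the way

def relative_density(counts, path_length_m):
    """Metre-equivalents of reference material per metre of actual path."""
    opacity = -math.log(counts / open_sky_rate)   # total opacity along the sight-line
    return opacity * lam / path_length_m

# Two sight-lines through the same 18 m thick section of rampart:
print(relative_density(counts=290.0, path_length_m=18.0))   # ~1.0: close to the reference density
print(relative_density(counts=340.0, path_length_m=18.0))   # ~0.8: a less dense (or void-rich) column
```
Repeating this comparison over many crossing sight-lines from the six detector positions is what lets the density variations be reconstructed in three dimensions.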
It’s now up to conservationists to determine how to address any density fluctuations that might indicate dangerous flaws, or historical surprises, inside the Xi’an walls.
|
Physics
|
Artificial intelligence has the potential to revolutionize quantum physics. In a recent study, a team of researchers from the University of Toronto used artificial intelligence to reduce a 100,000-equation quantum physics problem to only four equations. The research was published in the journal Nature Communications. In the study, the team used artificial intelligence to study a specific type of quantum system known as a many-body localized system. Many-body localization is a phenomenon in which a group of particles become trapped in a local region of space and time, and their dynamics are governed by the laws of quantum mechanics. While many-body localization is well-understood in theory, it is difficult to study in practice because of the large number of particles involved. The typical way to simulate many-body localization is to use a computer to solve the Schrödinger equation, which is a set of differential equations that describe the behavior of quantum systems. However, the Schrödinger equation becomes increasingly complex as the number of particles increases. For example, a system with just two particles has four equations, while a system with three particles has nine equations. By that scaling, a system with 100,000 particles would have 10 billion equations. The team used artificial intelligence to reduce the number of equations that need to be solved. They did this by training a neural network to learn the solutions to the Schrödinger equation for various values of the parameters that describe the system. Once the neural network was trained, it could be used to approximate the solutions for any value of the parameters, without having to solve the Schrödinger equation explicitly. The team found that their artificial intelligence algorithm was able to reduce the number of equations that need to be solved from 100,000 to just four. This is a significant reduction, and it could have important implications for the study of many-body localization and other quantum phenomena. The ability to approximate solutions to the Schrödinger equation using artificial intelligence opens up new possibilities for studying quantum systems. In particular, it may be possible to use artificial intelligence to study systems that are too large or too complex to be studied using traditional methods. This could lead to a better understanding of many-body localization and other quantum phenomena, and could ultimately lead to new applications of quantum physics in technology and medicine.
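The surrogate-model workflow the article describes, training a network on solutions computed for sampled parameter values and then reusing it as a cheap stand-in, can be sketched generically. This is not the authors' code; the toy "exact solver" below is a placeholder for whatever expensive calculation one wants to avoid repeating, and the network size and training settings are arbitrary choices.
```python
import torch
import torch.nn as nn

# Generic sketch of a learned surrogate (not the study's actual method or code).
# The "exact_solver" stands in for an expensive calculation that maps model
# parameters to a handful of summary quantities.

def exact_solver(params: torch.Tensor) -> torch.Tensor:
    x, y = params[:, :1], params[:, 1:]
    return torch.cat([torch.sin(x * y), x**2, torch.exp(-y), x + y], dim=1)

surrogate = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 4),
)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# Train on reference solutions computed for randomly sampled parameter values.
for step in range(2000):
    params = torch.rand(256, 2) * 2.0 - 1.0      # sample parameters in [-1, 1]^2
    target = exact_solver(params)                # "expensive" reference solutions
    loss = nn.functional.mse_loss(surrogate(params), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained network now approximates the solution for unseen parameter values
# without the expensive solve being run again.
print(surrogate(torch.tensor([[0.3, -0.7]])))
```
The payoff is the same one the article points to: once the network has learned the mapping, exploring new parameter values costs almost nothing compared with solving the full set of equations each time.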
|
Physics
|
White holes are theoretical space objects that are the opposites of black holes: where a black hole swallows everything, a white hole lets nothing enter it. Despite the name, a white hole would not actually appear white. None has ever been observed, and some scientists now believe that white holes may not exist.
How are White Holes Formed?
Initially, things in the universe have an ordered form, but as time passes they become more and more disordered; this is the second law of thermodynamics. For that reason, white holes may not last long: they would continually release matter into space, and this matter would gather around the white hole until it collapses and forms a black hole.
Event Horizon in White Holes
An event horizon is like a boundary. For a black hole, once you cross this line there is no turning back. For a white hole, it works the other way: the horizon is an invisible barrier that nothing outside can cross. A white hole would have acquired a massive amount of material at some point, then stopped absorbing it and begun expelling it, which means it would be spitting out remnants of the past.
The General Theory of Relativity
In the 20th century, Albert Einstein discovered something incredible: gravity can bend space and time. Black holes, and their hypothetical white counterparts, emerged as predictions of his general theory of relativity, and it took scientists more than 40 years to understand these objects or even prove that black holes exist.
Consider the first photo of a black hole, the one located at the center of the galaxy M87. Even though many people made fun of the blurry image, the scientists deserve enormous credit for their work: the photo was obtained through an international collaboration of eight ground-based telescopes, and obtaining proof of black holes, even in this indirect form, required extensive effort and resources.
Existence of White Holes
In 2022, three scientists received the Nobel Prize in Physics for experiments on quantum entanglement that overturned assumptions Einstein had defended, a reminder that the universe is far more complicated than we once thought. In that spirit, white holes could exist: although they may seem bizarre and beyond our understanding, they remain a possibility.
However, no evidence of white holes has been found yet, which may be due to our limited understanding of space.
Are White Holes Possible?
Scientists question whether white holes are possible. We know how black holes are born, but for a white hole to appear this whole process would have to run in reverse, which seems impossible. Even so, some physicists want to rescue the idea of white holes from scientific oblivion. Their argument starts with the question, “What happens to objects entering a black hole?” According to the laws of physics, matter in the universe cannot disappear into a state of non-existence. It never disappears; it only changes form.
The Difference Between Black Holes and White Holes
A black hole is a mysterious space object with an immensely powerful attractive force. Its gravitational pull is so intense that even light cannot escape, which is why it appears black to us. Black holes form when stars at least about three times more massive than our Sun burn out at the end of their lives.
As such a star ages, it sheds its outer gas layers and its core shrinks into a small ball under immense pressure. When the core can no longer withstand that pressure, it collapses in on itself and becomes a black hole. Visually, a white hole would be indistinguishable from a black hole.
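A quick calculation shows why nothing, not even light, escapes once that collapse is complete. The snippet below uses the standard Schwarzschild radius formula with rounded constants; it is an editorial illustration, not a figure from the article. A three-solar-mass core would have to be squeezed inside a sphere only about nine kilometres across before light could no longer escape it.
```python
# Schwarzschild radius r_s = 2GM/c^2 for a three-solar-mass core (rounded constants).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M = 3.0 * 1.989e30   # three solar masses, kg

r_s = 2.0 * G * M / c**2
print(f"r_s ~ {r_s / 1000:.1f} km")   # roughly 9 km
```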
Conclusion
There is a suggestion that white holes may emerge as the offspring of black holes. A black hole can shrink to a tiny size as it ages, and the white hole it leaves behind would be minuscule, comparable to a particle and weighing about as much as a strand of hair. It would not have the incredible gravitational mass of its ancestor, but it would retain whatever had fallen inside.
It is possible that our entire universe was formed from the creation of a white hole. Physicists claim that The Big Bang resembles the possible behavior of a white hole. Both of them are very similar mathematically.
|
Physics
|
Story at a glance Energy Secretary Jennifer Granholm is expected to announce a major scientific breakthrough on Tuesday, according to numerous media reports. It’s anticipated Granholm will announce federal scientists successfully achieved a net energy gain in a nuclear fusion reaction for the first time. If true, the breakthrough will mark a significant step forward in the quest for a zero-carbon energy source. Government scientists are expected to make a major announcement Tuesday regarding a breakthrough in nuclear fusion technology, according to a report from the Financial Times. Preliminary results of an experiment are anticipated to show that scientists have achieved a net energy gain in a fusion reaction for the first time, marking a step forward in pursuit of near limitless, zero-carbon power that doesn’t produce large amounts of radioactive waste. But what exactly is this technology and why do some hail it as a “holy grail” for clean energy? Nuclear fusion is the same energy that powers the sun and stars, and is created when nuclei from two light atoms combine at high speeds. These atoms go on to create the nucleus of a heavier atom and generate energy in the process. That energy can be used to power homes and offices without emitting carbon into the air, dumping radioactive waste into the environment, and absent the threat of a nuclear meltdown. In contrast, current nuclear power plants create energy by splitting uranium atoms (i.e. nuclear fission) and produce long-lived toxic nuclear waste. Nuclear fusion also uses fewer resources than other clean energy options like solar or wind power. Its main fuel consists of two hydrogen isotopes: deuterium, which can be found in water, and tritium, which can be produced from lithium. The energy created from one gram of deuterium-tritium is equal to the energy from approximately 2,400 gallons of oil. Overall, “the fuel is abundant,” said Eugenio Schuster, a professor in the department of mechanical engineering and mechanics at Lehigh University, in an interview with Changing America. “When we compare with nuclear fission, it has some advantages too, in the sense that there is no risk of nuclear accidents as we know them,” Schuster added. “Although the probability of these accidents to happen in present nuclear-fission power plants with the technology we have is almost zero”. Nuclear fusion has been touted as one way to bring cheap electricity to impoverished regions, as, according to the Financial Times, just a small cup of the hydrogen fuel produced could theoretically power a house for hundreds of years. But to achieve fusion, it is crucial that reactors create more energy than they take in. The announcement from the National Ignition Facility (NIF) at the federal Lawrence Livermore National Laboratory in California, is expected to say researchers have done just that. The process is dubbed inertial confinement fusion and uses one of the largest lasers in the world to produce what is essentially a man-made star. The Financial Times reports the fusion at the California facility produced around 2.5 megajoules of energy, marking about 120 percent of the 2.1 megajoules of energy in the lasers, although the data are still being analyzed. “The achievement at NIF represents the very best in modern science,” Michael Mauel, a professor of applied physics at Columbia University, told Changing America.
“It comes from more than a hundred dedicated scientists who have worked for more than a decade to produce net fusion energy safely in the laboratory,” Mauel said. Mauel did not have early access to the details of tomorrow’s announcement, but based his comments on reports of the breakthrough. Scientists have been investigating nuclear fusion since the 1950s, and despite the anticipated breakthrough, the technology is still years away from commercial implementation. Fusion reactions take place in plasma that can reach 100 million to 200 million degrees Celsius, and can be contained by either magnetic fields or compressing the fuel with lasers. To implement the fusion on a broad scale, materials and structures need to be designed that can withstand these conditions over time. But it remains unknown whether the technology can be implemented on a wide enough scale in time to meaningfully alter the course of climate change. The U.S. power grid would also need to undergo a redesign before fusion plants become commonplace. “Fusion is a big promise in terms of generation of energy and it will play a critical role in our portfolio of green sources of energy,” Schuster said. Over the years, billions of dollars from governments and private corporations have funded research on the technology. A collaboration between China, the European Union, India, Japan, Russia, South Korea and the United States led to the creation of the International Thermonuclear Experimental Reactor (ITER) in Southern France, which uses a Russian-inspired reactor called a tokamak. In contrast to the NIF, ITER uses a magnetic field to hold the plasma in place, so researchers can extract heat without setting the walls of the reactor on fire. “Tomorrow’s announcement is not the first major breakthrough in fusion research, nor will it be the last,” Mauel said. “But, it demonstrates continued progress and makes clear how scientists are successfully developing the know-how to bring the energy of the stars to Earth.” Energy Secretary Jennifer Granholm is expected to make the announcement.
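Two of the figures quoted above are easy to sanity-check with rounded constants. The short calculation below is a back-of-envelope illustration, not official NIF data or the article's own arithmetic.
```python
# Back-of-envelope check of two figures quoted above (rounded constants).

# 1) Target gain: fusion energy out divided by laser energy delivered.
laser_in_mj, fusion_out_mj = 2.1, 2.5
print(f"gain = {fusion_out_mj / laser_in_mj:.2f}")      # ~1.19, i.e. about 120 percent

# 2) Energy content of one gram of deuterium-tritium fuel versus oil.
mev_per_reaction = 17.6                 # D + T -> He-4 + n
joule_per_mev = 1.602e-13
amu_kg = 1.661e-27
mass_per_reaction_kg = 5.03 * amu_kg    # one deuteron (~2 u) plus one triton (~3 u)
joules_per_gram = mev_per_reaction * joule_per_mev / mass_per_reaction_kg * 1e-3
gallon_of_oil_j = 6.1e9 / 42.0          # ~6.1 GJ per 42-gallon barrel of crude
print(f"{joules_per_gram:.2e} J/g ~= {joules_per_gram / gallon_of_oil_j:.0f} gallons of oil")
```
The second number comes out near 2,300 gallons, consistent with the roughly 2,400-gallon comparison quoted above once rounding is taken into account.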
|
Physics
|
ABSTRACT breaks down mind-bending scientific research, future tech, new discoveries, and major breakthroughs.Scientists who are working toward the dream of nuclear fusion, a form of power that could potentially provide abundant clean energy in the future, have discovered surprising and unexplained behavior among particles in a government laboratory, reports a new study. The results hint at the mysterious fundamental physics that underlie nuclear fusion reactions, which fuel the Sun and other stars.Researchers at the National Ignition Facility (NIF), a device at the U.S. Department of Energy’s Lawrence Livermore National Laboratory (LLNL), recently celebrated the milestone of creating what’s known as a “burning plasma,” which is an energized state of matter that is mostly sustained by “alpha particles” created by fusion reactions. The NIF has also reached the threshold of producing “ignition,” meaning fusion reactions that are self-sustaining, which is a major breakthrough, though it will likely still take decades to develop a fusion reactor—assuming it is possible at all.Now, a team led by Ed Hartouni, a physicist at LLNL, has revealed that particles inside burning plasmas have unexpectedly high energies that could open new windows into the exotic physics of fusion reactors, which “could be important for achieving robust and reproducible ignition,” according to a study published on Monday in Nature Physics.“This is a new regime of plasma; NIF diagnostics have made it possible to study these things in ways we couldn't do before,” said Hartouni in a call with Motherboard that also included study authors Alastair Moore, a physicist at LLNL, and Aidan Crilly, a research associate in plasma physics at Imperial College London. “We're able to see things at a level that we hadn't been able to see before, and there are surprises with these plasmas in an actual laboratory.”“It's a really exciting time for us to finally have an almost-igniting facility and experiments to understand this physics that we haven't really been able to understand before and begin to get to the point where we can think about what a future fusion facility might look like,” added Moore. The team discovered the strange behavior of the ions while examining observations from several experiments that have occurred at NIF in recent years. These tests involve fusion between particles called ions, which are atoms that don’t have the same number of positive and negative components (protons and electrons), leaving them with an electric charge. Using dozens of lasers to heat deuterium and tritium ions, which are both heavier versions of hydrogen, NIF researchers generate fusion reactions between the ions. In a burning plasma, the reactions between the ions produce new entities, called alpha particles, that drive up higher temperatures that, in turn, spark even more reactions as part of a thermonuclear burn. Hartouni and his colleagues have now shown that NIF experiments that produce alpha particles consistently show ions with higher energies than predicted by models, though the source of these energy boosts “is an open experimental question,” according to the study. The team presented four possible explanations for the observation, including so-called “kinetic effects” that have been speculated about in previous theories, but it will take more experiments and meticulous research to understand the underlying mechanisms at work in the plasma.“It's a mystery, but there's multiple hypotheses,” said Crilly. 
“Whether it's one on its own, like this kinetic effect, or it's a combination of them and they all add their little bit to that gap.”“It's worth thinking about how extreme these conditions are and why it's hard,” he added, noting that the NIF fusion reactions occur at temperatures around 180 million degrees Fahrenheit and in conditions that are 30 times more dense than the Sun. Within this otherworldly environment “we need to understand exactly how an alpha particle bumps into all these other particles and distributes its energy and how they all collide,” Crilly noted.To that end, the team plans to continue searching for clues about the weird ion behavior in both models and experiments. Given that NIF has produced this unprecedented glimpse into the weird world of fusion reactions, the facility is bound to find strange new insights no matter what direction their research takes them.“We've never been able to study this before,” Moore said. “This is the first burning plasma that we've ever created on the planet, so it's pretty amazing.”
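One piece of textbook physics sits behind those alpha particles: in each deuterium-tritium reaction the 17.6 MeV released is shared between the alpha particle and a neutron in inverse proportion to their masses, so the alpha carries roughly 3.5 MeV that it can deposit back into the plasma. A rounded-numbers check, included here as an illustration rather than as data from the NIF experiments:
```python
# Energy split in D + T -> He-4 (alpha) + n, from momentum conservation (rounded masses).
q_mev = 17.6
m_alpha, m_neutron = 4.0, 1.0      # approximate masses in atomic mass units

e_neutron = q_mev * m_alpha / (m_alpha + m_neutron)    # ~14.1 MeV
e_alpha = q_mev * m_neutron / (m_alpha + m_neutron)    # ~3.5 MeV
print(f"alpha: {e_alpha:.1f} MeV, neutron: {e_neutron:.1f} MeV")
```
How that 3.5 MeV is redistributed through collisions is exactly the kind of detail the unexplained ion energies may be probing.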
|
Physics
|
Physicists have invented a new type of analogue quantum computer that can tackle hard physics problems that the most powerful digital supercomputers cannot solve.
New research published in Nature Physics by collaborating scientists from Stanford University in the USA and University College Dublin (UCD) in Ireland has shown that a novel type of highly-specialized analogue computer, whose circuits feature quantum components, can solve problems from the cutting edge of quantum physics that were previously beyond reach. When scaled up, such devices may be able to shed light on some of the most important unsolved problems in physics.
For example, scientists and engineers have long wanted to gain a better understanding of superconductivity, because existing superconducting materials – such as those used in MRI machines, high-speed trains and long-distance energy-efficient power networks – currently operate only at extremely low temperatures, limiting their wider use. The holy grail of materials science is to find materials that are superconducting at room temperature, which would revolutionize their use in a host of technologies.
Dr Andrew Mitchell is Director of the UCD Centre for Quantum Engineering, Science, and Technology (C-QuEST), a theoretical physicist at UCD School of Physics and a co-author of the paper. He said: “Certain problems are simply too complex for even the fastest digital classical computers to solve. The accurate simulation of complex quantum materials such as the high-temperature superconductors is a really important example – that kind of computation is far beyond current capabilities because of the exponential computing time and memory requirements needed to simulate the properties of realistic models.
“However, the technological and engineering advances driving the digital revolution have brought with them the unprecedented ability to control matter at the nanoscale. This has enabled us to design specialised analogue computers, called ‘Quantum Simulators’, that solve specific models in quantum physics by leveraging the inherent quantum mechanical properties of its nanoscale components. While we have not yet been able to build an all-purpose programmable quantum computer with sufficient power to solve all of the open problems in physics, what we can now do is build bespoke analogue devices with quantum components that can solve specific quantum physics problems.”
The architecture for these new quantum devices involves hybrid metal-semiconductor components incorporated into a nanoelectronic circuit, devised by researchers at Stanford, UCD and the Department of Energy's SLAC National Accelerator Laboratory (located at Stanford). Stanford’s Experimental Nanoscience Group, led by Professor David Goldhaber-Gordon, built and operated the device, while the theory and modelling was done by Dr Mitchell at UCD.
Prof Goldhaber-Gordon, who is a researcher with the Stanford Institute for Materials and Energy Sciences, said: "We're always making mathematical models that we hope will capture the essence of phenomena we're interested in, but even if we believe they're correct, they're often not solvable in a reasonable amount of time."
With a Quantum Simulator, "we have these knobs to turn that no one's ever had before," Prof Goldhaber-Gordon said.
WHY ANALOGUE?
The essential idea of these analogue devices, Goldhaber-Gordon said, is to build a kind of hardware analogy to the problem you want to solve, rather than writing some computer code for a programmable digital computer. For example, say that you wanted to predict the motions of the planets in the night sky and the timing of eclipses. You could do that by constructing a mechanical model of the solar system, where someone turns a crank, and rotating interlocking gears represent the motion of the moon and planets. In fact, such a mechanism was discovered in an ancient shipwreck off the coast of a Greek island dating back more than 2000 years. This device can be seen as a very early analogue computer.
Not to be sniffed at, analogue machines were used even into the late 20th century for mathematical calculations that were too hard for the most advanced digital computers at the time.
But to solve quantum physics problems, the devices need to involve quantum components. The new Quantum Simulator architecture involves electronic circuits with nanoscale components whose properties are governed by the laws of quantum mechanics. Importantly, many such components can be fabricated, each one behaving essentially identically to the others. This is crucial for analogue simulation of quantum materials, where each of the electronic components in the circuit is a proxy for an atom being simulated, and behaves like an “artificial atom”. Just as different atoms of the same type in a material behave identically, so too must the different electronic components of the analogue computer.
The new design therefore offers a unique pathway for scaling up the technology from individual units to large networks capable of simulating bulk quantum matter. Furthermore, the researchers showed that new microscopic quantum interactions can be engineered in such devices. The work is a step towards developing a new generation of scalable solid-state analogue quantum computers.
Micrograph image of the new Quantum Simulator, which features two coupled nano-sized metal-semiconductor components embedded in an electronic circuit.
|
Physics
|
Black holes are among the most awesome and mysterious objects in the known Universe. These gravitational behemoths form when massive stars undergo gravitational collapse at the end of their lifespans and shed their outer layers in a massive explosion (a supernova). Meanwhile, the stellar remnant becomes so dense that the curvature of spacetime becomes infinite in its vicinity and its gravity so intense that nothing (not even light) can escape its surface. This makes them impossible to observe using conventional optical telescopes that study objects in visible light.
As a result, astronomers typically search for black holes in non-visible wavelengths or by observing their effect on objects in their vicinity. After consulting the Gaia Data Release 3 (DR3), a team of astronomers led by the University of Alabama Huntsville (UAH) recently observed a black hole in our cosmic backyard. As they describe in their study, this monster black hole is roughly twelve times the mass of our Sun and located about 1,550 light-years from Earth. Because of its mass and relative proximity, this black hole presents opportunities for astrophysicists.
The study was led by Dr. Sukanya Chakrabarti, the Pei-Ling Chan Endowed Chair in the Department of Physics at UAH. She was joined by astronomers from the Observatories of the Carnegie Institution for Science, the Rochester Institute of Technology, the SETI Institute Carl Sagan Center, UC Santa Cruz, UC Berkeley, the University of Notre Dame, Wisconsin-Milwaukee, Hawaii, and Yale. The paper that describes their findings recently appeared online and is being reviewed by the Astrophysical Journal. The Magellan Telescopes at the Las Campanas Observatory in Chile. Credit: Carnegie Institute of Science
Black holes are of particular interest to astronomers because they offer opportunities to study the laws of physics under the most extreme conditions. In some cases, like the supermassive black holes (SMBH) that reside at the center of most massive galaxies, they also play a vital role in galaxy formation and evolution. However, there are still unresolved questions regarding the role noninteracting black holes play in galactic evolution. These binary systems consist of a black hole and a star, where the black hole does not draw material from the stellar companion. Said Dr. Chakrabarti in a UAH press release:
“It is not yet clear how these noninteracting black holes affect galactic dynamics in the Milky Way. If they are numerous, they may well affect the formation of our galaxy and its internal dynamics. We searched for objects that were reported to have large companion masses but whose brightness could be attributed to a single visible star. Thus, you have a good reason to think that the companion is dark.”
To find the black hole, Dr. Chakrabarti and her team analyzed data from the Gaia DR3, which included information on nearly 200,000 binary stars observed by the European Space Agency’s (ESA) Gaia Observatory. The team followed up on sources of interest by consulting spectrographic measurements from other telescopes, like the Lick Observatory’s Automated Planet Finder, the Magellan Telescopes, and the W.M. Keck Observatory in Hawaii. These measurements showed a main sequence star subject to a powerful gravitational force. As Dr. Chakrabarti explained:
“The pull of the black hole on the visible sun-like star can be determined from these spectroscopic measurements, which give us a line-of-sight velocity due to a Doppler shift. By analyzing the line-of-sight velocities of the visible star – and this visible star is akin to our own Sun – we can infer how massive the black hole companion is, as well as the period of rotation, and how eccentric the orbit is. These spectroscopic measurements independently confirmed the Gaia solution that also indicated that this binary system is composed of a visible star that is orbiting a very massive object.” Members of GCOI posing in front of the Keck Observatory at the summit of Maunakea, Hawaii. Credit: W.M. Keck Observatory
Interacting black holes are typically easier to observe in visible light because they are in tighter orbits and pull material from their stellar companions. This material forms a torus-shaped accretion disk around the black hole that is accelerated to relativistic velocities (close to the speed of light), becoming highly energetic and emitting X-ray radiation. Because noninteracting black holes have wider orbits and do not form these disks, their presence has to be inferred from analyzing the motions of the visible star. Said Dr. Chakrabarti:
“The majority of black holes in binary systems are in X-ray binaries – in other words, they are bright in X-rays due to some interaction with the black hole, often due to the black hole devouring the other star. As the stuff from the other star falls down this deep gravitational potential well, we can see X-rays. In this case, we’re looking at a monster black hole, but it’s on a long-period orbit of 185 days, or about half a year. It’s pretty far from the visible star and not making any advances toward it.”
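The kind of inference described in those quotes can be sketched with the binary mass function, which turns the orbital period and the visible star's velocity amplitude into a lower limit on the unseen companion's mass. The 185-day period is from the article; the velocity semi-amplitude and the edge-on inclination below are assumptions picked only so the toy numbers land near the reported mass, not the team's measured values.
```python
import math

# Illustrative mass-function estimate (not the team's actual fit).
G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg

P = 185.0 * 86400.0      # orbital period from the article, in seconds
K = 80.0e3               # assumed radial-velocity semi-amplitude of the visible star, m/s

# Binary mass function: a strict lower limit on the unseen companion's mass.
f_m = P * K**3 / (2.0 * math.pi * G)
print(f"mass function = {f_m / M_SUN:.1f} solar masses")

# Solve f(M) = M2^3 sin^3(i) / (M1 + M2)^2 for M2, assuming an edge-on orbit
# (sin i = 1) and a one-solar-mass visible star, by fixed-point iteration.
M1, M2 = 1.0 * M_SUN, 1.0 * M_SUN
for _ in range(50):
    M2 = (f_m * (M1 + M2) ** 2) ** (1.0 / 3.0)
print(f"companion mass ~ {M2 / M_SUN:.1f} solar masses")
```
With those assumed inputs the companion comes out near twelve solar masses; the published value rests on the actual Gaia astrometry and the measured velocity curve.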
The techniques employed by Dr. Chakrabarti and her colleagues could lead to the discovery of many more noninteracting systems. According to current estimates, there could be a million visible stars in our galaxy that have massive black hole companions. While this represents a tiny fraction of its stellar population (~100 billion stars), the Gaia Observatory’s precise measurements have narrowed that search. To date, Gaia has obtained data on the positions and proper motions of over 1 billion astronomical objects, including stars and galaxies. Further studies will allow astronomers to learn more about this population of binary systems and the formation pathways of black holes. As Dr. Chakrabarti summarized:
“There are currently several different routes that have been proposed by theorists, but noninteracting black holes around luminous stars are a very new type of population. So, it will likely take us some time to understand their demographics, and how they form, and how these channels are different – or if they’re similar – to the more well-known population of interacting, merging black holes.”
Further Reading: UAH, arXiv
|
Physics
|
The scientist Simon Altmann, who has died aged 98, crossed the boundaries between theoretical physics, theoretical chemistry and mathematics.Born and brought up in Buenos Aires, Argentina, he was the son of Aarón Altmann, a travelling salesman, and his wife, Matilde (nee Branover), a secretary. Simon obtained a doctorate in chemistry, while attending as many mathematics and physics lectures as he could. In 1949, as a British Council scholar, he travelled to King’s College London, studying for a second doctorate supervised by Charles Coulson, my father, who had just been appointed professor of theoretical physics. The Coulson and Altmann families have been friends ever since.In 1948 Simon had married Bocha Liebeschütz in Buenos Aires, and in 1950 she joined him in London, where she continued her studies for a PhD in biochemistry at University College. Among the researchers and fellow students at King’s they made some lifelong friends, including Roy McWeeny, Peter Higgs and, in particular, during a difficult time for her, Rosalind Franklin, who took the famous photos that showed that DNA took the form of a double helix.In 1952 Simon moved to Oxford University. My father had been appointed professor of applied mathematics and Simon came as his research assistant. At the Mathematical Institute they used advanced mathematical techniques, alongside the new resource of computers, to determine the shapes and structures of molecules. Their solutions, or approximations, to the equations that determine these structures could be tested by experimental chemists and crystallographers, and more often than not they were confirmed as correct.As a student in Buenos Aires, Simon had been briefly imprisoned for his anti-Perón activities. After Juan Perón’s regime fell, the situation changed. Simon was offered and took up a post at the University of Buenos Aires in 1957; but he quickly became disillusioned with university politics, and a year later returned to Oxford.In 1959 Simon became university lecturer in the theory of metals, transferring to the department of metallurgy (now the department of materials). The following year he started teaching at Brasenose College, where in 1964 he was elected tutorial fellow in physics. From 1955 till his death in 1973, my father ran summer schools in Oxford that attracted chemists from all over the world and gave them insights into theoretical chemistry. Simon was always there too, equally able to make the complex processes accessible to the very mixed audiences. This introduced him to many international contacts and visiting lectureships, particularly in Rome, fostering his love of Italy.Although substantially self-taught, Simon was an accomplished mathematician who specialised in the applications of group theory in crystallography and in understanding the structures of solids. His work on quaternions, which can be used to describe rotations around several axes simultaneously, has been used in robotics and computer graphics.Possessed of a lifelong interest in philosophy, poetry, classical music, art and architecture, he designed a house in Italy where he spent every summer vacation he could manage. After his retirement in 1991, he published work in the philosophy of science; and he took his interest in symmetry, inherent in group theory, in unexpected directions, exploring symmetries in Renaissance paintings and publishing several papers on the history of art.Bocha died in 2012. He is survived by their sons, Dan, Simon and Paul.
|
Physics
|
Etiamophobia is the fear of an asteroid hitting the Earth, presumably ending all life as we know it. While such protection was improbable before, we now have a means to protect humanity from the whims of an unpredictable universe. On Monday evening, about seven million miles away, NASA’s Double Asteroid Redirection Test (DART) at last made contact with Dimorphos, the football stadium-sized moonlet (a particularly small natural satellite) of the asteroid Didymos. The spacecraft journeyed for a little over 10 months to test if it would be possible to save Earth from future hazardous asteroids or comets by booting them off course. “We only have one home so we ought to take care of it,” said NASA Administrator Bill Nelson during a DART mission overview briefing on Monday afternoon before the mission collision. He went on to note that DART is “the world’s first mission to test the technology for defending Earth against an incoming killer asteroid.” Thanks DART. You’ve served your purpose. This is the last complete image of asteroid moonlet Dimorphos before DART smashed into the surface. (NASA/Johns Hopkins APL) While Earth is bombarded by asteroids and smaller meteors on a fairly regular basis, not many are noticed or pose any danger to life on the planet. While humans (so far) have been luckier than the dinosaurs, there’s no telling if Earth’s good fortune will hold up. In fact, the largest recorded asteroid impact to date happened only 115 years ago, when an asteroid the size of a 25-story building flattened about 800 square miles of forest in an uninhabited area of Siberia, Russia. Lindley Johnson, NASA’s planetary defense officer, told Popular Science that if a similar impact were ever to occur in a metropolitan area, it would most certainly be on the scale of a natural disaster. Johnson says that DART is a “significant milestone” in humanity’s capabilities to protect the planet from such a dark outcome. “This is the first time that humankind acquired the knowledge and the technology to start to rearrange things a little bit in the solar system, if you will, and make it a more hospitable place for life,” Johnson says. In the minutes before impact, DART hurtled toward the moonlet at more than 14,000 miles per hour, before striking 17 meters from the craggy center and utterly destroying itself around 7:14pm EDT. By crashing the multimillion-dollar probe into Didymos’ satellite, scientists expect the hit to have shaved at least a fraction of a millimeter per second off Dimorphos’ orbital speed. As DART is about 11 billion pounds smaller than its target, the craft aimed to alter the asteroid’s course, which takes less energy than trying to completely obliterate it, says Johnson. Ultimately, pushing the asteroid away is a safer and altogether surer protective maneuver. “You can never really be assured that you’re going to completely break up an asteroid or destroy it,” he says. “If you’ve done nothing to change its orbit, then you’ve just got a bunch of pieces that are headed at you.” It also saves what could be a precious amount of time before planet impact and maintains more control over the object. While the data from the collision is still being collected and processed, humanity’s first attempt at moving a celestial object and its first planetary defense test seems to have been a success: Along with the loss of camera visuals, the spacecraft’s impact was confirmed by a loss of signal.
Although it could take anywhere from weeks to a few months before NASA knows just how far the mission was able to push the asteroid out of orbit, the spacecraft’s ability to nail its target has catapulted the concept of planetary defense out of the realm of doomsday-esque movie plots and into a real-life solution. Yet what does this triumphant first step mean for the advancement of other precautionary measures? While the DART spacecraft met its valiant end, NASA scientists say that the real science of the mission has only just begun. Telescopes on Earth have spent years studying and measuring the Didymos-Dimorphos system, and those same telescopes will now be trained on the system to make new measurements on its orbit relative to what it was before. Other missions that survey the vast sky, like the James Webb Space Telescope, will also soon point towards the asteroid system, said Elena Adams, DART mission systems engineer at Johns Hopkins Applied Physics Laboratory, during a post-impact panel on Monday night. NASA and the public could also get images of the system from other active crafts like LICIACube, LUCY, as well as the Hubble Space Telescope. And the US isn’t the only nation investing in our planet’s defenses. In October 2024, the European Space Agency will send another probe, HERA, to examine the aftermath of the DART mission, making a detailed impact survey that will give scientists the information they need to understand the experiment well enough to do it again, with even more success. DART is only the beginning, but it marks the dawn of a universe where humans aren’t just passive residents, but where we can be assured of our place among the inconstant cosmos.
|
Physics
|
[Illustration: the internal workings of heavier (left) and lighter (right) neutron stars, imagined as pralines. Credit: Peter Kiefer & Luciano Rezzolla]
Astrophysicists modeling the insides of neutron stars have found that the extremely compact objects have different internal structures, depending on their mass. They suggest thinking of the stars as different types of chocolate praline, a delicious treat—but that’s where the similarities end, at least as far as we know. Neutron stars are the extraordinarily dense corpses of massive stars that imploded; they’re second only to black holes in terms of their density. Neutron stars are so-named because their gravitational force causes their atoms’ electrons to collapse onto the protons, creating an object that is almost entirely composed of neutrons. Neutron stars’ gravitational fields are super intense. If a human observer went near one, they’d be torn apart at an atomic level. Their gravitational fields are so strong that a “mountain” on a neutron star would stand less than a millimeter tall. The recent research team constructed millions of models to try to discern the internal workings of these stars, which are remarkably difficult to study and, as a result, are more the domain of theory than observation. The researchers found that lighter neutron stars—those with masses about 1.7 times that of our Sun and under—should have soft mantles and stiff cores. Heavier neutron stars have the opposite, according to the team’s findings, which were published today in The Astrophysical Journal Letters. Luciano Rezzolla, an astrophysicist at the Institute for Theoretical Physics at Goethe University Frankfurt who led the research, likened the stars’ structure to chocolate pralines. “Light stars resemble those chocolates that have a hazelnut in their centre surrounded by soft chocolate, whereas heavy stars can be considered more like those chocolates where a hard layer contains a soft filling,” Rezzolla said in a Goethe University Frankfurt release. The researchers modeled over a million possible scenarios for neutron star makeup, based on expectations for the star’s mass, pressure, volume, and temperature, as well as astronomical observations of the objects. Modeling is a crucial means of interrogating neutron stars, because only a few contraptions on Earth—CERN’s Large Hadron Collider and SLAC’s Matter in Extreme Conditions instrument, for two—are capable of mimicking such intense physics. To determine the consistencies of the stars, the researchers modeled how the speed of sound would travel through the objects. Sound waves are also used to understand the internal structure of planets, as the InSight lander has intrepidly done on Mars. “What we have shown, by constructing millions of equation of state models (from which the sound speed can be computed), is that maximally massive neutron stars have a lower sound speed in the core region than in their outer layers,” said Christian Ecker, an astrophysicist at Goethe University, in an email to Gizmodo. “This hints to some material change in their cores, like for example a transition from baryonic to quark matter,” Ecker added. The researchers also found that all neutron stars are probably about 7.46 miles (12 km) across, regardless of their mass.
That measurement is roughly half that of a 2020 finding that the typical neutron star was about 13.6 miles (22 km) across. Despite that size, the average neutron star mass is around half a million Earths. There’s dense, and then there’s dense. While the findings offer some insight about the diversity of neutron stars in terms of their consistency, the researchers did not investigate the stars’ ingredients or how they fit together. (If you’ve gotten this far, neutron stars are not actually made of chocolate.) Some suspect that neutron stars are neutrons all the way down; others believe that the centers of the stars are factories for exotic, hitherto unidentified particles. But for the most part, these superdense enigmas remain just that. Thankfully, there are observatories set up to collect more direct data. Mergers (i.e. violent collisions) between neutron stars and with black holes can reveal the mass of the involved objects, as well as the nature of neutron star material. Projects like NICER, NANOGrav, the CHIME radio telescope, and the LIGO and Virgo scientific collaborations are all teaching physicists about neutron star size and structure. More observational data can be fed into models for better estimates of the stars’ aspects. Ecker added that very massive neutron stars (in the ballpark of two solar masses) would be particularly helpful in better constraining expectations of the physical characteristics of these extreme objects. With any luck, we may soon get more details of the exact ingredients of these giant cosmic pralines—and how their recipes may differ depending on their size.
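For a sense of the quantity those equation-of-state models produce, the sound speed follows directly from an equation of state as the derivative of pressure with respect to energy density. The snippet below does this for a single textbook polytrope with made-up constants; the study's million-plus candidate equations of state are of course far more general.
```python
import numpy as np

# Minimal illustration: the (squared) sound speed is c_s^2 = dP/d(energy density).
# The polytropic equation of state below is a textbook toy, not one of the study's models.

K, gamma = 5.0, 2.0                      # polytropic constant and index (toy units, c = 1)
rho = np.linspace(0.05, 1.0, 200)        # rest-mass density grid (arbitrary units)

P = K * rho**gamma                       # pressure
eps = rho + P / (gamma - 1.0)            # total energy density for a polytrope
cs2 = np.gradient(P, eps)                # numerical derivative dP/d(eps), in units of c^2

print(f"c_s/c runs from {np.sqrt(cs2.min()):.2f} to {np.sqrt(cs2.max()):.2f} across this density range")
# Causality demands c_s <= c; a stiffer core means a larger dP/d(eps) and a higher sound speed,
# which is exactly the quantity the study compares between core and outer layers.
```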
|
Physics
|
On a cold winter day, the warmth of the sun is welcome. Yet as humanity emits more and more greenhouse gases, the Earth’s atmosphere traps more and more of the sun’s energy and steadily increases the Earth’s temperature. One strategy for reversing this trend is to intercept a fraction of sunlight before it reaches our planet. For decades, scientists have considered using screens, objects or dust particles to block just enough of the sun’s radiation—between 1 and 2%—to mitigate the effects of global warming.
A University of Utah-led study explored the potential of using dust to shield Earth from sunlight. The team analyzed different properties of dust particles, quantities of dust and the orbits that would be best suited for shading Earth. The authors found that launching dust from Earth to a way station at the “Lagrange Point” between Earth and the sun (L1) would be most effective but would require astronomical cost and effort. An alternative is moon dust. The authors argue that launching lunar dust from the moon instead could be a cheap and effective way to shade the Earth.
The team of astronomers applied a technique used to study planet formation around distant stars, their usual research focus. Planet formation is a messy process that kicks up lots of astronomical dust that can form rings around the host star. These rings intercept light from the central star and re-radiate it in a way that we can detect on Earth. One way to discover stars that are forming new planets is to look for these dusty rings.
“That was the seed of the idea; if we took a small amount of material and put it on a special orbit between the Earth and the sun and broke it up, we could block out a lot of sunlight with a little amount of mass,” said Ben Bromley, professor of physics and astronomy and lead author of the study.
“It is amazing to contemplate how moon dust—which took over four billion years to generate—might help slow the rise in Earth’s temperature, a problem that took us less than 300 years to produce,” said Scott Kenyon, co-author of the study from the Center for Astrophysics | Harvard & Smithsonian.
Casting a shadow
A shield’s overall effectiveness depends on its ability to sustain an orbit that casts a shadow on Earth. Sameer Khan, undergraduate student and the study’s co-author, led the initial exploration into which orbits could hold dust in position long enough to provide adequate shading. Khan’s work demonstrated the difficulty of keeping dust where you need it to be.
“Because we know the positions and masses of the major celestial bodies in our solar system, we can simply use the laws of gravity to track the position of a simulated sunshield over time for several different orbits,” said Khan.
Two scenarios were promising. In the first scenario, the authors positioned a space platform at the L1 Lagrange point, the point between Earth and the sun where the two bodies’ gravitational pulls balance. Objects at Lagrange points tend to hold their position relative to the two celestial bodies, which is why the James Webb Space Telescope (JWST) is located at L2, a Lagrange point on the opposite side of the Earth.
In computer simulations, the researchers shot test particles along the L1 orbit, including the position of Earth, the sun, the moon, and other solar system planets, and tracked where the particles scattered. The authors found that when launched precisely, the dust would follow a path between Earth and the sun, effectively creating shade, at least for a while. Unlike the 13,000-pound JWST, the dust was easily blown off course by the solar winds, radiation, and gravity within the solar system. Any L1 platform would need to create an endless supply of new dust batches to blast into orbit every few days after the initial spray dissipates.
“It was rather difficult to get the shield to stay at L1 long enough to cast a meaningful shadow. This shouldn’t come as a surprise, though, since L1 is an unstable equilibrium point. Even the slightest deviation in the sunshield’s orbit can cause it to rapidly drift out of place, so our simulations had to be extremely precise,” Khan said.
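As a rough illustration of the kind of test-particle tracking Khan describes (a sketch, not the study's code), the snippet below places a particle at the Sun-Earth L1 point with a co-rotating velocity, integrates it under the gravity of the Sun and Earth alone, and reports how far it has wandered after a few months. The integrator, the step size, and the neglect of solar radiation pressure and the moon are all simplifying assumptions.

```python
# Minimal sketch (not the study's code): a test particle started near the Sun-Earth
# L1 point, feeling only the gravity of the Sun (fixed at the origin) and Earth
# (on a circular orbit). Constants are standard; the integrator is illustrative.
import numpy as np

G     = 6.674e-11                              # m^3 kg^-1 s^-2
M_sun = 1.989e30                               # kg
M_e   = 5.972e24                               # kg
AU    = 1.496e11                               # m
omega = 2.0 * np.pi / (365.25 * 86400.0)       # Earth's mean orbital angular rate

# Approximate L1 distance: ~1.5 million km sunward of Earth
r_L1 = AU * (1.0 - (M_e / (3.0 * M_sun)) ** (1.0 / 3.0))

def accel(pos, t):
    """Gravitational acceleration from the Sun and the orbiting Earth."""
    earth = AU * np.array([np.cos(omega * t), np.sin(omega * t)])
    a_sun = -G * M_sun * pos / np.linalg.norm(pos) ** 3
    a_earth = -G * M_e * (pos - earth) / np.linalg.norm(pos - earth) ** 3
    return a_sun + a_earth

pos = np.array([r_L1, 0.0])                    # start at L1 ...
vel = np.array([0.0, omega * r_L1])            # ... co-rotating with Earth
dt = 600.0                                     # 10-minute steps

for step in range(int(200 * 86400 / dt)):      # follow it for roughly 200 days
    t = step * dt
    vel += accel(pos, t) * dt                  # simple Euler-Cromer integrator
    pos += vel * dt

t_end = 200 * 86400
l1_now = r_L1 * np.array([np.cos(omega * t_end), np.sin(omega * t_end)])
print(f"Drift from the co-rotating L1 point after ~200 days: "
      f"{np.linalg.norm(pos - l1_now) / 1e3:,.0f} km")
```

Because L1 is an unstable equilibrium, even this idealized particle drifts away over a few months, which is the behaviour the researchers had to fight in their much more detailed simulations.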
In the second scenario, the authors shot lunar dust from the surface of the moon towards the sun. They found that the inherent properties of lunar dust were just right to effectively work as a sun shield. The simulations tested how lunar dust scattered along various courses until they found excellent trajectories aimed toward L1 that served as an effective sun shield. These results are welcome news, because much less energy is needed to launch dust from the moon than from Earth. This is important because the amount of dust in a solar shield is large, comparable to the output of a big mining operation here on Earth. Furthermore, the discovery of the new sun-shielding trajectories means delivering the lunar dust to a separate platform at L1 may not be necessary.
Just a moonshot?
The authors stress that this study only explores the potential impact of this strategy, rather than evaluating whether these scenarios are logistically feasible.
“We aren’t experts in climate change, or the rocket science needed to move mass from one place to the other. We’re just exploring different kinds of dust on a variety of orbits to see how effective this approach might be. We do not want to miss a game changer for such a critical problem,” said Bromley.
One of the biggest logistical challenges—replenishing dust streams every few days—also has an advantage. Eventually, the sun’s radiation disperses the dust particles throughout the solar system; the sun shield is temporary and shield particles do not fall onto Earth. The authors assure that their approach would not create a permanently cold, uninhabitable planet, as in the science fiction story, “Snowpiercer.”
“Our strategy could be an option in addressing climate change,” said Bromley, “if what we need is more time.”
|
Physics
|
Scientists today wield an arsenal of cutting-edge technology. Chemical engineers can turn CO2 into vodka, planetary scientists can work in outer space and physicists can manipulate single atoms in the lab. Researchers use these tools to figure out who we are and how we got here. Like Taylor Perron, a geomorphologist who searches for clues in planetary landscapes, or Monika Schleier-Smith, a physicist testing the mechanical rules of the quantum realm. They also confront humanitarian crises. By 2050, climate change will cause 250,000 deaths per year. Corinne Le Quéré traces greenhouse gases through air, land and sea, while Paul Anastas innovates ways to reduce their emissions in the first place. Meanwhile, cancer rates will increase by nearly 50 percent in the same timeframe. Carolyn Bertozzi invented a new field of chemistry that could spawn efficient cancer treatments. These scientists embody the chief objectives of science — to push the frontiers of what we know and to advance human welfare along the way. 1. Paul Anastas: The Father of Green Chemistry(Credit: Jo karen/Shutterstock)When you think of chemists, you might picture pollutants and health hazards. But Paul Anastas flips that perception on its head. Anastas founded green chemistry in the 1990s to reduce toxic wastes from chemical processes. In addition to his research on sustainable chemistry at Yale, he has advised the Environmental Protection Agency and the White House on environmental issues surrounding 9/11, the nuclear disaster at Fukushima and the BP oil spill. He even enters the board room and helps make Fortune 100 companies more sustainable. His work is transforming our future. Read more: Scientist You Should Know: Paul Anastas is the Father of Green Chemistry2. Carolyn Bertozzi: The Sugar Scientist Revolutionizing Healthcare(Credit: Anusorn Nakdee/Shutterstock)Cells are coated in sugars, like the colored layers of M&Ms. In a healthy cell, sugars organize like a neatly manicured lawn, but in tumors, they go haywire. Carolyn Bertozzi, a professor at Stanford, studies this process to develop cancer treatments. Along the way, she invented a field of chemistry that transformed medical imaging and drug delivery. Bioorthogonal chemistry involves a suite of reactions that can perform in the body without altering it. It’s led to basic discoveries and medical advances, revolutionizing biochemistry as we know it. 3. Corinne Le Quéré: The Climate Pragmatist(Credit: elenabsl/Shutterstock)Corinne Le Quéré isn’t a climate optimist. Nor is she a climate pessimist. She simply refuses to get caught up in the emotion of climate change. The climate scientist, based at the University of East Anglia, changed conventional wisdom about carbon storage in the oceans. Le Quéré brings her groundbreaking work directly to policymakers. As the chair of France’s High Council on climate, advisor for the U.K. Committee on Climate Change and former member of the Intergovernmental Panel on Climate Change, she helps politicians implement science-driven policy. Read more: Scientist You Should Know: Corinne Le Quéré Tries to Stunt Climate Change4. Taylor Perron: The Planetary Detective(Credit: Karel Resl/Shutterstock)The magnitude of space and time captivates Taylor Perron. As a geomorphologist and professor at the Massachusetts Institute of Technology, he asks how landscapes shape human and planetary origins. Contemplating these questions is scary, he says, but they lead to existential questions about who we are and how we got here. 
Perron unearths the clues buried in planetary landscapes to piece together the past. His geological detective work spans from Hawaii to Mars, painting a clearer picture of the past and drafting a blueprint of the future. 5. Monika Schleier-Smith: The Atom Wrangler(Credit: luchschenF/Shutterstock)Monika Schleier-Smith’s lab at Stanford is filled with lasers and mirrors. She uses them to meticulously tune customizable atomic networks. Controlling this phenomenon — called quantum entanglement — augments the computational problems that quantum physics can solve. The applications are enormous. It could build the world’s most precise clocks or broaden the possibilities of quantum computing. But she also asks fundamental questions about the universe, like what happens when information falls into a black hole, and how gravity works at subatomic size scales? Schleier-Smith’s experimental prowess allows her to ask questions that theorists used to only dream of.
|
Physics
|
Nothing can go faster than light. It's a rule of physics woven into the very fabric of Einstein's special theory of relativity. The faster something goes, the closer it gets to its perspective of time freezing to a standstill. Go faster still, and you run into issues of time reversing, messing with notions of causality. But researchers from the University of Warsaw in Poland and the National University of Singapore have now pushed the limits of relativity to come up with a system that doesn't run afoul of existing physics, and might even point the way to new theories. What they've come up with is an "extension of special relativity" that combines three time dimensions with a single space dimension ("1+3 space-time"), as opposed to the three spatial dimensions and one time dimension that we're all used to. Rather than creating any major logical inconsistencies, this new study adds more evidence to back up the idea that objects might well be able to go faster than light without completely breaking our current laws of physics. "There is no fundamental reason why observers moving in relation to the described physical systems with speeds greater than the speed of light should not be subject to it," says physicist Andrzej Dragan, from the University of Warsaw in Poland. This new study builds on previous work by some of the same researchers which posits that superluminal perspectives could help tie together quantum mechanics with Einstein's special theory of relativity – two branches of physics that currently can't be reconciled into a single overarching theory that describes gravity in the same way we explain other forces. Particles can no longer be modelled as point-like objects under this framework, as we might in the more mundane 3D (plus time) perspective of the Universe. Instead, to make sense of what observers might see and how a superluminal particle might behave, we'd need to turn to the kinds of field theories that underpin quantum physics. Based on this new model, superluminal objects would look like a particle expanding like a bubble through space – not unlike a wave through a field. The high-speed object, on the other hand, would 'experience' several different timelines. Even so, the speed of light in a vacuum would remain constant even for those observers going faster than it, which preserves one of Einstein's fundamental principles – a principle that has previously only been thought about in relation to observers going slower than the speed of light (like all of us). "This new definition preserves Einstein's postulate of constancy of the speed of light in vacuum even for superluminal observers," says Dragan. "Therefore, our extended special relativity does not seem like a particularly extravagant idea." However, the researchers acknowledge that switching to a 1+3 space-time model does raise some new questions, even while it answers others. They suggest that extending the theory of special relativity to incorporate faster-than-light frames of reference is needed. That may well involve borrowing from quantum field theory: a combination of concepts from special relativity, quantum mechanics, and classical field theory (which aims to predict how physical fields are going to interact with each other). If the physicists are right, the particles of the Universe would all have extraordinary properties in extended special relativity.
One of the questions raised by the research is whether or not we would ever be able to observe this extended behavior – but answering that is going to require a lot more time and a lot more scientists. "The mere experimental discovery of a new fundamental particle is a feat worthy of the Nobel Prize and feasible in a large research team using the latest experimental techniques," says physicist Krzysztof Turzyński, from the University of Warsaw. "However, we hope to apply our results to a better understanding of the phenomenon of spontaneous symmetry breaking associated with the mass of the Higgs particle and other particles in the Standard Model, especially in the early Universe." The research has been published in Classical and Quantum Gravity.
|
Physics
|
Scientists have simulated what would happen if a nuclear bomb was dropped on a major city. Whether you're close enough to be vaporised in an instant, or within range of possible radiation poisoning, there's no such thing as a good place to be if one goes off where you live.
But a new peer-reviewed study, published in Physics of Fluids by the American Institute of Physics, focused on the specific impact on people who manage to shelter indoors. Using advanced computer modelling, researchers looked into how a nuclear blast from an intercontinental ballistic missile would sweep through buildings. According to their results, while some buildings would be destroyed, even being in a sturdy structure that might ultimately survive the bomb itself is not enough to avoid the risk of serious injury.
The already significant blast waves can be exacerbated by tight spaces, as the air they generate reflects off walls, bends round corners and bounces through the building at speeds strong enough to lift people into the air. In the worst cases, it can produce a force equivalent to 18 times a human's body weight.
Blast's airspeeds 'a considerable hazard'
Research author Dimitris Drikakis, of the University of Nicosia, said: "Before our study, the danger to people inside a concrete-reinforced building that withstands the blast wave was unclear." He added that the "high airspeeds" caused by a nuclear blast are a "considerable hazard", in addition to more established threats like the explosion itself and subsequent radiation. There would only be a few seconds between the explosion and the arrival of the blast wave.
'Most dangerous' places to shelter indoors
The researchers highlighted three places of notable danger when sheltering indoors: windows, corridors and doors. Study co-author Ioannis Kokkinakis said these were the "most dangerous, critical indoor locations to avoid". "People should stay away from these locations and immediately take shelter," he warned. "Even in the front room facing the explosion, one can be safe from the high airspeeds if positioned at the corners of the wall facing the blast."
The study's authors hope that understanding the impact of a nuclear explosion can help prevent injuries and guide rescue efforts, though they of course hope their advice will never be needed. It comes after New York's authorities released a video telling locals how to survive a nuclear attack, stressing the importance of staying indoors and washing off any radioactive dust or ash. The clip was released to bemusement and some alarm last summer, but officials stressed that it was not tied to any specific threats and was only meant to raise awareness.
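For a sense of where a figure like "18 times a human's body weight" can come from, the back-of-the-envelope sketch below converts an assumed indoor airspeed into a force using the dynamic pressure q = ½ρv². The airspeed, frontal area, and drag coefficient are illustrative assumptions, not values taken from the paper; they simply happen to land near the quoted ratio.

```python
# Back-of-the-envelope sketch (illustrative values only, not taken from the paper):
# force on a standing person from a fast airflow, via dynamic pressure q = 0.5*rho*v^2.
rho  = 1.2     # kg/m^3, air density near ground level
v    = 170.0   # m/s, assumed indoor airspeed behind the blast wave (hypothetical)
area = 0.7     # m^2, assumed frontal area of an adult (hypothetical)
cd   = 1.0     # drag coefficient of order unity (assumption)
mass = 70.0    # kg, reference body mass

force = 0.5 * rho * v**2 * cd * area     # roughly 12 kN with these inputs
print(f"Force ~ {force / 1000:.1f} kN")
print(f"Ratio to body weight ~ {force / (mass * 9.81):.1f}x")
```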
|
Physics
|
Every field of science has its favorite anniversary.
For physics, it’s Newton’s Principia of 1687, the book that introduced the laws of motion and gravity. Biology celebrates Darwin’s On the Origin of Species (1859) along with his birthday (1809). Astronomy fans commemorate 1543, when Copernicus placed the sun at the center of the solar system.
And for chemistry, no cause for celebration surpasses the origin of the periodic table of the elements, created 150 years ago this March by the Russian chemist Dmitrii Ivanovich Mendeleev.
Mendeleev’s table has become as familiar to chemistry students as spreadsheets are to accountants. It summarizes an entire science in 100 or so squares containing symbols and numbers. It enumerates the elements that compose all earthly substances, arranged so as to reveal patterns in their properties, guiding the pursuit of chemical research both in theory and in practice.
“The periodic table,” wrote the chemist Peter Atkins, “is arguably the most important concept in chemistry.”
Mendeleev’s table looked like an ad hoc chart, but he intended the table to express a deep scientific truth he had uncovered: the periodic law. His law revealed profound familial relationships among the known chemical elements — they exhibited similar properties at regular intervals (or periods) when arranged in order of their atomic weights — and enabled Mendeleev to predict the existence of elements that had not yet been discovered.
“Before the promulgation of this law the chemical elements were mere fragmentary, incidental facts in Nature,” Mendeleev declared. “The law of periodicity first enabled us to perceive undiscovered elements at a distance which formerly was inaccessible to chemical vision.”
Mendeleev’s table did more than foretell the existence of new elements. It validated the then-controversial belief in the reality of atoms. It hinted at the existence of subatomic structure and anticipated the mathematical apparatus underlying the rules governing matter that eventually revealed itself in quantum theory. His table finished the transformation of chemical science from the medieval magical mysticism of alchemy to the realm of modern scientific rigor. The periodic table symbolizes not merely the constituents of matter, but the logical cogency and principled rationality of all science.
Laying the groundwork
Legend has it that Mendeleev conceived and created his table in a single day: February 17, 1869, on the Russian calendar (March 1 in most of the rest of the world). But that’s probably an exaggeration. Mendeleev had been thinking about grouping the elements for years, and other chemists had considered the notion of relationships among the elements several times in the preceding decades.
In fact, German chemist Johann Wolfgang Döbereiner noticed peculiarities in groupings of elements as early as 1817. In those days, chemists hadn’t yet fully grasped the nature of atoms, as described in the atomic theory proposed by English schoolteacher John Dalton in 1808. In his New System of Chemical Philosophy, Dalton explained chemical reactions by assuming that each elementary substance was made of a particular type of atom.
Chemical reactions, Dalton proposed, produced new substances when atoms were disconnected or joined. Any given element consisted entirely of one kind of atom, he reasoned, distinguished from other kinds by weight. Oxygen atoms weighed eight times as much as hydrogen atoms; carbon atoms were six times as heavy as hydrogen, Dalton believed. When elements combined to make new substances, the amounts that reacted could be calculated with knowledge of those atomic weights.
Dalton was wrong about some of the weights — oxygen is really 16 times the weight of hydrogen, and carbon is 12 times heavier than hydrogen. But his theory made the idea of atoms useful, inspiring a revolution in chemistry. Measuring atomic weights accurately became a prime preoccupation for chemists in the decades that followed.
When contemplating those weights, Döbereiner noted that certain sets of three elements (he called them triads) showed a peculiar relationship. Bromine, for example, had an atomic weight midway between the weights of chlorine and iodine, and all three elements exhibited similar chemical behavior. Lithium, sodium and potassium were also a triad.
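Döbereiner's pattern is easy to check with modern atomic weights (which differ slightly from the values available in 1817): the middle member of each triad sits close to the mean of the outer two. The small check below uses modern standard atomic weights, rounded.

```python
# Quick check of Döbereiner's triad pattern using modern atomic weights
# (slightly different from the 1817 values, but the pattern still holds).
triads = {
    "Cl-Br-I": (35.45, 79.90, 126.90),
    "Li-Na-K": (6.94, 22.99, 39.10),
}
for name, (light, middle, heavy) in triads.items():
    predicted_middle = (light + heavy) / 2
    print(f"{name}: mean of outer pair = {predicted_middle:.1f}, actual middle = {middle:.2f}")
```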
Other chemists perceived links between atomic weights and chemical properties, but it was not until the 1860s that atomic weights had been well enough understood and measured for deeper insights to emerge. In England, the chemist John Newlands noticed that arranging the known elements in order of increasing atomic weight produced a recurrence of chemical properties every eighth element, a pattern he called the “law of octaves” in an 1865 paper. But Newlands’ pattern did not hold up very well after the first couple of octaves, leading a critic to suggest that he should try arranging the elements in alphabetical order instead. Clearly, the relationship of element properties and atomic weights was a bit more complicated, as Mendeleev soon realized.
Organizing the elements
Born in Tobolsk, in Siberia, in 1834 (his parents’ 17th child), Mendeleev lived a dispersed life, pursuing multiple interests and traveling a higgledy-piggledy path to prominence. During his higher education at a teaching institute in St. Petersburg, he nearly died from a serious illness. After graduation, he taught at middle schools (a requirement of his scholarship at the teaching institute), and while teaching math and science, he conducted research for his master’s degree.
He then worked as a tutor and lecturer (along with some popular science writing on the side) until earning a fellowship for an extended tour of research at Europe’s most prominent university chemistry laboratories.
When he returned to St. Petersburg, he had no job, so he wrote a masterful handbook on organic chemistry in hopes of winning a large cash prize. It was a long shot that paid off, with the lucrative Demidov Prize in 1862. He also found work as an editor, translator and consultant to various chemical industries. Eventually he returned to research, earning his Ph.D. in 1865 and then becoming a professor at the University of St. Petersburg.
Soon thereafter, Mendeleev found himself about to teach inorganic chemistry. In preparing to master that new (to him) field, he was unimpressed by the available textbooks. So he decided to write his own. Organizing the text required organizing the elements, so the question of how best to arrange them was on his mind.
By early 1869, Mendeleev had made enough progress to realize that some groups of similar elements showed a regular increase in atomic weights; other elements with roughly equal atomic weights shared common properties. It appeared that ordering the elements by their atomic weight was the key to categorizing them.
By Mendeleev’s own account, he structured his thinking by writing each of the 63 known elements’ properties on an individual note card. Then, by way of a sort of game of chemical solitaire, he found the pattern he was seeking. Arranging the cards in vertical columns from lower to higher atomic weights placed elements with similar properties in each horizontal row. Mendeleev’s periodic table was born. He sketched out his table on March 1, sent it to the printer and incorporated it into his soon-to-be-published textbook. He quickly prepared a paper to be presented to the Russian Chemical Society.
“Elements arranged according to the size of their atomic weights show clear periodic properties,” Mendeleev declared in his paper. “All the comparisons which I have made … lead me to conclude that the size of the atomic weight determines the nature of the elements.”
Meanwhile, the German chemist Lothar Meyer had also been working on organizing the elements. He prepared a table similar to Mendeleev’s, perhaps even before Mendeleev did. But Mendeleev published first.
More important than beating Meyer to the publication punch, though, was Mendeleev’s use of his table to make bold predictions about undiscovered elements. In preparing his table, Mendeleev had noticed that some note cards were missing. He had to leave blank spaces to get the known elements to properly align. Within his lifetime, three of those blanks were filled with the previously unknown elements gallium, scandium and germanium.
Not only had Mendeleev predicted the existence of these elements, but he had also correctly described their properties in detail. Gallium, for instance, discovered in 1875, had an atomic weight (as measured then) of 69.9 and a density six times that of water. Mendeleev had predicted an element (he called it eka-aluminum) with just that density and an atomic weight of 68. His predictions for eka-silicon closely matched germanium (discovered in 1886) in atomic weight (72 predicted, 72.3 observed) and density (5.5 versus 5.469). He also correctly predicted the density of germanium’s compounds with oxygen and chlorine.
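Laid out side by side, the predictions quoted above were off by only a few percent. The short tabulation below uses just the figures given in the text.

```python
# The predicted-vs-observed figures quoted above, with percentage differences.
rows = [
    # (quantity,                      predicted, observed)
    ("eka-aluminum / gallium weight",     68.0,   69.9),
    ("eka-silicon / germanium weight",    72.0,   72.3),
    ("eka-silicon / germanium density",    5.5,   5.469),
]
for name, pred, obs in rows:
    diff = 100.0 * abs(obs - pred) / obs
    print(f"{name:33s} predicted {pred:6.2f}  observed {obs:7.3f}  off by {diff:.1f}%")
```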
Mendeleev’s table had become an oracle. It was as if end-of-game Scrabble tiles spelled out the secrets of the universe. While others had glimpsed the periodic law’s power, Mendeleev was the master at exploiting it.
Mendeleev’s successful predictions earned him legendary status as a maestro of chemical wizardry. But today, historians dispute whether the discovery of the predicted elements cemented the acceptance of his periodic law. The law’s approval may have been more due to its power to explain established chemical relationships. In any case, Mendeleev’s prognosticative accuracy certainly attracted attention to the merits of his table.
By the 1890s, chemists widely recognized his law as a landmark in chemical knowledge. In 1900, the future Nobel chemistry laureate William Ramsay called it “the greatest generalization which has as yet been made in chemistry.” And Mendeleev had done it without understanding in any deep way why it worked at all.
A mathematical map
In many instances in the history of science, grand predictions based on novel equations have turned out to be correct. Somehow math reveals some of nature’s secrets before experimenters find them. Antimatter is one example, the expansion of the universe another. In Mendeleev’s case, the predictions of new elements emerged without any creative mathematics. But in fact, Mendeleev had discovered a deep mathematical map of nature, for his table reflected the implications of quantum mechanics, the mathematical rules governing atomic architecture.
In his textbook, Mendeleev had noted that “internal differences of the matter that comprises the atoms” could be responsible for the elements’ periodically recurring properties. But he did not pursue that line of thought. In fact, over the years he waffled about how important atomic theory was for his table.
But others could read the table’s message. In 1888, German chemist Johannes Wislicenus declared that the periodicity of the elements’ properties when arranged by weight indicated that atoms are composed of regular arrangements of smaller particles. So in a sense, Mendeleev’s table did anticipate (and provide evidence for) the complex internal structure of atoms, at a time when nobody had any idea what an atom really looked like, or even whether it had any internal structure at all.
By the time of Mendeleev’s death in 1907, scientists knew that atoms had parts: electrons, which carried a negative electric charge, plus some positively charged component to make atoms electrically neutral. A key clue to how those parts were arranged came in 1911, when the physicist Ernest Rutherford, working at the University of Manchester in England, discovered the atomic nucleus. Shortly thereafter Henry Moseley, a physicist who had worked with Rutherford, demonstrated that the amount of positive charge in the nucleus (the number of protons it contained, or its “atomic number”) determined the correct order of the elements in the periodic table.
Atomic weight was closely related to Moseley’s atomic number — close enough that ordering elements by weight differs in only a few spots from ordering by number. Mendeleev had insisted that those weights were wrong and needed to be remeasured, and in some cases he was right. A few discrepancies remained, but Moseley’s atomic number set the table straight.
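The handful of "pair reversals" are easy to see with modern values: sorting a few neighboring elements by atomic weight flips them relative to sorting by atomic number. The weights below are modern standard values, rounded.

```python
# The few places where ordering by atomic weight disagrees with ordering by
# atomic number (modern standard atomic weights, rounded).
elements = [
    ("Ar", 18, 39.95), ("K", 19, 39.10),
    ("Co", 27, 58.93), ("Ni", 28, 58.69),
    ("Te", 52, 127.60), ("I", 53, 126.90),
]
by_number = sorted(elements, key=lambda e: e[1])
by_weight = sorted(elements, key=lambda e: e[2])
for (sym_n, *_), (sym_w, *_) in zip(by_number, by_weight):
    flag = "  <-- swapped" if sym_n != sym_w else ""
    print(f"by number: {sym_n:2s}   by weight: {sym_w:2s}{flag}")
```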
At about the same time, the Danish physicist Niels Bohr realized that quantum theory governed the arrangement of electrons surrounding the nucleus and that the outermost electrons determined an element’s chemical properties.
Similar arrangements of the outer electrons would recur periodically, explaining the patterns that Mendeleev’s table had originally revealed. Bohr created his own version of the table in 1922, based on experimental measurements of electron energies (along with some guidance from the periodic law).
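A minimal sketch of that idea, assuming the usual Madelung (aufbau) filling order and ignoring the known exceptions among heavier elements such as chromium and copper, shows the outer-shell electron count cycling through the same values as atomic number increases, which is the recurrence behind the periodic law.

```python
# Minimal aufbau sketch: fill subshells in Madelung order and report how many
# electrons sit in the outermost shell. The recurring counts (1, 2, ..., 8) drive
# chemical periodicity. Known exceptions (Cr, Cu, ...) are ignored here.
SUBSHELLS = [  # (n, l, capacity) in Madelung filling order, enough for Z <= 20
    (1, 0, 2), (2, 0, 2), (2, 1, 6), (3, 0, 2), (3, 1, 6),
    (4, 0, 2), (3, 2, 10), (4, 1, 6),
]

def outer_electrons(z):
    shells = {}
    remaining = z
    for n, l, cap in SUBSHELLS:
        if remaining <= 0:
            break
        take = min(cap, remaining)
        shells[n] = shells.get(n, 0) + take
        remaining -= take
    return shells[max(shells)]     # electrons in the highest occupied shell

for z in range(1, 21):
    print(f"Z = {z:2d}  outer-shell electrons = {outer_electrons(z)}")
```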
Bohr’s table added elements discovered since 1869, but it was still, in essence, the periodic arrangement that Mendeleev had discovered. Without the slightest clue to quantum theory, Mendeleev had created a table reflecting the atomic architecture that quantum physics dictated.
Bohr’s new table was neither the first nor last variant on Mendeleev’s original design. Hundreds of versions of the periodic table have been devised and published. The modern form, a horizontal design in contrast with Mendeleev’s original vertical version, became widely popular only after World War II, largely due to the work of the American chemist Glenn Seaborg (a longtime member of the board of Science Service, the original publisher of Science News).
Seaborg and collaborators had synthetically produced several new elements with atomic numbers beyond uranium, the last naturally occurring element in the table. Seaborg saw that these elements, the transuranics (plus the three elements preceding uranium) demanded a new row in the table, something Mendeleev had not foreseen. Seaborg’s table added the row for those elements beneath a similar row for the rare earth elements, whose proper place had never been quite clear, either. “It took a lot of guts to buck Mendeleev,” Seaborg, who died in 1999, said in a 1997 interview.
Seaborg’s contributions to chemistry earned him the honor of his own namesake element, seaborgium, number 106. It’s one of a handful of elements named to honor a famous scientist, a list that includes, of course, element 101, discovered by Seaborg and colleagues in 1955 and named mendelevium — for the chemist who above all others deserved a place at the periodic table.
|
Physics
|
1D bands take on electronic or magnetic properties. Courtesy: Yakobson Research Group/Rice University Researchers at Rice University in the US have proposed a new way of controlling the magnetic and electronic properties of single-layer two-dimensional materials that involves growing or stamping them on a carefully designed undulating surface. The approach could be a simpler alternative to the complex “twisting” technique, which involves rotating two stacked layers with respect to each other.
In recent years, physicists have been experimenting with techniques that use the weak coupling between layers of 2D materials to change the material’s behaviour. One dramatic example is twistronics, in which experimenters modify a 2D material’s electronic properties by varying the angle between the layers. For instance, graphene (a 2D sheet of carbon atoms) does not normally have an electronic band gap, but it develops one when placed in contact with another 2D material, hexagonal boron nitride (hBN).
This unusual effect comes about because graphene and hBN have a similar lattice constant, such that stacking them together forms a pattern known as a Moiré superlattice. If the layers are then twisted out of alignment, the band gap disappears. Hence, graphene can be tuned from a metallic state to a semiconducting one simply by varying the angle of the layers. Indeed, in 2018, researchers at the Massachusetts Institute of Technology (MIT) discovered that placing two layers of graphene together with a relative rotation of 1.1°– the so-called “magic angle” – transforms the normally metallic material into a superconductor.
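For two identical lattices twisted by a small angle θ, the moiré superlattice period is approximately L ≈ a / (2 sin(θ/2)). Plugging in graphene's lattice constant shows how quickly the pattern grows as the twist approaches the magic angle; the formula and constant are standard, and the snippet is purely illustrative.

```python
# Moiré superlattice period for two identical lattices twisted by a small angle:
# L ~ a / (2 * sin(theta / 2)). Illustrative, using graphene's lattice constant.
import math

a = 0.246   # nm, graphene lattice constant
for theta_deg in (5.0, 2.0, 1.1, 0.5):
    theta = math.radians(theta_deg)
    L = a / (2.0 * math.sin(theta / 2.0))
    print(f"twist {theta_deg:4.1f} deg  ->  moire period ~ {L:5.1f} nm")
```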
Naturally straining the material’s lattice
In the new work, a team led by Boris Yakobson showed that simply stamping or growing a 2D material such as hBN onto a bumpy surface naturally strains the material’s lattice, creating pseudo-electric and pseudo-magnetic fields that can then be used to control its magnetic and electric properties without a need for twisting. The researchers found that the strain creates “flat” band states that cause the normally insulating hBN to become a semiconductor. These states are 1D in nature, which is radically different from those obtained in twisted materials and could be exploited to study the exciting physics of 1D quantum systems, they say.
The advantage of the technique, which the researchers describe in Nature Communications, is that the deformation can be precisely controlled by using standard processes such as electron beam lithography to create patterns on the surface. “Indeed, it would be much easier to create bumpy surfaces using this process than it currently is to twist 2D bilayers of graphene or other heterostructures like hBN to less than a single degree of accuracy,” says Sunny Gupta, a postdoctoral researcher at Rice and a co-author of the study.
The researchers developed a computational model of deformation that they compare to wrapping a sheet of paper around a ball. “It is impossible to do this without crumpling the paper,” Yakobson explains, “because of their different topography (curvature patterns). To make it adhere to the ball, the sheet of paper should, in principle, be significantly deformed (if tearing is not allowed). Similarly, a flat 2D material when grown or stamped on a substrate with different topography will be strained and its electronic properties modulated.”
Creating different strain patterns
Using this model, the team found that substrates with different topographies could be used to create different strain patterns, giving rise to new quantum states and functionalities that are inherently absent in a flat 2D system, Yakobson adds. “By combining topography and deformation (which is like adding a new ‘dimension’ in a 2D material) as we show in our work, we can create new quantum phases, such as flat electronic bands and strongly correlated 1D electronic states,” he tells Physics World. Such states, which are coveted by physicists, typically show unique properties such as magnetism and superconductivity. Creating them artificially is a highly active research field, but it is hard to do so via twisting because twisted systems require two layers of material and careful control of the twist angle between them. In contrast, the new technique can be employed even in single-layer materials. “Our proposed way of combining topographical modulations and 2D materials will overcome several limitations of current Moiré systems and allow us to explore physics in 1D systems, which is largely inaccessible by twisting 2D materials,” Yakobson concludes.
Using topography to alter the property of 2D materials is a new research direction, and in Yakobson’s view its broad scope means it could eventually garner similar attention as 2D twisted bilayer systems. “In the next stages of our work, we would like to explore how topographical undulations affect properties of a variety of 2D materials with inherent functional properties such as magnetism and electronic topological behaviour,” he reveals. “Importantly, we would also like to experimentally realize the material physics system we have offered the ‘recipe’ for in this study.”
|
Physics
|
News & Views Published: 22 December 2022 Integrated optics Nature Photonics (2022)Cite this article The resonance wavelengths of optical Möbius strip microcavities can be continuously tuned via geometric phase manipulation by changing the thickness-to-width ratio of the strip. Microring resonators have gained a prominent role in integrated optics1, owing to their practical applications as on-chip field enhancers, spectral filters, fast modulators for optical communications, sensors, lasers, and more as well. This is a preview of subscription content, access via your institution Access options Subscribe to Nature+Get immediate online access to Nature and 55 other Nature journalSubscribe to JournalGet full journal access for 1 year$99.00only $8.25 per issueAll prices are NET prices. VAT will be added later in the checkout.Tax calculation will be finalised during checkout.Buy articleGet time limited or full article access on ReadCube.$32.00All prices are NET prices. Additional access options: Log in Learn about institutional subscriptions Fig. 1: Parallel transport of ‘in-plane’ polarization modes in curved and Möbius strip cavities. ReferencesVahala, K. J. Nature 424, 839–846 (2003).Article ADS Google Scholar Zhang, D., Men, L. & Chen, Q. Opt. Commun. 465, 125571 (2020).Article Google Scholar Wang, J. et al. Nat. Photon. https://doi.org/10.1038/s41566-022-01107-7 (2022).Article Google Scholar Kreismann, J. & Hentschel, M. Europhys. Lett. 121, 24001 (2018).Article ADS Google Scholar Pancharatnam, S. Proc. Indian Acad. Sci. Sect. A 44, 398–417 (1956).Article MathSciNet Google Scholar Berry, M. V. Proc. R. Soc. Lond. Math. Phys. Sci. 392, 45–57 (1984).ADS Google Scholar Bhandari, R. Phys. Rep. 281, 1–64 (1997).Article ADS Google Scholar Liang, C. & Chen, X. In Electromagnetic Frontier Theory Exploration 275–288 (De Gruyter, 2020).Download referencesAuthor informationAuthors and AffiliationsDepartment of Physics “Ettore Pancini”, Università di Napoli “Federico II”, Napoli, ItalyBruno Piccirillo & Verónica Vicuña-HernándezINFN, National Institute for Nuclear Physics, Napoli Unit, Napoli, ItalyBruno Piccirillo & Verónica Vicuña-HernándezAuthorsBruno PiccirilloYou can also search for this author in PubMed Google ScholarVerónica Vicuña-HernándezYou can also search for this author in PubMed Google ScholarCorresponding authorCorrespondence to Bruno Piccirillo.Ethics declarations Competing interests The authors declare no competing interests. About this articleCite this articlePiccirillo, B., Vicuña-Hernández, V. Tuning optical cavities by Möbius topology. Nat. Photon. (2022). https://doi.org/10.1038/s41566-022-01136-2Download citationPublished: 22 December 2022DOI: https://doi.org/10.1038/s41566-022-01136-2
|
Physics
|
CAPE CANAVERAL, Fla. (AP) — A spacecraft that plowed into a small, harmless asteroid millions of miles away succeeded in shifting its orbit, NASA said Tuesday in announcing the results of its save-the-world test. The space agency attempted the first test of its kind two weeks ago to see if in the future a killer rock could be nudged out of Earth’s way.
“This mission shows that NASA is trying to be ready for whatever the universe throws at us,” NASA Administrator Bill Nelson said during a briefing at NASA headquarters in Washington.
The Dart spacecraft carved a crater into the asteroid Dimorphos on Sept. 26, hurling debris out into space and creating a cometlike trail of dust and rubble stretching several thousand miles (kilometers). It took days of telescope observations from Chile and South Africa to determine how much the impact altered the path of the 525-foot (160-meter) asteroid around its companion, a much bigger space rock.
Before the impact, the moonlet took 11 hours and 55 minutes to circle its parent asteroid. Scientists had hoped to shave off 10 minutes but Nelson said the impact shortened the asteroid’s orbit by about 32 minutes. Neither asteroid posed a threat to Earth — and still don’t as they continue their journey around the sun. That’s why scientists picked the pair for the world’s first attempt to alter the position of a celestial body.
“We’ve been imagining this for years and to have it finally be real is really quite a thrill,” said NASA program scientist Tom Statler.
Launched last year, the vending machine-size Dart — short for Double Asteroid Redirection Test — was destroyed when it slammed into the asteroid 7 million miles (11 million kilometers) away at 14,000 mph (22,500 kph). Johns Hopkins University’s Applied Physics Laboratory in Maryland built the spacecraft and managed the $325 million mission.
“This is a very exciting and promising result for planetary defense,” said the lab’s Nancy Chabot.
___
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education. The AP is solely responsible for all content.
|
Physics
|
Composite image of the Didymos-Dimorphos system taken on November 30, showing its new ejecta tail. Image: Magdalena Ridge Observatory/NM Tech
Scientists continue to pore over the results of NASA’s stunningly successful DART test to deflect a harmless asteroid. As the latest findings suggest, the recoil created by the blast of debris spewing out from Dimorphos after impact was significant, further boosting the spacecraft’s influence on the asteroid.
NASA’s fridge-sized spacecraft smashed into the 535-foot-long (163-meter) Dimorphos on September 26, shortening its orbit around its larger partner, Didymos, by a whopping 33 minutes. That equates to several dozen feet, demonstrating the feasibility of using kinetic impactors as a means to deflect threatening asteroids. A stunning side effect of the test was the gigantic and complex plumes that emanated from the asteroid after impact. The Didymos-Dimorphos system, located 7 million miles (11 million kilometers) from Earth, even sprouted a long tail in the wake of the experiment. DART, short for Double Asteroid Redirection Test, had a profound impact on Dimorphos, kicking up a surprising amount of debris, or “ejecta,” in the parlance of planetary scientists.
Animated image showing changes to the Didymos-Dimorphos system in the first month following DART’s impact. Gif: University of Canterbury Ōtehīwai Mt. John Observatory/UCNZ
Dimorphos, as we learned, is a rubble pile asteroid, as opposed to a dense, tightly packed rocky body. This undoubtedly contributed to the excessive amount of ejected debris, but scientists weren’t entirely sure how much debris the asteroid shed as a result of the impact. Preliminary findings presented on Thursday at the American Geophysical Union’s Fall Meeting in Chicago are casting new light on this and other aspects of the DART mission.
Not only did DART kick up tons of ejecta, it also triggered a recoil effect that further served to nudge the asteroid in the desired direction, as Andy Rivkin, DART investigation team lead, explained at the meeting. “We got a lot of bang for the buck,” he told BBC News. Indeed, had Dimorphos been a more compact body, the same level of recoil likely wouldn’t have happened. “If you blast material off the target then you have a recoil force,” explained DART mission scientist Andy Cheng from the Johns Hopkins University Applied Physics Lab, who also spoke at the meeting. The resulting recoil is analogous to letting go of a balloon; as the air rushes out, it pushes the balloon in the opposite direction. In the case of Dimorphos, the stream of ejecta served as the air coming out of the balloon, which likewise pushed the asteroid in the opposite direction.
Planetary scientists are starting to get a sense as to how much debris got displaced. DART, traveling at 14,000 miles per hour (22,500 km/hour), struck with enough force to spill over 2 million pounds of material into the void. That’s enough to fill around six or seven rail cars, NASA said in a statement. That estimate might actually be on the low side, and the true figure could possibly be 10 times higher, Rivkin said at the meeting.
The scientists assigned DART’s momentum factor, known as “beta,” a value of 3.6, meaning that the momentum transferred into Dimorphos was 3.6 times greater than an impact event that produced no ejecta plume. “The result of that recoil force is that you put more momentum into the target, and you end up with a bigger deflection,” Cheng told reporters.
“If you’re trying to save the Earth, this makes a big difference.” That’s a good point, as those values will dictate the parameters for an actual mission to deflect a legitimately dangerous asteroid. Cheng and his colleagues will now use these results to infer the beta values of other asteroids, a task that will require a deeper understanding of an object’s density, composition, porosity, and other parameters. The scientists are also hoping to figure out the degree to which DART’s initial hit moved the asteroid and how much of its movement happened on account of the recoil. The speakers also produced another figure—the length of the tail, or ejecta plume, that formed in the wake of the impact. According to Rivkin, Dimorphos sprouted a tail measuring 18,600 miles (30,000 km) long. “Impacting the asteroid was just the start,” Tom Statler, the program scientist for DART and a presenter at the meeting, said in the statement. “Now we use the observations to study what these bodies are made of and how they were formed—as well as how to defend our planet should there ever be an asteroid headed our way.”
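A back-of-the-envelope check shows how a beta of 3.6 translates into a velocity change. The spacecraft and asteroid masses below are rough outside estimates, labeled as assumptions because they are not quoted in the article; with them, the result lands in the millimeters-per-second range implied by the reported orbital change.

```python
# Back-of-the-envelope sketch of the momentum transfer, using beta = 3.6 from the
# article. The spacecraft mass and Dimorphos mass are rough outside estimates
# (assumptions), not values quoted in the text.
beta  = 3.6               # momentum enhancement factor (from the article)
m_sc  = 570.0             # kg, approximate DART mass at impact (assumption)
v_sc  = 22500.0 / 3.6     # impact speed: 22,500 km/h converted to m/s
M_ast = 4.3e9             # kg, rough Dimorphos mass estimate (assumption)

delta_v = beta * m_sc * v_sc / M_ast
print(f"Velocity change imparted to Dimorphos ~ {delta_v * 1000:.1f} mm/s")
```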
|
Physics
|
On the morning of October 9, astronomers’ inboxes pinged with a relatively modest alert: NASA’s Swift Observatory had just detected a fresh burst of energy, assumed to be coming from somewhere within our own galaxy. But six hours later—when scientists realized an instrument on the Fermi Space Telescope had also flagged the event—another more pressing email arrived. “We believe that this source is now likely a gamma-ray burst,” it read. “This would suggest a highly energetic outburst, and therefore we strongly encourage follow-up.” In other words, this was a career-making chance to catch a rare celestial event in real time.Astronomers around the world sprang into action. They were eager to point their telescopes at this powerful, jetted explosion of the most energetic photons in our universe. “And by jetted, I mean like a firehose of emission,” says Wen-fai Fong, an astrophysicist at Northwestern University. Blasts like this are thought to be caused by the supernovae of giant stars, destructive collapses that give birth to black holes. The burst, dubbed GRB 221009A, went off about 2 billion light years away in the Sagitta constellation—one of the closest and most energetic ever observed—and it’s likely that one of the jets was fortuitously pointed directly at Earth. Together, these factors made for a flash at least 10 times brighter than all the others spotted in the three decades since such bursts were discovered, leading some astronomers to dub it the “BOAT”—brightest of all time.“I kept thinking, is this real? Because if it is, it’s an extremely rare, once-in-a-lifetime type of event,” Fong says. She and others are in the thick of collecting data that they hope will confirm that the rays actually came from a supernova, and help them isolate which stellar properties led to such an energetic explosion and how much of the collapsing material got spat out by the infant black hole. (Theoretical musings have already started appearing on the arXiv preprint server.)While detecting supernovae is now fairly common, it’s rarer to catch one in conjunction with a gamma-ray burst—they’re usually too faint to show up because they are so far away, and only a fraction of supernovae actually generate these explosions. But since this burst was so intense, scientists expect to see the supernova very clearly. “It’s really reinvigorated the community,” Fong says. “Everyone who has a telescope, even if they don’t normally study gamma-ray bursts, is trying to point their detectors at this to get the most complete dataset that we can.”Gamma rays from the blast were recorded for several hundred seconds. Next came a slew of lower energy photons, including x-rays, optical and infrared light, and radio waves. It’s this afterglow that astronomers at ground-based telescopes are hungry to capture, because observing how the influx of photons changes over time will help them characterize the types of stars producing such bursts, the mechanisms driving these explosions, and the resulting environments they produce. These insights could shed light on what influence gamma-ray bursts have on future generations of stars, and determine whether stellar deaths make life possible for us on Earth by producing the heavy elements that can heat a planet’s interior and help sustain its magnetic field.Because the emission spans nearly all wavelengths of light, many different instruments can observe it, which has turned the gamma-ray burst postmortem into a global scientific event. 
Orbiting satellites like NASA’s NuSTAR are measuring its high energy x-rays, while sites like the Australia Telescope Compact Array are collecting the burst’s radio emission. “If we don’t get data one night, we can pretty much guarantee that someone will,” says Jillian Rastinejad, a Northwestern graduate student working with Fong. Together, they’re spearheading observations of visible light from the burst using the Gemini South telescope in Chile, data that will be supplemented by measurements from the Lowell Discovery Telescope in Arizona, South Korea’s Bohyunsun Optical Astronomy Observatory, and India’s Devasthal Fast Optical Telescope. Even the James Webb Space Telescope got in on the action, as scientists reported the afterglow observed in infrared last Friday.These photons will linger for months—maybe years—though they’re already starting to dim. But scientists are also looking forward to what comes after the afterglow: a spike in optical and infrared light coming from the apparent supernova itself, which will confirm that the gamma-ray burst was indeed triggered by a cataclysmic stellar death. This signal typically appears 14 to 20 days after the burst.With this data, astronomers could potentially link the existence of heavy elements, like silver and gold, to the environment surrounding the blast. Back in 2017, they discovered that short gamma-ray bursts—those lasting just a couple of seconds—generated the hot, dense, and neutron-rich conditions necessary to forge heftier members of the periodic table, like thorium, which radioactively decays in Earth’s core to generate heat. But Brian Metzger, an astrophysicist at Columbia University, has a hunch that the longer bursts may do this, too. “We haven’t had many opportunities to see supernovae from these types of very energetic gamma-ray bursts,” he says, but this event will allow his theory to be tested. “So I expect some surprises.”At some telescopes, data-taking days—or rather, nights—are numbered. Soon, the burst will be at a celestial position that’s no longer visible from these vantage points after the sun goes down. Those astronomers will move on to characterizing first detections of the blast. But analyzing the data may prove challenging, particularly for those studying the initial gamma-rays. “Our instruments are very sensitive, and they’re meant to detect faint sources,” says Judith Racusin, a deputy project scientist of the Fermi Space Telescope. But this time, millions of photons were collected. “They all kind of get jumbled together,” she says. “So instead of detecting the energy of each individual gamma ray, we detect the sum of the energy of those gamma rays.” That’ll make it hard to extract information from the true number of photons observed and their energies—one downside to the blast being so extraordinarily bright.Still, astronomers know they’re lucky to have witnessed such an event. “It’s exciting to have such a monster in the backyard,” Metzger says. They expect to confirm stellar collapse as the genesis of the blast as soon as this week—though it will take time for those results to be published. In the meantime, they’ll continue to capture the glowing remnants of the explosion, share new observations with each other, and theorize about the physics behind what many are calling the celestial event of the century. “This is an ongoing effort,” Fong says. “And we’re just watching the story unfold.”
|
Physics
|
It’s not easy for human beings to clean up after a hurricane or oil spill. To find out what hazardous chemicals are in an area, or if the air is safe to breathe, disaster response teams risk putting themselves in danger.So David Lary, a physics professor at the University of Texas at Dallas, is developing a safer cleanup task force.It’s staffed with robots.Lary leads a research group called MINTS-AI, which stands for Multi-Scale Integrated Interactive Intelligent Sensing for Actionable Insights. The group is training a fleet of robots that can collect data about the environment all on their own by walking, swimming and flying.The robots can navigate environmental sites that might be dangerous or challenging for humans to enter, and collect thousands of data points in a matter of minutes. Lary hopes the tech can clue us into our environment without putting people in danger in the process.Lary’s journey to robot research has been 35 years in the making. After creating a model to study ozone loss in the atmosphere as part of his Ph.D. research at Cambridge University, he hit a roadblock.How could he analyze multiple different sets of data – for example, temperature, air pressure and humidity – to make one conclusive prediction about the environment?“Many of these observations don’t necessarily agree with each other,” Lary said. “About 25, 30 years ago … I was looking around earnestly to find a way to deal with this, because it’s a particularly pernicious problem.”Lary found his answer in machine learning: training computers, or robots, to make those accurate predictions.Related:Richardson physics professor hoping to create the Google Maps of air pollutionUsing machine learning to study the environment has advantages when it comes to cost as well as safety. The most accurate sensors to study air and water quality can be large and clunky and cost anywhere from $100,000 to $1,000,000.Lary’s team acquires sensors that cost around $500 to $10,000, and pairs them with machine learning so that they’re drawing conclusions with the accuracy of a more expensive sensor.“How do you go from $500 to $100,000?” asked Lakitha Wijeratne, a research associate for the UT Dallas Office of Information Technology and a member of Lary’s lab. “The difference can be made up using machine learning.”Lakitha Wijeratne, 33, poses for a portrait next to a drone, Tuesday, July 5, 2022 at University of Texas at Dallas in Richardson Texas. The sensing system drone is used to collect instant data about pollutants in the water and in the air in remote locations.(Rebecca Slezak / Staff Photographer)Teaching a robot is a bit like teaching a baby. Just like a parent might show a child a flashcard with a fluffy-eared mammal, and explain it’s called a dog, Lary’s team can teach robots what data points – for example, different air temperatures – correspond to good or bad air quality.After the teaching phase, a parent might test their child by showing them a picture of a dog, and asking them what it is. Lary’s team can show his robots new temperature levels they haven’t seen before and have the robots make predictions about air quality.The team can then check the robots’ work against their “flashcard” – in this case, a known answer sheet – to make sure they’re getting it right.Last year, Lary and his team tested a team of air and water robots in Plano to see how good the robots were at collecting accurate data on a body of water they had never encountered before. 
Their research was published in the journal Sensors last year.Their flying robot was equipped with several sensors, including a special kind of camera called a hyperspectral imager. While a regular camera takes pictures using three wavelengths of light – red, green and blue –, Lary’s hyperspectral camera takes pictures with 462 wavelengths. This means each pixel of the camera’s image has detailed information about the chemical composition of that square of water.“All of that information is captured in the spectrum of light that reflects from the water,” said John Waczak, a graduate student researcher in Lary’s lab. “That’s like its little chemical thumbprint.”After giving the aerial robot time to fly over the body of water and get its bearings, Lary’s team released a red substance into the water to “contaminate” it. They then sent the robot to test whether the contaminant would come up in its hyperspectral pictures. It did.Pratap Tokekar is an assistant professor of computer science at the University of Maryland who was not involved in the UTD research. He said Lary’s use of flying robots to predict measurements that would be slower to collect from the water’s surface is novel.“The fact that you can make predictions of surface-level measurements from aerial robots is, I think, exciting, and can help scale these systems to larger environments, to monitoring larger bodies of water, and so on,” Tokekar said.Lary’s team demo-ed their air and water robots, as well as a walking robot, for their funders at a property in Montague, Texas, this March.Environmental robots are only a part of Lary’s greater mission of harnessing sensing and machine learning to keep people safe. His group is also working on a set of air sensors that take in data about temperature, pressure, humidity and more to measure air quality at different points in Dallas.An air reader sits in the MINTS lab, Tuesday, July 5, 2022 at University of Texas at Dallas in Richardson Texas. Among the various data recorded by the reader, it can detect debris in the air, radiation exposure, the ozone and temperature(Rebecca Slezak / Staff Photographer)Data from around 30 sensors is already publicly available at the SharedAirDFW website, and Lary’s team has built around 100 more to install in the DFW area.Lary’s team continues to fine-tune their sensors, figuring out the best ways to get accurate, real-time data on air and water quality. They’re seeking resources and funding to create more air quality sensors and spread them through the DFW area. And they’re working on full-body sensors that measure the body’s response to the air around it, in hopes of letting people know in real-time what air they’re breathing.Related:Joppa study measuring effect of air quality on health is a first in Texas, researchers sayLary hopes his work could help researchers predict changes in the environment in real time. He compared the impact of his research to the canary in the coal mine, whose death let miners know that the air might not be safe to breathe.“What I’m trying to work towards, and it’s actually quite a challenging goal, is that no canary has to die,” Lary said. “It’s much better that stuff never hits the fan.”Adithi Ramakrishnan is a science reporting fellow at The Dallas Morning News. Her fellowship is supported by the University of Texas at Dallas. The News makes all editorial decisions.
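The calibration loop described above (train cheap sensors against a reference instrument, then let a model predict reference-grade values) maps onto a few lines of standard supervised learning. The sketch below is a minimal illustration of that idea, assuming made-up sensor features and a hypothetical co-located reference reading; it is not the MINTS-AI group's actual pipeline.

```python
# Minimal sketch of the calibration idea described above: pair readings from a
# cheap sensor with co-located readings from a reference instrument, fit a
# regression model, then use the model to "upgrade" new cheap-sensor readings.
# All variable names and numbers are illustrative, not the MINTS-AI pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical co-located data: cheap-sensor features (raw particulate signal,
# temperature, humidity) and the reference instrument's PM2.5 reading.
n = 2000
raw_pm = rng.uniform(0, 80, n)
temp_c = rng.uniform(10, 40, n)
rel_hum = rng.uniform(20, 95, n)
# Pretend the cheap sensor over-reads in high humidity; the reference corrects it.
reference_pm25 = raw_pm * (1 - 0.004 * (rel_hum - 50)) + rng.normal(0, 1.5, n)

X = np.column_stack([raw_pm, temp_c, rel_hum])
X_train, X_test, y_train, y_test = train_test_split(
    X, reference_pm25, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)          # "teaching phase": flashcards with known answers

pred = model.predict(X_test)         # "testing phase": readings the model hasn't seen
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"Calibrated RMSE vs reference: {rmse:.2f} µg/m³")
```

Here a random forest plays the role of the flashcard learner; in practice the choice of model and input features would depend on the sensors involved.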
|
Physics
|
This composite image shows the distribution of dark matter, galaxies, and hot gas in the core of the merging galaxy cluster Abell 520, formed from a violent collision of massive galaxy clusters. The blend of blue and green in the center of the image reveals that a clump of dark matter resides near most of the hot gas, where very few galaxies are found. (Image credit: NASA, ESA, CFHT, CXO, M.J. Jee (University of California, Davis), and A. Mahdavi (San Francisco State University)) Astronomers estimate that roughly 85% of all the matter in the universe is dark matter, meaning only 15% of all matter is normal matter. Accounting for dark energy, the name astronomers give to the accelerated expansion of the universe, dark matter makes up roughly 27% of all the mass energy in the cosmos, according to CERN (the European Organization for Nuclear Research).Astronomers have a variety of tools to measure the total amount of matter in the universe and compare that to the amount of "normal" (also called "baryonic") matter. The simplest technique is to compare two measurements. The first measurement is the total amount of light emitted by a large structure, like a galaxy, which astronomers can use to infer that object's mass. The second measurement is the estimated amount of gravity needed to hold the large structure together. When astronomers compare these measurements on galaxies and clusters throughout the universe, they get the same result: There simply isn't enough normal, light-emitting matter to account for the amount of gravitational force needed to hold those objects together. Thus, there must be some form of matter that is not emitting light: dark matter. Different galaxies have different proportions of dark matter to normal matter. Some galaxies contain almost no dark matter, while others are nearly devoid of normal matter. But measurement after measurement gives the same average result: Roughly 85% of the matter in the universe does not emit or interact with light. Not enough baryonsThere are many other ways astronomers can validate this result. For example, a massive object, like a galaxy cluster, will warp space-time around it so much that it will bend the path of any light passing through — an effect called gravitational lensing. Astronomers can then compare the amount of mass that we see from light-emitting objects to the mass needed to account for the lensing, again proving that extra mass must be lurking somewhere.Astronomers can also use computer simulations to look at the growth of large structures. Billions of years ago, our universe was much smaller than it is today. It took time for stars and galaxies to evolve, and if the universe had to rely on only normal, visible matter, then we would not see any galaxies today. Instead, the growth of galaxies required dark matter "pools" for the normal matter to collect in, according to a lecture by cosmologist Joel Primack.Lastly, cosmologists can look back to when the cosmos was only a dozen minutes old, when the first protons and neutrons formed. Cosmologists can use our understanding of nuclear physics to estimate how much hydrogen and helium were produced in that epoch.These calculations accurately predict the ratio of hydrogen to helium in the present-day universe.
They also predict an absolute limit to the amount of baryonic matter in the cosmos, and those numbers agree with observations of present-day galaxies and clusters, according to astrophysicist Ned Wright.Alternatives to dark matterAlternatively, dark matter may be a misunderstanding of our theories of gravity, which are based on Newton's laws and Einstein's general relativity.Astronomers can tweak those theories to provide explanations of dark matter in individual contexts, like the motions of stars within galaxies. But alternatives to gravity have not been able to explain all the observations of dark matter throughout the universe. All the evidence indicates that dark matter is some unknown kind of particle. It does not interact with light or with normal matter and makes itself known only through gravity. In fact, astronomers think there are trillions upon trillions of dark matter particles streaming through you right now. Scientists hope to nail down the identity of this mysterious component of the universe soon.Originally published on LiveScience.
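The mass-comparison technique described above can be made concrete with a rough back-of-the-envelope calculation: the mass needed to hold a rotating galaxy together scales as v²r/G, which can be set against the mass inferred from starlight. The Python sketch below uses illustrative, Milky-Way-like numbers, not measurements from any particular survey.

```python
# Back-of-the-envelope sketch of the comparison described above: infer a galaxy's
# dynamical mass from how fast it rotates (M ~ v^2 r / G) and compare it with the
# mass you would guess from its starlight alone. The numbers are illustrative.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
KPC = 3.086e19           # metres in a kiloparsec

v = 220e3                # rotation speed at the outer disc, m/s (Milky-Way-like)
r = 20 * KPC             # radius where that speed is measured

m_dynamical = v**2 * r / G           # mass needed to hold the orbit together
m_luminous = 6e10 * M_SUN            # rough stellar mass inferred from light

print(f"Dynamical mass : {m_dynamical / M_SUN:.2e} solar masses")
print(f"Luminous mass  : {m_luminous / M_SUN:.2e} solar masses")
print(f"Ratio          : {m_dynamical / m_luminous:.1f}x  -> the shortfall attributed to dark matter")
```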
Paul M. Sutter is an astrophysicist at SUNY Stony Brook and the Flatiron Institute in New York City. Paul received his PhD in Physics from the University of Illinois at Urbana-Champaign in 2011, and spent three years at the Paris Institute of Astrophysics, followed by a research fellowship in Trieste, Italy, His research focuses on many diverse topics, from the emptiest regions of the universe to the earliest moments of the Big Bang to the hunt for the first stars. As an "Agent to the Stars," Paul has passionately engaged the public in science outreach for several years. He is the host of the popular "Ask a Spaceman!" podcast, author of "Your Place in the Universe" and "How to Die in Space" and he frequently appears on TV — including on The Weather Channel, for which he serves as Official Space Specialist.
|
Physics
|
Longstanding models of the brain describe it as something like a biological computer. According to this traditional picture, the brain processes information like a relay. Individual neural cells detect a stimulus, then pass that data along from one neuron to the next, through a sequence of gates.The model isn’t wrong, but it leaves a lot unexplained, particularly how sensory cells in animals can react differently to the same stimulus. For example, a quick flash of light might normally activate a sensory cell in an animal, but the sensory cell might not activate if the animal’s attention is focused on something else, other than the light. Experts want to know why that might happen.In a recent paper, a team of researchers from the Salk Institute for Biological Studies, in San Diego, Cali., offer a new mathematical model and possible explanation. Rather than comparing an interaction between individual neurons to a relay, it might make more sense to compare it to ocean waves. Sensory ExperiencesInformation processing might in some cases be better described as an interaction of waves, explains Sergei Gepshtein, a scientist specializing in perceptual psychology and sensorimotor neuroscience and one of the authors of the study. Instead of one neuron responding to a given stimulus, distributed patterns of neuronal activity across the brain form a wave pattern of alternating peaks and troughs, just like the peaks and troughs of electromagnetic waves or ocean waves. And like those more familiar waves, waves of brain activity — what the researchers call neural waves — either augment or cancel each other when they meet. “Sensory experiences arise in your mind as result of this interaction,” explains Gepshtein. Mathematical ModelThe researchers tested their mathematical model physiologically and behaviorally. In the behavioral study, researchers showed subjects briefly two light patterns made up of alternating strips of black and white lines, called luminance gratings. Between the patterns, a faint vertical line, called a probe, appears. Researchers asked subjects if the probe appeared in the top or bottom half of the luminance gratings. The subject’s ability to detect the probe was better at some locations and worse at others. When researchers plotted the results, they formed the wave pattern that the mathematical model predicted. In other words, the ability to see the probe depended on how the neural waves were superimposed at any particular location. There are many potential uses for this new framework for understanding perception. For example, scientists suggest it could clarify how organisms, including humans, process spatial information. While this study focused on visual perception, Gepshtein points out that neural waves are a property of many parts of the cerebral cortex. Scientists could then use this model to understand other kinds of perception as well. They could also use it to design artificial intelligence, he says. Gepshtein stresses that this new model does not replace the traditional one, but instead complements it. “It’s a different way of thinking about how the brain processes information, and it helps to understand phenomena that were difficult to understand from the traditional point of view,” says Gepshtein. A good analogy, he says, is the particle-wave duality in chemistry and physics — the discovery that electromagnetic waves, including light, have properties of both particles and waves. 
When thinking about how the brain processes information, we can sometimes use the traditional model of individual neurons responding to stimuli. But in many cases, we can get a clearer picture of what’s going on by thinking of the process as a wave of neuronal activity.
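The superposition picture has a simple quantitative core: two waves with the same spatial frequency but different phases add up, so the combined amplitude (and, in this framework, the detectability of a probe) varies periodically with position. The sketch below is a purely illustrative toy model of that interference pattern, not the Salk team's published model or parameters.

```python
# Toy sketch of the wave-interference picture described above: two "neural waves"
# triggered at different cortical locations are superposed, and the combined
# amplitude at each probe position stands in for how detectable the probe is.
import numpy as np

positions = np.linspace(0, 10, 201)        # probe positions (arbitrary units)
k, phase_offset = 2.0, 1.2                 # spatial frequency and relative phase

wave_a = np.cos(k * positions)                   # wave evoked by the first grating
wave_b = np.cos(k * positions + phase_offset)    # wave evoked by the second grating

combined = wave_a + wave_b                 # superposition: peaks reinforce, troughs cancel
sensitivity = (combined - combined.min()) / (combined.max() - combined.min())

best = positions[np.argmax(sensitivity)]
worst = positions[np.argmin(sensitivity)]
print(f"Probe easiest to see near x = {best:.2f}, hardest near x = {worst:.2f}")
```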
|
Physics
|
In a famous letter to a bereaved family friend, Einstein wrote: “For those of us who believe in physics, the distinction between past, present and future is only a stubbornly persistent illusion". This has been widely interpreted to mean that Einstein’s theory of relativity itself implies that the passage of time is an illusion and that time, like space, has no direction, a position often referred to as “the block universe”. But despite Einstein revolutionizing our understanding of time, nothing in his theory of relativity suggests that the distinction between past, present, and future is an illusion, argues Tim Maudlin. Albert Einstein had the double-edged gift of writing striking aperçus. Instead of saying “Quantum mechanics is on the right track, but I am not convinced that the laws of physics are indeterministic” he penned to his friend Max Born “The theory provides much, but it doesn’t bring us closer to the mystery of the Old One. In any case, I am convinced that he doesn’t roll dice”. The quotability of “God doesn’t play dice with the universe” has resulted in it being widely accepted as Einstein’s primary complaint about quantum mechanics, which is not the case. It was rather the non-locality—the “spooky action-at-a-distance” in his pungent phrase—that he really found unacceptable. SUGGESTED READING Einstein and why the block universe is a mistake By DeanBuonomano Another extremely widely cited Einstein text, also written as a personal note, has caused similar confusion. When his dear friend Michele Besso died in 1955, preceding Einstein himself by only a month, Einstein wrote a letter of condolence to Besso’s family. In it, he said “Now he has departed from this strange world a little ahead of me. That signifies nothing. For those of us who believe in physics, the distinction between past, present and future is only a stubbornly persistent illusion". Since one might reasonably say that time itself is the foundation of the distinction between past, present and future—it is the temporal relation of earlier/later between events that underpins these concepts—this passage has largely been taken to suggest that time itself is, in some sense, illusory. This quote has been the basis of what has come to be known as “the block universe”, which supposedly implies that the passage of time is an illusion, and that time is just like space, another dimension with no inherent fundamental direction. However, there is nothing in Einstein’s theory of relativity to support any of these claims.___In truth, I think that Einstein never regarded time as an illusion.___To begin with, if one of the discoveries of physics were that time is an illusion, it would be hard to even imagine a more momentous achievement. But it ought to immediately strike one as peculiar that if Einstein had really thought his theory of relativity implied that time is an illusion, he would only include this claim in a private note of condolence that he had no reason to expect ever to be made public.In truth, I think that Einstein never regarded time as an illusion. His note is simply a considerate attempt to console Besso’s family who was suffering a recent loss. Indeed, there is a subtly self-undermining character to the idea that such an illusion could be “persistent”, since something only persists if it remains the same through time. 
Nonetheless, Einstein’s remark is often taken to be emblematic of the suggestion that time is somehow not “real”, that time in the Theory of Relativity has become “spatialized”, that there is no fundamental distinction of earlier/later or past/present/future, that time does not “pass” at all.There is certainly nothing particular to the Theory of Relativity to support any of these claims – when it comes to the question of whether time is real or illusory, there is no way in which it essentially differs from Newtonian physics. Einstein and Newton do of course disagree about the details of the temporal structure of the universe. For Newton, an “instant of duration” is a global thing: as he says “the moment of duration is the same at Rome and at London, on the earth and on the stars, and throughout all the heavens.” This expresses the idea of absolute simultaneity: that for any given event there is an objective physical fact about which other events in other places take place “at the same time”. For Newton, time is the succession of the global instants. This is the everyday concept of time most of us operate with. If I snap my fingers here on Earth, I may not know what is happening at that very instant on, say, Alpha Centauri but the usual thought is that something precise is.___To postulate a novel structure for time is not to deny the existence of time or to render it an illusion. There is still a completely objective fact, according to Relativity, that some events precede others in time.___Relativity eliminates the notion of absolute simultaneity. In its place, with regards to the event of my snaping my fingers, it postulates a three-fold division of other events into those that are past- related, future related, and space-like related to it. The main difference from Newton’s picture is that there will not be just one “moment” on Alpha Centauri, as there is only one Newtonian moment that is simultaneous with it.But to postulate a novel structure for time is not to deny the existence of time or to render it an illusion. There is still a completely objective fact, according to Relativity, that some events precede others in time. Besso’s death preceded Einstein’s, wherever you are in the universe. That’s part of the reason that Einstein could write a letter of condolence to Besso’s family but not the other way around.Nonetheless, a rather widespread terminology has taken root in both physics and philosophy that appears to deny this. We are told that the Theory of Relativity—and maybe even Newtonian physics!—postulates a “block universe”, and that in a block universe “time is an illusion” or “there is no passage of time” or “time is just like space” or “there is no fundamental direction of time”. Whatever is meant by these locutions, they seem to be intended to express some astounding discovery or postulate about the nature of time itself. Certain, saying that “time is an illusion” suggests that. There is really nothing that seems to be less possibly illusory than time. Descartes, at the very peak of his skeptical fever in the Mediations, manages to cast doubt on the existence of space but not of time. He never doubts that he is engaged in a process of thinking in which certain thoughts give rise to others, for example.All proponents of a “block universe” hold that space-time is (at least macroscopically) four-dimensional. 
But that just means that locating a particular event (such as a particular snapping of my fingers) requires specifying four co-ordinates: a longitude, latitude, altitude and time, for example. Newton would of course not object to that. All proponents of a block universe believe that there is a unique past and a unique future. The past was as it was, and the future will be as it will be, irrespective of what we know about either. This is not an assumption of determinism: que sera, sera is an analytic triviality, not a contentious physical postulate. But if all of this is common ground, what makes the “block universe hypothesis” at all astonishing? Why would pointing these facts out console Besso’s family? SUGGESTED VIEWING The Illusion of Now With Julian Barbour, Tim Maudlin, Emily Thomas, Joanna Kavenna One scholar who has done us the favor of explicitly defining a “block universe” is Huw Price. In his book Time’s Arrow and Archimedes’ Point, Price makes clear exactly what he takes a “block universe” view to posit: “For now on, I will simply take for granted the main tenets of the block universe view. In particular, I’ll assume that the present has no special objective status, instead being perspectival in the way that the notion of here is. And I will take it for granted that there is no objective flow if time”. Note that Price’s characterization has two clauses, which are quite different. They differ both in content and more importantly in how surprising or revisionary they would be.To take the first: “here” is what is called a token-reflexive or indexical term. Unlike, say, “Addis Ababa”, the location referred to by a token of “here” depends on where the speaker is when it is pronounced or written. If the speaker happens to be in London, then “here” refers to London, and if in Addis Ababa then “here” refers to Addis Ababa. In that clear sense, “here” has no “objective status”: it refers to different places when spoken at different places. And similarly for temporal terms such as “now” or “yesterday” or “in a month”: obviously what particular regions of space-time are indicated depend on when the token is produced, just as the spatial locutions depend on where. There is no “objective now” in the sense of some moment of time that all tokens of “now” refer to. But so what? Nobody ever thought differently. The idea that “the now” has a “special objective status” in a way that “the here” does not is already universally rejected.So if the “block universe” is supposed to be something controversial, that has to be in the second clause “there is no objective flow of time”. But what is that supposed to mean? That it is not an objective fact that some events take place before a given event and others after? That would mean that time has no directionality at all. And that certainly would be a revolutionary discovery. But there is nothing at all in physics—classical or Relativistic—that suggests it.Spatial dimensions have no directionality. One can say that a certain longitude line runs north/south, but it would make no sense that it really runs north-to-south as opposed to south-to-north. But time does have a directionality, indicated by the asymmetric relation before/after. This is a distinction that all physicists—including Newton and Einstein—as well as all everyday folk take for granted. 
The before/after distinction grounds the cause/effect distinction: everyone would accept that an earlier configuration of the planets, together with the laws of gravity, cause and explain their later configuration, but no one would say the later configuration causes or explains the earlier. It may indicate or allow one to infer the earlier, but not explain it.___Temporal structure is fundamentally different from spatial structure. And temporal structure—time itself—is not an illusion.___Every physicist who makes reference to “the initial conditions” of the universe, or the “fate of the universe” presupposes a direction of time. And that is something Einstein did without a qualm. And he never suggested somehow trying to remove that direction or reduce it to something else. SUGGESTED READING Is Einstein still right? By CliffordWill Price is perfectly aware of this, although he does try to eliminate any objective notion of temporal direction—which usually underpins the notions of causation and explanation—from his fundamental account of the universe. And no one can prove it can’t be done, although the obstacles are high and the prospects dim. But that is certainly not a project that Einstein was engaged in, or thought he needed to be. The standard reading of cosmology done in accord with General Relativity is that there is an initial state of the universe which then evolves—in accordance with Einstein’s Field Equations—to later states. The later are accounted for by the earlier. Whether there is an earliest of all, and if so whether it could be explained in any way we could recognize, is a thorny question. But the present practice of physics, including Relativistic physics—takes a fundamental earlier/later distinction, and in that sense a “flow of time” for granted. Temporal structure is fundamentally different from spatial structure. And temporal structure—time itself—is not an illusion.
|
Physics
|
Nestled within the landscape of South Dakota lies the deepest subterranean laboratory in the United States -- a 1,490 meter below-ground, cutting-edge science cavern called the Sanford Underground Research Facility. The experiments conducted here are just as mysterious as you might expect. Things like neutrino physics; cosmic ray quests; nuclear fusion reactions.One of them -- prepare for a mouthful -- is called the LUX-ZEPLIN, or Large Underground Xenon and Zoned Proportional Scintillation In Liquid Noble Gases Experiment. And it relies on a massive, white, cylindrical, awfully sterile-looking, device which stands at its heart.This machine is a dark matter detector.Pictured here is the Liquid Xenon Time Projection Chamber, the heart of the LZ detector, in the clean room before assembly inside the titanium cryostat. Matthew Kapust/Sanford Underground Research Facility And on Thursday, the LZ experiment scientists – an international crew composed of 250 experts – announced a pretty big update on the endeavor. The enormous particle hunter has officially been booted up, successfully passed the check-out phase of its startup operations and delivered its first scientific results. Don't get too excited, though. These results are not our lens into dark matter. Yet. However, they do prove that LZ is definitely working -- and so well, in fact, that the team calls it "the world's most sensitive dark matter detector." It's at least 30 times larger and 100 times more sensitive to finding dark matter signals than its predecessor, the Large Underground Xenon Experiment, or LUX, and is deep enough underground to prevent background noise like cosmic rays from interfering with science observations."Considering we just turned it on a few months ago and during COVID restrictions, it is impressive we have such significant results already," Aaron Manalaysay, LZ physics coordinator and member of the Lawrence Berkeley National Lab, said in a statement. Those results are based on just 60 days' worth of data, but in the coming years, LZ is poised to capture about 20 times more. "We're only getting started," Hugh Lippincott of the University of California, Santa Barbara and LZ spokesperson, said in a statement. What is dark matter and how will LZ find it?Dark matter, as its name suggests, doesn't emit any light. It doesn't transmit any information. In essence, think of dark matter as transparent. Elusive. Incompatible with human vision. But despite its evasiveness, we know dark matter is out there. It does seem to interact with the matter we know and see with our own eyes. The universe is simply expanding far quicker than our physics predicts it should be, and galaxies are strangely held together even though they're spinning so fast you'd expect all their stars inside to fling out like unhinged horses on a merry-go-round. So, unless we have our physics down incorrectly — or are being tricked into thinking we're safely anchored in the Milky Way — there has to be an explanation beyond our present equations. That's where dark matter comes into play.Crews at the Sanford Underground Research Laboratory in Lead, South Dakota, lowering the LUX-ZEPLIN central detector deep into the Earth. Nick Hubbard Scientists call whatever extra stuff is pushing the universe apart dark energy, and items anchoring galaxies in place, dark matter. These mysterious phenomena are estimated to make up a whopping 95% of the mass and energy in the universe. And LZ is on a mission to find some of it.
In short, the LZ experiment is looking for a specific type of theoretical particle called a "weakly interacting massive particle." It's an electromagnetically neutral particle -- meaning it's neither positive like a proton nor negative like an electron -- regularly hypothesized to make up most of the universe's dark matter. It's also categorized as heavy, which is slightly ironic given its acronym. WIMP. The plan is basically to see whether it's possible to catch a WIMP in action, as it interacts with a xenon atom. To do this, the LZ mechanism contains two titanium tanks filled with 10 tons of very pure liquid xenon, Manalaysay explained, which can be penetrated by other particles. With a bit of luck and a lot of precision, the team hopes to catch a few faint interactions between the xenon atoms and those other intruder particles, because one of those particles could possibly be a WIMP. It could possibly be dark matter. If and when such an interaction does happen, the researchers will see it as a flash of light, followed by a second flash of light as the electrons knocked off the xenon atom -- due to the particle collision -- drift to the top of the chamber. This is why the LZ device also includes a whole lot of super calibrated light detectors as well as light amplifiers. That light flash is the crux of the experiment. An incoming particle interacts with a xenon atom, producing a small flash of light and electrons, which are extracted at the top of the detector and produce additional light. LZ/SLAC "The characteristics of the light signals help determine the types of particles interacting in the xenon, allowing us to separate backgrounds and potential dark matter events," Luiz de Viveiros, assistant professor of physics at Penn State and LZ team member, said in a statement."We want to be ready for physics as soon as the first flash of light appears in the xenon," Lippincott said in a statement.
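One detail worth making concrete is how the pair of flashes encodes where in the tank an interaction happened: the delay between the prompt flash and the delayed flash, multiplied by the electron drift speed, gives the depth of the event. The sketch below illustrates that bookkeeping with assumed, typical-order values for a liquid-xenon detector; the numbers are not official LZ specifications.

```python
# Hedged sketch of how a two-flash event could be interpreted: the delay between
# the prompt flash (from the interaction itself) and the delayed flash (from the
# drifted electrons reaching the top of the chamber) encodes the depth of the
# event. The drift speed and detector height below are assumed, typical-order
# values for a liquid-xenon time projection chamber, not official LZ numbers.
DRIFT_SPEED_MM_PER_US = 1.5      # assumed electron drift speed in liquid xenon
DETECTOR_HEIGHT_MM = 1450.0      # assumed active height of the chamber

def event_depth_mm(flash_delay_us: float) -> float:
    """Depth of the interaction below the liquid surface, from the flash delay."""
    return flash_delay_us * DRIFT_SPEED_MM_PER_US

for delay in (50.0, 400.0, 900.0):
    depth = event_depth_mm(delay)
    where = "inside the active volume" if depth <= DETECTOR_HEIGHT_MM else "outside the active volume"
    print(f"Delay {delay:6.1f} µs  ->  depth {depth:7.1f} mm  ({where})")
```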
|
Physics
|
In this April 13, 2017 photo provided by NASA, technicians lift the mirror of the James Webb Space Telescope using a crane at the Goddard Space Flight Center in Greenbelt, Md. The telescope is designed to peer back so far that scientists will get a glimpse of the dawn of the universe about 13.7 billion years ago and zoom in on closer cosmic objects with sharper focus. Laura Betz/AP NASA's Webb telescope has discovered an exoplanet, which is any planet that is outside of our solar system, for the first time, the agency announced Wednesday. The planet, called LHS 475 b, is nearly the same size as Earth, having 99% of our planet's diameter, scientists said. However, it is several hundred degrees hotter than Earth and completes its orbit around its star in two days. LHS 475 b is in the constellation Octans and is 41 light-years away, which is relatively nearby. Scientists are still trying to determine if the planet has an atmosphere. It's possible LHS 475 b has no atmosphere or one made completely out of carbon dioxide, but one option can be totally eliminated. "There are some terrestrial-type atmospheres that we can rule out," said Jacob Lustig-Yaeger, a researcher at the Johns Hopkins University Applied Physics Laboratory in Maryland. "It can't have a thick methane-dominated atmosphere, similar to that of Saturn's moon Titan." Researchers were scanning the skies using NASA's Transiting Exoplanet Survey Satellite (TESS) when they came across the exoplanet, and used Webb's spectrograph technology to further investigate. Spectrographs disperse light from an object into a spectrum, which can give information about the object's temperature, mass and chemical composition. "These first observational results from an Earth-size, rocky planet open the door to many future possibilities for studying rocky planet atmospheres with Webb," said Mark Clampin, astrophysics division director at NASA headquarters in D.C. "Webb is bringing us closer and closer to a new understanding of Earth-like worlds outside our solar system, and the mission is only just getting started."
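The two-day orbit quoted above is enough, together with an estimate of the star's mass, to work out how close the planet sits to its star via Kepler's third law. The sketch below assumes a round-number red-dwarf mass of 0.3 solar masses purely for illustration; it is not a value reported in the study.

```python
# Rough sketch of why a two-day orbit implies a scorching, close-in planet:
# Kepler's third law gives the orbital distance from the period and the star's
# mass. The stellar mass below is an assumed round number, not a measured value.
import math

G = 6.674e-11                 # m^3 kg^-1 s^-2
M_SUN = 1.989e30              # kg
AU = 1.496e11                 # m

star_mass = 0.3 * M_SUN       # assumed red-dwarf mass
period = 2.0 * 24 * 3600      # orbital period: two days, in seconds

semi_major_axis = (G * star_mass * period**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"Orbital distance ≈ {semi_major_axis / AU:.3f} AU "
      f"({semi_major_axis / AU * 100:.1f}% of the Earth-Sun distance)")
```

With these assumptions the planet sits at roughly two percent of the Earth-Sun distance, which is why it runs several hundred degrees hotter than Earth despite orbiting a cool star.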
|
Physics
|
The cosmic inflation hypothesis is needed for the Big Bang model to work, but in its current form, it remains a mere hypothesis, unable to be falsified. A new proposal for how it could be put to the test could result in overthrowing the Big Bang model altogether, opening up new possibilities regarding the origins of the universe, argues Avi Loeb.
Scientific theories often require tweaking to fit the data, but sometimes when those tweaks are big enough, they end up becoming theories of their own. The biggest tweak to the Big Bang model has been the introduction of the cosmic inflation hypothesis. According to this theory, the universe went through a phase of exponential expansion soon after its coming into existence. The only problem is, we can’t seem to test this theory. In the language of philosopher of science Karl Popper, cosmic inflation doesn’t appear to be falsifiable.
That might be about to change. In a recent paper, Sunny Vagnozzi and I propose a way of testing the cosmic inflation hypothesis. By developing detectors that search for the thermal gravitational wave background created 10⁻⁴³ seconds after the Big Bang - the smallest possible fraction of time - we could put the hypothesis to the test. Detection would mean the falsification of the cosmic inflation hypothesis and, by extension, a challenge of the Big Bang theory and a radical transformation of our understanding of the origins of the cosmos.
According to the standard cosmological model, there is a relic from the event of the Big Bang called the cosmic microwave background, and it accounts for a percentage of the static noise visible as “snow” in old-fashioned, analogue TV sets. The photosphere that last scattered this radiation formed 400,000 years after the Big Bang. This spherical surface around us marks the boundary of the transparent volume of the observable universe. We cannot see any farther. To give you a sense of how far back in time we're talking about, the first stars formed about a hundred million years later.
The actual edge of the observable Universe is at the distance that any signal could have travelled at the speed of light over the 13.8 billion years that elapsed since the Big Bang. As a result of the expansion of the Universe, this edge is currently located 46.5 billion light years away. The spherical volume within this boundary is like an archaeological dig centered on us: the deeper we probe into it, the earlier the layer of cosmic history that we uncover, all the way back to the Big Bang, which represents our ultimate horizon. What lies beyond the horizon is unknown.
Within the thin spherical shell between the Big Bang and the microwave background photosphere, the universe was opaque to light. Despite that, we have a way of probing into this layer. Neutrinos have a weak cross-section for interactions, and so the universe was transparent to them back to approximately a second after the Big Bang, when the temperature was ten billion degrees. The present-day universe should be filled with relic neutrinos from that time. And because the expansion of the universe cooled the neutrino background to a present-time temperature of 1.95 degrees above absolute zero, comparable to the 2.73 degrees of the cosmic microwave background, we can differentiate between the two.
___
The large flexibility displayed by numerous possible inflationary models raises concerns that the inflationary paradigm as a whole is not falsifiable
___
Can we probe even deeper into our cosmic archaeological dig? In principle, yes. Gravitational radiation has an even weaker interaction than neutrinos. So much so that the universe was transparent to gravitons all the way back to the earliest instant traced by known physics, the Planck time: 10 to the power of -43 seconds, when the temperature was the highest conceivable: 10 to the power of 32 degrees. A proper understanding of the Planck epoch requires a predictive theory of quantum gravity, which we still haven’t developed. But if gravitational radiation was thermalized at the Planck time, shouldn’t there be a relic background of thermal gravitational radiation with a temperature of about one degree above absolute zero?
Not so, according to the popular theory of cosmic inflation, which suggests that the universe went through a subsequent phase of exponential expansion that diluted all earlier relics to undetectable levels. Inflation was theorized to explain various fine-tuning challenges of the Big Bang model. However, the large flexibility displayed by numerous possible inflationary models raises concerns that the inflationary paradigm as a whole is not falsifiable, even if individual models of it can be ruled out. Is it possible in principle to test the entire inflationary paradigm in a model-independent way?
In our new paper, Sunny Vagnozzi, then a postdoc at the University of Cambridge, and I showed that future detectors could potentially discover the one-degree gravitational wave background, if it exists. This cosmic graviton background adds to the cosmic radiation budget, which otherwise includes microwave and neutrino backgrounds. It therefore affects the cosmic expansion rate of the early universe at a level that might be detectable by the next generation of cosmological probes.
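A rough way to see why such a relic could be detectable is to express it in the standard bookkeeping cosmologists use for extra radiation, the effective number of relativistic species. The sketch below assumes a graviton background temperature of about 0.9 K (roughly the "one degree above absolute zero" mentioned earlier) and applies the textbook formula for a two-polarization relativistic boson; the exact numbers in the paper may differ.

```python
# Hedged back-of-the-envelope for the claim above: express a thermal graviton
# background as an extra contribution to the radiation budget in the usual
# "effective number of neutrino species" bookkeeping. The graviton temperature
# is an assumed illustrative value; the formula is the standard one for an
# extra relativistic boson with two polarizations.
T_NU = 1.95       # relic neutrino temperature today, K (quoted in the article)
T_GW = 0.9        # assumed present-day graviton background temperature, K

delta_n_eff = (8.0 / 7.0) * (T_GW / T_NU) ** 4
print(f"Extra radiation from a {T_GW} K graviton background: ΔN_eff ≈ {delta_n_eff:.3f}")
# Next-generation cosmological surveys target sensitivities of order a few
# hundredths in N_eff, which is why a signal of this size might be testable.
```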
___
Sometimes, the most beautiful possibilities are ruled-out, even if they were conceived by the most brilliant scientists on planet Earth, like Albert Einstein
___
A discovery of the thermal graviton background holds the potential of ruling out the inflationary paradigm and bringing us back to the drawing board of how the Universe began. Given that the interiors of black holes are hidden from view and are risky to venture into, the early universe might represent our best opportunity for testing predictive theories of quantum gravity.
There is no reason to assume that our cosmic roots started at the Planck time. Albert Einstein was inclined to think that our past timeline should have no beginning. But to his dismay, he later realized that the equations of the General Theory of Relativity do not admit a stable static solution, and moreover - the actual universe appears to be expanding. His philosophical preference for no beginning might be validated by the ultimate theory to combine General Relativity with Quantum Mechanics, which could explain what predated the Big Bang.
A predictive theory of quantum gravity could rid us of the Big Bang singularity. But just as with the development of quantum mechanics, we might need guidance from experimental data or else we will have too many possible theoretical scenarios.
Here’s hoping that the new paper I wrote with Sunny will lead to progress by eliminating theoretical possibilities. Scientific knowledge encapsulates what actually exists in the cosmos out of the many possibilities that could have existed. Sometimes, the most beautiful possibilities are ruled-out, even if they were conceived by the most brilliant scientists on planet Earth, like Albert Einstein. As known from social media or politics, beauty and truth are not necessarily the same.
|
Physics
|
On December 5, scientists at the Lawrence Livermore National Laboratory took the first step toward harnessing a new abundant, clean form of energy. For the first time, they harnessed extra power from nuclear fusion, a reaction in which hydrogen is heated up to extremely high temperatures and converted into helium — the process that keeps stars like the Sun glowing above us.To replicate this phenomenon on Earth, the team blasted 192 laser beams at an eraser-sized gold cylinder and triggered an implosion that released 3.15 MJ of energy. Their successful experiment marked the first time that researchers have been able to create excess energy through nuclear fusion, or, a simpler way to look at it: get more juice than they put in.But hold your horses: The lab triumph is just a small step toward nuclear fusion power plants. It took a humongous amount of energy (300 MJ) to send 2 MJ out of the laser and onto the fuel pellet containing the hydrogen isotopes deuterium and tritium.At the National Ignition Facility, 192 laser beams blasted a pencil eraser-sized container with hydrogen isotopes inside.San Francisco Chronicle/Hearst Newspapers via Getty Images/Hearst Newspapers/Getty ImagesAnd the experiment relied on decades-old technology. Now, it’s up to researchers to recreate this process with newer devices and scale things up. To jumpstart this investigative work, the White House aims to have a plant up and running within the coming decade. It isn’t clear, though, whether this lofty goal will succeed.To parse through what the future of fusion holds, Inverse spoke with Stephanie Hansen, a physicist and senior scientist at Sandia National Labs who studies laser-powered fusion.What does this breakthrough mean for the future of fusion energy?The scientists at LLNL’s National Ignition Facility showed that we can heat and compress a little bit of material to temperatures and densities similar to those at the center of our Sun to create initial fusion reactions, Hansen says. Most importantly, the team was able to hold that material together for long enough to recapture and use that initial fusion energy to reach even more extreme temperatures (over 5 million degrees Fahrenheit) and burn more fusion fuel.Past attempts have fizzled out: In the previous record set by NIF, scientists managed to produce 70 percent of the energy they put in, so they ended up with a net loss.“This experiment demonstrated that we understand the physics of material and radiation at extreme conditions well enough to create and control an igniting plasma, which means we will have more confidence in future fusion target designs and innovations that could bring fusion energy to the grid,” she says.What more work needs to be done to scale up laser-based fusion?NIF’s conversion from grid-sourced energy to laser energy is inefficient, but laser tech has come a long way since the NIF device was planned in the 1990s. For one, more efficient lasers have since been developed, explains Hansen. Now, scientists can work on integrating newer tech with advanced laser-blasting methods.There’s also the supply issue of hydrogen isotopes, as reported by Chemistry World. While there’s lots of deuterium on Earth, it produces a relatively low amount of energy. And tritium doesn’t appear naturally, so it needs to be created with the help of lithium. 
But lithium is already sought out in large quantities for batteries, which could create some sourcing competition.The laser beams cause the fuel capsule to implode and kick-off nuclear fusion.LLNLWhat type of technology could power fusion plants?The tokamak design is being considered for future nuclear fusion plants.ShutterstockIn addition to blasting hydrogen fuel with lasers, scientists are also working on devices that use magnetic fields to trap hydrogen isotopes so that they heat up to the temperatures required for fusion to take place (which is 20 times hotter than the Sun’s core). These include the tokamak, a metal vacuum chamber invented by Soviet scientists in the 1960s, that has attracted mainstream hype in recent years.“Magnetic confinement fusion has also made enormous strides in the last few years and holds significant promise as a continuous (rather than pulsed) fusion energy source,” Hansen says. “Although plasma densities, sizes, and time scales are vastly different between the two fusion methods, there are a lot of common physics and engineering challenges and many members of the two communities work closely together.”Hansen thinks the exciting result from NIF could even spur innovative new ways to create fusion energy. In fact, studies have already looked into combining lasers and magnets: Scientists at NIF have applied a magnetic field to the fuel-containing capsule and found that it could increase the fusion energy threefold.The precise formula to a successful fusion power plant remains elusive — but scientists could get there someday.
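The gap between scientific breakeven and a working power plant comes down to two different ratios, which can be computed directly from the figures quoted in this article: the fusion yield divided by the laser energy on target, and the fusion yield divided by the energy drawn from the grid.

```python
# Simple arithmetic behind the "hold your horses" caveat above, using the
# figures quoted in the article: the shot released more fusion energy than the
# laser delivered to the target, but far less than the facility drew from the grid.
grid_energy_mj = 300.0      # energy used to charge and fire the laser system
laser_energy_mj = 2.0       # laser energy actually delivered onto the target
fusion_yield_mj = 3.15      # fusion energy released by the implosion

target_gain = fusion_yield_mj / laser_energy_mj
wall_plug_gain = fusion_yield_mj / grid_energy_mj

print(f"Target gain    : {target_gain:.2f}  (>1, the headline milestone)")
print(f"Wall-plug gain : {wall_plug_gain:.3f} (~1%, why power plants remain far off)")
```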
|
Physics
|
Scientists, including an Oregon State University College of Science materials researcher, have developed a better tool to measure light, contributing to a field known as optical spectrometry in a way that could improve everything from smartphone cameras to environmental monitoring.The study, published today in Science, was led by Finland’s Aalto University and resulted in a powerful, ultra-tiny spectrometer that fits on a microchip and is operated using artificial intelligence.The research involved a comparatively new class of super-thin materials known as two-dimensional semiconductors, and the upshot is a proof of concept for a spectrometer that could be readily incorporated into a variety of technologies – including quality inspection platforms, security sensors, biomedical analyzers and space telescopes.“We’ve demonstrated a way of building spectrometers that are far more miniature than what is typically used today,” said Ethan Minot, a professor of physics in the OSU College of Science. “Spectrometers measure the strength of light at different wavelengths and are super useful in lots of industries and all fields of science for identifying samples and characterizing materials.”Traditional spectrometers require bulky optical and mechanical components, whereas the new device could fit on the end of a human hair, Minot said. The new research suggests those components can be replaced with novel semiconductor materials and AI, allowing spectrometers to be dramatically scaled down in size from the current smallest ones, which are about the size of a grape.“Our spectrometer does not require assembling separate optical and mechanical components or array designs to disperse and filter light,” said Hoon Hahn Yoon, who led the study with Aalto University colleague Zhipei Sun Yoon. “Moreover, it can achieve a high resolution comparable to benchtop systems but in a much smaller package.”The device is 100% electrically controllable regarding the colors of light it absorbs, which gives it massive potential for scalability and widespread usability, the researchers say. Read the full story here.
|
Physics
|
Against the current: Alex Müller made pioneering contributions to superconductivity (Courtesy: Nobel Foundation) The Swiss condensed-matter physicist and Nobel laureate Alex Müller died on 9 January at the age of 95. Müller shot to fame in 1986 when he and his colleague Georg Bednorz discovered a material with a superconducting transition temperature far above those of so-called conventional metal superconductors. The work earned the pair the Nobel Prize for Physics the following year.
Born on 20 April 1927 in Basel, Müller received a diploma in physics and mathematics from the Swiss Federal Institute of Technology (ETH) in Zürich in 1952. After graduating, he stayed on at ETH Zurich to study paramagnetic resonance of solid-state materials, obtaining his PhD in 1958.
Following a stint as head of the magnetic-resonance group at the Battelle Memorial Institute in Geneva, he took up a position at the University of Zurich in 1962. A year later he also joined IBM Research Zurich, heading the physics department from 1971. In 1985 he left IBM but remained at the University of Zurich before retiring in 1994.
During his life, Müller published several groundbreaking papers in the field of magnetic resonance and phase transitions in ferroelectrics. At IBM and Zurich he also began working on oxide materials, especially perovskites, which would later be useful in his work in superconductivity in the 1980s.
Fighting for acceptance
It has been known for more than a century that when certain metals are cooled to extremely low temperatures, they become superconductors, conducting electrical current without resistance. But as materials had to be cooled to just a few degrees above absolute zero for this phenomenon to occur, research into the field began to stagnate.
That all changed, however, in 1986, when Müller and Bednorz discovered that a material composed of copper oxide with lanthanum and barium became superconducting at around 35 K. This was about 50% higher than the previous highest value of 23 K, which had been achieved more than a decade earlier in niobium-germanium (Nb3Ge).
Given that oxide materials are ceramics and are known to conduct poorly at higher temperatures, their finding did not convince everybody, with some researchers believing the materials were not exhibiting superconductivity.
“It was disappointing seeing their reaction,” Bednorz later told Physics World in 1988. “We had the impression that they didn’t believe what we had found out and after that experience we were convinced that we would have to fight for one, two or even more years to get our results accepted in the scientific community.” The results, however, were soon confirmed by other researchers, with Müller and Bednorz being credited with having discovered an entirely new class of superconductors. The buzz surrounding these so-called “cuprate superconductors” reached fever pitch at the American Physical Society’s March Meeting in New York in 1987. The meeting was later dubbed “the Woodstock of physics” due to the fervent nature of the talks and discussions that went on late into the night.
In 1987 Bednorz and Müller shared the Nobel Prize for Physics “for their important break-through in the discovery of superconductivity in ceramic materials”. The finding sparked decades of research into similar cuprate materials where superconducting transition temperatures over 100 K were later discovered, which Müller followed “with great interest and commitment”.
|
Physics
|
I aimed a 1,500-foot iron asteroid traveling at 38,000 miles per hour with a 45-degree impact angle at Gizmodo’s office in Midtown, Manhattan.Screenshot: Gizmodo/Neal.FunHundreds of thousands of asteroids lurk in our solar system, and while space agencies track many of them, there’s always the chance that one will suddenly appear on a collision course with Earth. A new app on the website Neal.fun demonstrates what could happen if one smacked into any part of the planet. Neal Agarwal developed Asteroid Simulator to show the potentially extreme local effects of different kinds of asteroids. The first step is to pick your asteroid, with choices of iron, stone, carbon, and gold, or even an icy comet. The asteroid’s diameter can be set up to 1 mile (1.6 kilometers); its speed can be anywhere from 1,000 to 250,000 miles per hour; and the impact angle can be set up to 90 degrees. Once you select a strike location on a global map, prepare for chaos.“I grew up watching disaster movies like Deep Impact and Armageddon, and so I always wanted to make a tool that would let me visualize my own asteroid impact scenarios,” Agarwal said to Gizmodo in an email. “I think this tool is for anyone who loves playing out ‘what-if’ scenarios in their head. The math and physics behind the simulation is based on research papers by Dr. Gareth Collins and Dr. Clemens Rumpf who both study asteroid impacts.”Once you’ve programmed the asteroid and launched it at your desired target, Asteroid Simulator will walk you through the devastation. First, it’ll show you the width and depth of the crater, the number of people vaporized by the impact, and how much energy was released. It will then walk you through the size and effects of the fireball, shock wave, wind speed, and earthquake generated by the asteroid.NASA has its eyes on more than 19,000 near-Earth asteroids. While no known space rock poses an imminent threat to Earth, events like the 2013 Chelyabinsk impact in Russia remind us of the need for robust planetary defense. Just this year, NASA tested an asteroid deflection strategy via its DART spacecraft, to resounding success.
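The headline numbers in a scenario like the one above translate into an impact energy through nothing more than kinetic energy, one half times mass times velocity squared. The sketch below assumes a spherical body and a typical density for iron; it is a rough estimate, not the output of Agarwal's simulator, which also models crater size, fireball and shock wave.

```python
# Rough sketch of the energy bookkeeping behind a simulated impact like the one
# described above: kinetic energy = 1/2 m v^2 for a sphere of iron 1,500 feet
# across moving at 38,000 mph. The density is a typical value for iron; the
# TNT conversion uses the standard 4.184e15 J per megaton.
import math

diameter_m = 1500 * 0.3048          # 1,500 feet in metres
speed_ms = 38000 * 0.44704          # 38,000 mph in metres per second
density = 7800.0                    # assumed density of an iron body, kg/m^3

volume = (4.0 / 3.0) * math.pi * (diameter_m / 2) ** 3
mass = density * volume
energy_j = 0.5 * mass * speed_ms ** 2
energy_mt = energy_j / 4.184e15     # megatons of TNT equivalent

print(f"Mass   : {mass:.2e} kg")
print(f"Energy : {energy_j:.2e} J  (~{energy_mt:,.0f} megatons of TNT)")
```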
|
Physics
|
In science, no matter what the field, expertise often intersects. At the Institut National de la Recherche Scientifique (INRS), this is especially true for many areas of study where faculty members collaborate to push the limits of a specific field that much farther. And it's equally true for a team led by Professor Andreas Ruediger, which has brought together specialists from several different backgrounds. Together, the team studied a catalytic problem and advanced our knowledge of catalytic applications. "The subject called for expertise in a variety of disciplines, including physics, chemistry, and materials science. We were fortunate to have a team that brought all of that expertise together." Andreas Ruediger, professor at the Énergie Matériaux Télécommunications Research Centre.
The research team made an unexpected discovery.
Shifting into high gear
The team's work was based on environmental considerations, energy performance, and of course, material costs. At the project's core: energy, specifically energy to support catalysis, a vital part of our everyday lives. The various forms of energy we use, including solar energy, use catalytic metals to speed up chemical reactions and achieve better performance. Could less expensive and less rare oxide nanomaterials offer new opportunities in catalysis? What if they also produced a small environmental footprint and solid results?
Catalytic processes are an integral part of the current energy sector and of any future energy prospects. Today, they are involved in the production of most basic necessities (textile fibers and electronic devices, for example). In simple terms, catalysis allows a reaction to be pointed in the desired direction. It also reduces the amount of energy needed to produce a faster reaction. The catalyst is the substance that increases the speed of a chemical reaction. Catalysis, a secondary player, directs the necessary energy in the right direction, at the right speed.
Improving process efficiency
Improving the efficiency of catalytic processes is already generating solutions for lowering energy demand. It also breaks down pollutants that are emitted into the environment. In addition, new processes, partly based on nanotechnologies, are being used to couple physical and chemical properties.
"Our work could lead to greater efficiency in the conversion and use of renewable energies, especially solar energy," said Andreas Ruediger
Ifeanyichukwu Amaechi, Azza Hadj Youssef, Andreas Dörfler, Yoandris González, Rajesh Katoch, and Andreas Ruediger, all researchers at INRS, have paid particular attention to distinguishing between the role of free and bound charges within certain photo- and piezocatalysts. This is crucial to disentangling the contributions to the catalytic reaction, leading to the overall improvement of catalytic performance in areas like wastewater treatment, and ultimately, water fractionation.
Non-centrosymmetric perovskite oxides exhibit the volume photovoltaic effect and piezoelectricity, while the additional presence of a polar axis gives rise to pyro- and possibly ferroelectricity.
Some properties of the catalyst were already known to improve photoelectric and photochemical reactions.
"The low carrier mobility of these materials was thought to hinder their applicability in charge transport during catalysis," said Ifeanyichukwu Amaechi, a postdoctoral researcher at INRS.
The group found that bound charge carriers, which were previously thought to play a negligible role, could substantially contribute to overall catalytic performance. "This work is a great example of the importance in materials science of having an eye for the complexity of processes and considering under what conditions even small effects can become significant," concludes Amaechi, lead author of the paper published in Angewandte Chemie International Edition. This discovery offers great potential for future improvements, which Professor Ruediger's team is eager to explore. More information: Ifeanyichukwu C. Amaechi et al, Catalytic Applications of Non‐Centrosymmetric Oxide Nanomaterials, Angewandte Chemie International Edition (2022). DOI: 10.1002/anie.202207975
|
Physics
|
The performance of some quantum technologies could be boosted by exploiting interactions between nitrogen-vacancy (NV) centres and defects on the surface of diamond – according to research done by two independent teams of scientists in the US.
NV centres in diamond have emerged as a promising solid-state platform for quantum sensing and information processing. They are defects in the diamond lattice in which two carbon atoms are replaced with a single nitrogen atom, leaving one lattice site vacant. NV centres are a two-level spin system into which quantum information can be written and read out using laser light and microwaves. An important property of NV centres is that once they have been put into a specific quantum state, they can remain in that state for a relatively long “coherence” time – which makes them technologically useful.
Very sensitive
NV centres are very sensitive to magnetic fields, which means that they can be used to create high-performance magnetic field sensors for a wide range of applications. However, this sensitivity has its downside because sources of magnetic noise can degrade the performance of NV centres.
One source of magnetic noise is the interaction between NV centres and the spins of unpaired electrons on the surface of the diamond. These spins cannot be detected using optical techniques, so they are referred to as "dark spins".
As they interact with NV centres, dark spins can destroy quantum information that is stored in an NV centre or reduce the performance of NV-based sensors. Such interactions can be minimized by using NV centres that are deeper inside the bulk of the diamond. However, this solution makes it more difficult to use them to sense magnetic fields over very short length scales – something that is useful for studying individual spins, nuclei or molecules.
Technologically useful
Because of the difficulty of detecting dark spins, their behaviour has mostly remained a mystery. However, previous studies have shown that dark spins have long coherence times, which could make them useful in quantum technologies.
Both teams probed interactions between NV centres and dark spins using double electron-electron resonance (DEER). This is a technique that determines the distance between pairs of electron spins by applying microwave pulses to both simultaneously.
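As a rough guide to what DEER actually measures, the dipolar coupling between two electron spins falls off as the cube of their separation; the relation below is the standard textbook form with its usual numerical constant, not a figure from either team's paper, and the example frequencies are hypothetical.

```python
# Illustrative only: DEER infers the distance between two electron spins from
# their magnetic dipolar coupling, which scales as 1/r^3. The 52.04 MHz*nm^3
# constant is the standard textbook value for a pair of electron spins; the
# example frequencies below are hypothetical, not data from these studies.
def dipolar_distance_nm(nu_dd_mhz: float) -> float:
    """Estimate the spin-spin separation (nm) from a dipolar coupling frequency (MHz)."""
    return (52.04 / nu_dd_mhz) ** (1.0 / 3.0)

for nu in (1.0, 10.0, 50.0):  # hypothetical dipolar frequencies in MHz
    print(f"nu_dd = {nu:5.1f} MHz  ->  r ~ {dipolar_distance_nm(nu):.2f} nm")
```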
One team led by Nathalie de Leon at Princeton University used DEER measurements to develop a model of how NV centre coherence times vary with their depth below the surface of diamond. The team also discovered that the dark spins are not static, but instead “hop” between sites on the surface. These discoveries suggest that NV-based technologies could be optimized by selecting an appropriate depth for the NV centres – and by developing ways to control the hopping of dark spins.
Chemical vapour deposition
Meanwhile a team led by Norman Yao at the University of California, Berkeley used similar techniques to explore how NV centres interact with a different type of dark spin called P1s. These were created on a diamond surface by the chemical vapour deposition of nitrogen.
In one experiment the researchers prepared a sparsely populated bath of P1s so that mutual interactions between NV centres dominated over the influence of the P1s. In this case, they could use microwave pulses to selectively decouple the NV centres either from each other, or from the impurities. This study revealed that in this case interactions between NV centres dominated the decoherence process, rather than interactions between NV centres and the P1s.
However, when Yao and colleagues prepared a denser bath of P1s, they could use the interactions to exchange quantum information between the NV centres and the P1s. This rich quantum environment could be particularly useful for performing quantum simulations that involve many interacting spins – including complex biomolecules and exotic states of matter.
Yao’s team describes its work in a paper on arXiv that has been accepted for publication in Nature Physics. De Leon and colleagues present their findings in Physical Review X.
|
Physics
|
The mathematician shares his latest theories on quantum consciousness, the structure of the universe and how to communicate with civilisations from other cosmological aeons.
Physics | 14 November 2022 | Dave Stock
EARLY in his career, the University of Oxford mathematician Roger Penrose inspired the artist M. C. Escher to create Ascending and Descending, the visual illusion of a loop of staircase that seems to be eternally rising. It remains a fitting metaphor for Penrose’s ever enquiring mind. During his long career, he has collaborated with Stephen Hawking to uncover the secrets of the big bang, developed a quantum theory of consciousness with anaesthesiologist Stuart Hameroff and won the Nobel prize in physics for his prediction of regions where the gravitational field would be so intense that space-time itself would break down, the so-called singularity at the heart of a black hole. Undeterred by the march of time – Penrose turned 91 this year – he is continuing to innovate, and even planning communications with future universes.
Michael Brooks: In 1965, near the start of your career, you used general relativity to make the first prediction of the existence of singularities, as in the centres of black holes. How did it feel to see the first photograph of a black hole more than half a century later?
Roger Penrose: If I’m honest, it didn’t make much impression on me because I was expecting these things by then. However, back when I first proved this [singularity] theorem, it was quite a curious situation: I was visiting Princeton to give a talk and I remember Bob Dicke – a well-known cosmologist, a very distinguished man – came and slapped me on the back and said, “You’ve done it, you’ve shown general relativity is wrong!” And that was quite a common view. I suspect that even Einstein would probably have had that …
|
Physics
|
An international team of researchers has created an extraordinary virtual representation of our universe. It is the largest and most accurate virtual simulation of its kind to date. The team used supercomputer simulations to recreate the entire evolution of the cosmos, from the Big Bang to the present day.
Simulating our own universe
Over the past two decades, cosmologists have developed a "standard model" of cosmology, the so-called cold dark matter model (CDM). This model is used to explain a wealth of observed astronomical data, from the properties of leftover heat from the Big Bang to the number of galaxies we observe around us today, as well as their spatial distribution. When simulating a virtual cold dark matter universe, most cosmologists track a "typical" or arbitrary patch of sky, similar to our own observed universe, but only in a statistical sense. The simulations performed in this study are different: they are tuned to reproduce our particular existing patch of the universe using advanced generative algorithms, thus containing existing structures near our own galaxy that astronomers have observed for decades. This means that well-known structures in the nearby universe, such as the Virgo, Coma, and Perseus clusters, the Great Wall, and the Local Void - our cosmic habitat - are precisely reproduced in the simulation.
Software and hardware
The simulation software developed at the Leiden Institute of Physics (the Netherlands) is the technological key to this effort. By deploying a giant supercomputer for over a month, the team was able to bring to life a virtual counterpart of our own universe. The unique geometry and properties of the simulation, combined with its raw size, made the simulation a challenge that could only be accomplished by virtue of the many years of experience of the local simulation group built on past efforts, such as the Leiden-led EAGLE project, which recently received the Group Award from the British Royal Astronomical Society.
Comparing the virtual universe to the real universe
The research closely compares the output of the virtual universe with a series of observations in the real world, finding the right locations and properties for the virtual counterparts of known structures. The first findings show that our nearby universe could be unusual: the simulation predicts a lower average number of galaxies due to a local large-scale 'underdensity' of matter. While the authors believe that the level of this underdensity poses no threat to the Standard Model of cosmology, it may have implications for how astronomers interpret information from surveys of observed galaxies. Co-author Matthieu Schaller from the Leiden Observatory stated that the project is a milestone in the search for ways to test the current established model of the evolution of our universe. According to Schaller, these simulations show that the Standard Model of Cold Dark Matter is capable of producing all the galaxies we see in our environment. The simulation has been a critical test for the model.
|
Physics
|
Who will we be hearing from?
The news briefing will be led by Jennifer Granholm, the US energy secretary. Joining her at the Lawrence Livermore National Laboratory are:
Jill Hruby, under secretary for nuclear security and National Nuclear Security Administration administrator
Dr Arati Prabhakar, White House office of science and technology policy director
Dr Marvin Adams, National Nuclear Security Administration deputy administrator for defence programmes
Dr Kim Budil, director of Lawrence Livermore National Laboratory
Less than 10 minutes to go...
Not long to wait now until US scientists reveal a major scientific breakthrough on nuclear fusion, which could one day lead to "basically unlimited" energy and aid the fight against climate change. The announcement will be made at 3pm UK time from the Lawrence Livermore National Laboratory in California. We'll bring you live updates once the news briefing begins. Before then, make sure you read analysis from our science and technology editor Tom Clarke on what would make self-sustaining nuclear fusion such a significant advance (14.36 post). You can also watch Tom explain nuclear fusion below:
Analysis: A crucial step on a long road to making a 70-year dream a reality
Scientists have been pursuing the dream of harnessing energy from nuclear fusion for more than 70 years. But compared to nuclear fission (splitting the atoms of heavy elements to release energy), fusion energy (produced by squeezing atoms together) has proved much, much harder. From the discovery of nuclear fission in 1938, it took less than two decades before the first civilian nuclear power plant opened at Sellafield in 1956. Russian scientists built the first fusion machine in 1950, but no one has yet managed to get more energy out of a fusion reaction than the energy put in to create one. That is the result we're expecting to hear from scientists at the National Ignition Facility at California's Lawrence Livermore National Laboratory. If that's what they've achieved, it's a crucial demonstration that we can harness fusion, and therefore a huge milestone in fusion research. But it's one on a very long road to making fusion power a reality. For starters, it's expected that the "energy gain" result being reported today only compares the amount of energy going into the fusion fuel with the amount of energy coming out. It doesn't include the amount of mains electricity used to power up the world's most powerful laser, which they used to put that energy in. If you add that into the calculation, they're still only getting out about 1% of the energy they put in.
'This is not just fantasy'
The excitement surrounding a breakthrough in nuclear fusion will understandably make the scientific community pretty giddy. But businesses will be watching closely, too. Ahead of what we expect will be confirmation that US scientists have managed to generate more energy from a nuclear fusion reaction than was consumed, thus heralding the possibility of self-sustaining energy supplies, the electricity industry has cautiously welcomed the news. Andrew Sowder, senior technology executive at EPRI, a non-profit energy research and development group, said: "It's the first step that says, 'yes, this is not just fantasy, this can be done'." But industry figures have said that the breakthrough should not slow down existing attempts to transition to renewables, such as wind and solar power. Of course, having another potential energy source added to the mix is no bad thing - Britain's ongoing wind drought is testament to the limitations of our existing renewable options. Sky's science correspondent Thomas Moore explains what's been happening there - and why the timing couldn't be worse - below.
How does today's breakthrough compare to past efforts?
Today is significant because it's the first time we've heard of scientists using nuclear fusion to generate more energy than was consumed. But the team at the US National Ignition Facility, the specific home of the research at the Lawrence Livermore National Laboratory, by no means have the playing field to themselves. Earlier this year, scientists here in the UK set a new record for generating energy from nuclear fusion. As Sky's science and technology editor Tom Clarke reported, the Joint European Torus, or JET, an experimental fusion machine near Abingdon in Oxfordshire, generated around 59 megajoules of energy in a five-second burst - an average power of roughly 11 megawatts. To put it into context, reports suggest the reaction at the US facility only produced about 2.5 megajoules - around 120% of the 2.1 megajoules consumed by the lasers used in the reaction. The UK's JET machine is designed differently - rather than use high-power lasers, it has a hollow doughnut-shaped reactor vessel called a tokamak. This heats its fusion fuel - the hydrogen isotopes deuterium and tritium - to 150 million degrees Centigrade, creating plasma around 10 times hotter than the sun. Electromagnets surrounding the tokamak prevent this charged soup of ions from touching the sides, which would stop the whole reaction, allowing fusion to occur.
Fusion energy 'a long-sought milestone' - but are we getting ahead of ourselves?
Make no mistake, the announcement we're expecting today is a big deal. But there is a long road ahead to a world where nuclear fusion can form the backbone of our energy supplies. As you can see in the report below, Sky News has long been reporting on the efforts to obtain clean nuclear energy - and it dates back much further than that...
'Not yet economically viable'
Professor Gianluca Gregori is a specialist in the kind of high power lasers used at California's Lawrence Livermore National Laboratory, where the breakthrough has taken place. The University of Oxford lecturer said that while today's news marks a "long-sought milestone", the amount of energy produced remains smaller than what's needed to power a wall plug. "While this is not yet an economically viable power plant, the path for the future is much clearer," he added. Jeremy Chittenden, professor of plasma physics at Imperial College London, said the energy gain would need to be boosted much further to turn fusion into a real power source. "We'll also need to find a way to reproduce the same effect much more frequently and much more cheaply before we can realistically turn this into a power plant," he said.
'A new era of green energy'
The implications of today's announcement for the fight against climate change are obvious. As the world looks for ways to wean itself off fossil fuels, nuclear fusion would be a huge shot in the arm for the push towards renewables. You can watch a report on an earlier UK breakthrough in nuclear fusion and its potential for renewables below:
'Inexhaustible energy'
Regarding today's news, Dr Mark Wenman of Imperial College London said it could prove a "remarkable point in human history". In future, the breakthrough "could usher in an era of green, secure and essentially inexhaustible form of compact energy, without long-lived nuclear waste". Daniel Kammen, a professor of energy and society at the University of California, said nuclear fusion offers the possibility of "basically unlimited" fuel if the technology can be made commercially viable. The elements needed are available in seawater, and the process does not produce the radioactive waste of nuclear fission.
What is nuclear fusion - and why is today's announcement so exciting?
For those among us lucky enough to have anything resembling a clear sky today, all we have to do is look up to see examples - as nuclear fusion reactions power the sun and other stars. This reaction happens when two light nuclei merge to form a single heavier nucleus. Because the total mass of that single nucleus is less than the combined mass of the two original nuclei, the leftover mass is converted into energy, which is released in the process. So, in simple terms, it is a process that involves more energy coming out than going in - which has obvious and thrilling potential implications. In the sun, intense heat - millions of degrees Celsius - and the pressure exerted by its gravity allow atoms that would otherwise repel each other to fuse. Scientists have long understood how nuclear fusion works and have been trying to duplicate the process on Earth for almost a century. Current efforts focus on fusing a pair of hydrogen isotopes, deuterium and tritium, according to the US Department of Energy, which says that particular combination releases "much more energy than most fusion reactions" and requires less heat to do so.
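To make the "energy gain" bookkeeping discussed in the earlier posts concrete, here is a minimal sketch. The laser and fusion energies are the figures quoted in this live blog; the grid electricity needed to charge the laser is an assumed round number (roughly 300 MJ), chosen only to be consistent with the "about 1%" estimate above rather than an official figure.

```python
# Minimal sketch of the two different "gain" bookkeepings discussed above.
# Laser and fusion energies are the figures quoted in this live blog; the
# wall-plug (grid) energy is an assumed illustrative value, not an official one.
laser_energy_mj = 2.1      # laser energy delivered to the target (MJ)
fusion_yield_mj = 2.5      # fusion energy released (MJ)
wall_plug_mj = 300.0       # assumed electricity drawn to fire the laser (MJ)

target_gain = fusion_yield_mj / laser_energy_mj    # ~1.2, i.e. about 120%
wall_plug_gain = fusion_yield_mj / wall_plug_mj    # ~0.008, i.e. about 1%

print(f"Gain against laser energy: {target_gain:.2f}")
print(f"Gain against grid electricity: {wall_plug_gain:.3f}")
```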
Scientists due to reveal 'holy grail' scientific breakthrough
In a news conference scheduled for around 3pm UK time, the US Department of Energy is set to announce a "major scientific breakthrough" at the Lawrence Livermore National Laboratory in California. It is one of several sites worldwide where researchers have been trying to develop the possibility of harnessing near-limitless energy from nuclear fusion. Excitement around the announcement is based on the fact that it is a technology that has the potential to one day accelerate the planet's shift away from fossil fuels, which are the major contributors to climate change. Developments to date with the technology have repeatedly been beset by daunting challenges. However, it is believed that today's revelations could signal a seminal shift in the move towards what scientists have described as a "holy grail" of carbon-free power.
Hello and welcome to our live coverage of today's expected nuclear fusion announcement
|
Physics
|
On Sunday, October 9, Judith Racusin was 35,000 feet in the air, en route to a high-energy astrophysics conference, when the biggest cosmic explosion in history took place. “I landed, looked at my phone, and had dozens of messages,” said Racusin, an astrophysicist at NASA’s Goddard Space Flight Center in Maryland. “It was really exceptional.”
The explosion was a long gamma-ray burst, a cosmic event where a massive dying star unleashes powerful jets of energy as it collapses into a black hole or neutron star. This particular burst was so bright that it oversaturated the Fermi Gamma-ray Space Telescope, an orbiting NASA telescope designed in part to observe such events. “There were so many photons per second that they couldn’t keep up,” said Andrew Levan, an astrophysicist at Radboud University in the Netherlands. The burst even appears to have caused Earth’s ionosphere, the upper layer of Earth’s atmosphere, to swell in size for several hours. “The fact you can change Earth’s ionosphere from an object halfway across the universe is pretty incredible,” said Doug Welch, an astronomer at McMaster University in Canada.
Astronomers cheekily called it the BOAT — “brightest of all time” — and began to squeeze it for information about gamma-ray bursts and the cosmos more generally. “Even 10 years from now there’ll be new understanding from this data set,” said Eric Burns, an astrophysicist at Louisiana State University. “It still hasn’t quite hit me that this really happened.”
The initial analysis suggests that there are two reasons why the BOAT was so bright. First, it occurred about 2.4 billion light-years from Earth — fairly close for gamma-ray bursts (though well outside of our galaxy). It’s also likely that the BOAT’s powerful jet was pointed toward us. The two factors combined to make this the kind of event that occurs only once every few hundred years.
Perhaps the most consequential observation happened in China. There, in the Sichuan province, the Large High Altitude Air Shower Observatory (LHAASO) tracks high-energy particles from space. In the history of gamma-ray burst astronomy, researchers have seen only a few hundred high-energy photons coming from these objects. LHAASO saw 5,000 from this one event. “The gamma-ray burst basically went off in the sky directly above them,” said Sylvia Zhu, an astrophysicist at the German Electron Synchrotron (DESY) in Hamburg. Among those detections was a suspected high-energy photon at 18 teraelectron volts (TeV) — four times higher than anything seen from a gamma-ray burst before and more energetic than the highest energies achievable by the Large Hadron Collider. Such a high-energy photon should have been lost on the way to Earth, absorbed by interactions with the universe’s background light.
So how did it get here? One possibility is that, following the gamma-ray burst, a high-energy photon was converted into an axion-like particle. Axions are hypothesized lightweight particles that may explain dark matter; axion-like particles are thought to be slightly heftier. High-energy photons could be converted into such particles by strong magnetic fields, such as the ones around an imploding star. The axion-like particle would then travel across the vastness of space unimpeded. As it arrived at our galaxy, magnetic fields would convert it back into a photon, which would then make its way to Earth.
In the week following the initial detection, multiple teams of astrophysicists suggested this mechanism in papers uploaded to the scientific preprint site arxiv.org. “It would be a very incredible discovery,” said Giorgio Galanti, an astrophysicist at the National Institute for Astrophysics (INAF) in Italy, who coauthored one of the first of these papers.
Yet other researchers wonder if LHAASO’s detection might be a case of mistaken identity. Perhaps the high-energy photon came from somewhere else, and its just-right arrival time was simply a coincidence. “I’m very skeptical,” said Milena Crnogorčević, an astrophysicist at the University of Maryland. “I am currently leaning toward it being a background event.” (To further complicate matters, a Russian observatory reported a hit by an even higher-energy 251 TeV photon coming from the burst. But “the jury’s still out” on that, said Racusin, deputy project scientist on the Fermi telescope. “I’m a little skeptical.”)
So far the LHAASO team has not released detailed results of their observations. Burns, who is coordinating a global collaboration to study the BOAT, hopes they do. “I’m very curious to see what they have,” he said. But he understands why a degree of wariness may be warranted. “If I were sitting on data that had even a few percent chance of being defining proof of dark matter, I would be extraordinarily cautious at the moment,” said Burns. If the photon can be linked to the BOAT, “it would very likely be evidence of new physics, and potentially dark matter,” Crnogorčević said. The LHAASO team did not respond to a request for comment.
Even without LHAASO’s data, the sheer amount of light seen from the event could enable scientists to answer some of the biggest questions about gamma-ray bursts, including major puzzles about the jet itself. “How is the jet launched? What is going on in the jet as it is propagating out into space?” said Tyler Parsotan, an astrophysicist at Goddard. “Those are really big questions.” Other astrophysicists hope to use the BOAT to ascertain why only some stars produce gamma-ray bursts as they go supernova. “That is one of the big mysteries,” said Yvette Cendes, an astronomer at the Harvard-Smithsonian Center for Astrophysics. “It has to be a very massive star. A galaxy like ours will maybe every million years produce a gamma-ray burst. Why does such a rare population make gamma-ray bursts?”
Whether gamma-ray bursts result in a black hole or a neutron star at the core of the collapsed star is also an open question. A preliminary analysis of the BOAT suggests that the former happened in this case. “There’s so much energy in the jet it basically has to be a black hole,” said Burns.
What is certain is that this is a cosmic incident that will not be eclipsed for many, many lifetimes. “We’ll all be long dead before we get the chance to do this again,” said Burns.
|
Physics
|
Twist, bend and stretch: The new stretchable sensor can detect even minor changes in strain with greater range of motion than previous technologies. The patterned cuts enable large deformation without sacrificing sensitivity. (Courtesy: Shuang Wu, NC State University)
Soft and stretchable strain sensors are invaluable for use in wearable electronics such as motion tracking devices and physiological monitoring systems. Currently, however, the trade-off between sensitivity and sensing range is a major challenge. Strain sensors that are capable of detecting small deformations cannot be stretched very far, while those that can be stretched to greater lengths are typically not very sensitive.
When monitoring human physiology and motion, skin strain ranges from below 1% to over 50%. As such, separate sensors are typically used to detect subtle strains (such as those associated with blood pulse and respiration) and large strains (such as bending of body parts). But for monitoring certain diseases, use of a single device would be preferable. In Parkinson’s disease, for example, sensors must be sensitive enough to monitor small tremors while maintaining a large enough range to measure joint movements.
What’s really needed is a single sensor that can be attached to different parts of the body and can accurately measure the full range of strains on human skin. With this goal in mind, a team at North Carolina State University has developed a soft stretchable resistive strain sensor that offers high sensitivity, large sensing range and high robustness.
“The new sensor we’ve developed is both sensitive and capable of withstanding significant deformation,” explains corresponding author Yong Zhu in a press statement. “An additional feature is that the sensor is highly robust even when overstrained, meaning it is unlikely to break when the applied strain accidently exceeds the sensing range.”
The sensor, described in ACS Applied Materials & Interfaces, measures strain by measuring changes in electrical resistance. The device is made from a silver nanowire network embedded in the elastic polymer poly(dimethylsiloxane), with a series of mechanical cuts in its top surface, alternating from either side.
When the sensor is stretched, the cuts pull open. This forces the electrical signal to transition from a uniform current flow across the closed cracks to travelling further along the zigzag conducting path defined by the open cracks. Thus the resistance increases under applied strain. The opening up of the cuts also allows the device to withstand substantial deformation without reaching its breaking point. “This feature – the patterned cuts – is what enables a greater range of deformation without sacrificing sensitivity,” says first author Shuang Wu.
The team performed experiments and finite element analysis to assess the effects of slit depth, length and pitch on sensor performance. The optimized device exhibited a large gauge factor (the ratio of relative change in electrical resistance to mechanical strain) of 290.1 with a sensing range of over 22%. It was also robust to overstrain and 1000 repeated loading cycles.
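As a rough illustration of how a gauge factor of this size translates into a measurable signal (the strain values below are hypothetical, and treating the gauge factor as constant across the whole range is a simplification):

```python
# Illustrative only: how a resistive strain sensor's gauge factor relates an
# applied strain to the relative resistance change dR/R0. The strains below
# are hypothetical, and a constant gauge factor is assumed for simplicity.
def relative_resistance_change(gauge_factor: float, strain: float) -> float:
    """Return dR/R0 for a given gauge factor and applied strain."""
    return gauge_factor * strain

GAUGE_FACTOR = 290.0   # of the order reported for the optimized device
for strain in (0.005, 0.05, 0.20):   # 0.5%, 5% and 20% strain (hypothetical)
    print(f"strain = {strain:.1%}  ->  dR/R0 = {relative_resistance_change(GAUGE_FACTOR, strain):.1f}")
```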
Building devices
To demonstrate some potential applications of their new strain sensor, Zhu, Wu and colleagues integrated it into wearable health monitoring systems that measure vastly different levels of motion.
Personal health monitor The sensor is integrated with a rubber wristband to enable blood pressure monitoring. (Courtesy: Shuang Wu, NC State University)
First, they employed the sensor to monitor blood pressure, which requires extremely high sensitivity. Using a rubber band to secure the sensor, they placed it on a volunteer’s wrist to detect the pulse wave – one of the smallest strain signals on human skin.
When blood pulses through the artery, the sensor ends remain fixed in place by the band while the centre is stretched, opening up the cracks on its top surface.
The researchers showed that this set-up could capture the pulse wave from the radial artery on the wrist. By placing another strain sensor on the brachial artery higher up on the arm and recording a second pulse wave simultaneously, they could measure the averaged pulse wave velocity, enabling calculation of blood pressure.
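A minimal sketch of the pulse-wave-velocity calculation this describes; the sensor separation, time delay and the linear velocity-to-pressure calibration below are hypothetical placeholders, not values or the model used in the paper.

```python
# Minimal sketch: estimate pulse wave velocity (PWV) from two pulse sensors,
# then convert it to a blood-pressure estimate. All numbers and the linear
# PWV->pressure calibration are hypothetical placeholders.
sensor_separation_m = 0.25   # assumed distance between brachial and radial sensors (m)
pulse_delay_s = 0.040        # assumed delay between the two recorded pulse waves (s)

pwv = sensor_separation_m / pulse_delay_s   # metres per second
a, b = 14.0, 30.0                           # hypothetical per-person calibration constants
systolic_estimate_mmhg = a * pwv + b        # simple linear calibration, for illustration only

print(f"PWV ~ {pwv:.1f} m/s, estimated systolic pressure ~ {systolic_estimate_mmhg:.0f} mmHg")
```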
Measuring back strain The researchers created a wearable device for monitoring motion in a person’s back. (Courtesy: Shuang Wu, NC State University)
In the next example, the sensor was used to monitor large strains on the lower back during motion, which has utility for physical therapy. Here, the researchers integrated the sensor with a stretchable athletic tape and attached two sensors in parallel along the spine on a volunteer’s lower back. They also attached a Bluetooth board onto the back to collect and transmit the sensing signals.
Starting from a sitting straight position, the subject performed a series of movements while the sensor monitored lower back strains. When leaning forward, both sensors responded with resistance increases. While leaning forward and tilted sideways, the resistance of the sensor on the corresponding side remained near constant while the sensor on the opposite side showed substantially increased resistance.
Finally, to demonstrate the sensor's use in human-machine interfaces, the researchers created a soft 3D touch sensor that tracks both normal and shear stresses and can be used to control a video game. They also integrated a strain sensor on the fingertip of a glove that was then used to grasp a glass of water, demonstrating its potential for tactile sensing for robotics applications.
The team is now exploring the application of the strain sensor for biomedical and sports applications. “Biomedical applications include monitoring movement patterns during rehabilitation of stroke patients,” Zhu tells Physics World. “We are also working on scalable manufacturing of the sensors.”
|
Physics
|
After failing to reproduce last year's record-breaking fusion-energy shot, scientists at the US National Ignition Facility have gone back to the drawing board. Edwin Cartlidge discusses their next steps.
One hit wonder? A record-breaking shot at the National Ignition Facility in 2021 that yielded 1.37 MJ has not been reproduced. (Courtesy: LLNL)
On 8 August last year, physicists at the Lawrence Livermore National Laboratory in the US used the world's biggest laser to carry out a record-breaking experiment. Employing the 192 beams of the $3.5bn National Ignition Facility (NIF) to implode a peppercorn-sized capsule containing deuterium and tritium, they caused the two hydrogen isotopes to fuse, generating a self-sustaining fusion reaction for a fraction of a second. With the process giving off over 70% of the energy used to power the laser, the finding suggested that giant lasers might yet enable a new source of safe, clean and essentially limitless energy.
The result put researchers at the Livermore lab in a celebratory mood, having struggled for more than a decade to make significant progress. But the initial excitement soon faded when several subsequent attempts to reproduce the achievement fell short – mustering at best just half of the record-breaking output. With the Livermore management having decided to try only a handful of repeat experiments, the lab put its quest for breakeven on hold and instead tried to figure out what was causing the variation in output.
For critics of NIF, the latest course correction came as no surprise, apparently illustrating once again the facility’s unsuitability as a test bed for robust fusion-energy production. But many scientists remain upbeat and the NIF researchers themselves have come out fighting, recently publishing the result from their record-breaking shot in Physical Review Letters (129 075001). They insist that they have, after all, achieved “ignition”, reaching the point at which heating from the fusion reactions outweighs cooling, creating a positive feedback loop that rapidly boosts plasma temperature.
Omar Hurricane, chief scientist of Livermore’s fusion programme, maintains that this physics-based definition of ignition – rather than the simple “energy breakeven” description – is the one that really counts. Describing the eventual achievement of breakeven as “the next public relations event”, he nevertheless says that it remains an important milestone that he and his colleagues want to reach. Indeed, physicists from beyond the Livermore lab are confident that the much-discussed target will be hit. Steven Rose at Imperial College in the UK believes that “there is every prospect” breakeven will be achieved.
Record gain
Attempting to harness fusion involves heating up a plasma of light nuclei to the point where those nuclei overcome their mutual repulsion and combine to form a heavier element. The process yields new particles – in the case of deuterium and tritium, helium nuclei (alpha particles) and neutrons – as well as huge amounts of energy. If the plasma can be kept at suitably immense temperatures and pressures for long enough, the alpha particles should provide enough heat to sustain the reactions on their own while the neutrons can potentially be intercepted to power a steam turbine.
Fusion tokamaks use magnetic fields to confine plasmas over fairly long periods. NIF, as an “inertial-confinement” device, instead exploits the extreme conditions created for a fleeting moment inside a tiny quantity of highly compressed fusion fuel before it re-expands. The fuel is placed inside a 2 mm-diameter spherical capsule, which is located at the centre of a roughly 1 cm-long cylindrical metal “hohlraum” and implodes when NIF’s precisely directed laser beams strike the inside of the hohlraum and generate a flood of X-rays.
In contrast to tokamaks, NIF was not designed primarily to demonstrate energy but instead serve as a check on the computer programs used to simulate explosions of nuclear weapons – given that the US ceased live testing in 1992. However, after switching on in 2009 it soon became apparent that the programs used to guide its own operations had underestimated the difficulties involved, in particular when dealing with plasma instabilities and creating suitably symmetric implosions. With NIF missing its initial target to achieve ignition by 2012, the US National Nuclear Security Administration, which oversees the lab, put that objective aside to concentrate on the time-consuming task of better understanding implosion dynamics.
In early 2021, following a series of experimental modifications, Hurricane and colleagues finally showed they could use the laser to create what is known as a burning plasma – in which the heat from alpha particles exceeds the external energy supply. They then made a series of further tweaks, including shrinking the hohlraum’s laser entrance holes and lowering the laser’s peak power. The effect was to shift some of the X-ray energy to later in the shot, which raised the power transferred to the nuclear fuel – pushing it high enough to outpace the radiative and conductive losses.
In August 2021 NIF researchers recorded their landmark “N210808” shot. The hotspot in the centre of the fuel in this case had a temperature of around 125 million kelvin and an energy yield of 1.37 MJ – some eight times higher than their previous best result, obtained earlier in the year. This new yield implied a “target gain” of 0.72 – when compared to the laser’s 1.97 MJ output – and a “capsule gain” of 5.8 when considering instead the energy absorbed by the capsule.
More importantly, as far as Hurricane is concerned, the experiment also satisfied what is known as the Lawson criterion for ignition. First laid out by engineer and physicist John Lawson in 1955, this stipulates the conditions in which fusion self-heating will exceed the energy lost via conduction and radiation. Hurricane says that the NIF results satisfied nine different formulations of the criterion for inertial confinement fusion, thereby demonstrating ignition “without ambiguity”.
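For context, one widely quoted textbook statement of Lawson's condition for deuterium-tritium fuel is the "triple product" requirement (the numerical threshold is an approximate standard value, not a number from the NIF paper, and the inertial-confinement formulations Hurricane refers to are instead written in terms of hotspot pressure and confinement time):
n T τ_E ≳ 3 × 10^21 keV s m^-3,
where n is the fuel density, T its temperature (of order 10-20 keV) and τ_E the energy confinement time; when this is satisfied, self-heating by alpha particles outpaces the conduction and radiation losses.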
Three shots and you’re out
Following the record-breaking shot, Hurricane and some of his fellow scientists at NIF were keen to replicate their success. But the lab’s management were not so enthusiastic. According to Mark Herrmann, then Livermore’s deputy director for fundamental weapons physics, several working groups were set up in the wake of N210808 to assess the next steps. He says that a management team consisting of around 10 experts in inertial confinement pulled those findings together and drew up a plan, which it presented in September.
Herrmann says that the plan contained three parts – attempting to reproduce N210808; analysing the experimental conditions that enabled the record-breaking shot; and trying to obtain “robust megajoule yields”. Discussion of the first point involved what Herrmann describes as “a large variety of opinions” among the roughly 100 scientists working on the fusion programme. In the end, given “limited resources”, and a limited number of targets in the batch containing N210808, he says that the management team settled on just three additional shots.
Hurricane has a slightly different recollection, saying there were four repeats. Those experiments, he says, were carried out over a roughly three-month period and achieved yields that ranged from less than a fifth to around half of that reached in August. But he maintains that these shots were still “very good experiments”, adding that they also satisfied some formulations of the Lawson criterion. The difference in performance, he says, is “not as binary as people have been portraying”.
The plasma-coating process is a recipe, so just like baking bread it doesn’t come out exactly the same every time
Omar Hurricane
As to what caused this huge variation in output, Herrmann says that the leading hypothesis is voids and divots in the fuel capsules, which are made from industrial diamond. He explains that these imperfections can be amplified during the implosion process, causing the diamond to enter the hot spot. Given that carbon has a higher atomic number than deuterium or tritium it can radiate much more efficiently, which cools the hot spot and lowers performance.
Hurricane agrees that the diamond likely plays an important role in varying the shot-to-shot performance. Pointing out that large variations in output are to be expected given the nonlinearity of NIF’s implosions, he says that the scientists involved do not fully understand the plasma-coating process used during fabrication of the capsules. “It’s a recipe,” he says, “so just like baking bread it doesn’t come out exactly the same every time.”
The road to fusion energy
Hurricane says the team is now investigating several ways to raise NIF's output in addition to improving the capsule quality. These include altering the capsule thickness, changing the size or geometry of the hohlraum, or possibly increasing the laser pulse energy to around 2.1 MJ to lower the precision required for the target. He says there is "no magic number" when it comes to the target gain but adds that the higher the gain the larger the parameter space that can be explored when doing stockpile stewardship. He also points out that a gain of 1 does not mean the facility is generating net energy, given how little of the incoming electrical energy the laser converts into light on the target – in the case of NIF, less than 1%. Michael Campbell of the University of Rochester in the US reckons that NIF could achieve a gain of at least 1 "over the next 2–5 years", given adequate improvements to the hohlraum and target. But he argues that getting up to commercially relevant gains of 50–100 would probably require a switch from NIF's "indirect drive", which generates X-rays to compress the target, to the potentially more efficient but trickier "direct drive" that relies on the laser radiation itself.
Despite the several billion dollars that are likely to be needed, Campbell is optimistic that a suitable direct-drive facility can demonstrate such gains by the end of the 2030s – particularly, he says, if the private sector is involved. But he cautions that commercial power plants would probably not start operating until at least the middle of the century. “Fusion energy is for the long term,” he says, “I think people have to be realistic about the challenges.”
|
Physics
|
10/06/2022 | Press release: Researchers at Paderborn and Ulm universities are developing the first programmable optical quantum memory
Tiny particles that are interconnected despite sometimes being thousands of kilometres apart – Albert Einstein called this ‘spooky action at a distance’. Something that would be inexplicable by the laws of classical physics is a fundamental part of quantum physics. Entanglement like this can occur between multiple quantum particles, meaning that certain properties of the particles are intimately linked with each other. Entangled systems containing multiple quantum particles offer significant benefits in implementing quantum algorithms, which have the potential to be used in communications, data security or quantum computing. Researchers from Paderborn University have been working with colleagues from Ulm University to develop the first programmable optical quantum memory. The study was published as an 'editor’s suggestion’ in the Physical Review Letters journal.
Entangled light particles
The 'Integrated Quantum Optics' group led by Prof. Christine Silberhorn from the Department of Physics and Institute for Photonic Quantum Systems (PhoQS) at Paderborn University is using minuscule light particles, or photons, as quantum systems. The researchers are seeking to entangle as many as possible in large states. Working together with researchers from the Institute of Theoretical Physics at Ulm University, they have now presented a new approach.
Previously, attempts to entangle more than two particles only resulted in very inefficient entanglement generation. If researchers wanted to link two particles with others, in some cases this involved a long wait, as the entangling operations that create these links succeed only with limited probability rather than at the touch of a button. This meant that the photons were often no longer part of the experiment by the time the next suitable particle arrived – because storing qubit states represents a major experimental challenge.
Gradually achieving greater entanglement
‘We have now developed a programmable, optical, buffer quantum memory that can switch dynamically back and forth between different modes – storage mode, interference mode and the final release’, Silberhorn explains. In the experimental setup, a small quantum state can be stored until another state is generated, and then the two can be entangled. This enables a large, entangled quantum state to ‘grow’ particle by particle. Silberhorn’s team has already used this method to entangle six particles, making it much more efficient than any previous experiments. By comparison, the largest ever entanglement of photon pairs, performed by Chinese researchers, consisted of twelve individual particles. However, creating this state took significantly more time, by orders of magnitude.
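A toy calculation illustrates why buffering the intermediate state pays off when each entangling step succeeds only probabilistically; the success probability below is hypothetical, and the model ignores memory loss and other real-world imperfections.

```python
# Toy model: growing an n-photon entangled state when each entangling step
# succeeds with probability p. Without a memory, all n steps must succeed in
# the same attempt; with a buffer memory, successes accumulate one at a time.
# The value of p is hypothetical and memory loss is ignored.
def attempts_without_memory(n: int, p: float) -> float:
    return (1.0 / p) ** n   # expected attempts for n simultaneous successes

def attempts_with_memory(n: int, p: float) -> float:
    return n / p            # each link is retried until it succeeds

p = 0.1   # hypothetical per-step success probability
for n in (2, 6, 12):
    print(f"n = {n:2d}: {attempts_without_memory(n, p):15.0f} attempts unbuffered, "
          f"{attempts_with_memory(n, p):5.0f} buffered")
```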
The quantum physicist explains: ‘Our system allows entangled states of increasing size to be gradually built up – which is much more reliable, faster, and more efficient than any previous method. For us, this represents a milestone that puts us in striking distance of practical applications of large, entangled states for useful quantum technologies.’ The new approach can be combined with all common photon-pair sources, meaning that other scientists will also be able to use the method.
Read the study: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.129.150501
|
Physics
|
Gripping demonstration: researchers test the Octa-glove in the lab of Michael Bartlett. (Courtesy: Alex Parrish/Virginia Tech)
Inspired by the way the skin on octopus arms works, researchers at Virginia Tech in the US have developed a new rapidly switchable adhesive that sticks securely to objects underwater. The material could find use in robotics, healthcare and in manufacturing for assembling and manipulating wet objects.
Adhesives that work underwater are difficult to make. This is because the hydrogen bonds and van der Waals and electrostatic forces that mediate adhesion in dry environments are much less effective in water. The animal world, however, contains lots of examples of strong adhesion in moist conditions: mussels secrete special adhesive proteins, creating a sticky plaque to attach to wet surfaces; frogs channel fluid through structured toe pads to activate capillary and hydrodynamic forces; and cephalopods like the octopus use suckers to adhere to surfaces via suction.
Strong adhesive bond
Cephalopod grippers are particularly good at holding things underwater. Octopi, for example, have eight long arms covered with suckers that can grab onto objects like prey. Shaped like the end of a plumber’s plunger, the suckers adhere to an object, quickly creating a strong adhesive bond that is difficult to break. “The adhesion can be quickly activated and released,” explains study team leader Michael Bartlett, “and the octopus controls over 2000 suckers across eight arms by processing information from diverse chemical and mechanical sensors.”
Indeed, an octopus’ sensing apparatus consists of a photoreception system that uses its eyes; mechanoreceptors that detect fluid flow, pressure, and contact; and chemoreception tactile sensors. Each sucker is independently controlled to activate or release adhesion – something that does not exist in synthetic adhesives.
The new Virginia Tech octopus-inspired adhesive consists of a silicone elastomer stalk capped with a stretchable pneumatically-actuated elastomer membrane to control adhesion. The stalk is made by 3D printing moulds and the silicone elastomer is then cast and cured. The adhesive element is connected to a pressure source that supplies positive, neutral, and negative pressure to control the shape of the active membrane.
“This design allows us to switch adhesion 450 times from the on to off state in less than 50 ms,” says Bartlett. “We tightly integrated these adhesive elements with an array of micro-LIDAR optical proximity sensors that sense how close an object is.”
The researchers then connected the suckers and LIDAR through a microcontroller for real-time object detection and adhesion control.
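A highly simplified sketch of the kind of sense-and-actuate loop this implies; every name, threshold and the simulated sensor readings below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a sense-and-actuate loop for the Octa-glove concept:
# a proximity reading below a threshold triggers suction (negative pressure),
# otherwise the membrane is held at neutral pressure. All names, thresholds
# and the simulated readings are invented for illustration.
GRIP_DISTANCE_MM = 15.0   # assumed proximity threshold for engaging adhesion

def set_sucker_pressure(level: str) -> None:
    print(f"pneumatics -> {level} pressure")   # stand-in for the real actuation hardware

def control_step(proximity_mm: float) -> None:
    if proximity_mm < GRIP_DISTANCE_MM:
        set_sucker_pressure("negative")   # object detected close by: engage adhesion
    else:
        set_sucker_pressure("neutral")    # no object nearby: release / idle

# Simulated approach and retreat of an object past the proximity sensor:
for reading_mm in (80.0, 40.0, 12.0, 9.0, 30.0):
    control_step(reading_mm)
```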
Glove with synthetic suckers and sensors
Underwater, an octopus winds its arms around objects and can attach to a variety of surfaces, including rocks, smooth shells and rough barnacles using its suckers. Bartlett and colleagues mimicked this by making a glove with synthetic suckers and sensors tightly integrated together. This device, dubbed Octa-glove, can detect differently-shaped objects underwater. This automatically triggers the adhesive so that the object can be manipulated.
“By merging soft, responsive adhesive materials with embedded electronics, we can grasp objects without having to squeeze,” said Bartlett. “It makes handling wet or underwater objects much easier and more natural. The electronics can activate and release adhesion quickly. Just move your hand toward an object, and the glove does the work to grasp. It can all be done without the user pressing a single button.”
These capabilities, which mimic the advanced manipulation, sensing and control of cephalopods, could find applications in soft robotics for underwater gripping, in user-assisted technologies and healthcare, and in manufacturing for assembling and manipulating wet objects, he tells Physics World.
Several gripping modes
In their experiments, the researchers tested several gripping modes. They used a single sensor to manipulate delicate, lightweight objects and found that they could quickly pick up and release flat objects, metal toys, cylinders, a spoon and an ultrasoft hydrogel ball. By then reconfiguring the sensors so that multiple sensors were activated, they could grip larger objects such as a plate, a box and a bowl.
The Virginia Tech team, reporting its work in Science Advances, says that there is still much to learn, both about how the octopus controls adhesion and manipulates underwater objects. “If we can better understand the natural system, this will allow to create more advanced bio-inspired, engineered systems,” says Bartlett.
|
Physics
|
How shade is cast reveals details of the rugged lunar landscape, allowing NASA to create 3D models for astronauts and rovers.
As early as 2025, NASA's astronauts will be back on the moon. It will be the first return since the 1970s, and the first time humans will explore the moon's south polar region. What they find there could change the course of lunar exploration. They will be investigating areas inside deep craters where the sun never rises above the surrounding walls. In these permanently shadowed regions, frigid temperatures may have persisted long enough to have trapped water, frozen below the surface. Such ice could potentially be used as drinking water and as a source of fuel, helping future explorers spend longer periods on the lunar surface. But before any of this can happen, NASA needs to select a safe landing site with navigable routes to these potential water deposits. It has drawn up a short list of places to touch down, using high-resolution models of the lunar surface. Now, there is a new tool that could help determine which is best.
Researchers have developed an additional, novel way of creating 3D maps of the moon's surface that could offer increased assurance of the actual terrain that explorers and rovers will encounter. The approach is rooted in a technique that has been used for approximately 50 years: using shadows to reveal the topography of the moon's surface, such as changes in elevation within craters or the steepness of slopes. "It's natural for our eyes to see the shapes and forms of objects when we look at shadows," says Iris Fernandes, a geophysicist at the Niels Bohr Institute at the University of Copenhagen and lead author of the study detailing the new technique. This system of terrain modeling essentially does the same but uses multiple shadowed images of an area, data on the incoming angle of the light in each satellite image, and elevation data to build a 3D model of what's casting the shadows in those pictures. For example, shadowed images of a crater taken at different times, when sunlight hits the terrain at different angles, can be used to work out that the crater's wall must have a 20-degree incline to produce the shadows observed (a simple numerical version of this shadow geometry is sketched at the end of this article).
Traditionally, to use this shadow technique, some assumptions need to be made about what the terrain looks like. Then an initial rough elevation model is created using the technique and repeatedly improved until it matches the shadowed images to an acceptable degree of accuracy. "This trial and error can take a long time," says Fernandes. In their new method, Fernandes and her colleague Klaus Mosegaard worked around this by solving an equation that relates the angles of incoming sunlight and the shape of the terrain. This is the first time that anyone has produced a topographic model using this equation. The result is that the new approach doesn't require any prior assumptions about the terrain, and produces high-resolution terrain maps in one try, making it faster than existing methods. This is a big advantage when building terrain models for multiple areas. The team tested their approach on an area centered in the Mare Ingenii, a region on the far side of the moon.
They fed the algorithm the angles of incoming sunlight from photographs containing shadows taken by NASA's Lunar Reconnaissance Orbiter (LRO)—a satellite that continuously circles the moon, capturing information—along with elevation data collected by its laser altimeter. The resulting high-resolution terrain model matched the shadowed photographs to a high degree of accuracy, and vastly improved the elevation resolution. The elevation data gathered by the LRO's laser altimeter has a resolution of 60 meters per pixel; the new method's final terrain model had a resolution of 0.9 meters per pixel. This meant that craters with diameters as small as three meters became identifiable. "It's a different approach for understanding the topography of the moon that could help prepare for future human and robotic exploration," says Noah Petro, a planetary geologist at NASA's Goddard Space Flight Center who wasn't involved in the research. The LRO has been orbiting the moon since 2009, collecting data that has been used to create a digital terrain model that covers 98 percent of the moon's surface. This is the base map that any higher-resolution terrain models, such as the one from the new study, are placed on. Together, such high-resolution maps are the foundation for planning trips to the surface. Landing sites need to be flat with no boulders. Travel routes to and from craters ideally shouldn't be steep, so that they can be navigated by rovers. High-resolution maps of the lunar landscape can be used to model light conditions too. Predicting when and where to expect shadows and sunlight is crucial for planning upcoming missions, says Paul Hayne, a planetary scientist at the University of Colorado Boulder's Laboratory for Atmospheric and Space Physics. Potential landing sites will need to receive solar radiation for at least part of the day to recharge instruments and rovers. Sunlit areas directly adjacent to craters could also be useful, because exploring shadowed regions may take time, meaning rovers might need to be recharged as soon as they exit a crater. A more detailed understanding of the terrain can also help NASA decide which permanently shadowed regions to target when searching for water ice. For example, the steepness of crater walls can provide insight into how long ago the crater formed and whether the shadows and temperatures could have persisted for long enough for water ice to be present. "We often need highly accurate terrain models to turn a snapshot into a time history, to find the cold-traps where ice might be stable for long periods," says Hayne.
And on top of all this, the new imaging approach should also help with navigation. Rovers need to be able to travel along precisely calculated routes. Onboard motion detectors can help rovers navigate, but sensor and estimation errors can add up over large distances, causing vehicles to drift off course. One way to overcome this is to have rovers use onboard cameras to create high-resolution terrain models on their own, and then pinpoint their location relative to known features and adjust their path accordingly, says Martin Schuster, a robotics scientist at the German Aerospace Center's Institute of Robotics and Mechatronics.
“Matching local terrain models to externally created high-resolution models, like the one produced in the new study, can help rovers localize,” he says. If the resolution of previously created terrain maps is too low, staying on path can be more difficult. The moon is a quarter of a million miles from Earth. Getting there is difficult, and if astronauts experience unexpected issues on the surface, they will be limited in how they can respond. Anticipating what terrain features explorers and rovers will encounter is therefore extremely important—and could even be lifesaving. Finding the best, most accurate ways to map the moon’s surface is an integral part of mission preparation. “We want to use all available data to tell us everything we can about the places we want to explore,” says Petro.
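As a rough illustration of the shadow geometry referred to above, here is the classic single-image relation between shadow length, solar elevation and relief; this is not the multi-image inversion developed in the study, and the input numbers are hypothetical.

```python
# Illustrative only: the classic single-image shadow relation for estimating
# relief from a shadow length and the Sun's elevation angle. This is not the
# multi-image inversion developed in the study; all inputs are hypothetical.
import math

def feature_height_m(shadow_length_m: float, sun_elevation_deg: float) -> float:
    """Height of the feature casting the shadow."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

def wall_slope_deg(rise_m: float, run_m: float) -> float:
    """Slope angle of a crater wall from its rise and horizontal run."""
    return math.degrees(math.atan2(rise_m, run_m))

shadow_len_m = 120.0    # hypothetical shadow length
sun_elev_deg = 10.0     # hypothetical solar elevation
h = feature_height_m(shadow_len_m, sun_elev_deg)
print(f"rim height ~ {h:.1f} m, wall slope ~ {wall_slope_deg(h, 60.0):.1f} deg over a 60 m run")
```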
|
Physics
|
Among the deeper questions that keep coming up in conversations and discussions about cosmology are what happened before the Big Bang and what instigated the Big Bang in the first place; in other words, how does something come from nothing? With these questions, we currently find ourselves at the edge of science, somewhere near the intersection of cosmology and philosophy. In this article, professor Alastair Wilson shares his understanding of the matter and takes you on a fascinating journey. Please enjoy! By Alastair Wilson, University of Birmingham How can our universe come from nothing? - (Image Credit: IgorZh via Shutterstock / HDR tune by Universal-Sci) “The last star will slowly cool and fade away. With its passing, the universe will become once more a void, without light or life or meaning.” So warned the physicist Brian Cox in the recent BBC series Universe. The fading of that last star will only be the beginning of an infinitely long, dark epoch. All matter will eventually be consumed by monstrous black holes, which in their turn will evaporate away into the dimmest glimmers of light. Space will expand ever outwards until even that dim light becomes too spread out to interact. Activity will cease. Or will it? Strangely enough, some cosmologists believe a previous, cold dark empty universe like the one which lies in our far future could have been the source of our very own Big Bang.
The first matter
But before we get to that, let’s take a look at how “material” – physical matter – first came about. If we are aiming to explain the origins of stable matter made of atoms or molecules, there was certainly none of that around at the Big Bang – nor for hundreds of thousands of years afterward. We do in fact have a pretty detailed understanding of how the first atoms formed out of simpler particles once conditions cooled down enough for complex matter to be stable, and how these atoms were later fused into heavier elements inside stars. But that understanding doesn’t address the question of whether something came from nothing. Scientists have a pretty thorough understanding of the developments in the early universe - (Image Credit: VectorMine via Shutterstock) So let’s think further back. The first long-lived matter particles of any kind were protons and neutrons, which together make up the atomic nucleus. These came into existence around one ten-thousandth of a second after the Big Bang. Before that point, there was really no material in any familiar sense of the word. But physics lets us keep on tracing the timeline backwards – to physical processes which predate any stable matter. This takes us to the so-called “grand unified epoch”. By now, we are well into the realm of speculative physics, as we can’t produce enough energy in our experiments to probe the sort of processes that were going on at the time. But a plausible hypothesis is that the physical world was made up of a soup of short-lived elementary particles – including quarks, the building blocks of protons and neutrons. There was both matter and “antimatter” in roughly equal quantities: each type of matter particle, such as the quark, has an antimatter “mirror image” companion, which is near identical to itself, differing only in one aspect. However, matter and antimatter annihilate in a flash of energy when they meet, meaning these particles were constantly created and destroyed. But how did these particles come to exist in the first place?
Quantum field theory tells us that even a vacuum, supposedly corresponding to empty spacetime, is full of physical activity in the form of energy fluctuations. These fluctuations can give rise to particles popping into existence, only to disappear shortly after. This may sound like a mathematical quirk rather than real physics, but such particles have been spotted in countless experiments. The spacetime vacuum state is seething with particles constantly being created and destroyed, apparently “out of nothing”. But perhaps all this really tells us is that the quantum vacuum is (despite its name) a something rather than a nothing. The philosopher David Albert has memorably criticised accounts of the Big Bang which promise to get something from nothing in this way. Suppose we ask: where did spacetime itself arise from? Then we can go on turning the clock yet further back, into the truly ancient “Planck epoch” – a period so early in the universe’s history that our best theories of physics break down. This era occurred only one ten-millionth of a trillionth of a trillionth of a trillionth of a second after the Big Bang. At this point, space and time themselves became subject to quantum fluctuations. Physicists ordinarily work separately with quantum mechanics, which rules the microworld of particles, and with general relativity, which applies on large, cosmic scales. But to truly understand the Planck epoch, we need a complete theory of quantum gravity, merging the two. We still don’t have a perfect theory of quantum gravity, but there are attempts – like string theory and loop quantum gravity. In these attempts, ordinary space and time are typically seen as emergent, like the waves on the surface of a deep ocean. What we experience as space and time are the product of quantum processes operating at a deeper, microscopic level – processes that don’t make much sense to us as creatures rooted in the macroscopic world. In the Planck epoch, our ordinary understanding of space and time breaks down, so we can no longer rely on our ordinary understanding of cause and effect either. Despite this, all candidate theories of quantum gravity describe something physical that was going on in the Planck epoch – some quantum precursor of ordinary space and time. But where did that come from? Even if causality no longer applies in any ordinary fashion, it might still be possible to explain one component of the Planck-epoch universe in terms of another. Unfortunately, by now even our best physics fails completely to provide answers. Until we make further progress towards a “theory of everything”, we won’t be able to give any definitive answer. The most we can say with confidence at this stage is that physics has so far found no confirmed instances of something arising from nothing.
Cycles from almost nothing
To truly answer the question of how something could arise from nothing, we would need to explain the quantum state of the entire universe at the beginning of the Planck epoch. All attempts to do this remain highly speculative. Some of them appeal to supernatural forces like a designer. But other candidate explanations remain within the realm of physics – such as a multiverse, which contains an infinite number of parallel universes, or cyclical models of the universe, being born and reborn again. The 2020 Nobel Prize-winning physicist Roger Penrose has proposed one intriguing but controversial model for a cyclical universe dubbed “conformal cyclic cosmology”.
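The Planck-epoch timescale quoted above is the Planck time, which follows from the fundamental constants; this is a standard textbook value rather than something derived in the article:

$$t_{\mathrm{P}} = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.4 \times 10^{-44}\ \mathrm{s},$$

or about 10⁻⁴³ seconds, matching the “one ten-millionth of a trillionth of a trillionth of a trillionth of a second” figure.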
Penrose was inspired by an interesting mathematical connection between a very hot, dense, small state of the universe – as it was at the Big Bang – and an extremely cold, empty, expanded state of the universe – as it will be in the far future. His radical theory to explain this correspondence is that those states become mathematically identical when taken to their limits. Paradoxical though it might seem, a total absence of matter might have managed to give rise to all the matter we see around us in our universe. In this view, the Big Bang arises from an almost nothing. That’s what’s left over when all the matter in a universe has been consumed into black holes, which have in turn boiled away into photons – lost in a void. The whole universe thus arises from something that – viewed from another physical perspective – is as close as one can get to nothing at all. But that nothing is still a kind of something. It is still a physical universe, however empty.How can the very same state be a cold, empty universe from one perspective and a hot dense universe from another? The answer lies in a complex mathematical procedure called “conformal rescaling”, a geometrical transformation that in effect alters the size of an object but leaves its shape unchanged.Penrose showed how the cold empty state and the hot dense state could be related by such rescaling so that they match with respect to the shapes of their spacetimes – although not to their sizes. It is, admittedly, difficult to grasp how two objects can be identical in this way when they have different sizes – but Penrose argues size as a concept ceases to make sense in such extreme physical environments.In conformal cyclic cosmology, the direction of explanation goes from old and cold to young and hot: the hot dense state exists because of the cold empty state. But this “because” is not the familiar one – of a cause followed in time by its effect. It is not only size that ceases to be relevant in these extreme states: time does too. The cold empty state and the hot dense state are in effect located on different timelines. The cold empty state would continue on forever from the perspective of an observer in its own temporal geometry, but the hot dense state it gives rise to effectively inhabits a new timeline all its own.It may help to understand the hot dense state as produced from the cold empty state in some non-causal way. Perhaps we should say that the hot dense state emerges from, or is grounded in, or realized by the cold, empty state. These are distinctively metaphysical ideas that have been explored by philosophers of science extensively, especially in the context of quantum gravity where ordinary cause and effect seem to break down. At the limits of our knowledge, physics and philosophy become hard to disentangle.Experimental evidence?Conformal cyclic cosmology offers some detailed, albeit speculative, answers to the question of where our Big Bang came from. But even if Penrose’s vision is vindicated by the future progress of cosmology, we might think that we still wouldn’t have answered a deeper philosophical question – a question about where physical reality itself came from. How did the whole system of cycles come about? Then we finally end up with the pure question of why there is something rather than nothing – one of the biggest questions of metaphysics.But our focus here is on explanations which remain within the realm of physics. There are three broad options to the deeper question of how the cycles began. 
It could have no physical explanation at all. Or there could be endlessly repeating cycles, each a universe in its own right, with the initial quantum state of each universe explained by some feature of the universe before. Or there could be one single cycle, and one single repeating universe, with the beginning of that cycle explained by some feature of its own end. The latter two approaches avoid the need for any uncaused events – and this gives them a distinctive appeal. Nothing would be left unexplained by physics.Penrose envisages a sequence of endless new cycles for reasons partly linked to his own preferred interpretation of quantum theory. In quantum mechanics, a physical system exists in a superposition of many different states at the same time, and only “picks one” randomly, when we measure it. For Penrose, each cycle involves random quantum events turning out a different way – meaning each cycle will differ from those before and after it. This is actually good news for experimental physicists, because it might allow us to glimpse the old universe that gave rise to ours through faint traces, or anomalies, in the leftover radiation from the Big Bang seen by the Planck satellite.Penrose and his collaborators believe they may have spotted these traces already, attributing patterns in the Planck data to radiation from supermassive black holes in the previous universe. However, their claimed observations have been challenged by other physicists and the jury remains out. Map of the cosmic microwave background radiation - (Image Credit: ESA and the Planck Collaboration via Wikimedia Commons) Endless new cycles are key to Penrose’s own vision. But there is a natural way to convert conformal cyclic cosmology from a multi-cycle to a one-cycle form. Then physical reality consists in a single cycling around through the Big Bang to a maximally empty state in the far future – and then around again to the very same Big Bang, giving rise to the very same universe all over again.This latter possibility is consistent with another interpretation of quantum mechanics, dubbed the many-worlds interpretation. The many-worlds interpretation tells us that each time we measure a system that is in superposition, this measurement doesn’t randomly select a state. Instead, the measurement result we see is just one possibility – the one that plays out in our own universe. The other measurement results all play out in other universes in a multiverse, effectively cut off from our own. So no matter how small the chance of something occurring, if it has a non-zero chance then it occurs in some quantum parallel world. There are people just like you out there in other worlds who have won the lottery, or have been swept up into the clouds by a freak typhoon, or have spontaneously ignited, or have done all three simultaneously.Some people believe such parallel universes may also be observable in cosmological data, as imprints caused by another universe colliding with ours.Many-worlds quantum theory gives a new twist on conformal cyclic cosmology, though not one that Penrose agrees with. Our Big Bang might be the rebirth of one single quantum multiverse, containing infinitely many different universes all occurring together. Everything possible happens – then it happens again and again and again.An ancient mythFor a philosopher of science, Penrose’s vision is fascinating. It opens up new possibilities for explaining the Big Bang, taking our explanations beyond ordinary cause and effect. 
It is therefore a great test case for exploring the different ways physics can explain our world. It deserves more attention from philosophers.For a lover of myth, Penrose’s vision is beautiful. In Penrose’s preferred multi-cycle form, it promises endless new worlds born from the ashes of their ancestors. In its one-cycle form, it is a striking modern re-invocation of the ancient idea of the ouroboros, or world-serpent. In Norse mythology, the serpent Jörmungandr is a child of Loki, a clever trickster, and the giant Angrboda. Jörmungandr consumes its own tail, and the circle created sustains the balance of the world. But the ouroboros myth has been documented all over the world – including as far back as ancient Egypt.
|
Physics
|
A visualization of a mathematical apparatus used to capture the physics and behavior of electrons moving on a lattice. Each pixel represents a single interaction between two electrons. Until now, accurately capturing the system required around 100,000 equations—one for each pixel. Using machine learning, scientists reduced the problem to just four equations. That means a similar visualization for the compressed version would need just four pixels. Credit: Domenico Di Sante/Flatiron Institute Using artificial intelligence, physicists have compressed a daunting quantum problem that until now required 100,000 equations into a bite-size task of as few as four equations—all without sacrificing accuracy. The work, published in the September 23 issue of Physical Review Letters, could revolutionize how scientists investigate systems containing many interacting electrons. Moreover, if scalable to other problems, the approach could potentially aid in the design of materials with sought-after properties such as superconductivity or utility for clean energy generation. "We start with this huge object of all these coupled-together differential equations; then we're using machine learning to turn it into something so small you can count it on your fingers," says study lead author Domenico Di Sante, a visiting research fellow at the Flatiron Institute's Center for Computational Quantum Physics (CCQ) in New York City and an assistant professor at the University of Bologna in Italy.
The formidable problem concerns how electrons behave as they move on a gridlike lattice. When two electrons occupy the same lattice site, they interact. This setup, known as the Hubbard model, is an idealization of several important classes of materials and enables scientists to learn how electron behavior gives rise to sought-after phases of matter, such as superconductivity, in which electrons flow through a material without resistance. The model also serves as a testing ground for new methods before they're unleashed on more complex quantum systems.
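For readers who want the standard textbook form (the article itself never writes it out), the Hubbard Hamiltonian on a lattice is usually written as

$$H = -t \sum_{\langle i,j\rangle,\sigma} \left(c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.}\right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow},$$

where the first term lets electrons of spin σ hop between neighboring sites i and j, and the second charges an energy penalty U whenever two electrons occupy the same site; that U term is the "interaction when two electrons occupy the same lattice site" described above.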
The Hubbard model is deceptively simple, however. Even for a modest number of electrons, and even with cutting-edge computational approaches, the problem requires serious computing power. That's because when electrons interact, their fates can become quantum mechanically entangled: Even once they're far apart on different lattice sites, the two electrons can't be treated individually, so physicists must deal with all the electrons at once rather than one at a time. With more electrons, more entanglements crop up, making the computational challenge exponentially harder.
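A quick, illustrative calculation (not from the study) of why this scaling is brutal: each lattice site can be empty, hold a spin-up electron, hold a spin-down electron, or hold both, so an exact description needs roughly 4^N amplitudes for N sites.

```python
# Illustrative only: exponential growth of the Hubbard model's state space.
# Each lattice site has 4 possible occupations (empty, up, down, up+down),
# so an exact many-body description needs ~4**N amplitudes for N sites.
for n_sites in (4, 16, 36, 64):
    print(f"{n_sites:3d} sites -> {4 ** n_sites:.3e} basis states")
```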
One way of studying a quantum system is by using what's called a renormalization group. That's a mathematical apparatus physicists use to look at how the behavior of a system—such as the Hubbard model—changes when scientists modify properties such as temperature or look at the properties on different scales. Unfortunately, a renormalization group that keeps track of all possible couplings between electrons and doesn't sacrifice anything can contain tens of thousands, hundreds of thousands or even millions of individual equations that need to be solved. On top of that, the equations are tricky: Each represents a pair of electrons interacting.
Di Sante and his colleagues wondered if they could use a machine learning tool known as a neural network to make the renormalization group more manageable. The neural network is like a cross between a frantic switchboard operator and survival-of-the-fittest evolution. First, the machine learning program creates connections within the full-size renormalization group. The neural network then tweaks the strengths of those connections until it finds a small set of equations that generates the same solution as the original, jumbo-size renormalization group. The program's output captured the Hubbard model's physics even with just four equations.
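The published method parameterizes the renormalization-group equations with a neural network; the details are in the paper cited at the end of this article. Purely as an illustration of the compression idea, here is a hedged toy sketch in which a "large" system of coupled flow equations is replaced by four learned latent equations trained to reproduce one observable of the full flow. Everything in it (the linear toy flow, the observable, the network sizes) is invented for demonstration and is not the authors' code.

```python
# Toy illustration of compressing a large system of flow equations into a few
# learned ones. NOT the method of Di Sante et al.; all sizes and equations here
# are invented for demonstration.
import torch

torch.manual_seed(0)
N_FULL, N_SMALL, STEPS, DT = 100, 4, 50, 0.02

# "Full" flow dx/dt = A x: a stand-in for ~100,000 coupled RG equations.
A = -torch.eye(N_FULL) + 0.1 * torch.randn(N_FULL, N_FULL)
x0 = torch.randn(N_FULL)

def integrate(rhs, z0, steps=STEPS, dt=DT):
    """Explicit Euler integration of dz/dt = rhs(z); returns the whole trajectory."""
    zs, z = [z0], z0
    for _ in range(steps):
        z = z + dt * rhs(z)
        zs.append(z)
    return torch.stack(zs)

full_traj = integrate(lambda x: A @ x, x0)
observable = full_traj.sum(dim=1).detach()   # quantity the surrogate must reproduce

# Small surrogate: encode the initial condition into 4 latent couplings,
# evolve them with a learned 4x4 flow, and read the observable back out.
encoder = torch.nn.Sequential(
    torch.nn.Linear(N_FULL, 32), torch.nn.Tanh(), torch.nn.Linear(32, N_SMALL)
)
A_small = torch.nn.Parameter(0.01 * torch.randn(N_SMALL, N_SMALL))
readout = torch.nn.Linear(N_SMALL, 1)
params = list(encoder.parameters()) + [A_small] + list(readout.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

for step in range(2000):
    z_traj = integrate(lambda z: z @ A_small.T, encoder(x0))
    pred = readout(z_traj).squeeze(-1)
    loss = torch.mean((pred - observable) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"mismatch between 4-equation surrogate and full flow: {loss.item():.2e}")
```

The spirit is the same as in the study: keep only as many effective equations as the physics actually requires, and let the optimizer find them.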
"It's essentially a machine that has the power to discover hidden patterns," Di Sante says. "When we saw the result, we said, 'Wow, this is more than what we expected.' We were really able to capture the relevant physics."
Training the machine learning program required a lot of computational muscle, and the program ran for entire weeks. The good news, Di Sante says, is that now that they have their program coached, they can adapt it to work on other problems without having to start from scratch. He and his collaborators are also investigating just what the machine learning is actually "learning" about the system, which could provide additional insights that might otherwise be hard for physicists to decipher.
Ultimately, the biggest open question is how well the new approach works on more complex quantum systems such as materials in which electrons interact at long distances. In addition, there are exciting possibilities for using the technique in other fields that deal with renormalization groups, Di Sante says, such as cosmology and neuroscience. More information: Domenico Di Sante et al, Deep Learning the Functional Renormalization Group, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.136402 Provided by Simons Foundation
|
Physics
|
The James Webb Space Telescope accidentally found an asteroid, and NASA says it's likely the smallest observed to date by the observatory.
International astronomers detected an asteroid roughly the size of Rome's Colosseum, between 300 and 650 feet in length.
The space rock was found while analyzing calibration data from Webb's Mid-Infrared Instrument (MIRI).
While more observations are necessary to better characterize this object’s nature and properties, it may be an example of an object measuring less than 0.6 miles in length within the main asteroid belt between Mars and Jupiter.
"We – completely unexpectedly – detected a small asteroid in publicly available MIRI calibration observations," Thomas Müller, an astronomer at the Max Planck Institute for Extraterrestrial Physics in Germany, said in a statement. "The measurements are some of the first MIRI measurements targeting the ecliptic plane and our work suggests that many new objects will be detected with this instrument."
The observations were published in the journal Astronomy and Astrophysics. They were not designed to hunt for new asteroids; they were calibration images of the main-belt asteroid (10920) 1998 BC1. The calibration team considered the observations to have failed for technical reasons, but the data on the calibration target asteroid were still used by the team to establish and test a new technique to constrain an object’s orbit and to estimate its size.
In an analysis of this data, the team's results suggest the object measures 100–200 meters, occupies a very low-inclination orbit and was located in the inner main-belt region at the time of the Webb observations.
Small asteroids have been studied in less detail than larger asteroids due to the difficulty of observing such objects. If confirmed as a new asteroid discovery, this detection would have implications for the understanding of the formation and evolution of the solar system.
In order to confirm that the object is a newly discovered asteroid, more position data relative to background stars is required.
The Associated Press contributed to this report.
|
Physics
|
Astronomers have captured the Milky Way’s supermassive, mysterious abyss, 27,000 light-years from Earth.Event Horizon TelescopeWe live in the inner rim of one of the Milky Way’s spiral arms, a shimmery curve against inky darkness. Travel for thousands of light-years in one direction, past countless stars, countless planets, and countless moons, and you’d reach the outer edge of the Milky Way, where the last bits of our galaxy give way to the sprawling stillness of the intergalactic medium. Travel about the same distance in the other direction, past still more stars and planets and moons, through glittering clouds of dust, and you’ll end up in the heart of the galaxy, at one of the most mysterious landmarks in the universe.For the first time in human history, you don’t have to imagine it. Using telescopes powerful enough to stretch our perception across unfathomable distances, astronomers have made a cosmic postcard: the first-ever picture of the supermassive black hole at the center of the Milky Way.Behold Sagittarius A* (pronounced “A-star”), a celestial object that has the mass of 4 million suns but could fit comfortably within the orbit of Mercury, the closest planet to the sun.The image comes from observations made by a network of radio telescopes spanning four continents, as part of a project called the Event Horizon Telescope. This is only the second time that astronomers in this effort have captured one of these objects in such detail. The first image, of the supermassive black hole at the center of the nearby galaxy Messier 87, or M87 for short, was released in 2019 with great fanfare. Einstein had predicted the existence of black holes—unseen points in the void where gravity warps the very fabric of space—more than a century earlier, and here, at last, was photographic evidence of one. That image marked a tremendous achievement in the field of science. But this one, of Sagittarius A*, feels a little different, more special. Astronomers believe that supermassive black holes are at the center of most big galaxies, which means that the universe is full of these objects. But this one is the closest to us. This one is ours."I don’t think I ever had an emotional attachment to M87,” Feryal Özel, an astrophysicist at the University of Arizona who works on the Event Horizon Telescope, told me. Özel has spent most of her career studying Sagittarius A*, trying to understand its distinct nature and quirks. This one, she said, “I feel like I know.”And still, we are not truly seeing Sagittarius A*, not really. Astronomers can’t take a real picture—in the way us non-astronomers would consider it—because black holes are, by definition, invisible. So the photo released today doesn’t show the black hole itself. Astronomers have captured Sagittarius A* in silhouette. The image reveals the shadow that the immensely dense black hole casts against the glowing, super-hot gas swirling around it. Like the black hole in M87, Sagittarius A* resembles a doughnut. In fact, it bears an uncanny resemblance to the fruit danishes served at the press conference that astronomers held in Washington, D.C. to reveal the result. Astronomers made the observations that produced this image in the spring of 2017. Eight ground-based telescopes—two each in Hawaii and Chile, and one each in Arizona, Mexico, Spain, and Antarctica—scanned the skies in tandem for several days. 
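As a sanity check on the size comparison above (4 million solar masses fitting inside Mercury's orbit), the Schwarzschild radius can be computed from standard constants; the numbers below are textbook values, not taken from the article.

```python
# Quick check: Schwarzschild radius of Sagittarius A* vs Mercury's orbit.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

M_sgrA = 4.0e6 * M_sun                 # mass of Sagittarius A*
r_s = 2 * G * M_sgrA / c**2            # Schwarzschild radius
mercury_orbit = 0.39 * AU              # Mercury's mean orbital radius

print(f"Schwarzschild radius of Sgr A*: {r_s/AU:.3f} AU ({r_s/1e9:.1f} million km)")
print(f"Mercury's orbital radius:       {mercury_orbit/AU:.2f} AU")
```

The event-horizon radius comes out to roughly 0.08 astronomical units, comfortably inside Mercury's orbit at about 0.39 AU, which is the sense in which the black hole "could fit within the orbit of Mercury."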
The observations, stored on hundreds of computer disk drives, were then shipped to labs in the United States and Germany, where scientists pored over the data like archaeologists at a dig site, brushing away the noise to excavate the signal of a supermassive black hole. They had followed a similar process to reveal M87’s black hole, which was observed during the same run in 2017.But drawing out Sagittarius A* was far more difficult. The supermassive black hole in M87 is 1,500 times more massive than Sagittarius A*, which means that the cosmic material around it orbits rather slowly, flickering on the timescale of days. The stuff around the smaller Sagittarius A* moves faster, changing within hours or even minutes, which makes the environment more challenging to capture, Özel told me. On top of that, although Sagittarius A* is only 27,000 light-years from Earth—and “only” is quite appropriate when you consider that the black hole in M87 is 55 million light-years from Earth—our supermassive black hole is harder to see. “We’re looking through everything that is between us and the center of the galaxy, whereas for M87, we’re looking out and away from the Milky Way,” Özel said. All the cosmic stuff between us and the galactic center can cause the light coming from the galactic center to appear distorted in the data. “We had to really understand this effect and subtract it from our images correctly,” Özel said.The new image is further proof that the supermassive black hole at the center of the galaxy is, well, exactly that. Einstein published the theories that predicted the existence of such objects in 1916, but the first real observation campaigns didn’t begin until the 1970s. In that decade, astronomers detected a mysterious, compact source of radio emissions in the galactic center that seemed like it could be a black hole, “but not many people believed us then,” Reinhard Genzel, an astrophysicist at the Max Planck Institute for Extraterrestrial Physics who studies Sagittarius A* but was not involved in the latest research, told me. It would take decades of additional research to show that there’s no other explanation for the mysterious object at the Milky Way’s core. In recent years, teams led by Genzel and the UCLA astrophysicist Andrea Ghez have captured in great detail some of the stars closest to the black hole, which, from our perspective, appear to swing wildly around an invisible point in space. In 2020, Genzel and Ghez shared the Nobel Prize in physics for providing the most convincing evidence for the existence of the Milky Way’s central black hole.And the galactic center, researchers have learned, is a weird place. They were surprised to discover, for example, that most of the stars clustered near Sagittarius A* are young rather than old, a discovery that goes against everything astronomers understand about star formation. “That means that the stars must have been formed very close to the black hole,” Tuan Do, an astronomer at UCLA who studies the galactic center, told me. But Sagittarius A* “creates enormous amounts of gravity in this region, so gas clouds that form stars should be ripped apart in this region.” Perhaps long ago, tens of millions of years in the past, the black hole was ringed by a swirling disk of gas that rotated so fast that pockets of it ignited into stars. That environment is gone today; Sagittarius A*, as far as supermassive black holes go, is considered to be relatively quiet.Quiet does not mean boring. 
Although Einstein’s theories led to the discovery of black holes, scientists still don’t know whether the rules of gravity as we understand them apply in such extreme, unknowable conditions. The 2019 result showed that the shadow of an event horizon is, as predicted, spherical. But “our best theories still are falling short,” Ghez told me. Astronomers still don’t know what transpires in the interior of a black hole, beyond that point of no return.“Black holes represent that fundamental breakdown in our understanding of how gravity works,” she said.Even in this very strange part of our cosmic neighborhood, some evidence suggests, stars could host planets, worlds shaped by the distinct chaos of their environment. “We do see binary star systems at the galactic center, which means two stars are able to stay bound together, despite the strong tidal forces of the black hole and the chaotic environment,” Jessica Lu, an astrophysicist at UC Berkeley who studies star formation in the galactic center, told me. “So perhaps planets can form and survive as well.” At the galactic center, the few, empty light-years separating our sun from its nearest stellar neighbor would be brimming with stars. And in a night sky at the center of the Milky Way, those stars would appear as bright as full moons. “We could visit them in reasonable amounts of time, and our star might be in danger of being hit by another star,” Do said. “We’d probably all be astronomers, because we’d care way more about what’s happening in the sky.”For Ghez, the new picture of Sagittarius A* is an important contribution to astrophysics. That’s her answer when she’s thinking like a scientist. When she takes a moment to consider the work in another, more sentimental way, she appreciates “the fact that we as humans, that are so finite and small, can have this understanding of things that are so immense.” And not only that, but to feel some kind of kinship with it. “I love to talk about our galaxy, as opposed to the Milky Way,” she said. “It’s our home.”
|
Physics
|
Illustration of the new bowtie structure, which can be seen in the middle of the picture. The bowtie structure compresses light spatially, and the nanostructures around it store it temporally. The result is a compression of light to the smallest scale to date – the world’s smallest photon in a dielectric material. Credit: DTU
This major scientific advance has implications for many fields, including energy-efficient computers and quantum technology.
Until recently, physicists widely believed that it was impossible to compress light below the so-called diffraction limit, except when utilizing metal nanoparticles, which also absorb light. As a result, it seemed impossible to compress light strongly in dielectric materials like silicon, which are essential for information technologies and have the significant advantage of not absorbing light. It was theoretically shown back in 2006 that the diffraction limit does not apply to dielectrics, but no one had been able to demonstrate this in the real world, because doing so requires nanotechnology so sophisticated that the necessary dielectric nanostructures could not yet be fabricated.
A research team from the Technical University of Denmark has created a device known as a “dielectric nanocavity” that successfully concentrates light in a volume 12 times smaller than the diffraction limit. The finding is groundbreaking in optical research and was recently published in the journal Nature Communications.
“Although computer calculations show that you can concentrate light at an infinitely small point, this only applies in theory. The actual results are limited by how small details can be made, for example, on a microchip,” says Marcus Albrechtsen, Ph.D. student at DTU Electro and the first author of the new article.
Measurement of the world’s smallest photon. a) Model of the nanocavity, where the calculated strength of the electric field is shown with the color scale. b) Magnification around the narrow strip of material in the bowtie structure in the center where photons are squeezed together. c) Measurement of the electric field that emerges when photons are sent into the cavity by shining it with a laser, i.e., a microscopic image of the world’s smallest photon. The white line shows the outline of the nanostructure for comparison. Credit: DTU
“We programmed our knowledge of real photonic nanotechnology and its current limitations into a computer. Then we asked the computer to find a pattern that collects the photons in an unprecedentedly small area – in an optical nanocavity – which we were also able to build in the laboratory.”
Optical nanocavities are structures that have been specially designed to retain light so that it does not travel normally but is tossed back and forth as if two mirrors were facing each other. The closer the mirrors are to one other, the more intense the light between them gets. For this experiment, the researchers created a bowtie structure, which is particularly effective in squeezing photons together due to its unique shape.
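A rough way to see why a tighter cavity means more intense light (a back-of-the-envelope estimate, not a calculation from the study): for a fixed optical energy U stored in a cavity of mode volume V, the peak field obeys roughly

$$U \sim \varepsilon\,|E|^{2}\,V \quad\Rightarrow\quad |E|^{2} \propto \frac{U}{\varepsilon V},$$

so shrinking the mode volume by a factor of ten raises the peak intensity by roughly the same factor.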
The diffraction limit
The diffraction limit says that, in a conventional optical system, light cannot be focused down to a spot smaller than about half its wavelength – this is, for example, what sets the resolution of microscopes.
However, nanostructures can consist of elements much smaller than the wavelength, which means that the diffraction limit is no longer a fundamental limit. Bowtie structures, in particular, can compress the light into very small volumes, limited only by the size of the bowtie and, thus, by the quality of the nanofabrication.
When the light is compressed, it becomes more intense, enhancing interactions between light and materials such as atoms, molecules, and 2D materials.
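In a material of refractive index n, the limit described in this section corresponds to a smallest focal volume of roughly

$$V_{\mathrm{diff}} \sim \left(\frac{\lambda}{2n}\right)^{3}.$$

For telecom light at λ ≈ 1550 nm in silicon (n ≈ 3.5), illustrative values assumed here rather than taken from the article, λ/2n is about 220 nm, so V_diff is of order 10⁷ nm³; the cavity reported in this work confines light to roughly one-twelfth of that volume.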
Dielectric materials
Dielectric materials are electrically insulating. Glass, rubber, and plastic are examples of dielectric materials, and they contrast with metals, which are electrically conductive.
An example of a dielectric material is silicon, which is often used in electronics but also in photonics.
Interdisciplinary efforts and excellent methods
The nanocavity is made of silicon, the dielectric material on which most advanced modern technology is based. The material for the nanocavity was developed in cleanroom laboratories at DTU, and the patterns on which the cavity is based are optimized and designed using a unique method for topology optimization developed at DTU. Initially developed to design bridges and aircraft wings, it is now also used for nanophotonic structures.
“It required a great joint effort to achieve this breakthrough. It has only been possible because we have managed to combine world-leading research from several research groups at DTU,” says associate professor Søren Stobbe, who has led the research work.
Important breakthrough for energy-efficient technology
The discovery could be decisive for developing revolutionary new technologies that may reduce the number of energy-guzzling components in data centers, computers, telephones, and other devices.
The energy consumption for computers and data centers continues to grow, and there is a need for more sustainable chip architectures that use less energy. This can be achieved by replacing electrical circuits with optical components. The researchers’ vision is to use the same division of labor between light and electrons used for the Internet, where light is used for communication and electronics for data processing. The only difference is that both functionalities must be built into the same chip, which requires that the light be compressed to the same size as the electronic components. The breakthrough at DTU shows that it is, in fact, possible.
“There is no doubt that this is an important step to developing a more energy-efficient technology for, e.g., nanolasers for optical connections in data centers and future computers – but there is still a long way to go,” says Marcus Albrechtsen.
The researchers will now work further and refine methods and materials to find the optimal solution.
“Now that we have the theory and method in place, we will be able to make increasingly intense photons as the surrounding technology develops. I am convinced that this is just the first of a long series of major developments in physics and photonic nanotechnology centered around these principles,” says Søren Stobbe, who recently received the prestigious Consolidator Grant from the European Research Council of € 2 million for the development of a completely new type of light source based on the new cavities.
Reference: “Nanometer-scale photon confinement in topology-optimized dielectric cavities” by Marcus Albrechtsen, Babak Vosoughi Lahijani, Rasmus Ellebæk Christiansen, Vy Thi Hoang Nguyen, Laura Nevenka Casses, Søren Engelberth Hansen, Nicolas Stenger, Ole Sigmund, Henri Jansen, Jesper Mørk and Søren Stobbe, 21 October 2022, Nature Communications.
DOI: 10.1038/s41467-022-33874-w
The study was funded by the Villum Foundation Young Investigator Program, the Villum Foundation Experiment Program, the Danish National Research Foundation, the Independent Research Fund Denmark, and the Innovation Fund Denmark.
|
Physics
|
Meteor Crater in Arizona. Meteorite fragments from the impact contain lonsdaleite. Photo: DANIEL SLIM/AFP (Getty Images) New research indicates that a rare form of diamond may originate in the burbling cores of distant worlds, arriving on Earth thanks to violent cosmic collisions. According to a team of scientists in Australia, the mineral lonsdaleite—a type of diamond with a hexagonal crystal structure—can be found in meteorites that were likely created when an asteroid collided with a dwarf planet billions of years ago. They investigated 18 ureilite fragments using advanced electron microscopy, to better understand how the lonsdaleite within the space rocks formed. Their research is published today in PNAS. “This study proves categorically that lonsdaleite exists in nature,” said study co-author Dougal McCulloch, director of the Microscopy and Microanalysis Facility at RMIT in Australia, in a university release. Lonsdaleite has previously been found in meteorites, including the Canyon Diablo meteorite, a fragment found in Arizona’s famous Meteor Crater. The mineral has also been created in lab settings, but otherwise is vanishingly rare on Earth. The mineral differs from usual diamonds in its crystal structure, which is hexagonal (ordinary diamonds have a cubic crystal structure). Separate research earlier this year indicated that lonsdaleite’s structure makes it harder than other diamonds. In their recent research, the team found that lonsdaleite occurs naturally in ureilite meteorites, a type of carbon-bearing space rock made up of silicates, sulfides, and metal. They think the recently studied ureilite rocks formed in the mantle of an ancient dwarf planet, which collided with an asteroid early in the formation of the solar system. The hexagonal lonsdaleite diamonds formed within the ureilite rocks. Extreme physics tends to bring out uncommon mineral structures. In 1945, the Trinity test showed the effectiveness of the newly developed atomic bomb and also created trinitite, a bizarre, glasslike material (in which quasicrystals have since been discovered) formed from desert sand and copper wiring in the high-pressure, high-temperature environment of the explosion. An asteroid’s collision with a dwarf planet is a similarly extreme event, one with the high temperatures and pressures necessary to create diamonds. Lonsdaleite has also been found in the meteorite fragments left over from the Meteor Crater impact event, which occurred about 50,000 years ago. The recent work offers “strong evidence that there’s a newly discovered formation process for the lonsdaleite and regular diamond,” McCulloch added. By the team’s reckoning, the lonsdaleite formed in the ancient dwarf planet “shortly after a catastrophic collision.” If lonsdaleite’s structure makes it harder than ordinary diamonds, it could have applications in the materials sciences. “Nature has thus provided us with a process to try and replicate in industry,” said study co-author Andy Tomkins, a geologist at Monash University, in the release. “We think that lonsdaleite could be used to make tiny, ultra-hard machine parts if we can develop an industrial process that promotes replacement of pre-shaped graphite parts by lonsdaleite.” Certainly, making these diamonds in a lab would be a more efficient method than waiting for the remnants of another cosmic collision to arrive on Earth.
|
Physics
|
image: Experimental configuration of quantum interference between two independent solid-state QD single-photon sources separated by 302 km fiber. DM: dichromatic mirror, LP: long pass, BP: band pass, BS: beam splitter, SNSPD: superconducting nanowire single-photon detector, HWP: half-wave plate, QWP: quarter-wave plate, PBS: polarization beam splitter. Credit: You et al., doi 10.1117/1.AP.4.6.066003 This year’s Nobel Prize in Physics celebrated the fundamental interest of quantum entanglement, and also envisioned the potential applications in “the second quantum revolution” — a new age when we are able to manipulate the weirdness of quantum mechanics, including quantum superposition and entanglement. A large-scale and fully functional quantum network is the holy grail of quantum information sciences. It will open a new frontier of physics, with new possibilities for quantum computation, communication, and metrology. One of the most significant challenges is to extend the distance of quantum communication to a practically useful scale. Unlike classical signals that can be noiselessly amplified, quantum states in superposition cannot be amplified because they cannot be perfectly cloned. Therefore, a high-performance quantum network requires not only ultra-low-loss quantum channels and quantum memory, but also high-performance quantum light sources. There has been exciting recent progress in satellite-based quantum communications and quantum repeaters, but a lack of suitable single-photon sources has hampered further advances. What is required of a single-photon source for quantum network applications? First, it should emit one (only one) photon at a time. Second, to attain brightness, the single-photon sources should have high system efficiency and a high repetition rate. Third, for applications such as in quantum teleportation that require interfering with independent photons, the single photons should be indistinguishable. Additional requirements include a scalable platform, tunable and narrowband linewidth (favorable for temporal synchronization), and interconnectivity with matter qubits. A promising source is quantum dots (QDs), semiconductor particles of just a few nanometers. However, in the past two decades, the visibility of quantum interference between independent QDs has rarely exceeded the classical limit of 50% and distances have been limited to around a few meters or kilometers. As reported in Advanced Photonics, an international team of researchers has achieved high-visibility quantum interference between two independent QDs linked with ~300 km optical fibers. They report efficient and indistinguishable single-photon sources with ultra-low-noise, tunable single-photon frequency conversion, and low-dispersion long fiber transmission. The single photons are generated from resonantly driven single QDs deterministically coupled to microcavities. Quantum frequency conversions are used to eliminate the QD inhomogeneity and shift the emission wavelength to the telecommunications band. The observed interference visibility is up to 93%.
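To put the visibility numbers in context, the standard Hong-Ou-Mandel relation for two single photons arriving at a balanced beamsplitter (a textbook result, not spelled out in the release) gives the coincidence probability between the two outputs as

$$P_{\text{coinc}} = \tfrac{1}{2}\bigl(1 - \mathcal{M}\bigr), \qquad \mathcal{M} = \lvert\langle \psi_1 \vert \psi_2 \rangle\rvert^{2},$$

where ℳ is the overlap of the two photons' wave packets. Fully distinguishable photons give a coincidence probability of 1/2, perfectly indistinguishable ones give zero, and classical light cannot push the interference visibility above the 50% limit mentioned above; the reported 93% visibility therefore sits deep in the genuinely quantum regime.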
According to senior author Chao-Yang Lu, professor at the University of Science and Technology of China (USTC), “Feasible improvements can further extend the distance to ~600 km.” Lu remarks, “Our work jumped from the previous QD-based quantum experiments at a scale from ~1 km to 300 km, two orders of magnitude larger, and thus opens an exciting prospect of solid-state quantum networks.” With this reported jump, the dawn of solid-state quantum networks may soon begin breaking toward day. Read the Gold Open Access article by X. You et al., “Quantum interference with independent single-photon sources over 300 km fiber,” Adv. Photon. 4(6), 066003 (2022), doi 10.1117/1.AP.4.6.066003.
|
Physics
|
Imagine an engine that needs no propellant. It sounds impossible, and it most likely is. That's not stopping one NASA engineer from testing propellantless-propulsion ideas such as the EmDrive and his own conceptual "helical" engine, which would defy the laws of physics by creating forward thrust without fuel. Such a creation would allow us to travel the far reaches of space and would arguably be the most exciting technological advancement of the century.
What is the EmDrive?
Back in 2001, British scientist Roger Shawyer theorized that we could generate thrust by pumping microwaves into a conical chamber. Shawyer suggested that the microwaves would, in theory, bounce repeatedly off the chamber walls, creating enough propulsion to power a spacecraft without fuel. Some researchers do claim to have generated thrust in EmDrive experiments. The amount was so low, though, that detractors believe the thrust may have been caused by outside influences. These could be seismic vibrations or the Earth's magnetic field.
New Research
Over the last few months, several engineers and scientists have come out with contradictory positions on the EmDrive. Some have claimed it's impossible, while others continue to work at what might be a futile task, justifying their work by saying the payoff would be enormous. The most recent of these is NASA engineer David Burns, as New Scientist reports. “The engine itself would be able to get to 99 percent the speed of light if you had enough time and power,” Burns told New Scientist.
Article from InterestingEngineering
|
Physics
|