Join the audience for a Quantum Week live webinar at 3 p.m. GMT on 3 November 2022 exploring a new approach to erasing quantum information and the growing field of “quantum thermodynamics”.

We erase information every day: rubbing out a pencil scribble, deleting a paragraph from a word processor, or wiping a whiteboard clean. Erasing information has an inescapable work cost – a discovery that proved crucial in saving the revered second law of thermodynamics from being violated by the “Maxwell’s demon” thought experiment. A hypothetical demon, using a “quantum Szilard engine”, can erase quantum information. Maria Violaris will introduce a surprising potential link between quantum erasure and a novel reformulation of irreversibility, providing a new angle on the growing field of quantum thermodynamics.

Maria Violaris

Maria Violaris is a quantum information PhD student working with Chiara Marletto and Vlatko Vedral at the University of Oxford, UK. Her research links fundamental questions, such as whether there is an exact arrow of time, with understanding the limits of future quantum technologies. She is a PhD student contributor to Physics World, initiated the Quantum on the Clock schools video competition with the IOP Quantum Optics, Quantum Information and Quantum Control Group, and creates videos about coding quantum paradoxes with the IBM Quantum Qiskit video team. She founded the Oxford University Quantum Information Society and has also interned at the quantum software company Riverlane, where she built a Raspberry Pi quantum computing lab.

Speaker relationship with IOP Publishing

Maria is a student contributor to Physics World.
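For reference, the “inescapable work cost” mentioned above is quantified by Landauer’s principle, a standard result not specific to this webinar: erasing one bit of information in surroundings at temperature T requires work of at least

```latex
W_{\mathrm{erase}} \;\ge\; k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\ \mathrm{J} \quad (T = 300\ \mathrm{K}),
```

where kB is Boltzmann’s constant. It is this cost that rescues the second law from Maxwell’s demon: the demon must eventually erase its memory, paying back at least as much work as it extracted.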
Physics
A collaborative effort has installed electronic “brains” on solar-powered robots that are 100 to 250 micrometers in size – smaller than an ant’s head – so that they can walk autonomously without being externally controlled. While Cornell researchers and others have previously developed microscopic machines that can crawl, swim, walk and fold themselves up, there were always “strings” attached; to generate motion, wires were used to provide electrical current or laser beams had to be focused directly onto specific locations on the robots.

Alejandro Cortese, Ph.D. ’19, displays a silicon-on-insulator wafer that contains finished CMOS “brains.”

“Before, we literally had to manipulate these ‘strings’ in order to get any kind of response from the robot,” said Itai Cohen, professor of physics in the College of Arts and Sciences. “But now that we have these brains on board, it’s like taking the strings off the marionette. It’s like when Pinocchio gains consciousness.”

The innovation sets the stage for a new generation of microscopic devices that can track bacteria, sniff out chemicals, destroy pollutants, conduct microsurgery and scrub the plaque out of arteries. The team’s paper, “Microscopic Robots with Onboard Digital Control,” was published Sept. 21 in Science Robotics. The lead author is postdoctoral researcher Michael Reynolds, M.S. ‘17, Ph.D. ‘21.

The project brought together researchers from the labs of Cohen; Alyosha Molnar, associate professor of electrical and computer engineering in Cornell Engineering; and Paul McEuen, the John A. Newman Professor of Physical Science (A&S), all co-senior authors on the paper.

The “brain” in the new robots is a complementary metal-oxide-semiconductor (CMOS) clock circuit that contains a thousand transistors, plus an array of diodes, resistors and capacitors. The integrated CMOS circuit generates a signal that produces a series of phase-shifted square wave frequencies that in turn set the gait of the robot. The robot legs are platinum-based actuators. Both the circuit and the legs are powered by photovoltaics.

Itai Cohen, professor of physics, compares the innovation of installing CMOS circuits on microrobots with the moment "when Pinocchio gains consciousness.”

“In some sense, the electronics are very basic. This clock circuit is not a leap forward in the ability of circuits,” Cohen said. “But all of the electronics have to be designed to be very low power, so that we didn’t have to put humungous photovoltaics to power the circuit.”

The low-power electronics were made possible by the Molnar Group’s research. Former postdoctoral researcher Alejandro Cortese, Ph.D. ‘19, worked with Reynolds and designed the CMOS brain, which was then built by a commercial foundry, XFAB. The finished circuits arrived on 8-inch silicon-on-insulator wafers. At 15 microns tall, each robot brain – essentially also the robot’s body – was a “mountain” compared to the electronics that normally fit on a flat wafer, Reynolds said. He worked with the Cornell NanoScale Science and Technology Facility (CNF) to develop an intricate process using 13 layers of photolithography to etch the brains loose into an aqueous solution and pattern the actuators to make the legs.

“One of the key parts that enables this is that we’re using microscale actuators that can be controlled by low voltages and currents,” said Cortese, who is CEO of OWiC Technologies, a company he founded with McEuen and Molnar to commercialize optical wireless integrated circuits for microsensors.
“This is really the first time that we showed that yes, you can integrate that directly into a CMOS process and have all of those legs be directly controlled by effectively one circuit.”

The team created three robots to demonstrate the CMOS integration: a two-legged Purcell bot, named in tribute to physicist Edward Purcell, who proposed a similarly simple model to explain the swimming motions of microorganisms; a more complicated six-legged antbot, which walks with an alternating tripod gait, like that of an insect; and a four-legged dogbot that can vary the speed at which it walks thanks to a modified circuit that receives commands via laser pulse.

“Eventually, the ability to communicate a command will allow us to give the robot instructions, and the internal brain will figure out how to carry them out,” Cohen said. “Then we’re having a conversation with the robot. The robot might tell us something about its environment, and then we might react by telling it, ‘OK, go over there and try to suss out what’s happening.’”

The new robots are approximately 10,000 times smaller than macroscale robots that feature onboard CMOS electronics, and they can walk at speeds faster than 10 micrometers per second. The fabrication process that Reynolds designed, basically customizing foundry-built electronics, has resulted in a platform that can enable other researchers to outfit microscopic robots with their own apps – from chemical detectors to photovoltaic “eyes” that help robots navigate by sensing changes in light.

“What this lets you imagine is really complex, highly functional microscopic robots that have a high degree of programmability, integrated with not only actuators, but also sensors,” Reynolds said. “We’re excited about the applications in medicine – something that could move around in tissue and identify good cells and kill bad cells – and in environmental remediation, like if you had a robot that knew how to break down pollutants or sense a dangerous chemical and get rid of it.”

In May, the team integrated their CMOS clock circuits into artificial cilia that were also built with platinum-based, electrically powered actuators, to manipulate the movement of fluids.

“The real fun part is, just like we never really knew what the iPhone was going to be about until we sent it out into the world, what we’re hoping is that now that we’ve shown the recipe for linking CMOS electronics to robotic actuating limbs, we can unleash this and have people design low-power microchips that can do all sorts of things,” Cohen said. “That’s the idea of sending it out into the ether and letting people’s imaginations run wild.”

Co-authors include former postdoctoral researcher Marc Miskin; postdoctoral researchers Qingkun Liu and Sunwoo Lee; doctoral students Wei Wang and Samantha Norris; and Zhangqi (Jackie) Zheng ‘24. The research was supported by the Cornell Center for Materials Research, which is supported by the NSF’s MRSEC program; the National Science Foundation; the Air Force Office of Scientific Research; the Army Research Office; and the Kavli Institute at Cornell for Nanoscale Science.
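The gait-setting scheme described above, in which a clock circuit emits phase-shifted square waves that drive the leg actuators, can be illustrated with a short Python sketch. This is an illustration of the principle only, not the team's actual circuit design; the leg count, frequency and phase offsets are assumptions:

```python
import numpy as np

def square_wave(t, freq_hz, phase_rad):
    """Unit square wave: 1 while the phased sine is positive, else 0."""
    return (np.sin(2 * np.pi * freq_hz * t - phase_rad) > 0).astype(int)

# Hypothetical drive for a six-legged alternating tripod gait:
# legs 0, 2, 4 (tripod A) step in antiphase with legs 1, 3, 5 (tripod B).
t = np.linspace(0.0, 2.0, 1000)          # seconds
freq_hz = 2.0                            # gait cycles per second (assumed)
phases = [0.0 if leg % 2 == 0 else np.pi for leg in range(6)]

drive = np.array([square_wave(t, freq_hz, p) for p in phases])
# drive[leg] is that leg's on/off actuation signal over time; the two
# tripods are energized in antiphase, which is what produces the gait.
print(drive[:, :8])
```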
Physics
What holds the proton together — Science News, September 16, 1972

An experiment … at the CERN Laboratory in Geneva … gives an important clue to structural arrangements deep within the proton…. The result hints at the existence of a new and very strong fundamental interaction — the process that holds [quarks] together inside the protons.… A number of theorists have speculated about its nature and have even proposed an intermediate particle for it called a gluon.

Update

Physicists finally found evidence for gluons in 1979, in the aftermath of electron-positron collisions at a German particle accelerator (SN: 4/21/79, p. 262). Gluons bind quarks inside protons via the strong force — the most powerful force in nature. Recent investigations of gluons’ role inside the proton suggest the particles’ energy makes up about 36 percent of the proton’s mass (SN: 12/22/18 & 1/5/19, p. 8). Future particle accelerators could gauge gluons’ contribution to the proton’s internal pressure, which averages a million trillion trillion times the strength of Earth’s atmospheric pressure (SN: 6/9/18, p. 10).
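As a sanity check on that comparison (my arithmetic, using the commonly reported figures of roughly 10^35 pascals near the proton's center and about 10^5 pascals for Earth's atmosphere):

```latex
\frac{P_{\text{proton}}}{P_{\text{atm}}} \sim \frac{10^{35}\ \text{Pa}}{10^{5}\ \text{Pa}} = 10^{30}
= \underbrace{10^{6}}_{\text{million}} \times \underbrace{10^{12}}_{\text{trillion}} \times \underbrace{10^{12}}_{\text{trillion}},
```

consistent with the "million trillion trillion" quoted in the text.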
Physics
Particle locations. (Courtesy: Keim research group / Penn State)

Disordered materials can “remember” deformations they have previously experienced – and they can be made to forget them, too. This is the finding of researchers at Penn State University and Cal Poly San Luis Obispo in the US, whose experiments on erasing material memories could improve the design of foams and emulsions employed in the food and pharmaceutical industries.

Disordered solids are commonplace in food science. Ice cream, for example, is made up of ice crystals, fat droplets and air pockets combined in an erratic way. Emulsions such as mayonnaise also contain particles arranged in a random fashion, and many cosmetics and pharmaceutical products share similar characteristics.

Inscribing a memory of the deformation

In the latest work, researchers led by physicist Nathan Keim studied a two-dimensional disordered material made by pouring oil on top of water in a dish, then spreading a closely packed layer of 25 000 microscopic plastic particles at the boundary between the liquids. The particles are electrostatically charged and thus repel each other, which allows them to form a soft mayonnaise-like solid. This soft solid can be deformed in a controlled fashion, and the motion of the particles is tracked using a microscope.

“We deform our material by shearing, which involves moving one side of the material relative to the other, like pulling the corner of a rectangle to the side so it becomes a parallelogram,” Keim explains. This type of deformation is known as mechanical annealing, and performing it lowers the overall energy of the structure. By repeating this annealing at the same magnitude many times, Keim says, “you can essentially inscribe a memory of the deformation” that subtly affects how the material responds to deformation of other magnitudes in the future.

After the researchers prepared their material, they performed experiments designed to show that the annealing had indeed formed a memory. “Without knowing its past, we can probe a sample to reveal the strain amplitude γa that was used to anneal it,” they explain. To do this, they applied a series of cycles with increasing amplitude γread, starting with a small value and ending at a value higher than γa. At the end of each readout cycle, the researchers compared the positions of the particles with those at the end of annealing. For small γread, the average change in the particles’ positions – the mean squared displacement – grows, but it drops near γread = γa, when the annealed state of the system is recovered. This observation and others show that the material approximates a generic behaviour known as “return point memory”, which appears to be a property of annealed samples.

“Ring down” erasing

The researchers also found a new way to erase this memory. To do this, they used a method called “ring down” that involves applying distortions of smaller and smaller magnitudes until the memory has been removed. This is somewhat similar to the method for removing memories in ferromagnets, where a strong magnetic field is applied and its direction alternated while gradually making the field weaker, Keim says.

Keim hopes that some of the advances made in this work and other recent research will find their way into applications. “When a material has been deformed cyclically, it is possible to recover one or more of the past strains it has been subjected to,” he tells Physics World.
“There may be a role for this kind of test alongside established techniques like failure analysis. There may also be a use for mechanically erasing the effects of past loading or for estimating a sample’s capacity to form memories.” Erasing a memory could provide materials scientists with a way to essentially start from a clean state and then prepare a material in the most advantageous way, he adds.

The researchers, who detail their work in Science Advances, say their technique could be used to probe mechanical annealing and memory formation in a wide variety of disordered solids and other forms of glassy matter. “In the future, we’d like to verify these properties of material memory in three-dimensional disordered solids – the equivalent of mayonnaise or ice cream,” says Keim.
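The readout protocol described above can be mimicked with a toy model: an ensemble of bistable "hysterons", a standard minimal picture for return-point memory. This sketch is not the paper's particle-tracking analysis, and all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "hysteron" is a bistable element: it switches up (+1) when the
# strain exceeds a_i and down (-1) when the strain drops below b_i < a_i.
N = 50_000
a = rng.uniform(0.0, 1.0, N)
b = a - rng.uniform(0.1, 1.0, N)

def drive_to(state, gamma):
    """Update states after the strain is ramped monotonically to gamma."""
    out = state.copy()
    out[gamma >= a] = 1
    out[gamma <= b] = -1
    return out

def shear_cycle(state, amp):
    """One symmetric shear cycle: 0 -> +amp -> -amp -> 0."""
    return drive_to(drive_to(state, amp), -amp)

# "Anneal": repeated cycling at amplitude gamma_a writes the memory.
gamma_a = 0.4
state = np.full(N, -1)
for _ in range(5):
    state = shear_cycle(state, gamma_a)
reference = state.copy()

# Readout: cycles of increasing amplitude, comparing to the annealed state.
for gamma_read in np.arange(0.05, 0.65, 0.05):
    state = shear_cycle(state, gamma_read)
    changed = np.mean(state != reference)  # proxy for mean squared displacement
    print(f"gamma_read = {gamma_read:.2f}  fraction changed = {changed:.3f}")
# The printed fraction grows for small amplitudes, drops back to zero when
# gamma_read matches gamma_a (the annealed state is recovered), then grows
# again beyond it: the same dip the experiment sees in the MSD.
```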
Physics
Study: Superconductivity switches on and off in 'magic-angle' graphene

With some careful twisting and stacking, MIT physicists have revealed a new and exotic property in "magic-angle" graphene: superconductivity that can be turned on and off with an electric pulse, much like a light switch. The discovery could lead to ultrafast, energy-efficient superconducting transistors for neuromorphic devices—electronics designed to operate in a way similar to the rapid on/off firing of neurons in the human brain.

Magic-angle graphene refers to a very particular stacking of graphene—an atom-thin material made from carbon atoms that are linked in a hexagonal pattern resembling chicken wire. When one sheet of graphene is stacked atop a second sheet at a precise "magic" angle, the twisted structure creates a slightly offset "moiré" pattern, or superlattice, that is able to support a host of surprising electronic behaviors.

In 2018, Pablo Jarillo-Herrero and his group at MIT were the first to demonstrate magic-angle twisted bilayer graphene. They showed that the new bilayer structure could behave as an insulator, much like wood, when they applied a certain continuous electric field. When they upped the field, the insulator suddenly morphed into a superconductor, allowing electrons to flow, friction-free. That discovery gave rise to "twistronics," a field that explores how certain electronic properties emerge from the twisting and layering of two-dimensional materials.

Researchers including Jarillo-Herrero have continued to reveal surprising properties in magic-angle graphene, including various ways to switch the material between different electronic states. So far, such "switches" have acted more like dimmers, in that researchers must continuously apply an electric or magnetic field to turn on superconductivity, and keep it on.

Now Jarillo-Herrero and his team have shown that superconductivity in magic-angle graphene can be switched on, and kept on, with just a short pulse rather than a continuous electric field. The key, they found, was a combination of twisting and stacking. In a paper appearing today in Nature Nanotechnology, the team reports that, by stacking magic-angle graphene between two offset layers of boron nitride—a two-dimensional insulating material—the unique alignment of the sandwich structure enabled the researchers to turn graphene's superconductivity on and off with a short electric pulse.

"For the vast majority of materials, if you remove the electric field, zzzzip, the electric state is gone," says Jarillo-Herrero, who is the Cecil and Ida Green Professor of Physics at MIT. "This is the first time that a superconducting material has been made that can be electrically switched on and off, abruptly. This could pave the way for a new generation of twisted, graphene-based superconducting electronics."

His MIT co-authors are lead author Dahlia Klein, Li-Qiao Xia, and David MacNeill, along with Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.

Flipping the switch

In 2019, a team at Stanford University discovered that magic-angle graphene could be coerced into a ferromagnetic state. Ferromagnets are materials that retain their magnetic properties, even in the absence of an externally applied magnetic field. The researchers found that magic-angle graphene could exhibit ferromagnetic properties in a way that could be tuned on and off.
This happened when the graphene sheets were layered between two sheets of boron nitride such that the crystal structure of the graphene was aligned to one of the boron nitride layers. The arrangement resembled a cheese sandwich in which the top slice of bread and the cheese orientations are aligned, but the bottom slice of bread is rotated at a random angle with respect to the top slice.

The result intrigued the MIT group. "We were trying to get a stronger magnet by aligning both slices," Jarillo-Herrero says. "Instead, we found something completely different."

In their current study, the team fabricated a sandwich of carefully angled and stacked materials. The "cheese" of the sandwich consisted of magic-angle graphene—two graphene sheets, the top rotated slightly at the "magic" angle of 1.1 degrees with respect to the bottom sheet. Above this structure, they placed a layer of boron nitride, exactly aligned with the top graphene sheet. Finally, they placed a second layer of boron nitride below the entire structure and offset it by 30 degrees with respect to the top layer of boron nitride.

The team then measured the electrical resistance of the graphene layers as they applied a gate voltage. They found, as others have, that the twisted bilayer graphene switched electronic states, changing between insulating, conducting, and superconducting states at certain known voltages. What the group did not expect was that each electronic state persisted rather than immediately disappearing once the voltage was removed—a property known as bistability. They found that, at a particular voltage, the graphene layers turned into a superconductor, and remained superconducting, even as the researchers removed this voltage.

This bistable effect suggests that superconductivity can be turned on and off with short electric pulses rather than a continuous electric field, similar to flicking a light switch. It isn't clear what enables this switchable superconductivity, though the researchers suspect it has something to do with the special alignment of the twisted graphene to both boron nitride layers, which enables a ferroelectric-like response of the system. (Ferroelectric materials display bistability in their electric properties.)

"By paying attention to the stacking, you could add another tuning knob to the growing complexity of magic-angle, superconducting devices," Klein says.

For now, the team sees the new superconducting switch as another tool researchers can consider as they develop materials for faster, smaller, more energy-efficient electronics. "People are trying to build electronic devices that do calculations in a way that's inspired by the brain," Jarillo-Herrero says. "In the brain, we have neurons that, beyond a certain threshold, they fire. Similarly, we now have found a way for magic-angle graphene to switch superconductivity abruptly, beyond a certain threshold. This is a key property in realizing neuromorphic computing."

More information: Dahlia Klein et al, Electrical switching of a bistable moiré superconductor, Nature Nanotechnology (2023). DOI: 10.1038/s41565-022-01314-x. www.nature.com/articles/s41565-022-01314-x
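The bistability the team reports behaves like a hysteretic, pulse-programmable switch: a short gate-voltage pulse past a threshold changes the state, and the state persists once the voltage is removed. A schematic analogy in Python (purely illustrative; the threshold values are invented, and a real device's state is set by gate voltage and carrier density, which are not modeled here):

```python
class BistableSwitch:
    """Toy Schmitt-trigger-style memory: the state persists at zero drive."""

    def __init__(self, v_set=1.0, v_reset=-1.0):
        self.v_set = v_set        # pulse above this switches ON (assumed)
        self.v_reset = v_reset    # pulse below this switches OFF (assumed)
        self.superconducting = False

    def pulse(self, voltage):
        """Apply a short voltage pulse; between pulses the drive is zero."""
        if voltage >= self.v_set:
            self.superconducting = True
        elif voltage <= self.v_reset:
            self.superconducting = False
        # A pulse between the thresholds leaves the state unchanged,
        # which is the bistable (memory) behavior.
        return self.superconducting

device = BistableSwitch()
for v in [0.5, 1.2, 0.0, -0.3, -1.5, 0.0]:
    print(f"pulse {v:+.1f} V -> superconducting = {device.pulse(v)}")
```

Running this prints False, True, True, True, False, False: the "on" state survives zero and small negative pulses, just as the reported superconducting state survives removal of the gate voltage.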
Physics
Pacific Northwest National Laboratory researchers are visualizing how shear forces rearrange metal atoms in ways that translate to improved characteristics—like greater strength, ductility, and conductivity—to inform the custom design of next-generation metals with broad applications from batteries to vehicles. Credit: Composite image by Shannon Colson | Pacific Northwest National Laboratory

How can studying metals manufacturing lead to longer-lasting batteries and lighter vehicles? It all comes down to physics.

Researchers at Pacific Northwest National Laboratory (PNNL) are investigating the effects of physical forces on metals by taking a direct look at atomic-level changes in metals undergoing shear deformation. The forces applied during shear deformation to change a metal's shape also rearrange its atoms, but not in the same way for every metal or alloy. Atomic arrangement can affect metal properties like strength, formability, and conductivity—so better understanding how atoms move during shear is a key part of ongoing efforts to custom design next-generation metals with specific properties from the atom up.

These visualizations form the foundation for understanding how shear deformation creates the improved characteristics observed in metals produced using Shear Assisted Processing and Extrusion (ShAPE), a PNNL innovation in metals manufacturing. During ShAPE manufacturing, metals are processed using shear forces to produce high-performance metal alloys for use in vehicles and other applications.

"If we understand what happens to metals on an atomic level during shear deformation, we can use that knowledge to improve countless other applications where metals experience those same forces—from improving battery life to designing metals with specific properties, like lighter, stronger alloys for more efficient vehicles," said Chongmin Wang, PNNL Laboratory fellow and leader of the research team studying the forces of induced shear deformation.

PNNL researchers also took a closer look at how atoms in an imperfect gold crystal—one with existing defects in its atomic structure—were rearranged during shear deformation. The existing defects in the atomic structure altered how the atoms moved, resulting in different structures that could yield different material properties. Credit: Animation by Sara Levine, Pacific Northwest National Laboratory

Atomic mysteries

Physical forces are universal. The forces that are purposefully applied during metals manufacturing to create alloys are the same forces that can damage structures inside batteries to cause eventual failure. Researchers also know that shear deformation can fundamentally alter the microstructure of metals in ways that can actually improve the material—making metals stronger, lighter, and more flexible. But how that happens is still a mystery.

"If you were to snap a picture of a track runner at the start and end of their run, you might think they didn't move at all," explained Arun Devaraj, PNNL materials scientist. "But if you film the runner while they are going around the track, you'll know just how far they traveled. It's the same here. If we understand exactly what happens to metals on the atomic level during shear deformation, we could apply that knowledge strategically to design materials with specific properties."
The gold standard

To watch how shear deformation rearranges metal atoms, researchers used a specialized probe inside a transmission electron microscope at PNNL, which is among only a handful of laboratories in the world with this capability. The research team used the microscope to record how individual rows of atoms within metals moved during shear deformation. They started by looking at gold—the standard because it is easiest to visualize on an atomic level.

When researchers watched gold undergoing shear, they saw that crystals of gold were divided into smaller grains. They noticed that natural defects in the arrangement of gold atoms changed how shear deformation moved the atoms. This is useful information because defects are common in metals during deformation, but don't behave the same in all metals—which can directly affect metal properties.

"The defects in crystal, grain size and microstructure in a metal can affect the metal's characteristics, like strength and toughness. That's why it's important to understand how shear deformation moves metal atoms around and affects the overall microstructure of the metal," said Shuang Li, PNNL postdoc and the first author on three studies sharing these results.

Next, the research team looked at copper. They observed how shear deformation creates nanotwins—structural features that make metals stronger. Observing an alloy of copper and niobium, they found that shear deformation affects atoms differently inside the copper and niobium phases of the metal mixture. This is a valuable insight that can inform how to manufacture alloys with specific properties using shear deformation.

The information gained from studying how these forces affect metals during controlled manufacturing processes can be directly translated and applied wherever metal experiences the same physical forces. For example, the atomic-level visualization capability at PNNL is also useful for understanding how materials used in extreme conditions (e.g., nuclear reactors) or clean energy applications (e.g., hydrogen transmission lines and storage tanks) will respond to external stresses.

Longer-lasting batteries, lighter alloys for more efficient vehicles, and custom design of next-generation metals with improved strength and conductivity could all be possible by better understanding the atomic physics of metals manufacturing.

These studies appear in three research publications: In-situ TEM observation of shear induced microstructure evolution in Cu-Nb alloy in the journal Scripta Materialia, Nanotwin assisted reversible formation of low angle grain boundary upon reciprocating shear load in the journal Acta Materialia, and In-situ observation of deformation twin associated sub-grain boundary formation in copper single crystal under bending in Materials Research Letters.

More information: Shuang Li et al, In-situ TEM observation of shear induced microstructure evolution in Cu-Nb alloy, Scripta Materialia (2021). DOI: 10.1016/j.scriptamat.2021.114214 Shuang Li et al, Nanotwin assisted reversible formation of low angle grain boundary upon reciprocating shear load, Acta Materialia (2022).
DOI: 10.1016/j.actamat.2022.117850 Shuang Li et al, In-situ observation of deformation twin associated sub-grain boundary formation in copper single crystal under bending, Materials Research Letters (2022). DOI: 10.1080/21663831.2022.2057201
Physics
The T-P phase diagram of RbMn6Bi5 and the high-pressure approach. (Courtesy: J-G Cheng)

Researchers from the Institute of Physics, Chinese Academy of Sciences, Beijing, have spotted the tell-tale signs of superconductivity in a quasi-one-dimensional manganese-based material, RbMn6Bi5. The material, which has a superconducting transition temperature (Tc) of 9.5 K at a pressure of around 15 GPa, is the latest in a relatively new family of superconductors, the first of which was discovered in 2015 by the same group.

The classical theory of superconductivity (known as BCS theory after the initials of its discoverers) states that below a specific critical temperature, the fermionic electrons in a metal can pair up to create bosons called Cooper pairs. These bosons form a phase-coherent condensate that can flow through a material without scattering – with superconductivity as a consequence.

Mn-based superconductors

In 2015, researchers led by Jin-Guang Cheng discovered the first manganese-based superconductor, MnP. This material has a very low Tc of just 1 K, and its properties are difficult to control because it is a three-dimensional binary compound. Nevertheless, the fact that it superconducts at all was something of a surprise, since conventional wisdom held that manganese-based materials cannot do so.

Since then, the group has been screening other Mn-based magnetic materials under high pressures in the hope of unearthing further examples of superconductivity. In the new work, the team found what they were looking for in a group of one-dimensional ternary or complex Mn-based compounds. “This study and another related one published in Physical Review Letters establish AMn6Bi5 (where A = K or Rb) as a new class of ternary Mn-based superconductors with relatively high Tc,” Cheng explains. “The optimal Tc reaches almost 10 K, which is an order of magnitude higher than that of MnP, implying that the Tc of Mn-based superconductors has the potential to go higher.”

Cheng adds that the quasi-one-dimensional crystal structures and chemical composition of materials in the AMn6Bi5 family make it easier to tune their physical properties through chemical substitutions and/or other structural/electronic regulations. This means that such materials could be used as the basis for designing other Mn-based superconductors.

“Quantum criticality”

In addition to the material’s relatively high Tc, Cheng and colleagues found that the upper critical field for the superconducting state (that is, the field at which superconductivity disappears) is also high, exceeding the Pauli paramagnetic limit Hp = 1.84Tc. According to Cheng, this implies the presence of strong electron-phonon coupling or exotic pairing mechanisms.

The researchers now plan to investigate the nature of the material’s superconductivity using microscopic probes such as nuclear magnetic resonance measurements. “We also hope to tune the physical properties of K/RbMn6Bi5 at ambient pressure through techniques such as chemical substitutions and gate-voltage regulations,” Cheng tells Physics World. “Ultimately, our goal is to find a Mn-based superconductor at ambient pressure with a Tc of over 10 K.”

They detail their work in Chinese Physics Letters.
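For context, a quick worked number (this is the standard weak-coupling BCS estimate, with the field in tesla and Tc in kelvin; the unit convention is my assumption, not stated in the article):

```latex
\mu_0 H_p\,[\mathrm{T}] \approx 1.84\, T_c\,[\mathrm{K}]
\;\Rightarrow\;
\mu_0 H_p \approx 1.84 \times 9.5 \approx 17.5\ \mathrm{T}
\quad \text{at } T_c = 9.5\ \mathrm{K},
```

so a measured upper critical field above roughly 17.5 T is what points to strong coupling or an exotic pairing mechanism.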
Physics
This episode of the Physics World Weekly podcast features Steven Prohira, who is co-leader of the Radar Echo Telescope collaboration, which aims to detect high-energy cosmic neutrinos by sending radar waves through an Antarctic ice sheet. Based at the University of Kansas, Prohira explains the physics behind the project and talks about the fascinating history of previous attempts to use radar to detect particles from outer space. Also on hand is Physics World’s Matin Durrani, who chats about the remarkable life of the Hungarian-American physicist Leo Szilard – who encouraged the US to develop nuclear weapons during the Second World War, but later opposed their use.
Physics
SFGATE columnist Drew Magary on why this isn't the holy grail — but it's still a cause for celebration

Dec. 16, 2022, updated 4:49 p.m.

Nuclear fusion on the surface of the sun. (DrPixel/Getty Images)

Since the beginning of time, mankind has yearned to create the sun. And for over 70 years, scientists have known how to create an artificial nuclear fusion reaction, forcing positively charged particles to slam together and fuse, generating an enormous amount of heat. It’s something that the sun has been doing for roughly 4.6 billion years.

But whenever humans have tried it in the past, it’s taken enormous amounts of energy to heat up the protons and get them moving fast enough to overcome their natural repulsion to one another — far more than has been generated by the reactions themselves. Containing the reaction — that is, keeping it from turning into a bomb — has also proven dicey. Achieving those two goals has been a holy grail, a dream that holds enormous promise for clean energy, long-distance space flight and other sci-fi advances. It’s also been “20 years away” for as long as the concept has been around.

That all changed on Dec. 5, at 1:03 a.m., when a handful of night-owl scientists at the National Ignition Facility at Lawrence Livermore National Laboratory bombarded a tiny capsule of frozen hydrogen with 192 intense lasers, forcing the atoms to bounce around the container at incredibly high speed. The protons in these atoms were positively charged, which means they repel each other. But when they moved fast inside a relatively confined space… well, sometimes they couldn’t help but bump into each other. And forcing the protons together triggered them to fuse, releasing a burst of energy 1.5 times greater than the energy the lasers had put into the capsule. It was the first time scientists ever managed to trigger a process called ignition, the same series of chain reactions that power the sun, producing more heat than they poured in.

In other words, they f—king did it. The idea of tabletop fusion has been a white whale for both scientists and consumers for the bulk of my lifetime. But now, the promise looks a lot closer to reality: Truly clean energy. No uranium. No plutonium. No nuclear waste. No gas, no coal. Hydrogen in, energy (and maybe a bit of helium) out. The lab didn’t explode. No one died in the reaction, or transformed into a superpowered octopus. So congratulations. You lived to see the dawn of the fusion age.

…Or did you? There are a lot of signs that the narrative around this discovery is a classic exercise in American salesmanship. First off, experiments like the one that NIF conducted do, in fact, produce radiation, in the form of stray neutrons. They can make anything they pass through radioactive, and not in the fun comic book way.

More damning, the scientists only produced a net gain in energy if you count the power from those 192 lasers themselves, and not the power needed to run those lasers. These are very hot, concentrated lasers, and running them takes an awful lot of power — hundreds of times the amount of energy produced in the reactor. So the official energy surplus announced this week is something of a trick of accounting.

“Lasers are an incredibly inefficient way to do anything,” says Phillip Broughton, a health physicist (if you’re wondering what a "health physicist" is, Broughton says it’s actually a radiation safety specialist) and laser safety officer who previously worked at the Livermore lab.
“It's so much more power at the wall to make the lasers… It's bulls—t for power. It's not a power generation thing, and never will be.”

Broughton also noted that, while the scientists were able to produce an energy surplus from the fusion reaction, there was no way to HARNESS that energy. There was no water tank in their testing room that made steam to spin a turbine. There was just a brilliant burst of heat, and then nothing.

Still, those limitations didn’t stop other physicists I spoke with from registering their excitement (or, at the very least, their admiration) for what the Lawrence Livermore team has accomplished.

“This is huge,” said Matthew Bellis, associate professor in the Department of Physics and Astronomy at Siena College and a member of the CMS Collaboration involving the Large Hadron Collider. “There's a running joke that fusion energy is always 20 years in the future. You go back to when they first started working on this, probably in the '50s or '60s, and they were like, ‘Well, it's 20 years down the road.’ Then 20 years later, ‘Well, just give us 20 more years, and we'll have this. We'll be able to create more energy out than in.' That's the holy grail.”

Bellis, like Broughton, is well aware of the power requirements of those big honking lasers. He believes that what the NIF team did qualifies as ignition, and pointed out that the techniques used could be applied by other labs, with more efficient lasers. Clean nuclear power generation was never the priority of the Lawrence Livermore lab, which was founded during the Cold War to help develop nuclear weapons and is now charged with ensuring the safety of America’s aging nuclear arsenal. Other labs, particularly in the private sector, have already been dedicating enormous resources to this project; now, they have something much closer to a blueprint for striking atomic gold.

There’s also the possibility that this is less of a scientific breakthrough, and more of a psychological one — one that Bellis believes could potentially accelerate a global race to producing fusion energy on a commercial scale. Let’s be frank: If something isn’t gonna happen in our lifetimes, we’re far less inclined to ever give a f—k about it, whether we’re investing in a technology or voting for our government to do so. Now, today, it feels possible — even reasonable — to give a f—k about fusion, because it might actually be coming. There’s a very big difference between something being theoretically possible and it being definitively so, and this event made the idea of fusion much more tangible, attainable, to a lot of very important and oddly useful people.

“Funding agencies can show the general public that this money is being invested in something worthwhile. I think that's huge. I think it's really big to be able to show people like, ‘No, this is here. Now we just have to figure out how to scale it up,’” Bellis told me. “To do a proof of principle that mankind can produce the same conditions that happen in the sun, if only for like a millisecond, is huge in giving everybody the confidence that there is a path forward for this type of energy.”

That confidence is no small matter. It has the potential to change both how the world sees fusion power and how aggressively it chases after it. You have been told, many times over, there’s no stopping the climate apocalypse, that democracy is on its deathbed, that guns will win, and on and on and on.
It’s easy to forget that the future is unwritten, and that humankind is much more resourceful than we’re often given credit for.

But listen. We went to the f—king MOON. We’ve put robots on Mars. We’ve launched two probes that are currently floating outside of our home solar system. Almost all of those breakthroughs were preceded by accidents, wrong guesses, opposition, tragedy, red herrings and false hopes. We kept trying anyway. What the scientists in Livermore did was a reminder that we have some measure of control over our destiny — and that we don’t have to choose our own demise as that destiny.

We have to keep trying, and now, hopefully, we will. Because if we let reality hold us back, we’re all gonna f—king die.

Editor's note: This story was updated at 9 a.m., Dec. 16, to correct the spelling of Siena College.
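To make Broughton's accounting point concrete, here is the arithmetic with the widely reported figures (approximate public numbers, not taken from this column): the lasers delivered about 2.05 megajoules to the target, the fusion reactions released about 3.15 megajoules, and charging the laser system drew roughly 300 megajoules from the grid.

```python
# Rough energy accounting for the Dec. 5 NIF shot (approximate figures).
laser_energy_mj = 2.05    # laser energy delivered to the target
fusion_yield_mj = 3.15    # energy released by the fusion reactions
wall_plug_mj = 300.0      # grid energy to charge the laser system (approx.)

target_gain = fusion_yield_mj / laser_energy_mj     # ~1.5: "ignition"
wall_plug_gain = fusion_yield_mj / wall_plug_mj     # ~0.01: the catch

print(f"target gain:    {target_gain:.2f}")
print(f"wall-plug gain: {wall_plug_gain:.3f}")
```

Counted at the target, the gain is about 1.5; counted at the wall, it is about one percent, which is the "trick of accounting" the column describes.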
Physics
Statue of Leonardo da Vinci. (Image: Victor Ovies Arenas via Getty Images)

More than 500 years ago, Leonardo da Vinci was watching air bubbles float up through water—as you do when you’re a Renaissance-era polymath—when he noticed that some bubbles inexplicably started spiraling or zigzagging instead of making a straight ascent to the surface.

For centuries, nobody has offered a satisfying explanation for this weird periodic deviation in the motion of some bubbles through water, which has been called “Leonardo’s paradox.” Now, a pair of scientists think they may have finally solved the longstanding riddle by developing new simulations that match high-precision measurements of the effect, according to a study published on Tuesday in Proceedings of the National Academy of Sciences. The results suggest that bubbles can reach a critical radius that pushes them into new and unstable paths due to interactions between the flow of water around them and the subtle deformations of their shapes.

“The motion of bubbles in water plays a central role for a wide range of natural phenomena, from the chemical industry to the environment,” said authors Miguel Herrada and Jens Eggers, who are fluid physics researchers at the University of Seville and the University of Bristol respectively, in the study. “The buoyant rise of a single bubble serves as a much-studied paradigm, both experimentally and theoretically.”

“Yet, in spite of these efforts, and in spite of the ready availability of enormous computing power, it has not been possible to reconcile experiments with numerical simulations of the full hydrodynamic equations for a deformable air bubble in water,” the team continued. “This is true in particular for the intriguing observation, made already by Leonardo da Vinci, that sufficiently large air bubbles perform a periodic motion, instead of rising along a straight line.”

Indeed, bubbles are so ubiquitous in our daily lives that it can be easy to forget that they are dynamically complicated and often tricky to study experimentally. Rising air bubbles in water are influenced by a host of intersecting forces—such as fluid viscosity, surface friction, and any surrounding contaminants—that contort the shapes of the bubbles and shift the dynamics of the water flowing around them.

What da Vinci noted, and other scientists have since confirmed, is that air bubbles with a spherical radius much smaller than a millimeter tend to follow a straightforward upward path through water, whereas larger bubbles develop a wobble that results in periodic spiral or zigzag trajectories. Herrada and Eggers used the Navier–Stokes equations, which are a mathematical framework for describing the motion of viscous fluids, to simulate the complex interplay between the air bubbles and their watery medium. The team was able to pinpoint the spherical radius that triggers this tilt—0.926 millimeters, which is about the size of a pencil tip—and describe the possible mechanism behind the squiggly motion.

A bubble that has exceeded the critical radius becomes more unstable, producing a tilt that changes the curvature of the bubble. The shift in curvature increases the velocity of water around the surface of the bubble, which kicks off the wobble motion.
The bubble then returns to its original position due to the pressure imbalance created by the deformations in its curved shape, and repeats the process on a periodic cycle.

In addition to resolving a 500-year-old paradox, the new study could shed light on a host of other questions about the mercurial behavior of bubbles, and other objects that defy easy categorization.

“While it was previously believed that the bubble’s wake becomes unstable, we now demonstrate a new mechanism, based on the interplay between flow and bubble deformation,” Herrada and Eggers concluded in the study. “This opens the door to the study of small contaminations, present in most practical settings, which emulate a particle somewhere in between a solid and a gas.”
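For reference, the incompressible Navier–Stokes equations the authors solved take the standard textbook form below (velocity u, pressure p, density ρ, kinematic viscosity ν, gravity g; the paper's free-surface boundary conditions for the deformable bubble are omitted here):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{u} + \mathbf{g},
\qquad
\nabla \cdot \mathbf{u} = 0 .
```

The nonlinear advection term and the coupling to a moving, deformable boundary are what make the problem hard enough that it resisted full numerical treatment for so long.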
Physics
Dr. Charles Lim, Global Head of Quantum Communications and Cryptography, JPMorgan Chase. (Courtesy: JPMorgan Chase)

JPMorgan Chase has hired a Singapore-based quantum computing expert to be the bank's global head for quantum communications and cryptography, according to a memo obtained by CNBC.

Charles Lim, an assistant professor at the National University of Singapore, will be focused on exploring next-generation computing technology in secure communications, according to the memo from Marco Pistoia, who runs the bank's global technology applied research group. Lim is a "recognized worldwide leader" in the area of quantum-powered communications networks, according to Pistoia.

Hired from IBM in early 2020, Pistoia has built a team at JPMorgan focused on quantum computing and other nascent technologies. Unlike classical computers, which store information as either zeros or ones, quantum computing hinges on quantum physics. Instead of being restricted to a zero or a one, a qubit can exist in a superposition: a weighted combination of both states at once.

'New horizons'

The futuristic technology, which involves keeping hardware at super-cold temperatures and is years away from commercial use, promises the ability to solve problems far beyond the reach of today's traditional computers. Technology giants including Alphabet and IBM are racing toward building a reliable quantum computer, and financial firms including JPMorgan and Visa are exploring possible uses for it.

"New horizons are going to become possible, things we didn't think would be possible before," Pistoia said in a JPMorgan podcast interview.

In finance, machine-learning algorithms will improve to help fraud detection on transactions and other areas that involve "prohibitive complexity," including portfolio optimization and options pricing, he said. Drug development, materials science for batteries and other areas will be transformed by the dramatically advanced computing, he added.

But if and when the advanced computing technology becomes real, the encryption techniques that underpin the world's communications and financial networks could immediately be rendered useless. That has spurred the study of next-generation quantum-resistant communication networks, which is Lim's area of expertise.

'Quantum supremacy'

New forms of cryptography and secure messaging are needed ahead of so-called "quantum supremacy," which is the point when quantum computers are able to perform calculations beyond the scope of traditional computers in any reasonable timeframe, Pistoia said during the podcast. That could happen by the end of the decade, he said.

An earlier moment will be the "quantum advantage," which is when the new computers are more powerful and accurate than classical computers, but the two will be competitive. That could happen in as soon as two to three years from now, he said.

"Even now that quantum computers are not yet that powerful, we don't have so much time left," Pistoia said.
That's because bad actors are already preserving private communications in the hope of decrypting them later, when the technology allows for it, he said.

Lim will "pursue both foundational and applied research in quantum information, focusing on innovative digital solutions that will enhance the security, efficiency, and robustness of financial and banking services," Pistoia said in the memo.

Lim is a recipient of the National Research Foundation Fellowship in Singapore and won the National Young Scientist Award in 2019 for his work in quantum cryptography, said Pistoia. Last year, Lim was asked to lead his country's effort to create quantum-resistant digital solutions, and he has been involved in international efforts to standardize quantum security techniques, Pistoia added.
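To unpack that description of a qubit, here is a minimal sketch using generic textbook quantum mechanics (nothing specific to JPMorgan's work; the amplitudes are arbitrary examples):

```python
import numpy as np

# A qubit state is a normalized pair of complex amplitudes (alpha, beta)
# over the basis states |0> and |1>; a measurement yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.
alpha, beta = 3 / 5, 4j / 5              # example amplitudes (assumed)
assert np.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1.0)

probs = [abs(alpha) ** 2, abs(beta) ** 2]
samples = np.random.default_rng(0).choice([0, 1], size=10, p=probs)
print("P(0), P(1) =", probs)             # [0.36, 0.64]
print("measurements:", samples)          # each readout is a definite 0 or 1
```

The superposition lives in the amplitudes; any single measurement still returns a plain zero or one, which is why "a combination of both" is about the state before readout, not the readout itself.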
Physics
Image caption: Students sitting exams in 2023 will have experienced serious disruption because of the pandemic. (Image source: Getty Images)

Exam students will not have to memorise formulae and equations for some GCSE subjects next year. In mathematics, physics and combined science, pupils will be provided with a sheet containing formulae and equations, similar to the 2022 exams. Many of the students sitting exams will have experienced serious disruption to their studies because of the pandemic.

The exams regulator Ofqual said the move would offer the support students needed "as we move towards normality". In September, it announced exams in England would mostly return to normal next summer. Students will no longer have advance information of exam content. And after three years of adjustments, grades, though still "protected", will be "much closer" to pre-pandemic levels.

'Significant disruption'

A consultation on the proposal to allow supporting materials found 93.7% of teachers and students working towards 2023 GCSEs said it was necessary. The students "will have faced significant disruption to their education due to Covid-19" and it would be a challenge to "get through all of the specification", one teacher said. And one of the students highlighted how they had gone into lockdown in Year 9, after the start of their GCSE maths course.

Chief regulator Dr Jo Saxton said: "In 2023, students will again have the opportunity to show what they know and can do in exams," and "together with some protection on grading", Ofqual would "offer the degree of support students need as we move towards normality".
Physics
The announcement this week of fusion ignition is a major scientific advance, one that was decades in the making. For the first time, a controlled fusion experiment produced more energy than the laser energy used to spark it, replicating the fusion that powers the sun.

On Dec. 5, a team at Lawrence Livermore National Laboratory's National Ignition Facility (NIF) achieved the milestone. As noted by Kim Budil, director of the laboratory: "Crossing this threshold is the vision that has driven 60 years of dedicated pursuit — a continual process of learning, building, expanding knowledge and capability, and then finding ways to overcome the new challenges that emerged."

The nuclear fusion feat has broad implications, fueling hopes of clean, limitless energy. As for space exploration, one upshot from the landmark research is the prospect of attaining the long-held dream of future rockets driven by fusion propulsion. But is that prospect still a pipe dream, or is it now deemed reachable? If so, how much of a future are we looking at?

Related: Major breakthrough in pursuit of nuclear fusion unveiled by US scientists

Data points

The fusion breakthrough is welcome and exciting news for physicist Fatima Ebrahimi at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory in New Jersey. Ebrahimi said the NIF success is extraordinary. "Any data points obtained showing fusion energy science achievement is fantastic! Fusion energy gain of greater than one is quite an achievement," Ebrahimi said. However, engineering innovations are still requisite for NIF to be commercially viable as a fusion reactor, she added.

Ebrahimi is studying how best to propel humans at greater speeds out to Mars and beyond. The work involves a new concept for a rocket thruster, one that exploits the mechanism behind solar flares. The idea is to accelerate particles using "magnetic reconnection," a process found throughout the universe, including at the surface of the sun. It's when magnetic field lines converge, suddenly separate, and then join together again, producing loads of energy. By using more electromagnets and more magnetic fields, Ebrahimi envisions the ability to create, in effect, a knob-turning way to fine-tune velocity.

As for the NIF victory impacting space exploration, Ebrahimi said compact fusion concepts are still needed for space applications. "Heavy components for space applications are not favorable," she said.

Physicist Fatima Ebrahimi in front of an artistic rendering of a fusion rocket. (Image credit: Elle Starkman, Princeton Plasma Physics Laboratory Office of Communications)

Necessary precursor

Similar in thought is Paul Gilster, writer/editor of the informative Centauri Dreams website. "Naturally I celebrate the NIF's accomplishment of producing more energy than was initially put into the fusion experiment. It's a necessary precursor toward getting fusion into the game as a source of power," Gilster told Space.com. Building upon the notable breakthrough is going to take time, he said.

"Where we go as this evolves, and this seems to be several decades away, is toward actual fusion power plants here on Earth.
But as to space exploration, we then have to consider how to reduce working fusion into something that can fit the size and weight constraints of a spacecraft," said Gilster.

There's no doubt in Gilster's mind that fusion can be managed for space exploration purposes, but he suspects that's still more than a few decades in the future. "This work is heartening, then, but it should not diminish our research into alternatives like beamed energy as we consider missions beyond the solar system," said Gilster.

The target chamber of Lawrence Livermore National Laboratory's National Ignition Facility. (Image credit: Lawrence Livermore National Laboratory)

Exhaust speeds

Richard Dinan is the founder of Pulsar Fusion in the United Kingdom. He's also the author of the book "The Fusion Age: Modern Nuclear Fusion Reactors."

"Fusion propulsion is a much simpler technology to apply than fusion for energy. If fusion is achievable, which at last people are starting to see it is, then both fusion energy and propulsion are inevitable," Dinan said. "One gives us the ability to power our planet indefinitely, the other the ability to leave our solar system. It's a big deal, really."

Exhaust speeds generated from a fusion plasma, Dinan said, are calculated to be roughly one thousand times that of a Hall-effect thruster, electric propulsion hardware that makes use of electric and magnetic fields to create and eject a plasma. "The financial implications that go with that make fusion propulsion, in our opinion, the single most important emerging technology in the space economy," Dinan said.

Pulsar Fusion has been busy working on a direct fusion drive initiative, a steady-state fusion propulsion concept that's based on a compact fusion reactor. According to the group's website, Pulsar Fusion has proceeded to a Phase 3 task, manufacturing an initial test unit. Static tests are slated to occur next year, followed by an in-orbit demonstration of the technology in 2027.

Pulsar Fusion's Direct Fusion Drive, a compact nuclear fusion engine that could provide both thrust and electrical power for spaceships. (Image credit: Pulsar Fusion)
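As an aside on Dinan's exhaust-speed comparison, the Tsiolkovsky rocket equation shows why exhaust velocity matters so much. The sketch below uses illustrative numbers: an exhaust velocity of about 20 km/s is typical of a Hall-effect thruster, the 1000x multiplier is the article's claim, and the mass ratio is an assumption:

```python
import math

def delta_v(exhaust_velocity_m_s, mass_ratio):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / m1)."""
    return exhaust_velocity_m_s * math.log(mass_ratio)

mass_ratio = 5.0              # wet-to-dry mass ratio (assumed)
v_hall = 20_000.0             # m/s, typical Hall-effect thruster exhaust
v_fusion = 1000 * v_hall      # the ~1000x figure quoted in the article

for name, ve in [("Hall thruster", v_hall), ("fusion plasma", v_fusion)]:
    print(f"{name}: delta-v = {delta_v(ve, mass_ratio) / 1000:.0f} km/s")
# Hall thruster: ~32 km/s; fusion plasma: ~32,000 km/s (roughly 10% of
# light speed), which is why fusion exhaust is pitched for interstellar
# precursor missions.
```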
Aspirational glow

"The net energy gain reported in the press is certainly a significant milestone," said Ralph McNutt, a physicist and chief scientist for space science at the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland. "As more comes out, it will be interesting to see what the turning point was that pushed this achievement past the previous unsuccessful attempts," he said.

McNutt said that getting to a commercial electric power station from this recent milestone is likely to be a tough assignment. "But the tortoise did eventually beat the hare. Tenacity is always the virtue when one is handling tough technical problems."

With respect to space exploration, it certainly does not hurt in providing an example that great things can still be accomplished, McNutt said. "All of that said, it should be still a sobering thought that despite all of the work on NERVA/Rover there is still no working nuclear thermal rocket engine, and the promise of nuclear electric propulsion for space travel only had a brief glimmer with SNAP-10A in April of 1965," recalled McNutt.

The actual use of ICF (inertial confinement fusion) in a functional spacecraft has been a long-held dream, McNutt said, but that is very unlikely to change for a long time to come.

The cover of a 1989 NASA Lewis Research Center study on inertial confinement fusion propulsion. (Image credit: NASA)

"Space travel has always been tough. That NASA has 'blazed the trail' that many commercial entities are now following does not mean space has gotten easier, but the new ICF results have added to the aspirational glow on the horizon of the future," McNutt added. "That said, no one should be fooled into thinking that space will somehow not be tough someday. It's called 'rocket science,' with all that implies in popular culture, for a reason," he concluded.

Leonard David is an award-winning space journalist who has been reporting on space activities for more than 50 years. Currently writing as Space.com's Space Insider Columnist among his other projects, Leonard has authored numerous books on space exploration, Mars missions and more, with his latest being "Moon Rush: The New Space Race," published in 2019 by National Geographic. He also wrote "Mars: Our Future on the Red Planet," released in 2016 by National Geographic. Leonard has served as a correspondent for SpaceNews, Scientific American and Aerospace America for the AIAA. He has received many awards, including the first Ordway Award for Sustained Excellence in Spaceflight History, in 2015, at the AAS Wernher von Braun Memorial Symposium. You can find Leonard's latest projects at his website and on Twitter.
Physics
The incredible feat of developing Covid-19 vaccines so rapidly showcased science at its very best. But as we applauded the heroic effort of our health care workers in March 2020, one of my neighbors asked, "Why hasn't AI helped?" A fair question. Machine-learning techniques contributed in some specific areas, and they are helping with future pandemic preparedness today. But, in reality, this test came too soon for AI to show its full promise.

Eight months later, however, still in the midst of the pandemic, AI solved a nearly 50-year-old grand challenge in biology: the protein-structure prediction problem. Life science experts described this breakthrough as "the singular and momentous advance in life science that demonstrates the power of AI." Since that time, AI-powered protein-structure prediction has transformed biology. From accelerating research into new plastic-eating enzymes to expanding our understanding of how cells work, it is helping biologists discover new solutions to countless problems that can benefit the world. AI has also made progress in other areas of science, like astronomy, particle physics, organic chemistry, medical imaging, conservation, and fusion.

Breakthroughs like these will keep coming. But we are also on the cusp of a more fundamental shift. In 2023, we will see artificial intelligence finally emerge as an essential and everyday tool for scientists across domains and disciplines. Just as millions of office workers today rely on email and word processors, scientists will begin to rely on machine-learning models and AI systems in the same way. For instance, thanks to AI-powered protein-structure prediction, what once took biologists thousands of dollars or years of painstaking research is as effortless as a Google search. We are certain to see this extend into adjacent fields. In genomics, AI will enable scientists to unlock a deeper understanding of diseases and explore therapies that treat them. As we build more generalized systems that learn the underlying principles governing complex problems, we'll see AI's impact cut across traditionally isolated disciplines. Researchers investigating all sorts of problems will use it as a tool to augment human intelligence—optimizing processes, automating procedures, informing new theories, and providing a better understanding of uncertainty.

The drought in Europe, floods in South Asia, and extreme weather seen globally in recent years have shown the urgency of the climate crisis facing us. We must embrace more sustainable consumption and ambitious policymaking, but we cannot rely on this alone. AI and machine learning are also starting to help build better predictive models of what is happening to the climate. New meteorological models, like nowcasting, will help us make better decisions and plans at the individual, national, and global levels. Digital twins—real-time virtual representations of real-world physical systems—could give us a better understanding of climate change, the price of inaction, and the likely impact of policy or technological solutions. AI and machine learning can provide the exponential technological advance we need to overcome the vastly complex problems that science and humanity are now grappling with.

When they come, these scientific breakthroughs capture the imagination but often create misplaced expectations. It's important that, in inevitably falling short, we don't lower our ambitions.
Instead, we must remind ourselves that these are tools, and the benefits come when scientists, researchers, and engineers use them in their everyday work. We’ve already seen that transformation in biology. In 2023, we will see AI finally take its place in every scientist's toolbox. I can’t wait to see what they discover.
Physics
Image: 3D illustration of the simulated air blast and generated blast wave 10 seconds following the detonation of a 750 kT nuclear warhead above a typical metropolitan city; the radius of the shock bubble at ground level is 4.6 km. (Credit: I. Kokkinakis and D. Drikakis, University of Nicosia, Cyprus)

WASHINGTON, Jan. 17, 2023 – There is no good place to be when a nuclear bomb goes off. Anything too close is instantly vaporized, and radiation can pose a serious health threat even at a distance. In between, there is another danger: the blast wave generated by the explosion, which can produce airspeeds strong enough to lift people into the air and cause serious injury. In Physics of Fluids, by AIP Publishing, researchers from the University of Nicosia simulated an atomic bomb explosion from a typical intercontinental ballistic missile and the resulting blast wave to see how it would affect people sheltering indoors.

In the moderate damage zone, the blast wave is enough to topple some buildings and injure people caught outdoors. However, sturdier buildings, such as concrete structures, can remain standing. The team used advanced computer modeling to study how a nuclear blast wave speeds through a standing structure. Their simulated structure featured rooms, windows, doorways, and corridors and allowed them to calculate the speed of the air following the blast wave and determine the best and worst places to be.

"Before our study, the danger to people inside a concrete-reinforced building that withstands the blast wave was unclear," said author Dimitris Drikakis. "Our study shows that high airspeeds remain a considerable hazard and can still result in severe injuries or even fatalities."

According to their results, simply being in a sturdy building is not enough to avoid risk. The tight spaces can increase airspeed, and the blast wave causes air to reflect off walls and bend around corners. In the worst cases, this can produce a force equivalent to 18 times a human's body weight.

"The most dangerous critical indoor locations to avoid are the windows, the corridors, and the doors," said author Ioannis Kokkinakis. "People should stay away from these locations and immediately take shelter. Even in the front room facing the explosion, one can be safe from the high airspeeds if positioned at the corners of the wall facing the blast."

The authors stress that the time between the explosion and the arrival of the blast wave is only a few seconds, so quickly getting to a safe place is critical. "Additionally, there will be increased radiation levels, unsafe buildings, damaged power and gas lines, and fires," said Drikakis. "People should be concerned about all the above and seek immediate emergency assistance."

While the authors hope that their advice will never need to be followed, they believe that understanding the effects of a nuclear explosion can help prevent injuries and guide rescue efforts.

The article "Nuclear explosion impact on humans indoors" is authored by Ioannis William Kokkinakis and Dimitris Drikakis. It will appear in Physics of Fluids on Jan. 17, 2023 (DOI: 10.1063/5.0132565). After that date, it can be accessed at https://doi.org/10.1063/5.0132565.

ABOUT THE JOURNAL
Physics of Fluids is devoted to the publication of original theoretical, computational, and experimental contributions to the dynamics of gases, liquids, and complex fluids. See https://aip.scitation.org/journal/phf.
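As a rough, illustrative cross-check on the kind of airspeeds involved (this is a back-of-the-envelope dynamic-pressure estimate, not the authors' CFD model, and every parameter below is an assumed round number):

    # Rough dynamic-pressure estimate of blast-wind force on a person.
    # Illustrative only; not the CFD model used in the Physics of Fluids study.

    RHO_AIR = 1.2      # kg/m^3, sea-level air density (assumed)
    AREA = 0.7         # m^2, frontal area of a standing adult (assumed)
    DRAG_COEFF = 1.0   # blunt-body drag coefficient (assumed)
    MASS = 70.0        # kg, body mass (assumed)
    G = 9.81           # m/s^2, gravitational acceleration

    def force_in_body_weights(airspeed_m_s: float) -> float:
        """Aerodynamic force F = Cd * (0.5 * rho * v^2) * A, in units of body weight."""
        q = 0.5 * RHO_AIR * airspeed_m_s ** 2   # dynamic pressure, Pa
        return DRAG_COEFF * q * AREA / (MASS * G)

    for v in (50, 100, 170):
        print(f"{v:>4} m/s -> ~{force_in_body_weights(v):.1f}x body weight")

Under these assumptions, channelled indoor flow of roughly 170 m/s is enough to reach the paper's "18 times body weight" figure, which is one way to see why corridors and doorways that accelerate the flow are the places to avoid.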
Physics
By Evan Gough

Iron is one of the most abundant elements in the Universe, along with lighter elements like hydrogen, oxygen, and carbon. Out in interstellar space, there should be abundant quantities of iron in its gaseous form. So why, when astrophysicists look out into space, do they see so little of it?

First of all, there's a reason that iron is so plentiful, and it's related to a thing in astrophysics called the iron peak. In our Universe, elements other than hydrogen and helium are created by nucleosynthesis in stars. (Hydrogen, helium, and some lithium and beryllium were created in Big Bang nucleosynthesis.) But the elements aren't created in equal amounts; a plot of elemental abundances against atomic number shows a pronounced peak at iron.

The reason for the iron peak has to do with the energy required for nuclear fusion and for nuclear fission. For elements lighter than iron, fusion releases energy and fission consumes it. For elements heavier than iron, the reverse is true: it's fusion that consumes energy and fission that releases it. This comes down to what's called binding energy in nuclear physics. That makes sense if you think of stars and atomic energy. We use fission to generate energy in nuclear power plants with uranium, which is much heavier than iron. Stars create energy with fusion, using hydrogen, which is much lighter than iron.

In the ordinary life of a star, elements up to and including iron are created by nucleosynthesis. If you want elements heavier than iron, you have to wait for a supernova to happen, and for the resulting supernova nucleosynthesis. Since supernovae are rare, the heavier elements are rarer than the light elements. It's possible to spend an extraordinary amount of time going down the nuclear physics rabbit hole, and if you do, you'll encounter an enormous amount of detail. But basically, for the reasons above, iron is relatively abundant in our Universe. It's stable, and it requires an enormous amount of energy to fuse iron into anything heavier.

Why Can't We See It?

We know that iron in solid form exists in the cores and crusts of planets like our own. And we also know that it's common in gaseous form in stars like the Sun. But the thing is, it should also be common in interstellar environments in its gaseous form, and we just can't see it. Since we know it has to be there, the implication is that it's wrapped up in some other process or solid form or molecular state. And even though scientists have been looking for decades, and even though it should be the fourth-most abundant element in the solar abundance pattern, they haven't found it. Until now.

Now a team of cosmochemists from Arizona State University say they've solved the mystery of the missing iron. They say that the iron has been hiding in plain sight, in combination with carbon molecules in things called pseudocarbynes. And pseudocarbynes are tricky to see because their spectra are nearly identical to those of other carbon molecules that are abundant in space.

The team of scientists includes lead author Pilarasetty Tarakeshwar, research associate professor in ASU's School of Molecular Sciences. The other two members are Peter Buseck and Frank Timmes, both in ASU's School of Earth and Space Exploration.
Their paper is titled "On the Structure, Magnetic Properties, and Infrared Spectra of Iron Pseudocarbynes in the Interstellar Medium" and is published in the Astrophysical Journal. "We are proposing a new class of molecules that are likely to be widespread in the interstellar medium," said Tarakeshwar in a press release.

The team focused on gaseous iron, and how only a few atoms of it might join with carbon atoms. The iron would combine with the carbon chains, and the resulting molecules would contain both elements. They also looked at recent evidence of clusters of iron atoms in stardust and meteorites. Out in interstellar space, where it is extremely cold, these iron atoms act kind of like "condensation nuclei" for carbon. Varied lengths of carbon chains would stick to them, and that process would produce different molecules than those produced with gaseous iron.

We couldn't see the iron in these molecules, because they masquerade as carbon molecules without iron. In a press release, Tarakeshwar said, "We calculated what the spectra of these molecules would look like, and we found that they have spectroscopic signatures nearly identical to carbon-chain molecules without any iron." He added that because of this, "Previous astrophysical observations could have overlooked these carbon-plus-iron molecules."

Buckyballs and Mothballs

Not only have they found the "missing" iron, they may have solved another long-lived mystery: the abundance of unstable carbon-chain molecules in space. Carbon chains that have more than nine carbon atoms are unstable, yet when scientists look out into space, they find carbon chains with more than nine carbon atoms. It's always been a mystery how nature was able to form these unstable chains. As it turns out, it's the iron that gives these carbon chains their stability. "Longer carbon chains are stabilized by the addition of iron clusters," said Buseck.

Not only that, but this finding opens a new pathway for building more complex molecules in space, such as polyaromatic hydrocarbons, of which naphthalene is a familiar example, being the main ingredient in mothballs. Said Timmes, "Our work provides new insights into bridging the yawning gap between molecules containing nine or fewer carbon atoms and complex molecules such as C60 buckminsterfullerene, better known as 'buckyballs.'"

Sources: Universe Today. Press release: "Interstellar iron isn't missing, it's just hiding in plain sight." Research paper: "On the Structure, Magnetic Properties, and Infrared Spectra of Iron Pseudocarbynes in the Interstellar Medium."
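To put numbers behind the "iron peak" described above: the quantity that peaks is the binding energy per nucleon, B/A. The values below are standard textbook figures, not from the ASU paper:

    \frac{B}{A}\left({}^{2}\mathrm{H}\right) \approx 1.1\ \mathrm{MeV},\quad
    \frac{B}{A}\left({}^{4}\mathrm{He}\right) \approx 7.1\ \mathrm{MeV},\quad
    \frac{B}{A}\left({}^{12}\mathrm{C}\right) \approx 7.7\ \mathrm{MeV},
    \frac{B}{A}\left({}^{56}\mathrm{Fe}\right) \approx 8.8\ \mathrm{MeV},\quad
    \frac{B}{A}\left({}^{238}\mathrm{U}\right) \approx 7.6\ \mathrm{MeV}.

Fusing light nuclei climbs this curve and releases the difference as energy; past iron the curve slopes back down, so fusion starts to cost energy instead. That is why ordinary stellar nucleosynthesis stalls at iron and the heavier elements need supernovae.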
Physics
The transition to electric vehicles is putting pressure on power grids to produce more energy and on vehicles to use that energy much more efficiently, creating a gargantuan set of challenges that will affect every segment of the automotive world, the infrastructure that supports it, and the chips that are required to make all of this work. From a semiconductor standpoint, improvements in thermal management will be needed to prevent chips and subsystems from overheating as they process more data from more sensors, and communicate with each other and the outside world. Hardware and software will need to be designed together, so different functions can be partitioned and prioritized to improve efficiency. And there will need to be changes in battery management and battery chemistries, as well as improvements in aerodynamics. “There is an entire paradigm shift happening in that everybody’s thinking power,” observed Preeti Gupta, director of product management at Ansys. “Gone are the days where it was about performance and area. Power is such a critical design metric now. You’re hearing about performance per watt. It’s not just about performance alone, but at what cost of power are you delivering that performance. The automotive industry is very interested in understanding power and thermal impact early in the design flow. They are making huge decisions in terms of their design choices. When it comes to chips, it’s a lot about the packaging, not just what is on the chip and burning power. Like mobile or handheld device developers — which were concerned about power and thermal footprints because it should not burn your hand when you’re holding a mobile device, along with battery life considerations — when it comes to electric vehicles, the same battery life extension applies. Thermal aspects are even more important to model early in order to make the right design decisions. At a high level, everybody is moving to shift left, which is getting feedback as early in the design flow as possible.” Going forward, EV architectures will require many more compute operations as vehicles are increasingly digitalized. As a result, different architectures are evolving to meet EV vehicle compute demands. “One approach is a centralized compute domain, which is probably the ultimate architecture being discussed, and the automotive ecosystem is trending toward that architecture,” said Ramesh Chettuvetty, general manager of memory solutions and RAM at Infineon Technologies. “We’re not there yet. We are still in the domain/zonal architecture phase, which is a distributed domain approach. However, once we have so much compute — and autonomous driving picks up and there’s all kinds of sensor data coming in — the processing workload is definitely going to be massively high. This will increase the thermal dissipation of these compute elements.” These factors combined also increase design complexity, because chips need to be designed as parts of systems, or in the context of systems of systems. “It is already challenging to design semiconductors to work on a broad temperature range environment,” Chettuvetty said. “Most of it peaks at 125° Celsius. If the thermal dissipation is much higher, as part of the need for higher compute, there will definitely be challenges like what we are facing in data centers and other applications that require coolers or fans. Those are additional overheads that will consume some energy because it’s all ultimately going to run on the EV battery. 
This will bring down the mileage of the cars, which is one of the key features that OEMs are going to promote going forward, so they'll definitely want innovative architectures to get around that problem. Power is the bottleneck everywhere, regardless of whether it is wall-powered or battery-operated. All engineering teams must move away from traditional ways of implementing features and look to innovative ways to do things." Still, he expects automotive systems architects to start adopting innovative solutions to address these problems, and explore the alternatives that are available. "Designs should be approached in a comprehensive way. The engineering team has this habit of looking at it in a very limited scope. We have to put all those pieces together, and somebody has to take a comprehensive look at it to see whether all the assumptions that are being made by the engineering team are realistic in a practical world."

This represents a fundamental shift. Traditionally, the way the hardware engineers thought about designing these chips was to put in many operational modes. "They'd say, 'I can turn down certain things or I can monitor things, and I can use that monitoring to meter what I'm doing.' Maybe I slow things down in one area or another," said Steven Woo, fellow and distinguished inventor at Rambus. "However, what we see more of in AI, which is likely to play out in all fields, is that the software guys really understand quite well the tradeoff between the performance of the system and the precision of the system. The way they think about it is if they are somehow limited in bandwidth, energy, or something else, they turn it into a software problem. 'If I need more bandwidth, then I can reduce the precision of my numbers. Instead of 32-bit floating point, I do 16-bit floating point.' Then in the bandwidth, they can get twice as many numbers. They do give up precision for this, but they know how to deal with that. As a result, they train the AI algorithm specifically for reduced precision or for sparsity."

In the AI arena, there is more of a holistic view for integrating hardware and software, Woo said. "In the same way over the last 20 years, programmers have been forced to become more architecturally aware. What size cache do I have? What exactly is the architecture of the processor I'm running on? Programmers will have to be more cognizant about things like power limitations in the system, and thus try to use tools and APIs that let them trade off power for performance. That's how I believe the evolution will happen. It will take time, because it's not easy to think about these things. It's taken about a generation of programmers to really understand that you can't be as abstracted anymore in what the architecture looks like when you're writing your programs. It's going to move in that direction over the next 20 years."
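Woo's precision-for-bandwidth tradeoff rests on simple arithmetic: halving the width of each number halves the bytes that have to move per result. A minimal NumPy sketch of the idea (illustrative only, not tied to any of the quoted companies' tools):

    import numpy as np

    # The same one-million-sample signal at two precisions.
    n = 1_000_000
    x32 = np.random.rand(n).astype(np.float32)
    x16 = x32.astype(np.float16)   # trade precision for bandwidth

    print(x32.nbytes)   # 4000000 bytes at FP32
    print(x16.nbytes)   # 2000000 bytes at FP16: twice as many numbers per transfer

    # The price is precision: the conversion error is bounded but nonzero.
    err = np.max(np.abs(x32 - x16.astype(np.float32)))
    print(f"max conversion error: {err:.1e}")   # on the order of 1e-4 for values in [0, 1)

As Woo notes, algorithms then have to be trained or tuned to tolerate that error, which is exactly the kind of hardware-software codesign the AI world has already been forced into.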
Where the power goes

While this evolves, the ECUs themselves will employ advanced power management techniques and play an increasingly important role in overall power management within the architecture of an EV. A key consideration is knowing where the power is spent. "A lot of power is spent when you transfer data," said Sumit Vishwakarma, principal product manager in the AMS Business Unit at Siemens Digital Industries Software. "You want to minimize what you're transferring, and only send sensible information. That's where the ECU comes into play. The ECU is mostly digital logic, and then lots of PHYs connect that to different parts of the car. There are also a number of sensors around the car connected to the main hub integrated ECU. Other than that, there are different ECUs working separately for different applications. For example, when you are a passenger, and you sit in a passenger seat, the first thing the car seat is going to do is detect there is some weight. That means it gives the signal to the main display unit to turn on the light indicating the passenger needs to put their seatbelt on. The moment they sat, you will see that it is on, and when they take the seatbelt and buckle it in, it sends the signal to turn off the light indicating the seatbelt is fastened. All of this communication is happening every time. But when the car is parked, then those things never happen because all of the sensors are not necessary to be working at that time, compared to when you're driving the car. This means there is always power management happening inside the ECUs."

The question then becomes what should be on, off, or something in between. That requires defining the power intent for the ECU, such as within UPF. "From a design perspective, the electronic control units, which are primarily digital circuits, could be implemented with some kind of system-level language, such as SystemVerilog or other functional language, or could even be implemented using FPGAs," Vishwakarma said. "Those are primarily digital circuits, and their job is to make sure that it takes care of the sensor fusion and the decision-making." From there, power domains can be specified to indicate different power scenarios, and how much power each block should have. In addition, power gating, isolation, and retention can be used for controlling the flow of power between blocks.

There are also a number of power considerations stemming from the movement of data in EVs. "There will be multiple networks, all chained together," said Paul Graykowski, senior technical marketing manager at Arteris IP. "We're putting more and more devices on a vehicle. It's not just, 'Drive me from Point A to Point B.' It's, 'While driving me from here to here, I want my stereo. Passengers want their TV. I want my air-conditioned seats. I want my auto driving. I want the sunroof.' There are so many little things, which means the user interface is going to be very complicated, and we must make sure we have a good flow with that. There are going to be power needs everywhere, and we're going to have to address those power needs. It might be simple where we just say, 'This device is going to be fine, but this other one needs a very complex network.' We've got to have the timing parameters set forward as well. It's all going to come into play."

CFD

From an EDA tooling point of view, one tool area that formerly focused on mechanical design is now being applied in EV design. Computational fluid dynamics (CFD) is being brought to bear to improve the energy consumption profile of electric vehicles. Robert Schweiger, director of automotive solutions at Cadence, said that based on consumer surveys, the number one concern about buying an electric vehicle, besides the price, is the range of the vehicle. "The range needs to be beyond 400 kilometers/248 miles, because the charging takes quite some time, and it means an additional stop on your journey. The range can be significantly improved by optimizing the aerodynamics. Therefore, CFD will play a major role in optimizing the aerodynamics of a car, which has an impact on the range."
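Schweiger's range argument can be roughed out with the standard drag-power relation plus a constant electronics load. Every vehicle parameter below is an illustrative assumption, not a figure from Cadence; the point is only to show how both aerodynamics and watts turn into kilometers, and the result lands in the same ballpark as the 10 to 30 km per 100 W figure quoted later in this article:

    # Back-of-the-envelope EV range arithmetic. All parameters assumed.

    RHO, CD, AREA = 1.2, 0.28, 2.3   # air density (kg/m^3), drag coeff., frontal area (m^2)
    BATTERY_WH = 75_000.0            # usable pack energy, Wh
    BASE_WH_PER_KM = 150.0           # nominal consumption, Wh/km

    def drag_power_w(v_kmh: float, cd: float = CD) -> float:
        """Aerodynamic drag power P = 0.5 * rho * Cd * A * v^3."""
        v = v_kmh / 3.6
        return 0.5 * RHO * cd * AREA * v ** 3

    print(f"drag power at 120 km/h: {drag_power_w(120) / 1000:.1f} kW")
    print(f"   with 5% lower Cd:    {drag_power_w(120, CD * 0.95) / 1000:.1f} kW")

    def range_km(extra_load_w: float, v_kmh: float) -> float:
        """Range when a constant electrical load rides on top of base consumption."""
        wh_per_km = BASE_WH_PER_KM + extra_load_w / v_kmh   # W / (km/h) = Wh/km
        return BATTERY_WH / wh_per_km

    for v_kmh in (120, 30):
        lost = range_km(0, v_kmh) - range_km(100, v_kmh)
        print(f"a constant 100 W load at {v_kmh} km/h costs ~{lost:.0f} km of range")

Note the speed dependence: a fixed electrical load hurts most in slow traffic, because the watt-hours it drains are spread over fewer kilometers.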
Heat is another consideration. With internal combustion engines, temperature can spike in traffic jams. EVs, in contrast, use less energy in stop-and-go traffic, but can overheat during rapid charging. "The challenge for an EV is during charging," Schweiger said. "CFD software can also be used for cooling systems, to simulate the water flow and how it is able to cool down systems. For batteries, new concepts are emerging whereby, alongside the battery pack, water is sprayed from the top of the system onto the battery while using a supercharger. With a tank underneath, the liquid is pumped up and sprayed down again. This is a new concept that will be available to cool down batteries, which in turn will eventually help to improve the lifecycle of a battery."

Doing more earlier

Understanding and predicting power needs begin very early in the design cycle of automotive chips and systems. In concert with that, many EDA companies now are doing more modeling to get tighter accuracy and improved predictability, said Ansys' Gupta. "If you're thinking about an early design phase, and you're talking about a much later design phase, so much of the design is implemented," she said. "How do you predict that early at RTL? What would the placement be like? What would the wire routing be like? Modeling all of these components is now possible. There are techniques that we can employ to model these now."

The goal for many of these designs is lower power, and that has many facets, Gupta said. "How many power domains do you need? Can you really turn on or shut off for mission-critical applications? You probably cannot have logic shutting down and waking up in some applications, so it's all about understanding where power is going and why, where activity is and why. Once you understand the 'why,' then addressing it becomes easier. So it's a lot about early power visibility, a key component of which is early power visibility from emulation use case scenarios. You can't live within a well and just say, 'I've optimized it.' Look at the real traffic, the real application use cases. Is your device really energy-efficient and power-efficient?"

New architectural challenges

To address these numerous energy, power, and thermal constraints for EVs requires a different way of looking at designs. "First of all, the focus on power consumption becomes more relevant," said Tim Kogel, principal engineer for virtual prototyping at Synopsys. "With a traditional combustion engine, you basically have infinite energy. You don't care if you burn 100 or 200 watts of power. But with electric vehicles, that all impacts the range in a much more significant way. We've heard that with 100 watts of more power consumption, it costs 10 to 30 kilometers of range. Vehicle range is so much a unique selling point, and range anxiety is one of the key concerns for electric vehicles."

On the other hand, with Level 3 and Level 4 autonomous driving, there are extremely complex, advanced chips. "There are multiple cameras, multiple other types of sensors," Kogel said. "There are complex algorithms, like classification and segmentation, pathfinding, the most complex processing tasks. When you just put this in a generic, general-purpose GPU type of architecture, it burns too much power. We've all seen the photos of autonomous car prototypes where the whole trunk is racks of compute that require kilowatts of power, which is in total conflict with this requirement.
To get that to an acceptable level, you need to move from the general-purpose architectures to much more dedicated architectures like vector DSPs and application-specific instruction-set processors — or even, for some pieces, hardwired logic, because only that gives you the level of power reduction by one or even two orders of magnitude that you need to have this level of functionality at this power budget."

It's the same for communication, which is also a big part of the architecture. "Here, the easy thing is always to use caches and cache-coherent interconnect, and then the data is moved around more or less automatically," he said. But again, it's burning large amounts of power, so people need to go to dedicated memories, dedicated memory transfers. The price that people pay for this is that these dedicated architectures are less flexible. "You don't have the easy push-button software flows, so the software implementation becomes an effort to implement and to verify."

Conclusion

Will the issues with energy, power, performance, and thermal push back widespread deployment and adoption of electric vehicles? Combined with ADAS systems, which are extremely power-hungry, the challenges only seem to be growing. Vasanth Waran, senior director of business development for automotive at Synopsys, noted that while electric cars bring a new challenge because of the limited range and limited battery power, ADAS has power problems even on the internal combustion engine side. "It's a very complex problem, and it's as much a thermal problem as it is a power problem, because the car is one of the most challenging environments you can design a semiconductor to operate in. Everybody gets irritated if their audio doesn't work or their map suddenly gets shaky or their camera goes off. You always want the best performance. You still want the A/C to work. You still want the cameras to work. It's a hard problem to crack. And because of this, over the last couple of years the emphasis has completely changed to getting more and more focused on power."

However, there are technology solutions available, such as those mentioned above, along with others such as discrete clock and voltage scaling, which Waran said are starting to find their way into the automotive domain. In addition, cooling technologies are improving, and novel technologies like carbon nanotubes are being researched to address some of the most challenging issues. "While there are problems, the industry will find a way to solve them. If you look at some of the technologies that are coming out, like advanced packaging, those are all trying to address the same problem. It's fundamentally a physics problem where there is a lot of die area, and it's dissipating a lot of heat. How do you pull that out? It's the law of thermodynamics. I have a lot of heat generated in a small amount of area. How do I pull this out effectively? That's where chiplets and things of that nature are coming into play. EVs are here to stay, and the progress is not going to be stunted because of these problems."

But what about new use cases? And what about the software-defined vehicle? What do you do if you wake up and your car doesn't boot? These are issues that still need to be addressed, and they will need to be addressed across the automotive ecosystem. "In the past, most of the architectures were defined at the OEM level or the tier one," said Infineon's Chettuvetty. "Then it would go to semiconductor companies, as a tier two.
Those boundaries are getting diluted now, and there is direct interaction between OEMs and the semiconductor companies, which is a change. There needs to be much more collaboration so the OEMs know the capabilities of the semiconductor companies in the first place before they come with this architecture.”
Physics
Monitoring and controlling the radiation delivered to every patient is of utmost importance in radiation therapy. This is a current challenge in emerging ultrahigh-dose rate modalities such as electron FLASH (eFLASH) radiation therapy. FLASH radiotherapy delivers radiation at ultrahigh dose rates, shortening the treatment course and improving tissue sparing relative to conventional radiotherapy.

"One of the things that we need to elucidate [with FLASH] is what is the biological mechanism behind the sparing effect and how does it depend on the way that we are delivering these ultrahigh dose rates. To determine that we need to know exactly what we're delivering," explains Emil Schüler from the University of Texas MD Anderson Cancer Center. "Having a good understanding of the exact parameters for each pulse that is being delivered seems to be important. Until we know more, we need to have that type of detailed understanding of our deliveries, and that is where conventional equipment has proved to be suboptimal."

In conventional radiotherapy, radiation delivery is monitored using transmission ion chambers. While ion pairs occasionally recombine in these dosimeters, ion recombination represents only a small percentage of measurements (less than 5%) and these events can be accounted for using models and correction factors. In high-dose rate eFLASH beams, however, over 90% of ion pairs might recombine, conventional models that correct for ion pair recombination break down, and accurate beam monitoring and control becomes challenging – if not impossible. Led by Schüler and Sam Beddar, a team of MD Anderson researchers has recently described a way to overcome the challenges inherent to eFLASH beam monitoring. Their solution has its roots in high-energy physics experiments.

Beam current transformers for FLASH

In their study, reported in the Journal of Applied Clinical Medical Physics, the researchers introduce an integrated beam current transformer (BCT) system to monitor radiation beams produced by the Mobetron system, a commercial electron therapy linear accelerator manufactured by IntraOp. BCTs, which were originally used in the beamlines of high-energy physics experiments, measure the induced current of electrons passing through them. Building on work performed at Lausanne University, IntraOp engineers redesigned the Mobetron head to accommodate two BCTs: one located after the primary scattering foil; the other, downstream of the secondary scattering foil.

The MD Anderson researchers then extensively characterized the BCT response to ultrahigh-dose rate electron beams at 6 and 9 MeV. They monitored beam output in different dosimetric setups and with different collimation as a function of dose, scattering conditions, and physical beam parameters including pulse width, pulse repetition frequency and dose per pulse. Dosimetric evaluations were performed with GafChromic EBT3 film, a standard dosimeter that gives total dose readings independent of dose rate. Experimental studies were performed three times to ensure repeatability and reproducibility.

The team concluded that BCTs can accurately monitor eFLASH beams, quantify accelerator performance and capture essential physical beam parameters on a pulse-by-pulse basis. Now, they are investigating the source of, and ways to correct for, higher differential backscatter levels measured in the upper BCT relative to the lower BCT. These discrepancies were measured outside the range of likely clinical eFLASH beam parameters.
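One way to see why transmission chambers break down at these dose rates is Boag's classic expression for ion-collection efficiency in a pulsed beam (a textbook result included here for orientation; it is not the formalism of the MD Anderson paper):

    f = \frac{\ln(1+u)}{u}, \qquad u \propto \frac{D_p\, d^{2}}{V},

where f is the fraction of liberated charge actually collected, D_p is the dose per pulse, d is the electrode spacing and V is the polarizing voltage. At conventional dose per pulse, u is much less than 1, so f is roughly 1 - u/2 and the correction stays at the few-percent level. At eFLASH dose per pulse, u becomes large, f can fall below 10%, and the 1/f correction becomes so large, and so sensitive to the model's assumptions, that the chamber reading no longer reliably tracks dose; that is the gap the BCT system is designed to fill.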
Schüler and Beddar's team is also developing methods to measure beam flatness and symmetry, which to date cannot be measured with BCTs.

The overarching goal of this research, Schüler says, is to make sure that radiation physicists can deliver eFLASH radiation treatments accurately and precisely. "It really comes down to making sure that we can guarantee a safe and robust clinical translation of this technology," Schüler says. "For medical physicists, this is going a little bit outside of our comfort zone…going outside of the standard equipment that we are using now, when FLASH radiotherapy is becoming a reality. We are also trying to develop the ion chamber technology for these ultrahigh dose rates, but for [beam] monitoring, especially when it comes to electron beamlines, it's unlikely that we're going to be able to use transmission chambers in the same fashion as we have previously with conventional dose rate radiotherapy."
Physics
When a spacecraft slammed into an asteroid last month, it pushed the asteroid closer to its companion and sped up its orbit by about 32 minutes. It's a huge milestone for the field of planetary defense; it establishes that it may be possible for humans to significantly change the path of a potentially hazardous asteroid — especially if we have warning that one is on the way.

When the Double Asteroid Redirection Test (DART) mission sent a spacecraft crashing into its surface on September 26th, telescopes on Earth and in space were watching the action. Now, initial data from those observatories have shown that DART achieved its goal. Before the impact, the asteroid Dimorphos took about 11 hours and 55 minutes to orbit its much larger companion asteroid, Didymos. The same trip now takes 11 hours and 23 minutes.

Shaving half an hour off an asteroid's orbit is a massive win for the mission, which would have categorized even a 73-second change as a success. Researchers think that one of the reasons for the big change in orbit is that the impact displaced tons of material, creating a dramatic-looking plume of debris in the process. This "recoil" gave the impact an extra boost, NASA said.

There's still a lot about the impact that will take scientists time to figure out. They'll be poring over many more observations to answer questions like: Is there a new shape to the orbit? Is Dimorphos wobbling? How much debris came off the asteroid when we slammed into it at 14,000 miles per hour? Once they have that information, the modeling will get even more intense; they'll take the information from the observatories and run it through physics simulations again and again until they have a pretty good idea of what happened. That way, when the European Space Agency's Hera spacecraft arrives at the asteroid system in a few years, researchers will have a pretty good idea of what it will find.

"All of this information plays into our understanding of what really happened in the experiment. How effectively did the kinetic impact change the motion of the asteroid? How efficiently was momentum transferred? It's too soon to say; there's a lot of moving parts in this calculation," said Tom Statler, DART program scientist at NASA, during a press conference.

That's all key information for any future mission to redirect an asteroid heading toward our planet — the basic tenet of planetary defense. Dimorphos and Didymos didn't pose any threat to Earth, but researchers are on the lookout for other asteroids and near-Earth objects that might be hazardous. As exciting as these early results from the DART mission are, knowing how to move an asteroid is only part of any future efforts to defend our planet from space rocks. The much larger issue is knowing what hazards are out there — and knowing about them as soon as possible.

"This is a four percent change in the orbital period of Dimorphos around Didymos — and it just gave it a small nudge," said Nancy Chabot, DART coordination lead at the Johns Hopkins University Applied Physics Laboratory. A similar "small nudge" to a potentially hazardous asteroid might be enough to keep it out of Earth's path, but the timing would be vital. "If you wanted to do this in the future, it could potentially work. But you'd want to do it years in advance.
Warning time is really key here," Chabot said.

"The single most important factor that we need to know is which ones out there are potentially dangerous, and when might they be potentially dangerous," said Lori Glaze, director of NASA's Planetary Science Division. NASA is working on the Near-Earth Object Surveyor Mission, which would specifically look for these kinds of hazards. The mission, which has faced funding difficulties in the latest congressional budget cycles, is considered a top priority of the planetary science community.
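The figures in the piece are enough for a quick consistency check, under an idealized two-body, circular-orbit picture (not the mission team's full modeling):

    # Consistency check on the DART result using only figures from the article.

    before_min = 11 * 60 + 55   # orbital period before impact: 11 h 55 min
    after_min = 11 * 60 + 23    # orbital period after impact:  11 h 23 min

    dT = before_min - after_min
    print(f"period change: {dT} min ({dT / before_min:.1%})")   # 32 min, ~4.5%

    # Kepler's third law: T^2 scales as a^3, so orbit size scales as T^(2/3).
    shrink = 1 - (after_min / before_min) ** (2 / 3)
    print(f"implied orbit shrinkage: ~{shrink:.1%} of the semi-major axis")   # ~3%

The roughly 4.5 percent period change is consistent with Chabot's "four percent" figure, and under these idealizations it corresponds to the orbit tightening by about 3 percent.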
Physics
Building an autonomous robot is no easy task. Until now, scientists have developed microscopic robots that require special wire harnesses or an external stimulant, like focused laser beams, to generate mobility. Currently featured in Science Robotics, a new type of microchip allows for onboard control in untethered robots only a tad bigger than the width of a human hair.

"Before, we literally had to manipulate these 'strings' in order to get any kind of response from the robot. But now that we have these brains on board, it's like taking the strings off the marionette. It's like when Pinocchio gains consciousness," said Itai Cohen, professor of physics in the College of Arts and Sciences.

The new electronic brain-powered microbot is only 100 to 250 micrometers in size. It consists of three major systems: an integrated circuit for control and direction; a power source, i.e., a photovoltaic cell capable of harnessing energy from a light source; and a set of hinged legs capable of providing motion greater than 10 micrometers per second. Autonomous control comes from complementary metal-oxide-semiconductor (CMOS) circuits, which consist of thousands of transistors, diodes, capacitors, and resistors responsible for electronic device control. The motion is actuated using phase-shifted square-wave frequency signals. The robot legs are made of platinum-based actuators. Both the electronic circuit and the limbs of the device are powered via photovoltaics.

The team created three robots to demonstrate the CMOS integration: a two-legged Purcell bot, a more complicated six-legged antbot that walks like an insect, and a four-legged dogbot that can vary its speed.
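The control idea is easy to sketch: a single clock, divided into square waves that are identical except for their phase, decides which leg group is driven at each instant. The toy below is a conceptual illustration only, not the Cornell team's CMOS design, and the frequency and phase values are made up:

    import numpy as np

    # Toy gait generator: phase-shifted square waves drive the leg groups.

    F_GAIT = 10.0                      # Hz, gait frequency (assumed)
    PHASES = [0.0, 0.25, 0.5, 0.75]    # cycle offsets for four leg groups (assumed)

    def square_wave(t: np.ndarray, phase: float) -> np.ndarray:
        """1 while the leg group is driven, 0 while it relaxes."""
        return ((t * F_GAIT + phase) % 1.0 < 0.5).astype(int)

    t = np.linspace(0.0, 0.2, 11)      # 0.2 s of gait, sampled every 20 ms
    for i, ph in enumerate(PHASES):
        print(f"leg group {i}: {square_wave(t, ph)}")

Shifting the same waveform in phase across leg groups is what turns a bare clock into a walking pattern, and changing the clock frequency changes the pace, which is presumably the knob behind a "dogbot that can vary its speed."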
Physics
Scientists know close to nothing about the asteroid Dimorphos. (Image credit: ESA–ScienceOffice.org)

Before NASA's planetary defense probe DART self-destructs by slamming into the asteroid Dimorphos next week, it will offer views of only the sixth asteroid we will have ever seen up close. Scientists are eager to get their hands on those images, as they admit that we know extremely little about the space rocks potentially threatening Earth.

Missions to asteroids are full of surprises. Almost two years before the Sept. 26 DART collision, NASA learned first-hand how unpredictable these space rocks can be when the OSIRIS-REx mission touched down briefly on asteroid Bennu to collect a sample. Against all expectations, the boulder-strewn surface of the 0.3-mile-wide (0.5 kilometers) asteroid was so soft that it nearly swallowed up the probe, sending shivers down the spacecraft controllers' spines and an enormous wall of debris up into space. With DART (short for "Double Asteroid Redirection Test"), NASA has sent a spacecraft to change the orbit of an asteroid about which scientists know as little as they knew about Bennu before OSIRIS-REx's encounter.

"DART is going to be our first mission to study up close a binary asteroid system," Terik Daly, a deputy instrument scientist on DART's Didymos Reconnaissance and Asteroid Camera for Optical navigation (DRACO), and a planetary scientist at Johns Hopkins Applied Physics Laboratory, which manages the DART mission for NASA, told Space.com.

The sixth space rock ever seen in detail

DART is aiming for the 520-foot-wide (160 meters) asteroid Dimorphos, which is orbiting a larger, 2,560-foot-wide (780 m) asteroid called Didymos. From ground-based measurements, scientists know the speed at which Dimorphos orbits Didymos and have a rough idea about the larger asteroid's chemical composition. Dimorphos, DART's ultimate target, however, is a complete unknown. "Dimorphos is small enough that it hasn't actually been studied separately from Didymos in any great detail," Daly said. "We know that it's a separate body, but we know very little about the shape. We don't know if Dimorphos is elongated or spherical; we don't know whether it's a single rock or a pile of boulders."

Thanks to the DART mission, Dimorphos will become one of the best-studied asteroids in the universe, joining the OSIRIS-REx target asteroid Bennu, the Itokawa and Ryugu asteroids visited by the Japanese missions Hayabusa 1 and Hayabusa 2, and the Eros asteroid, which was explored by NASA's NEAR Shoemaker probe in the early 2000s. Dimorphos and Didymos will become only the sixth and seventh space rocks ever seen up close by a spacecraft, and that is out of more than 26,000 asteroids currently known to regularly approach Earth's orbit. In addition to the four above, the asteroid Toutatis was briefly visited by the Chinese Chang'e 2 lunar probe, which took several images of it in 2012.

The Didymos/Dimorphos couple is the first binary asteroid visited by a human-made spacecraft. (Image credit: ESA – Science Office)

Unpredictable collision effects

Before DART slams into the surface of Dimorphos at a mind-boggling speed of 13,680 mph (22,015 kph), the spacecraft will be transmitting images of the asteroid, captured by its DRACO camera, at the rate of one per second. At first, the camera will see both asteroids, then it will focus on Dimorphos, guiding DART toward it.
As DART hurtles toward the smaller space rock, the views will become more and more detailed until the transmission abruptly stops ⁠— the moment of the collision. An Italy-built cubesat called LICIACube, which traveled as a passenger on DART but was released 11 days before the impact, will observe the crash from a safe distance of 600 miles (1,000 km), then zoom toward the freshly scarred surface to explore the impact in detail.

Because scientists know so little about Dimorphos, they have no idea how the rock will respond to DART's attack. Will the asteroid be as soft as Bennu and swallow up DART like a swamp, or will it be a solid piece of rock that will completely flatten the van-sized DART? Asteroids are so small and their gravity so feeble that even seeing the rock from above doesn't help predict the impact effect. "Images can be deceiving, and unless you touch [the asteroid], you don't know," Patrick Michel, the principal investigator of the European Space Agency's (ESA) planned Hera mission, which will visit Dimorphos and Didymos in 2027 to conclude the investigation of the impact aftermath, said in an ESA news conference on Sept. 15. "The reason is that you are in a very, very low gravity environment," added Michel, a planetary scientist at the Cote d'Azur University in France. "And the response of the surface is sometimes totally counterintuitive, because our intuition is based on what we experience on Earth."

(Image credit: ESA-Science Office)

What is Dimorphos made of?

Based on how Didymos, the larger of the two rocks, reflects light, astronomers think the asteroid is mostly made of silicate-rich rocks, unlike Bennu, which is made of a less dense carbon-rich material. If Dimorphos is made of the same material as its bigger buddy, and the assumptions are correct, then the collision with DART will be less messy and possibly less efficient in changing Dimorphos' orbit than it would be if the asteroid were softer, Daly said. To know for sure, however, we will have to wait for LICIACube's data. The cubesat will also make a flyby around Dimorphos and view the entire asteroid to allow scientists to reconstruct its shape. It will take weeks to months, however, to download all the data and reveal Dimorphos' secrets.

How do binary asteroids form?

Since the Didymos-Dimorphos duo is the first binary asteroid to be studied in detail, scientists hope to learn something about how these space-rock couples form, said Daly. According to estimates, about 16% of near-Earth asteroids wider than 650 feet (200 m) may be binaries. Even asteroid triplets are known to exist. According to some theories, such asteroid families might form when a larger rock starts to spin very fast, shedding some of its material in the process, said Daly. Other theories suggest that the binaries and triplets may be produced in collisions.

"And one of the things we'll be able to do with the DART mission is to look at what Didymos looks like in images and what Dimorphos looks like in images," said Daly. "And if they look similar — if their brightness is very similar, if they have similar kinds of morphologies — that would suggest that maybe Didymos and Dimorphos did sort of split apart. If it turns out that we look at Didymos and it looks more like Bennu but Dimorphos looks like a single rock in space, then maybe this division approach doesn't make sense."
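Across all of these unknowns, what the deflection ultimately comes down to is momentum transfer. How hard DART can push depends on how "messy" the impact is: ejecta flying off the surface carries extra momentum, usually folded into an enhancement factor beta (beta = 1 means the spacecraft's momentum is simply absorbed; larger values mean the ejecta recoil helps). A rough sketch using the article's figures plus assumed values for everything else:

    import math

    # Rough nudge estimate: dv = beta * m * v / M.
    # Impact speed and diameter come from the article; spacecraft mass,
    # density, and the beta values are illustrative assumptions.

    M_DART = 570.0                    # kg, approximate DART mass at impact (assumed)
    V_IMPACT = 22_015 * 1000 / 3600   # m/s, from the article's 22,015 kph
    RADIUS = 160.0 / 2                # m, from the 520-ft (160 m) diameter
    DENSITY = 2000.0                  # kg/m^3, assumed rubble-pile-like value

    mass_dimorphos = DENSITY * (4 / 3) * math.pi * RADIUS ** 3

    for beta in (1.0, 2.0, 4.0):
        dv = beta * M_DART * V_IMPACT / mass_dimorphos
        print(f"beta = {beta}: dv ~ {dv * 1000:.1f} mm/s")

A change of a millimeter or so per second sounds tiny, but applied to an orbit that takes half a day, it accumulates into a measurable period shift; pinning down beta is precisely why LICIACube's view of the ejecta matters.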
Asteroids are full of surprises, and seeing one doesn't mean we could predict the behavior of all the others. But learning as much as we can about Didymos and Dimorphos will help scientists make better guesses about other asteroids, said Daly. And the more we know, the better chance we have of getting things right when a dangerous space rock sets its sights on Earth.

Tereza is a London-based science and technology journalist, aspiring fiction writer and amateur gymnast. Originally from Prague, the Czech Republic, she spent the first seven years of her career working as a reporter, script-writer and presenter for various TV programmes of the Czech Public Service Television. She later took a career break to pursue further education and added a Master's in Science from the International Space University, France, to her Bachelor's in Journalism and Master's in Cultural Anthropology from Prague's Charles University. She worked as a reporter at the Engineering and Technology magazine, freelanced for a range of publications including Live Science, Space.com, Professional Engineering, Via Satellite and Space News, and served as a maternity cover science editor at the European Space Agency.
Physics
Professor Erminia Calabrese, from Cardiff University's School of Physics and Astronomy, has been awarded the 2022 Institute of Physics Fred Hoyle Medal and Prize.

Professor Calabrese, Deputy Director of Research, received her award for distinguished work on observational cosmology using the Cosmic Microwave Background to study the origins, content and evolution of the universe, and to probe new regimes of physics.

The Institute of Physics (IOP) is the professional body and learned society for physics, and the leading body for practising physicists, in the UK and Ireland. Its annual awards proudly reflect the wide variety of people, places, organisations and achievements that make physics such an exciting discipline. The IOP awards celebrate physicists at every stage of their career, from those just starting out through to physicists at the peak of their careers, and those with a distinguished career behind them. They also recognise and celebrate companies which are successful in the application of physics and innovation, as well as employers who demonstrate their commitment and contribution to scientific and engineering apprenticeship schemes.

Professor Calabrese said: "It is a great honour for me to receive this award. I have been very lucky to work in the field of observational cosmology during a decade which has seen tremendous progress driven by unprecedented cosmic microwave background observations and by the huge effort of international teams working together to analyse them. I am delighted to have played a role in this and look forward to what the next decade of exploration has to offer. Many aspects of the Universe are still very mysterious, and I am ready to delve into more data which next-generation experiments like the Simons Observatory will collect."

Institute of Physics President, Professor Sheila Rowan, said: "On behalf of the Institute of Physics, I warmly congratulate all of this year's Award winners. Each and every one of them has made a significant and positive impact in their profession, whether as a researcher, teacher, industrialist, technician or apprentice. Recent events have underlined the absolute necessity to encourage and reward our scientists and those who teach and encourage future generations. We rely on their dedication and innovation to improve many aspects of the lives of individuals and of our wider society."

Professor Peter Smowton, Head of School, added: "The whole school is delighted to congratulate Professor Calabrese on her achievement. As the citation states, Erminia's major achievements and continuing influence at the forefront of observational cosmology are fully deserving of recognition through the award of the Fred Hoyle Medal."

The award marks the second time the School has won the prestigious medal in recent years. Professor Jane Greaves triumphed in 2017 for her significant contribution to the understanding of planet formation and exoplanet habitability through her seminal imaging of debris discs around Sun-like stars and solar system bodies using far-infrared telescopes.
Physics
NASA's James Webb Space Telescope may have been years behind schedule and cost more than $10 billion, but in its first year in orbit, the telescope has crushed its assignment. Webb launched on Christmas Day in 2021 and entered orbit about a month later, on Jan. 24. It sees the universe through infrared light, which is invisible to the human eye but can pierce through dense gas and dust, revealing many hidden aspects of the cosmos. Here are five recent discoveries from the Webb telescope that blew our minds.

Peering into the icy heart of a space cloud (NASA)

This latest image is a stunner. Astronomers used Webb to peer into a molecular cloud called Chameleon 1. Located approximately 630 light-years from Earth, the wispy cloud is home to a diverse population of some of the coldest ices in the universe. Inside the molecular cloud, which is forming dozens of new stars, are frozen forms of water, ammonia, methanol, methane and even carbonyl sulfide. These ingredients are not only perfect for building stars and planets, but they could also be the building blocks of life. Ices like the ones found here can supply all the carbon, hydrogen, oxygen, nitrogen and sulfur needed to form new planets, including habitable ones like Earth. They're also used in planetary atmospheres to make sugars, alcohols and even amino acids.

Identifying an exoplanet (NASA)

NASA's Transiting Exoplanet Survey Satellite (TESS) identified what it thought could be a planet: a small, rocky world orbiting a star in the Octans constellation, 41 light-years from Earth. A team of researchers — led by staff astronomer Kevin Stevenson and postdoctoral fellow Jacob Lustig-Yaeger at Johns Hopkins University Applied Physics Laboratory — used Webb to watch for dips in starlight, which would happen if the star had a planet orbiting it. These dips are called transits. The researchers then used Webb to analyze the atmosphere of the planet, and while they haven't been able to make any definitive conclusions, they did learn a few things. Webb indicated that the world cannot have a super-dense atmosphere like Saturn's moon Titan and that it is a few hundred degrees warmer than Earth, which could make it more like Venus. The astronomers will have another chance to observe the planet over the summer and conduct follow-up analysis on the potential presence of an atmosphere.

Webb spies on a stellar nursery (NASA)

Approximately 200,000 light-years from Earth is an active stellar nursery called NGC 346. Embedded in the Small Magellanic Cloud (SMC), an offshoot of the Milky Way, this stellar nursery is the poster child for studying how stars form. A new image of this region, captured by the Webb telescope, could reveal new insights into how early stars formed 10 billion years ago, during a period called the "cosmic noon." In contrast to the Milky Way, the SMC contains lower concentrations of elements called "metals," which means elements heavier than hydrogen or helium. These concentrations present a unique opportunity to study what galaxies were like in the early universe, when star formation was at its peak. Thanks to its close proximity, we've been able to study the SMC with multiple telescopes, yet it still remains an enigma. That is, until Webb looked at it with fresh eyes, revealing more than 33,000 young stars.

Dusty debris disks (NASA)

Despite being the largest stellar population in our galaxy, red dwarfs are notoriously difficult to see in the visible spectrum.
To that end, they make an ideal target for Webb to view in the infrared spectrum. Webb recently gazed upon a nearby red dwarf, called AU Microscopii, which is surrounded by a planet-forming disk of gas and dust. Astronomers previously detected two exoplanet targets orbiting AU Mic thanks to NASA's exoplanet-hunting telescope, TESS. The researchers also noticed that the debris disk was brighter than expected and closer to the star. Astronomers are hoping to confirm those planets with the help of Webb, as well as other exciting details that could be lurking in this dusty disk.

Galactic green peas (NASA)

Webb recently imaged a group of extremely distant galaxies and found that they share characteristics with a rare group of nearby galaxies called "green peas." These galaxies were first detected in Webb's detailed deep-field image and subsequently analyzed, with Webb's unique sensitivity revealing the amount of oxygen present for the first time. Green pea galaxies, first discovered in 2009, appear as small, round, unresolved green dots in the sky. Accounting for just 0.1 percent of nearby galaxies, what these minuscule galaxies lack in size, they make up for in stellar births. According to the researchers, one of these galaxies is one of the most "chemically primitive" galaxies discovered and could hail from a time when the universe was very young.

"With detailed chemical fingerprints of these early galaxies, we see that they include what might be the most primitive galaxy identified so far," research leader James Rhoads, an astrophysicist at NASA's Goddard Space Flight Center in Maryland, said in a statement. "At the same time, we can connect these galaxies from the dawn of the universe to similar ones nearby, which we can study in much greater detail," he said.
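The transit technique described above also rewards small stars, which is part of why red dwarfs such as AU Mic are attractive targets. To first order, the dip in starlight is just a ratio of areas:

    \frac{\Delta F}{F} \approx \left(\frac{R_p}{R_\star}\right)^{2}.

With round, illustrative numbers (not from the article): an Earth-sized planet crossing a Sun-like star blocks roughly (6.4 x 10^3 km / 7.0 x 10^5 km)^2, or about 8 x 10^-5 of the light, less than 0.01%. The same planet crossing a red dwarf of 0.3 solar radii blocks about 9 x 10^-4, roughly 0.1%, an order of magnitude deeper and correspondingly easier to detect.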
Physics
The finding will help space- and balloon-based searches for antimatter that may have originated from dark matter.

Artist’s impression of the ALICE study of the transparency of the Milky Way to antimatter (Image: ORIGINS Cluster, Technical University Munich)

The antimatter counterpart of a light atomic nucleus can travel a long distance in the Milky Way without being absorbed, shows the international ALICE collaboration in an article published today in Nature Physics. The finding, obtained by feeding data on antihelium nuclei produced at the Large Hadron Collider (LHC) into models, will help space- and balloon-based searches for antimatter that may have originated from dark matter.

Light antimatter nuclei such as antideuteron and antihelium have been produced on Earth, at particle accelerators, but they have yet to be observed with certainty coming from outer space. In space, such antinuclei, as well as antiprotons, could be created in collisions between cosmic rays and the interstellar medium, but they could also be produced when hypothetical particles that may make up the dark matter that pervades the Universe annihilate each other.

Space-based experiments such as AMS, which was assembled at CERN and is installed on the International Space Station, are therefore looking for light antimatter nuclei in an effort to search for dark matter, as will the upcoming GAPS balloon mission.

To find out whether dark matter is the source behind any potential detections of light antinuclei from outer space, physicists need to determine the number, or more precisely the “flux”, of light antinuclei that is expected to reach the near-Earth location of these experiments. This flux depends on features such as the exact type of antimatter source in our Galaxy and the rate at which it produces antinuclei, but also on the rate at which the antinuclei should later disappear through annihilation or absorption when they encounter normal matter on their journey to Earth.

The latter is where the new study from the ALICE collaboration comes in. By investigating how antihelium-3 nuclei [1] produced in collisions of heavy ions and of protons at the LHC interact with the ALICE detector, the ALICE researchers were able to measure, for the first time, the rate at which antihelium-3 nuclei disappear when they encounter normal matter. In this analysis, the ALICE detector’s material serves as the normal matter with which the antinuclei interact.

Next, the ALICE researchers incorporated the obtained disappearance rate into a publicly available computer programme called GALPROP, which simulates the propagation of cosmic particles, including antinuclei, in the Galaxy. They considered two models of the flux of antihelium-3 nuclei expected near Earth after the nuclei’s journey from sources in the Milky Way. One model assumes that the sources are cosmic-ray collisions with the interstellar medium, and the other describes them as hypothetical dark-matter particles called weakly interacting massive particles (WIMPs).

For each model, the ALICE team then estimated the transparency of the Milky Way to antihelium-3 nuclei, that is, the Galaxy’s ability to let the nuclei through without being absorbed. They did so by dividing the flux obtained with antinuclei disappearance by the flux obtained without it. For the dark-matter model, the ALICE researchers obtained a transparency of about 50%, whereas for the cosmic-ray model the transparency ranged from 25% to 90% depending on the energy of the antinucleus.
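The transparency estimate described above reduces to a ratio of two simulated fluxes. The Python sketch below illustrates the arithmetic with invented placeholder numbers; the actual analysis feeds the measured disappearance rate into the GALPROP propagation code.

# Minimal sketch of the transparency estimate: the ratio of the antinucleus
# flux computed *with* absorption to the flux computed *without* it. The
# arrays below are invented placeholders, not ALICE or GALPROP output.

import numpy as np

energies_gev = np.array([0.5, 1.0, 2.0, 5.0, 10.0])           # kinetic energy per nucleon (illustrative)
flux_no_absorption = np.array([1.0, 2.0, 2.5, 1.8, 0.9])      # arbitrary units
flux_with_absorption = np.array([0.3, 0.9, 1.5, 1.4, 0.8])    # arbitrary units

transparency = flux_with_absorption / flux_no_absorption
for e, t in zip(energies_gev, transparency):
    print(f"E = {e:5.1f} GeV/n -> transparency = {t:.0%}")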
These transparency values show that antihelium-3 nuclei originating from dark matter or cosmic-ray collisions can travel long distances – of several kiloparsecs [2] – in the Milky Way without being absorbed.

“Our results show, for the first time on the basis of a direct absorption measurement, that antihelium-3 nuclei coming from as far as the centre of our Galaxy can reach near-Earth locations,” says ALICE physics coordinator Andrea Dainese.

“Our findings demonstrate that searches for light antimatter nuclei from outer space remain a powerful way to hunt for dark matter,” says ALICE spokesperson Luciano Musa.

[1] Antihelium-3 nuclei are made up of two antiprotons and one antineutron, the antimatter equivalents of the proton and the neutron, respectively.

[2] One kiloparsec is a thousand parsecs. One parsec is about 31 trillion kilometres.
Physics
A breakthrough at the University of Twente (UT) in the Netherlands has brought new brain-like computers one step closer. An international group of researchers led by Prof. Dr. Christian Nijhuis has developed a new type of molecular switch that can learn from previously displayed behavior. The researchers published their findings today in the scientific journal Nature Materials, the university announced in a press release. “These molecules are learning in the same way our brains do,” Nijhuis says.

Computers, data centers and other electronics use massive amounts of energy, and we are now building huge wind farms in order to satisfy that energy demand. But according to Prof. Dr. Christian Nijhuis, we can also shift our attention to making our electronics more efficient. “Our brains are the most efficient computers we know. They use ten thousand times less energy than the most economical computers,” Nijhuis points out.

Efficient brains

This is because our brains process data in completely different ways. Whereas computers process binary streams of information – with zeros and ones – our brains work analogously, with time-dependent pulses. “Our brains process the information from millions of nerve cells coming from all our senses without any difficulties. When doing this, unlike traditional electronics, it uses only the brain cells and synapses that pulses pass through,” Nijhuis explains further. Because energy is only expended during a pulse, our brains can process a lot of data at once much more efficiently.

Hardware for artificial intelligence

The molecules Nijhuis and his team have engineered are capable of performing all the Boolean logic gate operations required for “deep learning”. “Deep learning is a form of machine learning based on artificial neural networks and is widely used not only where automatically recognizing images and speech is concerned, but also in the quest for new medicines and, more recently, in the creation of art. All of which are much more difficult for a computer to do than for our brain,” Nijhuis notes. Researchers are making great strides in the field of software for artificial intelligence, but these molecules are now also bringing the hardware for artificial intelligence closer.

Artificial neurons

To simulate the dynamic behavior of synapses at a molecular level, the researchers combined fast electron transfer with slow proton coupling constrained by diffusion. This resembles the fast pulses and slow uptake of neurotransmitters from the neurons in our brains. The molecules can alter the strength and duration of these pulses. As such, they demonstrate a form of classical conditioning: the molecules adapt their behavior to the stimuli that they have previously received. A form of learning. In the future, these kinds of molecules may also respond to other stimuli, such as light.

Numerous new applications

This breakthrough opens up possibilities for developing a whole new range of adaptable and reconfigurable systems. These in turn can lead to new multifunctional adaptive systems that substantially simplify artificial neural networks.
Nijhuis: “By doing this, we will dramatically reduce the energy consumption of our electronics.” Multifunctional molecules that are also light-sensitive or can detect other molecules can also potentially lead to new types of neural networks or sensors.

Online publication

Christian Nijhuis leads the ‘Hybrid Materials for Opto-Electronics’ group (HMOE; Faculty of TNW), part of the MESA+ Institute for Nanotechnology at the UT. He is also Principal Investigator in the research area Computing Molecules & (Opto)Electronics within the Molecules Centre of MESA+. This research was conducted in collaboration with Damien Thompson, Professor of Molecular Modeling and Director of SSPC (the Science Foundation Ireland Research Centre for Pharmaceuticals) at the University of Limerick, and Enrique del Barco, Pegasus Professor at the University of Central Florida. The publication, entitled ‘Dynamic molecular switches with hysteretic negative differential conductance emulating synaptic behavior’, was published in the scientific journal Nature Materials. Nature Materials is a top-three journal in the fields of chemistry, physics and materials science. The publication can be read online.
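To illustrate the kind of history-dependent, synapse-like behaviour described above, here is a deliberately crude toy model in Python. It is not the authors’ chemistry: it simply combines a fast, saturating potentiation per pulse with a slow decay between pulses, so the element’s response depends on the stimuli it has previously received.

# Toy "synapse" (an invented illustration, not the published molecular model):
# each voltage pulse quickly strengthens a conductance-like state, which then
# relaxes slowly between pulses -- a crude analogue of fast electron transfer
# combined with slow, diffusion-limited proton coupling.

class ToySynapse:
    def __init__(self, g=0.1, gain=0.3, decay=0.95):
        self.g = g          # conductance-like state (arbitrary units)
        self.gain = gain    # potentiation per pulse (fast process)
        self.decay = decay  # relaxation per time step (slow process)

    def step(self, pulse):
        self.g *= self.decay                    # slow forgetting
        if pulse:
            self.g += self.gain * (1 - self.g)  # fast, saturating potentiation
        return self.g

syn = ToySynapse()
pattern = [True] * 5 + [False] * 5 + [True]     # train, rest, probe
for i, p in enumerate(pattern):
    print(f"step {i:2d} pulse={int(p)} response={syn.step(p):.3f}")

Running it shows the response growing during the pulse train, fading during the rest, and rebounding on the probe pulse – a cartoon of the classical conditioning the article describes.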
Physics
If you want your gaming laptop to be thin, it's going to run hot. That's just the physics of packing a powerful CPU and GPU into a 15.7mm-thick body. And since we've managed to make gaming laptops reasonably small, semistylish and powerful enough to shame even the latest game consoles, heat is really the last frontier that no one has cracked. On the small and stylish front, I can safely say the Alienware x15 R2 is one of the more attractive gaming laptops I've ever tested, and it shows just how far we've come from the giant cement blocks of yesteryear. My spouse, who has worked in games and games media for much of the past 20 years, said it was one of the only gaming laptops that wasn't aesthetically offensive to her, so trust me, that's high praise.

Like: Excellent performance in a slim package; mod design that looks good... for a gaming laptop; dedicated volume buttons.

Don't like: Can get very hot; custom software is clunky; small touchpad.

Like almost all Alienware and Dell computers, the x15 R2 (the second major revision of the current 15-inch x15 line) offers a wide array of configuration options, starting at $1,999 and going up from there. This review sample was $2,559, highlighted by a 2.3GHz Intel Core i7-12700H CPU, an Nvidia GeForce RTX 3070 Ti graphics card and a 2,560x1,440 display with a decent 240Hz refresh rate. It also includes 32GB of RAM and a 1TB SSD. You can drop the GPU to an Nvidia 3060 (although in a $2,000 laptop, why you would do that is beyond me) or boost it to a 3080 Ti. Likewise, the display can drop to FHD (1,920x1,080) at 165Hz or jump to a fast 360Hz panel, although still locked in at FHD resolution. All things considered, this is probably close to the ideal configuration.

Design and features

The matte-white outer chassis stands out as retro-futurism -- I thought of the clean lines of 2001: A Space Odyssey -- and the inset hinge both looked good and made the laptop feel very stable when propped up on a table. There's a big "15" stenciled on the back, because maybe you'd forget how big your laptop screen was? There's also the iconic backlit alien head logo, which I once found to be a tacky example of dorm-room chic, but frankly, at this point it's got some nostalgic charm. But my absolute favorite feature had nothing to do with the lid design or even performance -- it's purely UX. Along the right side of the keyboard area is a row of media control keys, including dedicated buttons for raising and lowering the volume and muting both the speakers and mic.

The media keys along the right side are a big plus. Dan Ackerman/CNET

I cannot emphasize enough just how important this kind of convenience is to my enjoyment of a gaming laptop. Pressing FN+F2 (or sometimes F5, etc.) to lower the volume in a game seems like an insane way to deal with sound. A few other laptops also have dedicated audio keys or sometimes volume wheels, but it's still rare. You don't get a separate number pad, which can show up in some 15-inch laptops. It's no great loss for my workflow, but it's something to keep in mind. The actual keyboard is fine for a shallow gaming laptop keyboard, with 1.5mm travel and decent island-style spacing between the keys. The touchpad is small and frill-free. Gamers will be using a mouse or game controller more often, but when you want to use this as a non-gaming laptop, which is probably most of the time, the touchpad is a letdown.
Performance and battery life

Performance was right in the middle of the pack when compared to similar high-end laptops with Nvidia 3070- and 3080-class GPUs. The performance boost from something like the Acer Predator Triton 500 SE, which we reviewed with an Intel Core i9 CPU and the Nvidia 3080 Ti GPU, shows what an extra $500 to $600 will get you, and you can configure the x15 R2 similarly. Based on the performance we saw, I still say the Core i7/3070 Ti combo is the overall best bang for the buck.

Battery life was decent for a gaming laptop, running for 5:12 in our video-streaming test, which admittedly isn't particularly challenging. Other 15-inch premium gaming laptops scored in the same 4- to 5-hour ballpark. Ironically, Dell's latest slim 13-inch laptop, the XPS 13 Plus, had much shorter battery life.

More gaming laptops should be this slim. Dan Ackerman/CNET

My biggest overall issue with the system was how hot it got. After running games for a while and pulling out the old temperature gun, I measured the area just above the keyboard at 130 degrees F. With the big rear vents (and their associated rear ports) and fans, you could probably keep your coffee cup next to the system to keep it warm. It's not a specific-to-Dell problem, but by squeezing a gaming laptop down to this thin a profile, heat is inevitably going to be a more noticeable issue.

My other gripe is the somewhat impenetrable Alienware Command Center software for controlling system functions, including the various lighting zones, fan speed and other system details. It's clunky, opaque and makes arranging custom lighting much more difficult than it needs to be. It feels like the Dell UX team never got near this one, and I'm not the only person who feels this way.

Despite these issues, and a general sense that gaming laptop innovation has hit a bit of a plateau in the past few years, the slim Alienware x15 R2 has earned a place on my short list of premium gaming laptops to strongly consider if you're thinking about making a multiyear investment.
Geekbench 5 (multicore; higher is better)
Acer Predator Triton 500 SE (2022): 13,734
Alienware x15 R2: 13,296
Origin PC Evo17-S: 13,170
Lenovo Legion 5i Pro: 12,862
Razer Blade 15 (2022): 9,861
Acer Nitro 5 AN515-58: 8,443

Cinebench R23 (multicore; higher is better)
Lenovo Legion 5i Pro: 18,033
Origin PC Evo17-S: 17,733
Acer Predator Triton 500 SE (2022): 17,511
Alienware x15 R2: 17,071
Acer Nitro 5 AN515-58: 13,583
Razer Blade 15 (2022): 11,224

PCMark 10 Pro (higher is better)
Lenovo Legion 5i Pro: 7,869
Acer Predator Triton 500 SE (2022): 7,762
Alienware x15 R2: 7,372
Razer Blade 15 (2022): 7,029
Origin PC Evo17-S: 7,006
Acer Nitro 5 AN515-58: 6,950

Online streaming battery drain test (minutes; higher is better)
Acer Predator Triton 500 SE (2022): 344
Origin PC Evo17-S: 338
Alienware x15 R2: 312
Razer Blade 15 (2022): 305
Acer Nitro 5 AN515-58: 277
Lenovo Legion 5i Pro: 244

3DMark Wild Life Extreme (higher is better)
Origin PC Evo17-S: 25,679
Lenovo Legion 5i Pro: 23,866
Acer Predator Triton 500 SE (2022): 20,622
Alienware x15 R2: 20,417
Razer Blade 15 (2022): 19,086
Acer Nitro 5 AN515-58: 16,510

Guardians of the Galaxy (High @ 1,920x1,080; fps, higher is better)
Alienware x15 R2: 146
Acer Predator Triton 500 SE (2022): 138
Origin PC Evo17-S: 135
Razer Blade 15 (2022) (Core i7/3070 Ti): 106
Acer Nitro 5 AN515-58 (Core i5/3060): 71

Shadow of the Tomb Raider (Highest @ 1,920x1,080; fps, higher is better)
Alienware x15 R2: 136
Origin PC Evo17-S: 136
Acer Predator Triton 500 SE (2022): 129
Razer Blade 15 (2022): 115
Lenovo Legion 5i Pro: 95
Acer Nitro 5 AN515-58: 94

System configurations
Alienware x15 R2: Microsoft Windows 11 Home; 2.3GHz Intel Core i7-12700H; 32GB DDR5 6,400MHz; 8GB Nvidia GeForce RTX 3070 Ti; 512GB SSD
Razer Blade 15 (2022): Microsoft Windows 11 Home; 2.4GHz Intel Core i7-12800H; 16GB DDR5 4,800MHz; 8GB Nvidia GeForce RTX 3070 Ti; 1TB SSD
Origin PC Evo17-S: Microsoft Windows 11 Home; 2.9GHz Intel Core i9-12900H; 32GB DDR5 4,800MHz; 16GB Nvidia GeForce RTX 3080 Ti; 1TB SSD
Acer Predator Triton 500 SE (2022): Microsoft Windows 11 Home; 2.9GHz Intel Core i9-12900H; 16GB DDR5 4,800MHz; 16GB Nvidia GeForce RTX 3080 Ti; 1TB SSD
Lenovo Legion 5i Pro: Microsoft Windows 11 Home; 2.3GHz Intel Core i7-12700H; 16GB DDR5 6,400MHz; 8GB Nvidia GeForce RTX 3070 Ti; 512GB SSD
Acer Nitro 5 AN515-58: Microsoft Windows 11 Home; 2.5GHz Intel Core i5-12500H; 16GB DDR4 3,200MHz; 6GB Nvidia GeForce RTX 3060; 512GB SSD
Physics
The cocktail of chemicals that makes up the frozen surfaces of two of Jupiter's largest moons is revealed in the most detailed images of them ever taken by a telescope on Earth. Planetary scientists from the University of Leicester's School of Physics and Astronomy have unveiled new images of Europa and Ganymede, two future destinations for exciting new missions to the Jovian system. Some of the sharpest images of Jupiter's moons ever acquired from a ground-based observatory, they reveal new insights into the processes shaping the chemical composition of these massive moons – including geological features such as the long rift-like lineae cutting across Europa's surface.

Ganymede and Europa are two of the four largest moons orbiting Jupiter, known as the Galilean moons. Whilst Europa is quite similar in size to our own Moon, Ganymede is the largest moon in the whole Solar System. The Leicester team, led by PhD student Oliver King, used the European Southern Observatory's Very Large Telescope (VLT) in Chile to observe and map the surfaces of these two worlds. The new observations recorded the amount of sunlight reflected from Europa's and Ganymede's surfaces at different infrared wavelengths, producing a reflectance spectrum. These reflectance spectra are analysed by developing a computer model that compares each observed spectrum to spectra of different substances that have been measured in laboratories.

The images and spectra of Europa, published in the Planetary Science Journal, reveal that Europa's crust is mainly composed of frozen water ice, with non-ice materials contaminating the surface. Oliver King from the University of Leicester School of Physics and Astronomy said: “We mapped the distributions of the different materials on the surface, including sulphuric acid frost which is mainly found on the side of Europa that is most heavily bombarded by the gases surrounding Jupiter.”

“The modelling found that there could be a variety of different salts present on the surface, but suggested that infrared spectroscopy alone is generally unable to identify which specific types of salt are present.”

The observations of Ganymede, published in the journal JGR: Planets, show how the surface is made up of two main types of terrain: young areas with large amounts of water ice, and ancient areas mainly consisting of a dark grey material, the composition of which is unknown. The icy areas (blue in the images) include Ganymede's polar caps and craters – where an impact event has exposed the fresh clean ice of Ganymede's crust. The team mapped how the size of the grains of ice on Ganymede varies across the surface, along with the possible distributions of a variety of different salts, some of which may originate from within Ganymede itself.

Located at high altitude in northern Chile, and with mirrors over 8 metres across, the Very Large Telescope is one of the most powerful telescope facilities in the world. Oliver King adds: “This has allowed us to carry out detailed mapping of Europa and Ganymede, observing features on their surfaces smaller than 150 km across – all at distances over 600 million kilometres from the Earth. Mapping at this fine scale was previously only possible by sending spacecraft all the way to Jupiter to observe the moons up-close.”

Professor Leigh Fletcher, who supervised the VLT study, is a member of the science teams for ESA’s Jupiter Icy Moons Explorer (JUICE) and NASA's Europa Clipper mission, which will explore Ganymede and Europa up close in the early 2030s.
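The spectral modelling described above can be caricatured as an unmixing problem: treat the observed spectrum as a blend of laboratory reference spectra and solve for the blend fractions. The Python sketch below uses invented Gaussian "spectra" and a plain least-squares fit; the published analysis fits real VLT/SPHERE data with far more sophisticated MCMC methods.

# Minimal unmixing sketch (illustrative only): recover mixing fractions of
# two fake laboratory reference spectra from a noisy synthetic observation.

import numpy as np

wavelengths = np.linspace(1.0, 2.5, 50)                        # microns (illustrative)
water_ice = np.exp(-((wavelengths - 1.5) ** 2) / 0.02)         # fake lab spectrum
sulphuric_frost = np.exp(-((wavelengths - 2.0) ** 2) / 0.05)   # fake lab spectrum
library = np.column_stack([water_ice, sulphuric_frost])

true_fractions = np.array([0.7, 0.3])
observed = library @ true_fractions + np.random.default_rng(0).normal(0, 0.01, 50)

fractions, *_ = np.linalg.lstsq(library, observed, rcond=None)
print(f"Recovered fractions: water ice {fractions[0]:.2f}, sulphuric acid frost {fractions[1]:.2f}")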
JUICE is scheduled to launch in 2023, and University of Leicester scientists play key roles in its proposed study of Jupiter's atmosphere, magnetosphere, and moons. Professor Fletcher said: “These ground-based observations whet the appetite for our future exploration of Jupiter's moons.” “Planetary missions operate under tough operating constraints and we simply can't cover all the terrain that we’d like to, so difficult decisions must be taken about which areas of the moons’ surfaces deserve the closest scrutiny. Observations at 150-km scale such as those provided by the VLT, and ultimately its enormous successor the ELT (Extremely Large Telescope), help to provide a global context for the spacecraft observations.” This work was funded by a Royal Society Enhancement Award number 180071 to Professor Leigh Fletcher in the School of Physics and Astronomy, entitled “The diversity of Jupiter’s Galilean moons: Earth-based pathfinder observations in preparation for JUICE”. Oliver King and Leigh N. Fletcher (2022), Global modelling of Ganymede's surface composition: Near-IR mapping from VLT/SPHERE, JGR: Planets, accepted. Oliver King, Leigh N. Fletcher, Nicolas Ligier (2022), Compositional mapping of Europa using MCMC modelling of Near-IR VLT/SPHERE and Galileo/NIMS observations, Planetary Science Journal 3, 72.
Physics
Space elevators are a fascinating concept, as they would bring down the cost of space travel enormously. If we really want space travel to become available to the masses in the future, we should invest in this concept. Two astronomers recently proposed a new concept building on the idea of a space elevator, called a spaceline. Enjoy this curated article on the subject:

By Matt Williams

Humanity's future may lie in space, but getting out there is a very big challenge. In short, launching payloads into space from the bottom of Earth's gravity well is quite expensive, regardless of whether or not reusable rockets are involved. And while some have suggested that building a Space Elevator would be a long-term solution to this problem, this concept is also very expensive and presents all kinds of engineering hurdles.

As an alternative, a pair of astronomy graduate students from the US and UK recommend an inspired alternative known as the Spaceline. This concept would consist of anchoring a high-tensile-strength cable to the Moon that would extend deep within Earth's gravity well. This would allow the free movement of people and materials between the Earth and Moon at a fraction of the cost.

The pair responsible for this concept are Zephyr Penoyre and Emily Sandford, who hail from the University of Cambridge's Institute of Astronomy and the Dept. of Astronomy at Columbia University, respectively. The study which describes their findings recently appeared online and will be submitted for publication in the near future.

The concept of the Space Elevator goes back many decades and is believed to have originated with Russian scientists Konstantin Tsiolkovsky and Yuri Artsutanov. While it was Tsiolkovsky who envisioned a superstructure that could connect Earth to space, Artsutanov is the one who originally suggested a suspension structure with a counterweight in orbit.

Since that time, many scientists have advocated for the creation of a Space Elevator because of the benefits it would bring to spaceflight. As noted, sending rockets to space is quite expensive, since any spacecraft looking to break free of Earth's gravity must achieve an escape velocity of 11.186 km/s (40,270 km/h; 25,020 mph). That takes a lot of fuel, which costs a lot of money, and requires rather large spacecraft.

By eliminating the need to launch payloads and crews into space, a Space Elevator would significantly reduce the cost of space exploration. Using SpaceX's Falcon 9 rocket, it currently costs $62 million to send payloads of up to 22,800 kg (50,265 lb) to LEO. That works out to about $2,700 a kg, or $1,230 a pound. But with a Space Elevator, payloads could be sent into space at a rate of a few dollars per kilogram.

This would allow us to put everything from space-based solar arrays and commercial habitats to new space stations, satellites and space telescopes in orbit, effectively commercializing (and even colonizing) LEO. At the same time, it would drastically reduce the costs of deep-space missions by removing the need to launch spacecraft into orbit.

Spacecraft destined for Mars, Venus, Mercury and the outer Solar System could be built in orbit and launch from the elevator itself. These spacecraft could also be reusable and allow for habitats around other planets and bodies, giving us the ability to extend our presence across the Solar System.
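As a back-of-envelope check on the figures quoted above, here is a short Python calculation of the Falcon 9 price per kilogram, plus the kinetic energy per kilogram implied by Earth's escape velocity. The inputs are the numbers given in the article itself.

# Sanity check of the launch-cost and escape-velocity figures quoted above.

FALCON9_PRICE_USD = 62_000_000
FALCON9_LEO_PAYLOAD_KG = 22_800
ESCAPE_VELOCITY_M_S = 11_186

cost_per_kg = FALCON9_PRICE_USD / FALCON9_LEO_PAYLOAD_KG
cost_per_lb = cost_per_kg / 2.20462
energy_per_kg_mj = 0.5 * ESCAPE_VELOCITY_M_S ** 2 / 1e6

print(f"Launch cost: ~${cost_per_kg:,.0f}/kg (~${cost_per_lb:,.0f}/lb)")
print(f"Escape kinetic energy: ~{energy_per_kg_mj:.1f} MJ per kg")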
Unfortunately, all of this planning runs into a major snag, which also arises from Earth's gravity. Simply put, throughout the 20th century, proposals for a Space Elevator all ran into the same problem: no known material was strong enough to support an orbiting structure tethered to the Earth. By the 21st century, the invention of carbon nanotubes had revitalized interest in the concept. But as Penoyre told Universe Today via email, this has not resolved the issue:

“The classical space elevator is, sadly, currently a physical impossibility. There are fundamental limits on the material strength and though carbon nanotubes (and other even more exotic materials) could be sufficiently strong, research on their mass-production and use is in its infancy. There are other issues… such as how we could deploy it safely and cheaply, its stability, and the fear of collision with orbiting material (as it passes through the busiest and most polluted region of space, between low Earth orbit and geostationary orbit).”

To be fair, Penoyre believes that these challenges will be solved with time, mainly because the advantages of a Space Elevator are just that great. However, while the necessary breakthroughs in engineering and materials sciences are being waited on, there is much we can still do to reduce the costs of space exploration and expand our presence beyond Earth.

This is where the concept of a Spaceline comes into play. The concept is similar in many ways to a Lunar Elevator, which would be built on the surface of the Moon and extend into Earth's gravity well. In the past, this idea has been proposed as a way of circumventing the problems of creating a Space Elevator on Earth. Not only would the lower escape velocity make it easier to lift cargo into space, but it would also be able to use Earth's gravity well to stabilize itself.

The Spaceline builds on these benefits by introducing a concept that can be built using materials and techniques that are already available and proven. In short, extending a cable to geostationary orbit (GSO) is theoretically possible today and would allow payloads to be moved between GSO and the Moon.

Compared to a Space Elevator, the Spaceline offers many of the same benefits while resolving the biggest engineering challenges. Using a simple analytical approach, Penoyre and Sandford were able to show that the fundamental physical limits could be met. As Penoyre outlined them:

“[T]he necessary strength of the material is much lower than an Earth-based elevator – and thus it could be built from fibers that are already mass-produced… and relatively affordable. The spaceline also avoids the region of space where collisions are most likely (near Earth), can be deployed relatively cheaply and easily (by spooling the cable simultaneously towards Earth and the Moon from the Lagrange point) and is more stable.”

Naturally, this proposal does not resolve all of the engineering issues presented by such a megastructure. But it is the hope of Penoyre and Sandford that their study will help inspire further research and address these issues. They include (but are not limited to) the stability of the line itself, collision rates, and the optimal materials for building such a structure.

Initially, Penoyre and Sandford developed the concept for a Spaceline independently but quickly became aware of past proposals for a Lunar Elevator – which emerged to address the same issues. However, there are some key differences between the two ideas that merit mentioning.
Not the least of these is their intended function. “[M]ost lunar space elevator concepts seem to imagine their major function to be moving freight to and from the lunar surface in large volume,” said Penoyre. “Many envision materials beyond current capacity and serve as later-stage tools for space exploration. One crucial point [in our understanding]… is the use of a counterweight (or anchor weight) – a large mass at the free end of the cable – increasing the stabilizing force.”

“As gravitational force drops away with distance, a counterweight must always be more massive, to provide the same force, than the extra weight of simply extending the cable deeper into the Earth's gravitational well. Thus if your cable is built primarily for use in relation to the Moon, a counterweight is not an unreasonable inclusion.”

In contrast, the intended purpose of a Spaceline is not strictly about facilitating trips to and from the lunar surface. What's more, Penoyre and Sandford don't envision that there could only be one Spaceline extending from the lunar surface. Once the first is constructed, they indicate, the cost of subsequent spacelines will diminish dramatically and greater payloads could be shipped between geostationary orbit and the Moon.

But of course, a lot needs to happen before even a single Spaceline would be feasible. As with all plans that involve extending humanity's presence beyond Earth, the biggest hurdle is infrastructure. Basically, a lot of payloads need to be sent into orbit and to cislunar space if we hope to build the very structures that will make getting to and from these places easier, cheaper and even profitable.

“Gravity wells are costly – they require resources which are as good as wasted to climb in and out of,” Penoyre reminds us. “The most feasible and achievable next step for human space exploration is inhabiting parts of space outside of the strong gravity of any other body, and moving more freely between these regions. At the very least [the Spaceline] is a stepping stone to being able to move and inhabit the terrestrial bodies in the Solar System.”

While a Spaceline (like a Space Elevator or Lunar Elevator) is still a largely theoretical concept, the benefits that such a structure would entail are truly awesome when you think about it. According to Penoyre, one of the most exciting ideas is how a Spaceline between the Moon and Earth would allow for habitats in space – particularly in places like the L1 Lagrange point. As he put it:

“This point is not stable on its own; small motions towards or away from the Earth will cause any mass to accelerate away from that point. But when attached to a tethered line, that radial motion is arrested (and L1 is stable to tangential motion). The extra ability to ship material to that point, comparatively cheaply and easily, further reduces the cost and increases the habitability of this region.”

One or more outposts in this region would be able to conduct unprecedented levels of scientific research, with possibilities ranging from astronomy and gravity-wave detection to particle physics and biological research in microgravity. Essentially, all the research that currently takes place aboard the ISS could be done at Lagrange points, and more cheaply and effectively to boot.

Getting into the truly speculative, one or more Spacelines could also allow for space habitats to be built at the Lagrange points.
Consider a whole bunch of O’Neill Cylinders orbiting from L1 to L5, providing a home for millions of people and rotating to provide artificial gravity, in conditions that simulate life on Earth (including Earth-normal gravity).

Between plans for new and exciting missions to the Moon, Mars and beyond (and proposals for infrastructure that would allow us to stay there), the next few decades promise to be an exciting time!

Source: Universe Today – Further reading: arXiv
Physics
CNN  —  Director Christopher Nolan created the look of a nuclear explosion for “Oppenheimer” without using CGI. Nolan explained in a new interview with Total Film how he recreated the devastation of the first atomic bomb. “I think recreating the Trinity test without the use of computer graphics was a huge challenge to take on,” he told the outlet. “Andrew Jackson — my visual effects supervisor, I got him on board early on — was looking at how we could do a lot of the visual elements of the film practically, from representing quantum dynamics and quantum physics to the Trinity test itself to recreating, with my team, Los Alamos up on a mesa in New Mexico in extraordinary weather, a lot of which was needed for the film, in terms of the very harsh conditions out there — there were huge practical challenges.” Nolan also used a rotating hallway to film the big fight scene in “Inception” instead of using CGI. He calls “Oppenheimer” a challenge but had help from an “extraordinary” crew. “It’s one of the most challenging projects I’ve ever taken on in terms of the scale of it, and in terms of encountering the breadth of Oppenheimer’s story,” he said. “There were big, logistical challenges, big practical challenges. But I had an extraordinary crew, and they really stepped up. It will be a while before we’re finished. But certainly, as I watch the results come in, and as I’m putting the film together, I’m thrilled with what my team has been able to achieve.” The film stars Cillian Murphy as J. Robert Oppenheimer, the physicist called the “father of the atomic bomb.”
Physics
By Margaret Wertheim – Vice-Chancellor's Fellow in Science Communication, University of Melbourne

Slightly more than one hundred years ago this month, an obscure German physicist named Albert Einstein presented to the Prussian Academy of Science his General Theory of Relativity. Nothing prior had prepared scientists for such a radical re-envisioning of the foundations of reality.

Encoded in a set of neat compact equations was the idea that our universe is constructed from a sort of magical mesh, now known as “spacetime”. According to the theory, the structure of this mesh would be revealed in the bending of light around distant stars.

To everyone at the time, this seemed implausible, for physicists had long known that light travels in straight lines. Yet in 1919 observations of a solar eclipse revealed that on a cosmic scale light does bend, and overnight Einstein became a superstar.

Einstein is said to have reacted nonchalantly to the news that his theory had been verified. When asked how he’d have reacted if it hadn’t been, he replied: “I would have felt sorry for the dear Lord. The theory is correct.”

What made him so secure in this judgement was the extreme elegance of his equations: how could something so beautiful not be right?

The quantum theorist Paul Dirac would later sum up this attitude to physics when he borrowed from poet John Keats, declaring that, vis-à-vis our mathematical descriptions of nature, “beauty is truth, and truth beauty”.

Art of science

A quest for beauty has been a part of the tradition of physics throughout its history. And in this sense, general relativity is the culmination of a specific set of aesthetic concerns. Symmetry, harmony, a sense of unity and wholeness: these are some of the ideals general relativity formalises. Where quantum theory is a jumpy jazzy mash-up, general relativity is a stately waltz.

As we celebrate its centenary, we can applaud the theory not only as a visionary piece of science but also as an artistic triumph.

What do we mean by the word “art”? Lots of answers have been proposed to this question and many more will be given. A provocative response comes from the poet-painter Merrily Harpur, who has noted that “the duty of artists everywhere is to enchant the conceptual landscape”. Rather than identifying art with any material methods or practices, Harpur allies it with a sociological outcome. Artists, she says, contribute something bewitching to our mental experience.

It may not be the duty of scientists to enchant our conceptual landscape, yet that is one of the goals science can achieve; and no scientific idea has been more enrapturing than Einstein’s. Though he advised there’d never be more than 12 people who’d understand his theory, as with many conceptual artworks, you don’t have to understand all of relativity to be moved by it.

In essence the theory gives us a new understanding of gravity, one that is preternaturally strange. According to general relativity, planets and stars sit within, or withon, a kind of cosmic fabric – spacetime – which is often illustrated by an analogy to a trampoline.

Imagine a bowling ball sitting on a trampoline; it makes a depression on the surface. Relativity says this is what a planet or star does to the web of spacetime.
Only you have to think of the surface as having four dimensions rather than two.

Now, applying the concept of spacetime to the whole cosmos, and taking into account the gravitational effect of all the stars and galaxies within it, physicists can use Einstein’s equations to determine the structure of the universe itself. It gives us a blueprint of our cosmic architecture.

Synthesis

Einstein began his contemplations with what he called gedanken (or thought) experiments; “what if?” scenarios that opened out his thinking in wildly new directions. He praised the value of such intellective play in his famous comment that “imagination is more important than knowledge”.

The quote continues with an adage many artists might endorse: “Knowledge is finite, imagination encircles the world.”

But imagination alone wouldn’t have produced a set of equations whose accuracy has now been verified to many orders of magnitude, and which today keeps GPS satellites accurate. Thus Einstein also drew upon another wellspring of creative power: mathematics.

As it happened, mathematicians had been developing formidable techniques for describing non-Euclidean surfaces, and Einstein realised he could apply these tools to physical space. Using Riemannian geometry, he developed a description of the world in which spacetime becomes a dynamic membrane, bending, curving and flexing like a vast organism.

Where the Newtonian cosmos was a static featureless void, the Einsteinian universe is a landscape, constantly in flux, riven by titanic forces and populated by monsters. Among them: pulsars shooting out giant jets of X-rays, and light-eating black holes, where inside the maw of an “event horizon” the fabric of spacetime is ripped apart.

One mark of an important artist is the degree to which he or she stimulates other creative thinkers. General relativity has been woven into the DNA of science fiction, giving us the warp drives of Star Trek, the wormhole in Carl Sagan’s Contact, and countless other narrative marvels. Novels, plays, and a Philip Glass symphony have riffed on its themes.

At a time when there is increasing desire to bridge the worlds of art and science, general relativity reminds us there is artistry in science. Creative leaps here are driven both by playful speculation and by the ludic powers of logic. As the 19th-century mathematician John Playfair remarked in response to the bizarreries of non-Euclidean geometry, “we become aware how much further reason may sometimes go than imagination may dare to follow”.

In general relativity, reason and imagination combine to synthesise a whole that neither alone could achieve.

Source: The Conversation
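For reference, the “neat compact equations” mentioned above condense, in modern notation, to a single tensor equation – the Einstein field equations:

\[ G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} \]

The left-hand side describes the curvature of spacetime (with \(\Lambda\) the cosmological constant); the right-hand side describes the matter and energy within it – geometry on one side, “stuff” on the other.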
Physics
CAPE CANAVERAL, Fla. (AP) — A NASA spacecraft rammed an asteroid at blistering speed Monday in an unprecedented dress rehearsal for the day a killer rock menaces Earth.

The galactic grand slam was set to occur at a harmless asteroid 7 million miles (9.6 million kilometers) away, with the spacecraft named Dart plowing into the rock at 14,000 mph (22,500 kph). Scientists expected the impact to carve out a crater, hurl streams of rocks and dirt into space and, most importantly, alter the asteroid’s orbit.

Telescopes around the world and in space were poised to capture the spectacle. Though the impact should be immediately obvious — with Dart’s radio signal abruptly ceasing — it will take days or even weeks to determine how much the asteroid’s path was changed.

The $325 million mission is the first attempt to shift the position of an asteroid or any other natural object in space.

“No, this is not a movie plot,” NASA Administrator Bill Nelson tweeted earlier in the day. “We’ve all seen it on movies like ‘Armageddon,’ but the real-life stakes are high,” he said in a prerecorded video.

Monday’s target: a 525-foot (160-meter) asteroid named Dimorphos. It’s actually a moonlet of Didymos, Greek for twin, a fast-spinning asteroid five times bigger that flung off the material that formed the junior partner.

The pair have been orbiting the sun for eons without threatening Earth, making them ideal save-the-world test candidates.

Launched last November, the vending-machine-size Dart — short for Double Asteroid Redirection Test — navigated to its target using new technology developed by Johns Hopkins University’s Applied Physics Laboratory, the spacecraft builder and mission manager.

A mini satellite followed a few minutes behind to take photos of the impact. The Italian Cubesat was released from Dart two weeks ago.

Scientists insisted Dart would not shatter Dimorphos. The spacecraft packed a scant 1,260 pounds (570 kilograms), compared with the asteroid’s 11 billion pounds (5 billion kilograms). But that should be plenty to shrink its 11-hour, 55-minute orbit around Didymos.

The impact should pare 10 minutes off that, but telescopes will need anywhere from a few days to nearly a month to verify the new orbit. The anticipated orbital shift of 1% might not sound like much, scientists noted. But they stressed it would amount to a significant change over years.

Planetary defense experts prefer nudging a threatening asteroid or comet out of the way, given enough lead time, rather than blowing it up and creating multiple pieces that could rain down on Earth. Multiple impactors might be needed for big space rocks, or a combination of impactors and so-called gravity tractors — not-yet-invented devices that would use their own gravity to pull an asteroid into a safer orbit.

“The dinosaurs didn’t have a space program to help them know what was coming, but we do,” NASA’s senior climate adviser Katherine Calvin said, referring to the mass extinction 66 million years ago believed to have been caused by a major asteroid impact, volcanic eruptions or both.

The non-profit B612 Foundation, dedicated to protecting Earth from asteroid strikes, has been pushing for impact tests like Dart since its founding by astronauts and physicists 20 years ago.
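A rough momentum-balance estimate, using the figures quoted in this story, gives a feel for the size of the nudge. The Python sketch below ignores the extra push from ejecta thrown off the crater (the so-called beta factor), which in practice amplifies the effect, so treat it as a lower bound.

# Lower-bound estimate of DART's velocity change on Dimorphos, from the
# spacecraft mass, impact speed and asteroid mass quoted in the article.

DART_MASS_KG = 570
DART_SPEED_M_S = 22_500 * 1000 / 3600   # 22,500 km/h converted to m/s
DIMORPHOS_MASS_KG = 5e9

delta_v = DART_MASS_KG * DART_SPEED_M_S / DIMORPHOS_MASS_KG
print(f"Impact speed: {DART_SPEED_M_S:.0f} m/s")
print(f"Velocity change imparted to Dimorphos: ~{delta_v * 1000:.2f} mm/s")

Less than a millimetre per second sounds tiny, but applied to a 12-hour orbit it accumulates into the minutes-scale period change the mission team was looking for.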
Monday’s dramatic action aside, the world must do a better job of identifying the countless space rocks lurking out there, warned the foundation’s executive director, Ed Lu, a former astronaut.

Significantly less than half of the estimated 25,000 near-Earth objects in the deadly 460-foot (140-meter) range have been discovered, according to NASA. And fewer than 1% of the millions of smaller asteroids, capable of widespread injuries, are known.

The Vera Rubin Observatory, nearing completion in Chile by the National Science Foundation and U.S. Energy Department, promises to revolutionize the field of asteroid discovery, Lu noted.

Finding and tracking asteroids: “That’s still the name of the game here. That’s the thing that has to happen in order to protect the Earth,” he said.

___

The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education. The AP is solely responsible for all content.
Physics
Waste disposal: A process based on alternating magnetic fields could help process the mountains of medical waste produced during the coronavirus pandemic. (Courtesy: iStock/Snezhana Kudryavtseva)

Alternating magnetic fields can be used to rapidly convert medical waste, such as plastic syringes, into hydrogen-rich gases and high-quality graphite, scientists in China have found. This catalytic technique is more environmentally friendly and less energy intensive than other waste management strategies, the researchers claim. It might also help us dispose of other types of medical waste such as masks and protective clothing.

The coronavirus pandemic produced mountains of medical waste. According to the World Health Organization (WHO), between March 2020 and November 2021, the UN shipped 87,000 tonnes of personal protective equipment (PPE), like masks and gowns, to countries around the world. But this is only the tip of the iceberg, as it does not cover items purchased outside the UN initiative by governments and members of the public. More than 140 million test kits have also been shipped, and the more than eight billion vaccine doses administered globally have produced 144,000 tonnes of waste products, such as syringes, needles and sharps bins.

In the rush to secure PPE and administer vaccines, less attention was paid to waste disposal. But that plastic and biohazardous medical waste is threatening human and environmental health, according to a recent WHO report. Now researchers in China claim to have developed a new catalytic technique that rapidly decomposes disposable syringe plastic, which they say could help.

Incineration is widely used to dispose of plastic waste. While it is quick and simple, it can produce large quantities of carbon dioxide and other toxic gases, and the only useful by-product is heat. Plastic in medical waste is rich in hydrogen, and recently researchers have developed a two-stage technique that uses high-temperature pyrolysis followed by catalytic cracking to turn it into hydrogen-rich gases such as hydrogen, ethanol and methane. But, according to Xifeng Zhu from the University of Science and Technology of China and his colleagues, this process is very energy intensive.

To address the challenge of efficiently converting medical waste into hydrogen-rich gases, Zhu turned to magnetic hyperthermia. Magnetic hyperthermia generates localized intense heat by subjecting magnetic nanoparticles to a high-frequency alternating magnetic field. So far, this technique has been primarily used within medicine to heat and destroy cancer cells.

The researchers created a catalyst that would respond to a magnetic field by joining ten bent disposable syringe needles together in a chain-like loop. They then added crushed disposable syringes, which were mainly made from polypropylene, and a heavy fraction of bio-oil as an initiator. When the team subjected this mix to a high-frequency alternating magnetic field, they found that the chain-shaped needles heated up. This heated the bio-oil, which then heated the rest of the system. As the temperature increased, the bio-oil and the plastic syringes decomposed, generating hydrogen, methane and other gases. Iron carbide also formed, along with carbon that strongly absorbs high-radio-frequency electromagnetic waves, which was deposited on the surface of the chain-shaped needle. This graphite caused the whole system to heat even further, reaching temperatures as high as 1200 °C.
As more and more crushed plastic syringes were added to the reaction, without further bio-oil, the formation of graphite and the yield of hydrogen increased. The researchers claim the process was able to turn more than 75% of the hydrogen in the syringes into hydrogen gas and around 68% of their carbon into graphite. After ten cycles of adding additional syringes, electron microscopy confirmed the growth of a large amount of flaky carbon. It showed that there were seven layers of graphite sheets deposited on the chain-shaped needles, with a lattice structure that contained few defects. According to the researchers, this technique simplifies the treatment of medical waste by converting it into hydrogen and high-value graphite in a one-step process. The use of a high-frequency alternating magnetic field also minimizes the amount of energy used compared with other catalytic processes, as the whole reactor does not need to be heated simultaneously. The research is described in Cell Reports Physical Science.
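A back-of-envelope mass balance, sketched in Python below, shows what those conversion efficiencies imply. Polypropylene's repeat unit is C3H6, so it is roughly 14% hydrogen and 86% carbon by mass; the 75% and 68% figures come from the study, while the 1 kg batch size is purely illustrative.

# Rough mass balance for the quoted conversion efficiencies (illustrative).

SYRINGE_PLASTIC_KG = 1.0
H_FRACTION = 6 * 1.008 / (3 * 12.011 + 6 * 1.008)   # ~0.144 for polypropylene (C3H6)n
C_FRACTION = 1 - H_FRACTION                          # ~0.856

h2_out = SYRINGE_PLASTIC_KG * H_FRACTION * 0.75      # 75% of the hydrogen recovered as H2
graphite_out = SYRINGE_PLASTIC_KG * C_FRACTION * 0.68  # 68% of the carbon recovered as graphite

print(f"Hydrogen gas recovered: ~{h2_out * 1000:.0f} g per kg of syringes")
print(f"Graphite recovered:     ~{graphite_out * 1000:.0f} g per kg of syringes")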
Physics
Three scientists jointly won this year’s Nobel Prize in physics Tuesday for proving that tiny particles could retain a connection with each other even when separated, a phenomenon once doubted but now being explored for potential real-world applications such as encrypting information.

Frenchman Alain Aspect, American John F. Clauser and Austrian Anton Zeilinger were cited by the Royal Swedish Academy of Sciences for experiments proving the “totally crazy” field of quantum entanglements to be all too real. They demonstrated that unseen particles, such as photons, can be linked, or “entangled,” with each other even when they are separated by large distances.

It all goes back to a feature of the universe that even baffled Albert Einstein and connects matter and light in a tangled, chaotic way.

Bits of information or matter that used to be next to each other even though they are now separated have a connection or relationship — something that can conceivably help encrypt information or even teleport. A Chinese satellite now demonstrates this, and potentially lightning-fast quantum computers, still at the small and not quite useful stage, also rely on this entanglement. Others are even hoping to use it in superconducting material.

“It’s so weird,” Aspect said of entanglement in a telephone call with the Nobel committee. “I am accepting in my mental images something which is totally crazy.”

Yet the trio’s experiments showed it happen in real life.

“Why this happens I haven’t the foggiest,” Clauser told The Associated Press during a Zoom interview in which he got the official call from the Swedish Academy several hours after friends and media informed him of his award. “I have no understanding of how it works but entanglement appears to be very real.”

His fellow winners also said they can’t explain the how and why behind this effect. But each did ever more intricate experiments that prove it just is.

Clauser, 79, was awarded his prize for a 1972 experiment, cobbled together with scavenged equipment, that helped settle a famous debate about quantum mechanics between Einstein and famed physicist Niels Bohr. Einstein described “a spooky action at a distance” that he thought would eventually be disproved.

“I was betting on Einstein,” Clauser said. “But unfortunately I was wrong and Einstein was wrong and Bohr was right.”

Aspect said Einstein may have been technically wrong, but deserves huge credit for raising the right question that led to experiments proving quantum entanglement.

“Most people would assume that nature is made out of stuff distributed throughout space and time,” said Clauser, who while a high school student in the 1950s built a video game on a vacuum tube computer. “And that appears not to be the case.”

What the work shows is “parts of the universe — even those at great distances from each other — are connected,” said Johns Hopkins physicist N. Peter Armitage. “This is something so unintuitive and something so at odds with how we feel the world ‘should’ be.”

This hard-to-understand field started with thought experiments. But what in one sense is philosophical musings about the universe also holds hope for more secure and faster computers, all based on entangled photons and matter that still interact no matter how distant.

“With my first experiments I was sometimes asked by the press what they were good for,” Zeilinger, 77, told reporters in Vienna. “And I said with pride: ‘It’s good for nothing.
I’m doing this purely out of curiosity.’”

In quantum entanglement, establishing common information between two photons not near each other “allows us to do things like secret communication, in ways which weren’t possible to do before,” said David Haviland, chair of the Nobel Committee for Physics.

Quantum information “has broad and potential implications in areas such as secure information transfer, quantum computing and sensing technology,” said Eva Olsson, a member of the Nobel committee. “Its predictions have opened doors to another world, and it has also shaken the very foundations of how we interpret measurements.”

The kind of secure communication used by China’s Micius satellite — as well as by some banks — is a “success story of quantum entanglement,” said Harun Siljak of Trinity College Dublin. By using one entangled particle to create an encryption key, it ensures that only the person with the other entangled particle can decode the message, and “the secret shared between these two sides is a proper secret,” Siljak said.

While quantum entanglement is “incredibly cool,” security technologist Bruce Schneier, who teaches at Harvard, said it is fortifying an already secure part of information technology where other areas, including human factors and software, are more of a problem. He likened it to installing a side door with 25 locks on an otherwise insecure house.

At a news conference, Aspect said real-world applications like the satellite were “fantastic.”

“I think we have progress toward quantum computing. I would not say that we are close,” the 75-year-old physicist said. “I don’t know if I will see it in my life. But I am an old man.”

Speaking by phone to a news conference after the announcement, the University of Vienna-based Zeilinger said he was “still kind of shocked” at hearing he had received the award.

Clauser, Aspect and Zeilinger have figured in Nobel speculation for more than a decade. In 2010 they won the Wolf Prize in Israel, seen as a possible precursor to the Nobel.

The Nobel committee said Clauser developed quantum theories first put forward in the 1960s into a practical experiment. Aspect was able to close a loophole in those theories, while Zeilinger demonstrated a phenomenon called quantum teleportation that effectively allows information to be transmitted over distances.

“Using entanglement you can transfer all the information which is carried by an object over to some other place where the object is, so to speak, reconstituted,” Zeilinger said. He added that this only works for tiny particles.

“It is not like in the Star Trek films (where one is) transporting something, certainly not the person, over some distance,” he said.

A week of Nobel Prize announcements kicked off Monday with Swedish scientist Svante Paabo receiving the award in medicine for unlocking secrets of Neanderthal DNA that provided key insights into our immune system.

Chemistry is on Wednesday and literature on Thursday. The Nobel Peace Prize will be announced Friday and the economics award on Oct. 10.

The prizes carry a cash award of 10 million Swedish kronor (nearly $900,000) and will be handed out on Dec. 10. The money comes from a bequest left by the prize’s creator, Swedish dynamite inventor Alfred Nobel, who died in 1895.

___

Jordans reported from Berlin, Borenstein from Kensington, Maryland, and Burakoff from New York. David Keyton in Stockholm and Masha Macpherson in Palaiseau, France, contributed.

___

Follow all AP stories about the Nobel Prizes at https://apnews.com/hub/nobel-prizes
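The “entangled” two-particle states at the heart of this work can be written down and sampled with a few lines of NumPy. The sketch below prepares the simplest Bell state and shows that the two simulated qubits always give the same measurement outcome, never opposite ones – the perfect correlation (though not the full Bell-test machinery) that these experiments probe.

# Prepare the Bell state (|00> + |11>)/sqrt(2) with a Hadamard and a CNOT,
# then sample measurement outcomes: only "00" and "11" ever occur.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate on one qubit
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4); state[0] = 1.0            # start in |00>
state = CNOT @ np.kron(H, np.eye(2)) @ state   # -> (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2
rng = np.random.default_rng(1)
samples = rng.choice(["00", "01", "10", "11"], size=1000, p=probs)
for outcome in ["00", "01", "10", "11"]:
    print(outcome, (samples == outcome).mean())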
Physics
The concept of living in a simulation has been a widespread topic of speculation and discussion in recent years. The hypothesis suggests that we are part of a virtual world comparable to a computer-generated simulation. It presents us with the question of whether our reality is a construct created by a higher entity or advanced technology. The idea sounds a bit like science fiction, yet it continues to captivate the minds of scientists, philosophers, and the general public alike. But how could we determine whether we are actually living in a simulation? In this article, physicist Dr. Melvin M. Vopson shares his views and understanding of how we should go about testing this hypothesis. By Dr. Melvin M. Vopson - Senior Lecturer in Physics at the University of Portsmouth Physicists have long struggled to explain why the universe started out with conditions suitable for life to evolve. Why do the physical laws and constants take the very specific values that allow stars, planets and ultimately life to develop? The expansive force of the universe, dark energy, for example, is much weaker than theory suggests it should be – allowing matter to clump together rather than being ripped apart. A common answer is that we live in an infinite multiverse of universes, so we shouldn’t be surprised that at least one universe has turned out as ours. But another is that our universe is a computer simulation, with someone (perhaps an advanced alien species) fine-tuning the conditions. The latter option is supported by a branch of science called information physics, which suggests that space-time and matter are not fundamental phenomena. Instead, physical reality is fundamentally made up of bits of information, from which our experience of space-time emerges. By comparison, temperature “emerges” from the collective movement of atoms. No single atom fundamentally has temperature. This leads to the extraordinary possibility that our entire universe might in fact be a computer simulation. The idea is not that new. In 1989, the legendary physicist John Archibald Wheeler suggested that the universe is fundamentally mathematical and can be seen as emerging from information. He coined the famous aphorism “it from bit”. Physicist Seth Lloyd from the Massachusetts Institute of Technology in the US took the simulation hypothesis to the next level by suggesting that the entire universe could be a giant quantum computer. And in 2016, business magnate Elon Musk concluded “We’re most likely in a simulation”. Empirical evidence There is some evidence suggesting that our physical reality could be a simulated virtual reality rather than an objective world that exists independently of the observer. Any virtual reality world will be based on information processing. That means everything is ultimately digitised or pixelated down to a minimum size that cannot be subdivided further: bits. This appears to mimic our reality according to the theory of quantum mechanics, which rules the world of atoms and particles. It states there is a smallest, discrete unit of energy, length and time. Similarly, elementary particles, which make up all the visible matter in the universe, are the smallest units of matter. To put it simply, our world is pixelated. The laws of physics that govern everything in the universe also resemble computer code lines that a simulation would follow in the execution of the program.
Moreover, mathematical equations, numbers and geometric patterns are present everywhere – the world appears to be entirely mathematical. Another curiosity in physics supporting the simulation hypothesis is the maximum speed limit in our universe, which is the speed of light. In a virtual reality, this limit would correspond to the speed limit of the processor, or the processing power limit. We know that an overloaded processor slows down computer processing in a simulation. Similarly, Albert Einstein’s theory of general relativity shows that time slows in the vicinity of a black hole. Perhaps the most supportive evidence of the simulation hypothesis comes from quantum mechanics. This suggests nature isn’t “real”: particles in determined states, such as specific locations, don’t seem to exist unless you actually observe or measure them. Instead, they are in a mix of different states simultaneously. Similarly, virtual reality needs an observer or programmer for things to happen. Quantum “entanglement” also allows two particles to be spookily connected so that if you manipulate one, you automatically and immediately also manipulate the other, no matter how far apart they are – with the effect being seemingly faster than the speed of light, which should be impossible. This could, however, also be explained by the fact that within a virtual reality code, all “locations” (points) should be roughly equally far from a central processor. So while we may think two particles are millions of light years apart, they wouldn’t be if they were created in a simulation. Possible experiments Assuming that the universe is indeed a simulation, then what sort of experiments could we deploy from within the simulation to prove this? It is reasonable to assume that a simulated universe would contain a lot of information bits everywhere around us. These information bits represent the code itself. Hence, detecting these information bits will prove the simulation hypothesis. The recently proposed mass-energy-information (M/E/I) equivalence principle – suggesting mass can be expressed as energy or information, or vice versa – states that information bits must have a small mass. This gives us something to search for. I have postulated that information is in fact a fifth form of matter in the universe. I’ve even calculated the expected information content per elementary particle. These studies led to the publication, in 2022, of an experimental protocol to test these predictions. The experiment involves erasing the information contained inside elementary particles by letting them and their antiparticles (all particles have “anti” versions of themselves which are identical but have opposite charge) annihilate in a flash of energy – emitting “photons”, or light particles. I have predicted the exact range of expected frequencies of the resulting photons based on information physics. The experiment is highly achievable with our existing tools, and we have launched a crowdfunding site to achieve it. There are other approaches too. The late physicist John Barrow argued that a simulation would build up minor computational errors which the programmer would need to fix in order to keep it going. He suggested we might experience such fixing as contradictory experimental results appearing suddenly, such as the constants of nature changing. So monitoring the values of these constants is another option. The nature of our reality is one of the greatest mysteries out there.
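As a rough illustration of the numbers involved (the room-temperature assumption below is my choice for an order-of-magnitude estimate, not a detail from the article), Landauer's principle and E = mc^2 give the conjectured mass of a single bit and the energy scale of the photons an erasure experiment of this kind would look for:

```python
import math

# Sketch of the mass-energy-information idea: Landauer's principle puts
# the minimum energy to erase one bit at k_B * T * ln(2); the M/E/I
# conjecture converts that to a mass via E = m c^2. T = 300 K is an
# assumed room temperature, used only to set the scale.
kB, c, h, T = 1.380649e-23, 2.99792458e8, 6.62607015e-34, 300.0

E_bit = kB * T * math.log(2)   # J, minimum erasure energy of one bit
m_bit = E_bit / c**2           # kg, conjectured mass of one bit
f     = E_bit / h              # Hz, photon frequency carrying E_bit

print(f"E_bit ~ {E_bit:.2e} J, m_bit ~ {m_bit:.2e} kg")
print(f"equivalent photon: {f:.2e} Hz ({c/f*1e6:.0f} micrometres, far infrared)")
```

At room temperature this gives roughly 3e-38 kg per bit and a photon scale in the far infrared, which conveys why the proposed annihilation experiment looks for low-energy photons alongside the usual gamma rays.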
The more we take the simulation hypothesis seriously, the greater the chances we may one day prove or disprove it. Sources and further reading on potential topics of interest: Can consciousness be explained by quantum physics? My research takes us a step closer to finding out (Universal-Sci)
Physics
Image: Hexbug Nanos used in an online lab course to teach undergraduate research skills in physics. (Credit: Kristopher Vargas, Pomona College) WASHINGTON, Oct. 21, 2022 – Although the sudden switch to remote and hybrid learning was seen as an enormous challenge during the COVID-19 pandemic, academic and commercial interest in creative online lab class development has since skyrocketed. In the American Journal of Physics, by AIP Publishing, researchers from Pomona College in California developed an online undergraduate physics lab course using small robotic bugs called Hexbug Nanos (TM) to engage students in scientific research from their homes. Hexbug Nanos look like bright-colored beetles with 12 flexible legs that move rapidly in a semi-random manner. This makes collections of Hexbugs ideal models for exploring particle behavior that can be difficult for students to visualize. For the lab course, students used the Hexbugs that were mailed to them, along with a smartphone and common household items. "We found that the pandemic-inspired reliance on simple, home-built experiments, while de-emphasizing the use of sophisticated equipment, enabled students to more effectively achieve laboratory learning objectives such as designing, implementing, and troubleshooting an experimental apparatus," co-author Janice Hudgings said. Students first completed a short experiment to investigate the ideal gas law, which describes how pressure, volume, and temperature of a gas are related. They used a rectangular cardboard box divided by a movable wall, made from cardboard and bamboo skewers, that slid along the length of the box. Varying numbers of Hexbugs were placed on either side of the moving wall to model two gases of different pressures. Students used their smartphones to record the "gas molecules" colliding into the moving wall. Video tracking software was used to obtain the position of the wall as a function of time while it moved until the pressure in the two chambers equalized. Students then proposed semesterlong research projects of their choice, designing experiments using Hexbugs to investigate concepts in statistical mechanics and electrical conduction. One project focused on the Drude model, which uses classical physics to describe the movement of electrons in a metal. The at-home setup included a long rectangular cardboard box, with 2-inch cardboard rings at fixed locations used to model defects in the metal. Gravity is applied by raising one end of the box relative to the other end. The Hexbug "electrons" are released near the top of the box, randomly scattering from the defects as they are gradually "conducted" down the box due to the gravitational field. "The Hexbug experiment provides a clearly visible, macroscale model of carrier transport in a wire that is consistent with the Drude model," Hudgings said. Similar Hexbug experiments could also be useful as online or in-person lab or lecture demonstrations in statistical mechanics, physical chemistry, biophysics, or introductory electromagnetism. ### The article "Using Hexbugs (TM) to model gas pressure and electrical conduction: A pandemic-inspired distance lab" is authored by Genevieve DiBari, Liliana West Valle, Refilwe Tanah Bua, Lucas Cunningham, Eleanor Hort, Taylor Venenciano, and Janice Hudgings. The article will appear in the American Journal of Physics on Oct. 21, 2022 (DOI: 10.1119/5.0087142). After that date, it can be accessed at https://aip.scitation.org/doi/10.1119/5.0087142.
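A minimal sketch of the pressure-balance reasoning behind the two-chamber experiment (the bug counts and box length below are hypothetical, chosen only for illustration): treating each chamber as an ideal gas of equally "hot" bugs, pressure scales as N/V, so the movable wall should come to rest where the bug densities on the two sides match.

```python
# Ideal-gas analogy for the Hexbug experiment: in one dimension P ~ N/L,
# so equilibrium (P1 = P2) means N1/L1 = N2/L2, i.e. the wall divides
# the box in proportion to the bug counts. Numbers are made up.

def equilibrium_wall_position(n_left, n_right, box_length):
    """Resting position of the movable wall, measured from the left end."""
    return box_length * n_left / (n_left + n_right)

L = 60.0  # cm, hypothetical box length
for n1, n2 in [(4, 4), (6, 2), (5, 3)]:
    x = equilibrium_wall_position(n1, n2, L)
    print(f"{n1} vs {n2} bugs -> wall settles near {x:.1f} cm")
```

Comparing the tracked wall position from the smartphone video against this prediction is exactly the kind of model-versus-data exercise the course was built around.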
ABOUT THE JOURNAL The American Journal of Physics is devoted to the instructional and cultural aspects of physics. The journal informs physics education globally with member subscriptions, institutional subscriptions, such as libraries and physics departments, and consortia agreements. It is geared to an advanced audience, primarily at the college level. Contents include novel approaches to laboratory and classroom instruction, insightful articles on topics in classical and modern physics, apparatus, and demonstration notes, historical or cultural topics, resource letters, research in physics education, and book reviews. See https://aapt.scitation.org/journal/ajp.
Physics
Key features: OA-US images illustrating the patterns used to define the feature set for benign and malignant lesions. Arrows indicate the OA feature (specified above the panels); the lesion histopathology is shown below. (Courtesy: CC BY 4.0/Photoacoustics 10.1016/j.pacs.2022.100383) Adding optoacoustic (OA) imaging to ultrasound (US) could improve the diagnosis of breast cancer, according to findings from a multidisciplinary research team in Cambridge, UK. The combination enables visualization of functional blood vasculature overlaid with structural features of the breast. To help accelerate the clinical application of this combined technique, the team developed a simple feature set using single-wavelength OA data from an integrated OA-US imaging system that can identify malignant breast lesions based on their vascular patterns. The researchers describe their findings in Photoacoustics. A low-cost OA-US device using this feature set could increase the number of early breast cancer diagnoses, especially in women living in low-income countries, where breast cancer survival rates are less than 40% (compared with 80% in high-income countries). The proposed device could also expand breast cancer screening in populations with limited access to mammography. Ultrasound imaging alone tends to have low sensitivity for breast cancer detection, and cannot always differentiate between benign and malignant lesions. OA imaging – a potentially low-cost technique based on optical excitation and acoustic detection – is being evaluated in clinical studies for breast cancer diagnosis, but the current analysis process is quite complex. Principal investigator: Sarah Bohndiek from the University of Cambridge. (Courtesy: Sarah Bohndiek) Principal investigator Sarah Bohndiek, of the University of Cambridge’s Cancer Research UK Cambridge Institute and department of physics, explains that the researchers’ objective was to simplify acquisition of OA-US data and create a simple imaging feature set that was easy to learn and clinically feasible to implement. The team generated the feature set using images from 96 breast lesions in 94 patients with benign, indeterminate or suspicious breast abnormalities at Cambridge University Hospitals NHS Foundation Trust. The first 38 lesions (including 14 malignant and eight benign) were used to develop the feature set; the others were used for validation. All patients in the study underwent mammography, breast ultrasound and OA imaging – performed using an OA device that also incorporated low-frequency tomographic ultrasound. The researchers used an excitation wavelength of 800 nm, which minimizes absorption by water and lipids, to create images showing the morphology of blood vessels surrounding a solid breast lesion. The use of a single wavelength simplified OA image processing and visualization, offering the possibility of future system simplification and cost reduction. The researchers analysed the OA and US images separately and in combination, looking for patterns of blood vessels representative of healthy breast tissue, benign disease and malignancy. Benign lesions demonstrated no vascularity or vessels that were draped over the lesion without penetrating it. Malignant lesions had irregular feeding vessels that penetrated into the lesion and/or a disorganized irregular pattern around them. The internal appearance of the lesions did not differentiate benign and malignant lesions and was not used. 
The researchers selected three features of malignancy that would upgrade any solid lesion to a BI-RADS 5 classification (highly suggestive of malignancy): irregular cap, irregular feeding vessel and claw sign. The presence of benign features – no vessels, or vessels splayed over the lesion – would downgrade a lesion to BI-RADS 2 (benign). Two breast radiologists validated the feature set by independently interpreting the OA-US validation images (31 malignant and 13 benign solid lesions). It took only 20 min of training for them to become proficient in using the feature set. They were asked to use the features to classify lesions by BI-RADS category, as well as to classify the patients’ diagnostic ultrasound exams and mammography images. The breast radiologists interpreted the OA-US images with a sensitivity of 96.8% and a specificity of 84.6%, with one false negative and two false positives for each reader. In comparison, mammography yielded three false negatives and two false positives for each reader, and ultrasound generated one false negative for each reader and six and seven false positives, respectively. Importantly, all of the mammography and ultrasound false negatives were correctly identified as positive by OA. Bohndiek points out that OA-US requires practical experience to optimize the standard operating procedure and obtain high-quality image data, and for this reason future multicentre validation studies should consider operator dependence and independent calibration. “We have been undertaking validation studies of the OA-US device in the context of developing stable test objects (phantoms) that can be used by medical physicists for QA/QC once the devices are used routinely in the clinic,” she tells Physics World. “We are also planning to apply the system in the future to monitoring of response to radiotherapy treatment in breast cancer.”
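As a sanity check, the reported reader performance follows directly from the counts given above (31 malignant and 13 benign validation lesions, one false negative and two false positives per reader for OA-US):

```python
# Recomputing the study's reported OA-US reader statistics from the
# stated counts: 31 malignant and 13 benign lesions, with one false
# negative and two false positives per reader.

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

malignant, benign = 31, 13
fn, fp = 1, 2
sens, spec = sens_spec(malignant - fn, fn, benign - fp, fp)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
# -> sensitivity = 96.8%, specificity = 84.6%, matching the paper
```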
Physics
Summary: Winners opened the way to powerful new quantum technologies; findings enabled work on quantum computers, encryption; winners' research is based on 'mind-boggling' insight; Zeilinger 'shocked but very positive' on hearing the news; scientists shone light on behaviour of subatomic particles. STOCKHOLM, Oct 4 (Reuters) - Scientists Alain Aspect, John Clauser and Anton Zeilinger won the 2022 Nobel Prize in Physics for experiments in quantum mechanics that laid the groundwork for rapidly developing new applications in computing and cryptography. "Their results have cleared the way for new technology based upon quantum information," the Royal Swedish Academy of Sciences said of the laureates -- Aspect, who is French, Clauser, an American, and Zeilinger, an Austrian. The scientists all conducted experiments into quantum entanglement, where two particles are linked regardless of the space between them, a field that unsettled Albert Einstein himself, who once referred to it in a letter as "spooky action at a distance". "I'm very happy ... I first started this work back in 1969 and I'm happy to still be alive to be able to get the prize," Clauser, 79, told Reuters by phone from his home in Walnut Creek, California. Clauser, who worked at institutions such as Lawrence Berkeley National Laboratory and the University of California, Berkeley, during his career, said he had witnessed his initial work snowball into much larger experiments. China's Micius satellite, part of a quantum physics research project, was constructed in part on his findings, he said. "The configuration of the satellite and the ground station is almost identical to my original experiment. Mine was about 30 feet long, theirs is thousands of kilometers for quantum communication." Asked to explain his work in layman's terms, he joked he does not understand it himself but added that the interactions it describes permeate almost everything. "Probably every particle in the universe is entangled with every other particle," Clauser said, chuckling. NATURE OF REALITY French President Emmanuel Macron tweeted his congratulations to the winners, adding "Einstein himself did not believe in quantum entanglement! Today, the promises of quantum computing are based on this phenomenon." Aspect, professor at Universite Paris-Saclay and Ecole Polytechnique, Palaiseau, near Paris, said he was happy his work had contributed to settling the debate between Einstein, who was sceptical about quantum physics, and Niels Bohr, one of the field's fathers. Both won Nobel physics prizes. "Quantum physics, which has been a fantastic field that has been on the agenda for more than a century, still offers a lot of mysteries to discover," Aspect, 75, told reporters. Secretary General of the Royal Swedish Academy of Sciences Hans Ellegren, Eva Olsson and Thors Hans Hansson, members of the Nobel Committee for Physics, announce the winners of the 2022 Nobel Prize in Physics Alain Aspect, John F. Clauser and Anton Zeilinger, during a news conference at The Royal Swedish Academy of Sciences in Stockholm, Sweden, October 4, 2022.
TT News Agency/Jonas Ekstromer via REUTERS "This prize today anticipates what will one day be quantum technologies." Zeilinger, 77, professor emeritus at the University of Vienna, told a news conference by phone after hearing the news that he was "shocked, but very positive." In an interview after being awarded an honorary doctorate earlier this year, Zeilinger said that protected quantum communication over potentially thousands of kilometres via cables or satellite would soon be on the cards. "It is quite clear that in the near future we will have quantum communication all over the world," he said at the time. Quantum physics is the study of matter and energy at a subatomic level involving the smallest building blocks of nature, a realm governed by laws jarring with those of the classical Newtonian physics used in areas such as the motions of celestial objects. In background material explaining the prize, the academy said the laureates' work involved "the mind-boggling insight that quantum mechanics allows a single quantum system to be divided up into parts that are separated from each other but which still act as a single unit." "This goes against all the usual ideas about cause and effect and the nature of reality." PRIZE 'LONG OVERDUE' The laureates explored in ground-breaking experiments how two or more photons, or particles of light, that are "entangled" because they come from the same laser beam, interact even when they are separated far apart from each other. Sean Carroll, Professor of Natural Philosophy at Johns Hopkins University and author of books on topics such as quantum mechanics, told Reuters the prize for the trio was long overdue. "Even though the ... experimental techniques that these folks pioneered might not be directly applicable, they're laying the groundwork for using quantum entanglement as a technological resource," he said. The more than century-old prize, worth 10 million Swedish crowns ($902,315), is awarded by the Royal Swedish Academy of Sciences. Physics is the second Nobel to be awarded this week after Swedish geneticist Svante Paabo won the prize for Physiology or Medicine on Monday. The physics prize has often taken centre stage among the awards, featuring household names of science such as Einstein, Bohr and Max Planck, and rewarding breakthroughs that have reshaped how we see the world. ($1 = 11.0826 Swedish crowns) Reporting by Niklas Pollard, Simon Johnson and Johan Ahlander in Stockholm, Jonathan Allen in New York and Ludwig Burger in Frankfurt; additional reporting by Terje Solsvik in Oslo, Anna Ringstrom in Stockholm, Geert De Clercq in Paris, and Marie Mannes in Gdansk; Editing by William Maclean and Nick Zieminski
Physics
The mixed time directions of the photon could help physicists probe inside black holes. (Image credit: Shutterstock) For the first time, physicists have made light appear to move simultaneously forward and backward in time. The new technique could help scientists improve quantum computing and understand quantum gravity. By splitting a photon, or packet of light, using a special optical crystal, two independent teams of physicists have achieved what they describe as a "quantum time flip," in which a photon exists in both forward and backward time states. The effect results from the convergence of two strange principles of quantum mechanics, the counterintuitive rules that govern the behavior of the very small. The first principle, quantum superposition, enables minuscule particles to exist in many different states, or different versions of themselves, at once, until they are observed. The second — charge, parity and time-reversal (CPT) symmetry — states that any system containing particles will obey the same physical laws even if the particles' charges, spatial coordinates and movements through time are flipped as if through a mirror. By combining these two principles, the physicists produced a photon that appeared to simultaneously travel along and against the arrow of time. They published the results of their twin experiments Oct. 31 and Nov. 2 on the preprint server arXiv, meaning the findings have yet to be peer-reviewed. "The concept of the arrow of time is giving a word to the apparent unidirectionality of time that we observe in the macroscopic world we inhabit," Teodor Strömberg, a physicist at the University of Vienna who was first author on one of the papers, told Live Science. "This is actually in tension with many of the fundamental laws of physics, which by and large are time symmetric, and which therefore do not have a preferred time direction." The second law of thermodynamics states that the entropy of a system, a rough analogue of its disorder, must increase. Known as the "arrow of time," entropy is one of the few quantities in physics that sets time to go in a particular direction. This tendency for disorder to grow in the universe explains why it's easier to mix ingredients than to separate them. It's also through this growing disorder that entropy is wedded so intimately to our sense of time. A famous scene in Kurt Vonnegut's novel "Slaughterhouse-Five" demonstrates how differently entropy makes one direction of time look to the other by playing World War II in reverse: Bullets are sucked from wounded men; fires are shrunk, gathered into bombs, stacked in neat rows, and separated into composite minerals; and the reversed arrow of time undoes the disorder and devastation of war. However, as entropy is primarily a statistical concept, it doesn't apply to single subatomic particles. In fact, in every particle interaction scientists have observed so far — including the up to 1 billion interactions per second that take place inside the world's largest atom smasher, the Large Hadron Collider — CPT symmetry is upheld. So particles seeming to move forward in time are indistinguishable from those in a mirrored system of antiparticles moving backward in time.
(Antimatter was created with matter during the Big Bang and doesn't actually move backward in time; it just behaves as if it is following an opposite arrow of time to normal matter.) The other factor at play in the new experiments is superposition. The most famous demonstration of quantum superposition is Schrödinger's cat, a thought experiment in which a cat is placed inside a sealed box with a vial of poison whose release is triggered by the radioactive decay of an alpha particle. Radioactive decay is a quantum mechanical process that occurs at random, so it is initially impossible to know what happened to the cat, which is in a superposition of states, simultaneously dead and alive, until the box is opened and the outcome observed. This superposition of states enables a particle to exist in both forward and backward time states at the same time, but witnessing this feat experimentally is tricky. To achieve it, both teams devised similar experiments to split a photon along a superposition of two separate paths through a crystal. The superposed photon moved on one path through the crystal as normal, but another path was configured to change the photon's polarization, or where it points in space, to move as if it were traveling backward in time. After recombining the superposed photons by sending them through another crystal, the team measured the photon polarization across a number of repeated experiments. They found a quantum interference pattern, a pattern of light and dark stripes that could exist only if the photon had been split and was moving in both time directions. "The superposition of processes we realized is more akin to an object spinning clockwise and counter-clockwise at the same time," Strömberg said. The researchers created their time-flipped photon out of intellectual curiosity, but follow-up experiments showed that time flips can be paired with reversible logic gates to enable simultaneous computation in either direction, thus opening the way for quantum processors with greatly enhanced processing power. Theoretical possibilities also sprout from the work. A future theory of quantum gravity, which would unite general relativity and quantum mechanics, should include particles of mixed time orientations like the one in this experiment, and could enable the researchers to peer into some of the universe's most mysterious phenomena. "A nice way to put it is to say that our experiment is a simulation of exotic scenarios where a photon might evolve forward and backward in time," Giulio Chiribella, a physicist at the University of Oxford who was the lead author of the other paper, told Live Science. "What we do is an analogue to some experiments that simulate exotic physics, such as the physics of black holes or time travel."
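A toy numerical sketch of a "process in superposition" may help make the interference claim concrete. It only illustrates the general idea, not the teams' optical setup: the time-reverse is modelled here simply as the inverse evolution, and the state, angles and gate are all hypothetical. A control qubit routes a target state through an evolution U in one branch and its reverse in the other, and the interference between the two branches shows up in the control's measurement statistics.

```python
import numpy as np

# Toy "quantum time flip": a control qubit puts an evolution U and its
# time-reverse (modelled here as the inverse, U dagger) in superposition.
# The real experiments used photon polarization and a different notion
# of time reversal; this is only an illustration of process superposition.

def rz(theta):  # a simple single-qubit evolution
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

U = rz(1.2)
phi = np.array([1.0, 1.0]) / np.sqrt(2)  # hypothetical target state

# |psi> = (|0> (x) U|phi> + |1> (x) Udag|phi>) / sqrt(2)
psi = (np.kron([1, 0], U @ phi) + np.kron([0, 1], U.conj().T @ phi)) / np.sqrt(2)

# Project the control onto |+>: a probability away from 0.5 signals
# interference between the "forward" and "backward" branches.
proj_plus = np.kron(np.array([1, 1]) / np.sqrt(2), np.eye(2))
p_plus = np.linalg.norm(proj_plus @ psi) ** 2
print(f"P(+) = {p_plus:.3f}")  # ~0.681 here, not 0.5: the branches interfere
```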
Physics
By Matt Williams. Special Relativity. It’s been the bane of space explorers, futurists and science fiction authors since Albert Einstein first proposed it in 1905. For those of us who dream of humans one day becoming an interstellar species, this scientific fact is like a wet blanket. Luckily, there are a few theoretical concepts that have been proposed that indicate that Faster-Than-Light (FTL) travel might still be possible someday. Image Credit: Rost9 via Shutterstock / HDR tune by Universal-Sci A popular example is the idea of a wormhole: a speculative structure that links two distant points in spacetime that would enable interstellar space travel. Recently, a team of Ivy League scientists conducted a study that indicated how “traversable wormholes” could actually be a reality. The bad news is that their results indicate that these wormholes aren’t exactly shortcuts, and could be the cosmic equivalent of “taking the long way”! Originally, the theory of wormholes was proposed as a possible solution to the field equations of Einstein’s Theory of General Relativity (GR). Shortly after Einstein published the theory in 1915, German physicist Karl Schwarzschild found a possible solution that not only predicted the existence of black holes, but of corridors connecting them. Unfortunately, Schwarzschild found that any wormhole connecting two black holes would collapse too quickly for anything to cross from one end to the other. The only way they could be traversable would be if they were stabilized by the existence of exotic matter with negative energy density. Daniel Jafferis, the Thomas D. Cabot Associate Professor of Physics at Harvard University, had a different take. As he described his analysis during the April 2019 meeting of the American Physical Society in Denver, Colorado: “The prospect of traversable wormhole configurations has long been a source of fascination. I will describe the first examples that are consistent in a UV completable theory of gravity, involving no exotic matter. The configuration involves a direct connection between the two ends of the wormhole. I will also discuss its implications for quantum information in gravity, the black hole information paradox, and its relation to quantum teleportation.” For the purposes of this study, Jafferis examined the work performed by Einstein and Nathan Rosen in 1935. Looking to expand upon the work of Schwarzschild and other scientists seeking solutions to GR, they proposed the possible existence of “bridges” between two distant points in spacetime (known as “Einstein–Rosen bridges” or “wormholes”) that could theoretically allow for matter and objects to pass between them. By 2013, this theory was used by theoretical physicists Leonard Susskind and Juan Maldacena as a possible resolution for GR and “quantum entanglement”. Known as the ER=EPR conjecture, this theory suggests that wormholes are why an elementary particle’s state can become entangled with that of a partner, even if they are separated by billions of light years.
It was from here that Jafferis developed his theory, postulating that wormholes could actually be traversed by light particles (aka. photons). To test this, Jafferis conducted an analysis with the assistance of Ping Gao and Aron Wall (a Harvard graduate student and Stanford University research scientist, respectively). What they found was that while it is theoretically possible for light to traverse a wormhole, they are not exactly the cosmic shortcut we were all hoping for them to be. As Jafferis explained in an AIP press statement, “It takes longer to get through these wormholes than to go directly, so they are not very useful for space travel.” Basically, the results of their analysis showed that a direct connection between black holes is shorter than that of a wormhole connection. While this certainly sounds like bad news to people who are excited by the prospect of interstellar (and intergalactic) travel someday, the good news is that this theory provides some new insight into the realm of quantum mechanics. “The real import of this work is in its relation to the black hole information problem and the connections between gravity and quantum mechanics,” said Jafferis. The “problem” he refers to is known as the Black Hole Information Paradox, something that astrophysicists have been struggling with since 1975, when Stephen Hawking discovered that black holes have a temperature and slowly leak radiation (aka. Hawking radiation). This paradox relates to how black holes are able to preserve any information that passes into them. Even though any matter accreted onto their surface would be compressed to the point of singularity, the matter’s quantum state at the time of its compression would be preserved thanks to time dilation (it becomes frozen in time). But if black holes lose mass in the form of radiation and eventually evaporate, this information will be lost. By developing a theory through which light can travel through a wormhole, this study could represent a means of resolving this paradox. Rather than radiation from black holes representing a loss of mass-energy, it could be that Hawking radiation is actually coming from another region of spacetime. It may also help scientists who are attempting to develop a theory that unifies gravity with quantum mechanics (aka. quantum gravity, or a “Theory of Everything”). This is due to the fact that Jafferis used quantum field theory tools to postulate the existence of traversable wormholes, thus doing away with the need for exotic particles and negative mass (which appear inconsistent with quantum gravity). As Jafferis explained: “It gives a causal probe of regions that would otherwise have been behind a horizon, a window to the experience of an observer inside a spacetime, that is accessible from the outside. I think it will teach us deep things about the gauge/gravity correspondence, quantum gravity, and even perhaps a new way to formulate quantum mechanics.” As always, breakthroughs in theoretical physics can be a two-edged sword, giving with one hand and taking away with the other. So while this study may have thrown more cold water on the dream of FTL travel, it could very well help us unlock some of the Universe’s deeper mysteries. Who knows?
Maybe some of that knowledge will allow us to find a way around this stumbling block known as Special Relativity! Source: Universe Today - Further Reading: Newswise, APS Physics
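As a footnote to the Hawking-radiation point above, a quick back-of-envelope sets the scale. The standard formula for a black hole's Hawking temperature is T = hbar c^3 / (8 pi G M k_B); for a solar-mass black hole it comes out far colder than the cosmic microwave background, which is why the evaporation in the information paradox is such a slow affair:

```python
import math

# Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B),
# evaluated for a one-solar-mass black hole (textbook constants).
hbar  = 1.054571817e-34   # J s
c     = 2.99792458e8      # m/s
G     = 6.67430e-11       # m^3 kg^-1 s^-2
kB    = 1.380649e-23      # J/K
M_sun = 1.989e30          # kg

T = hbar * c**3 / (8 * math.pi * G * M_sun * kB)
print(f"T = {T:.2e} K")   # ~6e-8 K, far below the 2.7 K microwave background
```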
Physics
Inside what's called the amplifier bay of the Texas Petawatt Laser, where the energy of a laser pulse is boosted. The green lights are the pump lasers that amplify or boost the energy of the main laser. (Photo courtesy Todd Ditmire) As the effects of climate change become more obvious, the promise of nuclear fusion — a virtually unlimited source of carbon-free energy — is getting a new wave of attention. The field has drawn almost $5 billion in funding, with recent interest off the charts. One of the newest efforts to commercialize fusion comes from startup Focused Energy, founded by a pair of physics professors with expertise in extremely high-powered lasers. The startup launched last summer and has collected a strong bench of veterans, including Prav Patel, who spent 23 years at the Lawrence Livermore National Laboratory, and two other scientists who also worked there, Todd Ditmire and Markus Roth. Now the startup has scored $15 million in early-stage funding from the venture capital firm Prime Movers Lab, plus Marc Lore (who sold e-commerce companies Diapers.com and Jet.com to Amazon and Walmart respectively), tech investor Tony Florence, and the former Yankee slugger Alex Rodriguez. Unlike nuclear fission, which powers all the commercial nuclear reactors in the world today, fusion does not generate long-lasting nuclear waste. But it requires a sustained reaction at extremely high temperatures, and despite decades of effort, nobody has yet figured out how to turn it into a commercially viable energy source. There are two religions in the race to commercialize fusion: magnetic confinement fusion, which uses ultra-strong magnets and a round device called a tokamak, and inertial confinement energy, which typically uses lasers. Prime Movers Lab has invested in Commonwealth Fusion Systems, a Boston-based fusion company spun out of the Massachusetts Institute of Technology, as its best bet for magnetic confinement fusion. Focused Energy is its top pick for the "breakout" laser-driven approach, according to partner Carly Anderson. Prav Patel (L) and Todd Ditmire, two of the leading scientists at Focused Energy. (Photo courtesy Focused Energy) Even so, attempting to contain the same energy source that powers the sun will require a lot of research and effort. "Focused Energy is capitalizing on over 70 years of government research into fusion," Matthew Moynihan, a nuclear fusion consultant, told CNBC. "They have a credible team and a good plan, but they have hard challenges ahead." Insanely powerful lasers: Ditmire spent three years at Livermore, where he worked on the very first ultra-powerful petawatt laser and met Roth. In 2000, Ditmire joined the University of Texas in Austin, where he built the Texas Petawatt Laser. Its power is hard to conceptualize. "The U.S. electrical grid produces about a half a trillion watts of power. So that's a half a terawatt," Ditmire explained in a conversation in June. "A petawatt is 1,000 trillion watts. So a petawatt laser has the same power as 2,000 times the power output of the United States electrical grid." He continued, "You take all the wattage of the sunlight falling on the state of Texas, it's about 140 terawatts. So I would always say Petawatt is brighter than the Texas sun." Todd Ditmire, standing next to the optical amplifiers, a key piece of the Texas Petawatt laser. (Photo courtesy Todd Ditmire) In 2010, Ditmire started a company in Austin called National Energetics to design and build the kind of high-powered lasers he needed for his own research.
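Ditmire's comparisons are easy to verify with the numbers in his own quote (half a terawatt for the grid, 140 terawatts of Texas sunlight):

```python
# Quick check of the power comparisons in Ditmire's quote.
grid      = 0.5e12    # W: "about a half a trillion watts"
petawatt  = 1.0e15    # W: "1,000 trillion watts"
texas_sun = 140e12    # W: sunlight falling on Texas, per the quote

print(f"petawatt / U.S. grid  = {petawatt / grid:,.0f}x")      # 2,000x
print(f"petawatt / Texas sun  = {petawatt / texas_sun:.1f}x")  # ~7x brighter
```

The catch, of course, is that a petawatt laser only sustains that power for an extremely short pulse, which is why the total energy involved stays modest.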
"It turns out it in 2010, the only place you could get a customized power laser was from France," Ditmire told CNBC.At the company's peak, it had a staff of about 30. In 2014, National Energetics won a $40 million contract to deliver a 10 petawatt laser system to the Czech Republic, which the company is in the final stages of finishing. After that project is wrapped, so too will the company. Ditmire is transferring all the intellectual property from National Energetics into Focused Energy."I decided it was time to go off and slay a bigger dragon," Ditmire said. "And what what bigger dragon could there be than fusion energy?"For a six month period, Ditmire worked with Marvel Fusion, another startup working to commercialize fusion with lasers. Roth also worked at Marvel Fusion for a time. Marvel Fusion is using proton boron as the fuel source, which is an "interesting concept," according to Ditmire, but his "real interest" was in working with laser fusion and using fuel made from hydrogen isotopes deuterium and tritium.So Roth and Ditmire decided to start their own company.Building on decades of government researchIn 2009, the Lawrence Livermore finished building the National Ignition Facility, where 192 laser beams point to a central chamber, creating the kinds of temperatures and pressures that exist in the center of stars, planets, and an exploding nuclear weapon.Instruments are viewed inside the target chamber at the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory in Livermore, California, U.S., on Friday, May 29, 2009. The NIF will use 192 lasers aimed at a small hydrogen-filled target for the study of fusion reactions.Tony Avelar | Bloomberg | Getty ImagesPatel expected the team to get to a key fusion landmark, called ignition, within a few years. "I thought it would be quick. And that was almost that was 10 years ago. Then we spent the next 10 years getting the ignition," Patel told CNBC.The lab finally reached ignition on Aug. 8, 2021, meaning it had produced 1.3 megajoules of energy. Omar A. Hurricane, the chief scientist for the Inertial Confinement Fusion Program at the lab told CNBC at the time it was "a Wright Brothers moment" for the fusion industry. That validation was an important step for investors in the nascent industry."The NIF shot removed a major element of risk for Focused Energy's approach," Anderson told CNBC. "For Focused Energy to achieve cost-effective fusion, the team needs to develop cheap lasers and cheap fusion fuel. These are engineering challenges. Significant scientific risk (solving the plasma physics challenges needed for a machine that will produce burning plasma from scratch) has been retired."After that milestone achievement, Patel was willing entertain Roth and Ditmire's offer to join them -- an offer he'd previously declined. Focused Energy is going to use both the kind of lasers that Ditmire has been building and the kind that Patel has worked with for decades at Lawrence Livermore."We have conventional long pulse lasers like at NIF to compress this fuel. And then we have these Petawatt lasers that Todd was talking about to produce this intense beam of protons," Patel said. The goal then is to ignite the fuel with a spark. Ideally, that fuel continue to burn on its own, -- what's called a "propagating burn" -- to fuse the rest of the fuel, Patel said. 
"It's a scheme that potentially with lasers two or three times smaller than NIF, you could potentially get 30 times more energy out."Inside the compressor vacuum vessel, a central component of thePhoto courtesy Todd DitmireOne of the advantages of using laser-ignition fusion, as opposed to magnetic confinement fusion, according to Ditmire, is that it uses only tiny pellets of tritium, a mildly radioactive isotope.Will it be done in time to fight climate change?The company is in very early stages and faces significant challenges ahead."They need to be able to cheaply and reliably make small pellets of fusion fuel and place them inside a chamber for compression. Once fused, they have to covert the resulting fusion energy into safe, economic and reliable electricity," Moynihan told CNBC. "All of this needs to be done within the legal frameworks for fusion written by the NRC (Nuclear Regulatory Commission) and those rules are still evolving."Focused Energy aims to prove out its fusion process with an ignition facility by 2030. That will cost $3 billion to build. The $15 million raised so far will be spent on a laser system at University of Texas at Austin and to build out experimental facilities in Darmstadt, Germany.Focused Energy's new research and test facility being constructed in Darmstadt, Germany.Photo courtesy Focused EnergyDemo power plants in the 2030s is "not inconceivable," Ditmire said. "And that's not too late. It's probably just in time. We probably don't have time to dally, but it's just in time."A commercial fusion facility will cost Focused Energy about $5 billion to build. But the money is flowing in fusion right now. It took Ditmire four years of going back and forth to Washington D.C., lobbying on the Hill, to get $15 million to build the Texas Petawatt laser. "It took me four years of work. You know, we went out and raised $15 million with 4 Zoom calls," for Focused Energy, Ditmire said. "The investor interest was just stunning."The fusion industry depends on investors' continued interest in the "holy grail" of energy."We stand on the shoulders of giants," Ditmire said. "But times are changing, which means the rate at which science progress has happened is going to have to accelerate, and the place that's going to happen is in is in the private sector."For Ditmire, it's personal, too. "It is my raison d'être in my career to make this happen before the end of my career," he told CNBC. "This is it, baby. This is the dragon in Mordor."
Physics
A year after its launch, astronomers are revealing the secrets of the universe, as the first scientific results from observations made by the James Webb Space Telescope (JWST) are released. This month, Physics World is publishing a series of blog posts on the discoveries. This is the fourth post in the series – you can read the previous one here. The journey is just beginning: It’s been a year since the JWST launched, and it’s now well on its way to transforming astronomy. (Courtesy: ESA/ATG Medialab). It’s been a year since the James Webb Space Telescope (JWST) launched, and after its dangerous deployment and careful collimation, it’s finally sending back incredible images and data. Getting from the launchpad to full operations, however, was no easy task. Here’s a reminder of how it all happened. Christmas Day 2021: After nearly 25 years of development, the JWST soared into space atop an Ariane 5 rocket. Its launch was a triumph over technological tribulations, budget and schedule overruns, and even a (temporary) cancellation by the US Congress. Consequently, emotions were high as the launchpad countdown neared zero. “It was tense,” admits Susan Mullally, the JWST’s deputy project scientist at the Space Telescope Science Institute (STScI) in Baltimore. “I couldn’t believe it was real,” adds Naomi Rowe-Gurney, a JWST GTO (Guaranteed Time Observations) postdoc at NASA’s Goddard Space Flight Center where she is supporting the Planetary Systems Team. “I was expecting another delay of some kind. I thought it was never going to launch.” A hazardous journey The stop-start nature of the project’s development came about in part because of the increasing complexity of the telescope, which features a segmented 6.5-metre primary mirror as well as a fragile, five-layer, tennis-court-sized insulating sunshield. Both elements had to unfold like origami after being scrunched up to fit inside the rocket fairing – a 30-day process that coincided with the telescope’s journey to the Sun–Earth L2 Lagrange point, 1.5 million kilometres from Earth on the side facing away from the Sun. This point is much too far away for the kind of astronaut-assisted servicing the Hubble Space Telescope received for its faulty optics in 1993. If something had gone wrong with the JWST’s mirror during its deployment, astronomers would have been left with a $10 billion white elephant floating in deep space. “Those first 30 days were pretty nerve-wracking, because any problem was a single-point failure and would mean we wouldn’t have a telescope,” Rowe-Gurney says. All told, there were 344 such possible points of failure: 344 points where the telescope’s intricate moving parts had to work perfectly in the cold vacuum of space. Yet work they did – “phenomenally so” according to NASA Goddard’s Jane Rigby, who spoke at the First Science Results from JWST conference held at STScI earlier this month. “The day when I knew this was actually going to work was when that main boom swung out, and the secondary mirror folded out, and we actually had a telescope,” Rowe-Gurney says. “Even if the subsequent deployments didn’t work, we could capture light and put it into the instruments.” Focusing the telescope With both mirrors deployed, the next step was to focus the 18 hexagonal beryllium segments of the primary mirror. This was accomplished in seven phases. Initially, each segment produced a different unfocused image, so the first phase was to recognize which image belonged to which mirror segment.
The next step was to roughly align the mirrors so that the 18 images were all in focus. After that, the segments were further adjusted so that they began to focus at the same point. This was followed by various degrees of fine-tuning and making sure that the focus fell within the fields of view of the different instruments, and then by a series of corrections to ensure the segments were aligned to within 50 nm of each other. Finally, after a three-month process, the telescope was in focus. Breaking the speed limit With the telescope in good shape, the next step was to calibrate its individual instruments: the Near-Infrared Camera (NIRCam), the Near-Infrared Spectrograph (NIRSpec), and MIRI, the suite of detectors that make up the Mid-Infrared Instrument. Caught on camera: the JWST was able to image the aftermath of the DART impact by tracking it at three times its nominal safe tracking speed. (Courtesy: NASA/ESA/CSA/STScI) Distant, deep-space objects appear fixed on the sky, but objects in the solar system move against that background of stars, nebulae and galaxies. Therefore, to image planets, moons, comets and asteroids, the JWST has to track them by physically turning the spacecraft. Prior to launch, a tracking speed limit was introduced: 30 milliarcseconds per second (where one arcsecond is 1/3600th of a degree). Once in space, however, the team realized this limit was a little pessimistic. “We were testing how fast we could track, and we realized that we could actually do much faster,” says Rowe-Gurney, who was involved in commissioning instruments for collecting data on moving targets and scattered light. The increased tracking speed came in useful a few months later, when the JWST observed the aftermath of the DART (Double Asteroid Redirection Test) impact on the small asteroid Dimorphos. The DART mission was Physics World’s scientific breakthrough of the year for 2022, and the JWST was able to image debris ejected from its impact by tracking three times faster than the initial limit, keeping the asteroid in the field of view without blurring. Indeed, the telescope has since achieved tracking speeds of up to 120 milliarcseconds per second. However, the faster it tracks, the lower its tracking efficiency, leading to a middle-ground compromise. “In the next year the safe tracking rate will be put up to 75 milliarcseconds per second, more than doubling the speed limit, so we’ll be able to follow even more objects in the solar system without breaking the telescope,” Rowe-Gurney says. Removing scattered light When the JWST stares at a bright object – a planet, a star, even a distant quasar – some of the excess light forms a diffraction pattern. This pattern is the cause of the “spikes” seen around foreground stars in many of the JWST’s images, and although pretty, it can obscure scientific details. Fortunately, every telescope’s unique diffraction pattern can be described as a point spread function, and by characterizing the shape of this point spread function for the JWST and its instruments, astronomers can remove the extraneous light from images when necessary. Extra light: A JWST image of IC 1623, which is a system of two merging galaxies. The galactic core is bright enough to create diffraction spikes in the image. (Courtesy: ESA/Webb/NASA/CSA/L Armus and A Evans) A case in point was the JWST’s image of the Wolf–Rayet star WR 140, which is located 5000 light-years away. When the JWST first imaged WR 140, astronomers were stunned to see 17 concentric rings, or shells, around the star.
These rings were initially thought to be imaging artefacts from the telescope, but after removing the point spread function, the rings were still there. Further investigation based on simulations showed that stellar winds from binary stars can produce rings of dust where they clash and condense. What is more, the pattern of the simulated rings precisely matched the pattern of rings around WR 140, even down to a linear feature cutting through the rings due to enhanced infrared emission in our line of sight. The observations of WR 140 represent the first time a colliding wind structure around a binary star has been mapped in 3D. But if astronomers had not first modelled the pattern of scattered light leaking into the telescope so they could remove it, it would have been impossible to discern what the observations were telling us. Astronomers’ new toy The Wolf–Rayet star example shows how vital it is to get to know the telescope while making observations. “It’s something you have to think about a lot,” Mullally says. “Every step of the way you’re hoping to have an expert on your team who knows as much as possible either about the instrument or about how those types of observations are taken.” Ring around the rosie: Concentric rings of dust around the star WR 140. (Courtesy: NASA/ESA/CSA/STScI/JPL–Caltech) Accordingly, one of the motivations behind the JWST’s Early Release Science (ERS) was to help a few astronomers become familiar with the telescope and its instruments so they can bring others up to speed for later observing cycles. “It’s like a new toy,” says Rowe-Gurney. “There’s a lot of work going into how to process and calibrate the data to make sure it’s reliable.” Fortunately, the JWST is playing ball. “Instrument scientists might say they are still getting to know their instruments and how to go about removing little systematics and artefacts and things like that in your data,” says Mullally, “but overall the impression I’m getting from everybody is that the telescope is performing wonderfully.” Impact risk So far, there is only one caveat to the JWST’s performance: the damage caused by micrometeoroid impacts. On average, the telescope’s mirror is struck once a month by something large enough to affect wavefront sensing, which is the telescope’s ability to detect errors in the alignment of its optics that can manifest as light waves going out of phase. This reduction in wavefront sensing can make images less sharp. Such impacts were anticipated before launch, and were not expected to be big enough to threaten the telescope’s lifespan. However, in May 2022 one of the mirror segments received a larger-than-typical impact. In her talk at the First Science Results from JWST conference, Rigby reported that this impact left a wound a foot across, increasing the telescope’s total wavefront error by 9 nm. This is significant because if the wavefront error reaches 150 nm, the telescope will no longer be sensitive enough to meet its scientific targets – meaning that just 10 impacts of a similar scale would be “game over” for the JWST. Somewhat alarmed by this prospect, NASA has convened a micrometeoroid working group to investigate the risk. The micrometeoroid population at L2 is well known; what isn’t clear is the relationship between the kinetic energy of impacts and the degradation of wavefront sensing. Are such large impacts extremely rare and the JWST was simply unlucky in May? Or will the telescope experience more serious impacts at a greater frequency than predicted?
Until the working group comes up with answers, the telescope’s managers are mitigating the risk by encouraging astronomers to time their observations (where possible – time-sensitive observations are exempt) so that the telescope is not pointing into the “rain” of micrometeoroids. If this system succeeds, or the working group comes up with a reassuring answer about impact odds, the JWST should have a long life ahead of it. Thanks to its flawless launch and a journey to L2 that required minimal course corrections, the telescope has enough propellant on board to continue its mission for at least another 27 years. If the mission’s first 12 months are any indication, these 27 years should produce reams of sensational new views and data from a superb instrument, with a high likelihood of transforming astrophysics, exoplanet studies, cosmology and more. The rollercoaster ride of the JWST’s launch may be over, but the real journey is just beginning.
Physics
A retired US Marine Corps combat engineer has shown off a new tool capable of separating microplastics from organic sea matter. The Trash Time Machine (TTM) is an open-source invention created by Ray Aivazian III, a former Marine Corps combat engineer and the founder of Stimulating Education and Ecological Design (SEED), a non-profit organisation aimed at tackling the problem of microplastics pollution in the world’s oceans. SEED provides freely available inventions and information in the hopes that this will encourage local communities to give back to the environment, while the TTM uses water density and the principles of physics to separate microplastics from organic sea matter. The Hawaii-based environmentalist and inventor told Newsflash: “Our goals are to help educate the world on the detrimental effects of microplastic pollution on our ecosystem and future while also providing the people with open-source information to create the two inventions we developed for the removal of shoreline microplastics and the separation of natural and synthetic material.” Picture shows natural and synthetic marine debris in Oahu, Hawaii, in undated footage. SEED’s Trash Time Machine uses water and physics to separate natural and synthetic material. (@seed.world/Newsflash) The video was filmed on the island of Oahu in Hawaii this year and features the SEED founder explaining how the TTM uses a vacuum chamber to remove synthetic materials from natural seaside matter. The device would ideally be used after beach clean-up sessions with SEED’s second invention, the Buoyancy Separation Device (BSD), a “simplistic but effective technology” aimed at removing shoreline debris, according to the organisation’s official page. These clean-up sessions typically see bits of sandy sticks and wood being collected along with the plastic debris. Ray explained: “The TTM forces the natural material to become waterlogged in a matter of minutes compared to the natural, unknown, time it would take for the debris to become waterlogged. “Certain plastics (PP, LDPE, HDPE) will always float in water. “Our system (BSD) uses ocean water to float out these plastics and separate it from the beach sand. “This process removes virtually all synthetic marine debris and leaves behind nothing but clean sand.” Picture shows synthetic plastic, in Oahu, Hawaii, in undated footage. SEED’s Trash Time Machine uses water and physics to separate natural and synthetic material. (@seed.world/Newsflash) Ray grew up in California and now lives on Oahu, which is affectionately known as “The Gathering Place” in Hawaii and is reportedly home to nearly one million people, or over two-thirds of the archipelago state’s entire population. He added: “Living in Hawaii, the ocean is a part of my everyday life. “Seeing the amount of waste wash upon our shores daily bothered me and I didn’t see anyone doing anything to remove these microplastics from our beaches due to the difficulty in removing such small material. “Being an engineer, I put my skills to the test to see what I could do to help with this issue.” Ray has led a prolific life: he is also the current President of the Environmental and Climate Justice Committee for the National Association for the Advancement of Colored People (NAACP) in the United States and the former chair of the Surfrider Foundation in Oahu.
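The buoyancy principle Ray describes is easy to see with typical material densities – the figures below are representative textbook values we have assumed, with seawater taken at about 1.025 g/cm³:

```python
SEAWATER = 1.025  # density of seawater, g/cm^3 (assumed typical value)

# Representative densities, g/cm^3 (assumed, not measured by SEED)
densities = {
    "PP (polypropylene)": 0.90,
    "LDPE": 0.92,
    "HDPE": 0.95,
    "PET": 1.38,              # a plastic that sinks
    "dry wood": 0.60,         # floats until waterlogged...
    "waterlogged wood": 1.10, # ...then sinks, which is the TTM's trick
}

for material, rho in densities.items():
    print(f"{material}: {rho:.2f} g/cm^3 -> "
          f"{'floats' if rho < SEAWATER else 'sinks'}")
```

Forcing the organic debris to waterlog (the TTM’s vacuum step) pushes its effective density above that of seawater, so the wood sinks with the sand while the polyolefin plastics keep floating – which is what makes the separation clean.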
He went on to state that SEED’s newly developed tools are now being used in collaboration with the Hawaii Pacific University Center for Marine Debris Research (CMDR) to conduct scientific research on the accumulation of shoreline microplastics. He added: “Through our work at CMDR, preliminary data is showing that we can remove microplastic particles down to 25 microns. “This ground-breaking research shows that plastic pollution is persisting on a significantly smaller scale than previously believed by scientists and individuals in the polymer industry.” Picture shows natural and synthetic marine debris in Oahu, Hawaii, in undated footage. SEED’s Trash Time Machine uses water and physics to separate natural and synthetic material. (@seed.world/Newsflash) Instructions on how to build the new inventions are described in detail on SEED’s website, which can be accessed here: https://www.seed.world/build.
Physics
Chameleon-like: researchers at MIT have devised a way to create large-scale images that change colour when stretched. (Courtesy: Mathias Kolle/Benjamin Miller/Helen Liu) Building on a largely forgotten photography technique, researchers in the US have developed a photographic material that changes colour when stretched. Working at the Massachusetts Institute of Technology (MIT), the team showed how colour images can be created by modifying the nanoscale structure of the film. These structures reflect light at different wavelengths, which change as the film is stretched. The researchers say that their method offers a low-cost, scalable approach to creating new optical materials. Structural colour is common in nature and familiar examples include the feathers of some birds and the wings of some butterflies. Instead of relying on pigments, structural colour is created by the interference of light that has been reflected from microscopically textured surfaces. The result is that certain colours are visible at certain viewing angles, while other colours are not. A related phenomenon called iridescence occurs when the structural colour of an object changes with the viewing angle. Today, researchers are exploring how structural colour can be used in advanced optical materials. However, the appropriate nanoscale structures are often expensive and complex to produce, especially on large scales. Nobel-winning technique Now, MIT’s Benjamin Miller, Helen Liu and Mathias Kolle have developed a potential solution to this limitation. It is based on an early photographic technique that was first developed by the French physicist Gabriel Lippmann and which earned him the 1908 Nobel Prize for Physics. To capture images, Lippmann placed a thin, transparent emulsion of tiny, light-sensitive grains between two plates of glass. A mirror positioned behind the back plate reflected the light that passed through the emulsion. When exposed to a visual scene, incident light waves entering the emulsion interfere with their reflections. This produces standing waves in the emulsion that gradually alter the nanoscale arrangements of the grains. This causes periodic variations in the film’s refractive index, capturing optical information from the visual scene. After up to several days of exposure, the arrangement of the grains is fixed, and the result is a colour image of the scene – an image that is much like a modern hologram. However, Lippmann’s process was more time-consuming and difficult than other colour photography techniques emerging at the time and has therefore been largely forgotten. Now, Kolle and colleagues have revisited the technique using modern holographic materials. Light-sensitive polymer The MIT trio began by placing a thin sheet of a stretchy, light-sensitive polymer against a mirror, and exposing it to a bright projected image. Just like with Lippmann’s approach, this created a pattern of standing waves, which altered the film’s refractive index. After just a few minutes of exposure, they then bonded the film to a silicone backing, creating large and detailed colour images. As the film is stretched – by pulling on it or pressing objects into it – the nanostructures are distorted in a reversible way. This distortion alters the colour of the light reflected by the film (see figure). When the team made an all-red film, green images could be created by pressing objects onto the back of the film.
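A back-of-envelope sketch shows why pressing shifts the colour. Treating the exposed film as a first-order Bragg stack, the reflected wavelength is λ = 2nd, where n is the refractive index and d the spacing of the index variations; the numbers below are purely illustrative:

```python
n = 1.5       # refractive index of the polymer (assumed)
d0 = 200e-9   # unstrained spacing of the index variations, m (assumed)

for strain in (0.0, -0.10, -0.20):       # negative strain = compression
    d = d0 * (1 + strain)
    wavelength_nm = 2 * n * d * 1e9      # first-order Bragg reflection
    print(f"strain {strain:+.0%}: reflects ~{wavelength_nm:.0f} nm")
```

With these assumed numbers, the unstrained film reflects at about 600 nm (red) and a 10% compression shifts it to about 540 nm (green) – the same red-to-green change the team produced by pressing objects onto the back of an all-red film.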
The team could also hide secret images in a film by capturing the image at a tilted angle of incidence. The resulting image is only visible in the near infrared – which cannot be seen by the human eye. However, when the material is stretched, the image shifts towards the red and becomes visible. Kolle’s team hope that their fast, scalable and affordable production technique could soon lead to practical optical materials that respond to mechanical stimuli. As well as encoding secret messages, applications could include clothing fabrics that change colour when they are stretched, and bandages that change colour as the pressure on a wound changes. The research is described in Nature Materials.
Physics
Artist’s representation of a cosmic neutrino source shining above the IceCube Observatory at the South Pole. Beneath the ice are photodetectors that pick up the neutrino signals. (IceCube/NSF) Ever since French physicist Pierre Auger proposed in 1939 that cosmic rays must carry incredible amounts of energy, scientists have puzzled over what could be producing these powerful clusters of protons and neutrons raining down onto Earth's atmosphere. One possible means for identifying such sources is to backtrack the paths that high-energy cosmic neutrinos traveled on their way to Earth, since they are created by cosmic rays colliding with matter or radiation, producing particles that then decay into neutrinos and gamma rays. Scientists with the IceCube neutrino observatory at the South Pole have now analyzed a decade's worth of such neutrino detections and discovered evidence that an active galaxy called Messier 77 (aka the Squid Galaxy) is a strong candidate for one such high-energy neutrino emitter, according to a new paper published in the journal Science. It brings astrophysicists one step closer to resolving the mystery of the origin of high-energy cosmic rays. "This observation marks the dawn of being able to really do neutrino astronomy," IceCube member Janet Conrad of MIT told APS Physics. "We've struggled for so long to see potential cosmic neutrino sources at very high significance and now we've seen one. We've broken a barrier." As we've previously reported, neutrinos travel near the speed of light. John Updike's 1959 poem, "Cosmic Gall," pays tribute to the two most defining features of neutrinos: They have no charge and, for decades, physicists believed they had no mass (they actually have a teeny bit of mass). Neutrinos are the most abundant subatomic particle in the universe, but they very rarely interact with any type of matter. We are constantly being bombarded every second by millions of these tiny particles, yet they pass right through us without our even noticing. That's why Isaac Asimov dubbed them "ghost particles." When a neutrino interacts with molecules in the clear Antarctic ice, it produces secondary particles that leave a trace of blue light as they travel through the IceCube detector. (Nicolle R. Fuller, IceCube/NSF) That low rate of interaction makes neutrinos extremely difficult to detect, but because they are so light, they can escape unimpeded (and thus largely unchanged) by collisions with other particles of matter. This means they can provide valuable clues to astronomers about distant systems, further augmented by what can be learned with telescopes across the electromagnetic spectrum, as well as gravitational waves. Together, these different sources of information have been dubbed "multimessenger" astronomy. Most neutrino hunters bury their experiments deep underground, the better to cancel out noisy interference from other sources. In the case of IceCube, the collaboration features arrays of basketball-size optical sensors buried deep within the Antarctic ice. On those rare occasions when a passing neutrino interacts with the nucleus of an atom in the ice, the collision produces charged particles that emit UV and blue photons. Those are picked up by the sensors. So IceCube is well-positioned to help scientists advance their knowledge of the origin of high-energy cosmic rays. As Natalie Wolchover cogently explained at Quanta in 2021: A cosmic ray is just an atomic nucleus—a proton or a cluster of protons and neutrons.
Yet the rare ones known as “ultrahigh-energy” cosmic rays have as much energy as professionally served tennis balls. They’re millions of times more energetic than the protons that hurtle around the circular tunnel of the Large Hadron Collider in Europe at 99.9999991% of the speed of light. In fact, the most energetic cosmic ray ever detected, nicknamed the “Oh-My-God particle,” struck the sky in 1991 going something like 99.99999999999999999999951 percent of the speed of light, giving it roughly the energy of a bowling ball dropped from shoulder height onto a toe. But where do such powerful cosmic rays originate? One strong possibility is active galactic nuclei (AGNs), found at the centers of some galaxies. Their energy arises from the supermassive black holes at those galaxies’ cores and/or from the black holes’ spin.
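That bowling-ball comparison checks out on the back of an envelope (the particle energy is the commonly quoted estimate; the ball mass and drop height are our assumptions):

```python
E_eV = 3.2e20              # estimated energy of the 1991 event, in eV
E_J = E_eV * 1.602e-19     # convert to joules -> about 51 J

m, g, h = 7.0, 9.81, 0.75  # bowling ball (kg), gravity (m/s^2), drop height (m)
print(f"particle: ~{E_J:.0f} J, bowling ball: ~{m * g * h:.0f} J")
```

Both come out near 51 joules – a single atomic nucleus carrying the punch of a macroscopic object.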
Physics
The first detection of gravitational waves, announced in 2016, provided decisive confirmation of Einstein’s general theory of relativity. But another astounding prediction remains unconfirmed: According to general relativity, every gravitational wave should leave an indelible imprint on the structure of spacetime. It should permanently strain space, displacing the mirrors of a gravitational wave detector even after the wave has passed. Since that first detection almost six years ago, physicists have been trying to figure out how to measure this so-called “memory effect.” “The memory effect is absolutely a strange, strange phenomenon,” said Paul Lasky, an astrophysicist at Monash University in Australia. “It’s really deep stuff.” Their goals are broader than just glimpsing the permanent spacetime scars left by a passing gravitational wave. By exploring the links between matter, energy, and spacetime, physicists hope to come to a better understanding of Stephen Hawking’s black hole information paradox, which has been a major focus of theoretical research for going on five decades. “There’s an intimate connection between the memory effect and the symmetry of spacetime,” said Kip Thorne, a physicist at the California Institute of Technology whose work on gravitational waves earned him part of the 2017 Nobel Prize in Physics. “It is connected ultimately to the loss of information in black holes, a very deep issue in the structure of space and time.” A Scar in Spacetime Why would a gravitational wave permanently change spacetime’s structure? It comes down to general relativity’s intimate linking of spacetime and energy. First consider what happens when a gravitational wave passes by a gravitational wave detector. The Laser Interferometer Gravitational-Wave Observatory (LIGO) has two arms positioned in an L shape. If you imagine a circle circumscribing the arms, with the center of the circle at the arms’ intersection, a gravitational wave will periodically distort the circle, squeezing it vertically, then horizontally, alternating until the wave has passed. The difference in length between the two arms will oscillate—behavior that reveals the distortion of the circle, and the passing of the gravitational wave. According to the memory effect, after the passing of the wave, the circle should remain permanently deformed by a tiny amount. The reason why has to do with the particularities of gravity as described by general relativity. The objects that LIGO detects are so far away, their gravitational pull is negligibly weak. But a gravitational wave has a longer reach than the force of gravity. So, too, does the property responsible for the memory effect: the gravitational potential. In simple Newtonian terms, a gravitational potential measures how much energy an object would gain if it fell from a certain height.
Drop an anvil off a cliff, and the speed of the anvil at the bottom can be used to reconstruct the “potential” energy that falling off the cliff can impart. But in general relativity, where spacetime is stretched and squashed in different directions depending on the motions of bodies, a potential dictates more than just the potential energy at a location—it dictates the shape of spacetime. “The memory is nothing but the change in the gravitational potential,” said Thorne, “but it’s a relativistic gravitational potential.” The energy of a passing gravitational wave creates a change in the gravitational potential; that change in potential distorts spacetime, even after the wave has passed. How, exactly, will a passing wave distort spacetime? The possibilities are literally infinite, and, puzzlingly, these possibilities are also equivalent to one another. In this manner, spacetime is like an infinite game of Boggle. The classic Boggle game has 16 six-sided dice arranged in a four-by-four grid, with a letter on each side of each die. Each time a player shakes the grid, the dice clatter around and settle into a new arrangement of letters. Most configurations are distinguishable from one another, but all are equivalent in a larger sense. They are all at rest in the lowest-energy state that the dice could possibly be in. When a gravitational wave passes through, it shakes the cosmic Boggle board, changing spacetime from one wonky configuration to another. But spacetime remains in its lowest-energy state. Super Symmetries That characteristic—that you can change the board, but in the end things fundamentally stay the same—suggests the presence of hidden symmetries in the structure of spacetime. Within the past decade, physicists have explicitly made this connection. The story starts back in the 1960s, when four physicists wanted to better understand general relativity. They wondered what would happen in a hypothetical region infinitely far from all mass and energy in the universe, where gravity’s pull can be neglected, but gravitational radiation cannot. They started by looking at the symmetries this region obeyed.
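The anvil reconstruction mentioned above is one line of Newtonian algebra; with an illustrative impact speed of 20 m/s (our number, not the article’s):

```latex
mgh = \tfrac{1}{2}mv^{2}
\;\Rightarrow\;
h = \frac{v^{2}}{2g} \approx \frac{(20\ \mathrm{m\,s^{-1}})^{2}}{2 \times 9.8\ \mathrm{m\,s^{-2}}} \approx 20\ \mathrm{m}
```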
Physics
When the Hunga Tonga-Hunga Ha’apai volcano in the Pacific Ocean erupted earlier this year, the event was one for the record books — in several surprising ways. The January 15 eruption was so explosive that it injected water vapor so high that it touched space, a first-of-its-kind observation for an earthly volcano. And the event produced the greatest concentration of lightning ever detected — making it far flashier than the 2018 eruption of Krakatau in Indonesia or the 2021 tornado outbreak across the U.S. South. The eruption also released so much energy that its disturbance of a charged layer of Earth’s atmosphere, called the ionosphere, rivaled that of a solar geomagnetic storm. Seismologists, geophysicists and oceanographers described these and other eruption superlatives at a news conference on December 12 and in several presentations in Chicago at the American Geophysical Union’s fall meeting. “These are once in a lifetime … observations,” said Larry Paxton, an astrophysicist at Johns Hopkins University Applied Physics Laboratory in Laurel, Md. He and colleagues examined data from NASA’s Global Ultraviolet Imager, on a spacecraft in orbit around the Earth. On the day of the eruption, Paxton said, the instrument revealed “something unusual” in the far-ultraviolet light portion of the electromagnetic spectrum: a roundish spot in the satellite data coinciding with the volcano’s location where there was a temporary decrease in those UV emissions. The instrument doesn’t see anything in the atmosphere below about 100 kilometers above sea level, what’s typically thought of as the boundary of space. That means that some sort of emitted material — most likely water vapor from the undersea volcano — had reached high enough into space to briefly absorb those particles of light, the researchers reported. Scientists had previously estimated that the eruption extended past the stratosphere and into the mesosphere. The new finding suggests the explosion reached even higher. The volcano woke up in December 2021 (SN: 1/21/22). By early January, the ongoing eruption was already “one of the most prolific lightning producers” on the planet, said Chris Vagasky, a meteorologist with Vaisala Inc., an environmental instruments company headquartered in Vantaa, Finland. Using Vaisala’s Global Lightning Detection Network, Vagasky and colleagues estimate that on the day of the big blast on January 15 alone, there were at least 400,000 lightning strikes at and around the volcano — an order of magnitude higher than generally observed in Earth’s most powerful supercell thunderstorms, Vagasky said. “This was the most extreme lightning event ever detected by the global network.” Some of the volcano’s explosive energy made it to the ionosphere, the layer of Earth’s atmosphere where charged plasma coexists with other atmospheric particles. Atmospheric pressure waves from the eruption propagated into space, shifting the plasma around (SN: 8/29/22). Those plasma shifts then rippled along Earth’s magnetic field lines, resonating through the ionosphere to disturb plasma thousands of kilometers away. “It’s like plucking a guitar string,” said Claire Gasque, a space physicist at the University of California, Berkeley.
(Gasque is the daughter of Science News’ news director, Macon Morehouse, who wasn’t involved in this article.) In the vicinity of the volcano, that effect on the ionosphere from the January 15 eruption rivaled, and even surpassed, the impact of a minor solar geomagnetic storm that began on January 14, Gasque added. “Despite a simultaneous geomagnetic storm, the volcano dominated changes in ionospheric dynamics.” “Most people think of space weather as caused by solar influences,” Gasque said. But these data suggest a volcano can have just as much power. The volcano may yet break other records, the researchers said, as scientists continue studying data from the powerful explosion.
Physics
The Five-hundred-meter Aperture Spherical Telescope (FAST), located in China, is currently the world’s largest and most sophisticated radio observatory. While its primary purpose is to conduct large-scale surveys of neutral hydrogen (the most common element in the Universe), study pulsars, and detect Fast Radio Bursts (FRBs), scientists have planned to use the telescope in the Search for Extraterrestrial Intelligence (SETI). Integral to this field of study is the search for technosignatures, signs of technological activity that indicate the presence of an advanced civilization. While many potential technosignatures have been proposed since the first surveys began in the 1960s, radio transmissions are still considered the most likely and remain the most studied. In a recent survey, an international team of SETI researchers conducted a targeted search of 33 exoplanet systems using a new method they call the “MBCM blind search mode.” While the team detected two “special signals” using this mode, they dismissed the idea that they were transmissions from an advanced species. Nevertheless, their survey demonstrated the effectiveness of this new blind mode and could lead to plausible candidate signals in the future. The survey was conducted by researchers representing the FAST collaboration, Breakthrough Listen, and multiple universities and institutes. This included the Institute for Frontiers in Astronomy and Astrophysics at Beijing Normal University, the Beijing Academy of Science and Technology, the Space Sciences Laboratory (SSL) at UC Berkeley, the Institute for Astronomical Science at Dezhou University, the College of Physics and Electronic Engineering at Qilu Normal University, and the University of Glasgow. The paper that describes their work has been accepted for publication by the Astrophysical Journal. The first SETI experiment (Project Ozma) took place in 1960 under the direction of Professor Frank Drake, for whom the Drake Equation is named. Since then, most SETI experiments have searched for radio communications as technosignatures due to their effectiveness at propagating through interstellar space. The earliest experiments searched at specific frequencies, like the absorption line of neutral hydrogen (21 cm) and hydroxyl (18 cm), which correspond to radio frequencies of about 1.42 and 1.67 gigahertz (GHz). But with the advancement of technology, the available bandwidth of SETI systems has expanded into the tens of GHz range. In addition, SETI surveys have come to rely on a strategy known as Multibeam Coincidence Matching (MBCM) to address radio-frequency interference (RFI) and filter it out of their signal noise. Dr. Vishal Gajjar, a SETI researcher at UC Berkeley and a co-author of the study, explained to Universe Today via email: “Single-dish radio telescopes observe a small portion of the sky, known as a beam, which is about the size of the tip of a pencil held at arm’s length. Despite their accuracy, these telescopes often pick up interference from nearby terrestrial sources. To overcome this issue, some telescopes are equipped with multiple beams, allowing them to observe several small areas of the sky at the same time. By searching for signals of interest in all beams simultaneously, we can determine if a signal is truly from a source in the sky or if it is a result of interference.
When a signal is detected in multiple beams, it is likely to be terrestrial interference.” According to Gajjar, MBCM is considered better than conventional methods for three main reasons. These include: - Increased accuracy and robustness: MBCM can eliminate false positive detections caused by terrestrial interference, resulting in more accurate results. MBCM is less susceptible to interference from terrestrial sources, making it more robust and reliable than conventional methods. - Faster processing: MBCM can be performed in real-time, making it faster than traditional methods that require post-processing. - Increased coverage: MBCM allows for a wider field of view by using multiple beams, providing more coverage than a single beam. This third advantage was integral to the work of Dr. Gajjar and the international team. The FAST telescope is the world’s largest single-dish radio telescope and is equipped with a 19-beam receiver, allowing astronomers to simultaneously observe 19 different positions in the sky. When paired with FAST’s instruments, the MBCM technique effectively eliminates sources of interference and ensures accurate observations. For their study, the team observed 33 nearby exoplanets using the traditional MBCM strategy and a new search method they call the “MBCM blind search mode.” As they indicate in their paper, the blind search mode was inspired by the multibeam blind search mode that was recently developed to study FRBs. The basic idea is to use all 19 of FAST’s beams to search for ETI signals, where the central beam (Beam 1) tracks a target while the others serve as reference beams. If a signal covers non-adjacent beams, more than four adjacent beams, or three or more beams in a line, the team classifies the signal as RFI. They also identify four beam coverage arrangements that could indicate radio signals that are ETI in origin. As illustrated in the diagram below, these include any single one of FAST’s 19 beams, two adjacent beams (Figure 1a), three adjacent beams forming an equilateral triangle (Figure 1b), and four adjacent beams forming a compact rhombus (Figure 1c). Any beam coverage arrangements that did not fit into these four categories (like the three examples in the second line of the diagram) were considered false positives and rejected. As Gajjar indicated, this paper builds on previous work where they conducted targeted observations with FAST of the same 33 exoplanetary systems: “During those observations, we aimed the central beam of our 19-beam receiver at each individual target and only analyzed data from the central beam where the target was situated. If a signal of interest was detected, we cross-checked the same frequency across other beams to eliminate terrestrial interference. In the present paper, we perform a more comprehensive search by blindly searching for signals across all 19 beams, regardless of the presence of any exoplanetary system in the field of view. This approach allows us to conduct an agnostic search without prior knowledge of any potential targets of interest present in our beams.” After scanning these 33 exoplanets, the team discerned two rather unusual and intriguing signals. As Gajjar related, while it was challenging to evaluate these signals (as they only appeared in one beam), after a thorough examination, they determined that they were just RFI: “One of the signals was only present in one of the two polarizations of the telescope.
Normally, sky-based sources would show similar intensity in both polarizations over a longer period of observation, but this wasn’t the case for the first signal, making it easy to dismiss. The second signal was more intriguing as it showed the same intensity in both polarizations. Upon closer inspection, we discovered that the frequency of the second signal was very close to known sources of interference.” In another case, further examination of the data revealed a signal in one beam with a very low signal-to-noise ratio (SNR). The team also rejected this signal because its behavior was similar to other instances of RFI they had identified. While no clear technosignatures were detected, the survey was invaluable because of the way it tested the team’s blind search mode technique. What’s more, the two signals identified are fitting targets for follow-up observations, which could be conducted by Breakthrough Listen (the largest SETI effort ever mounted) in the coming years. “This is a groundbreaking stride in the field of SETI,” said Gajjar. “In SETI, this technique has been deployed for the first time. This unique technique can be useful because it reduces the amount of false positives, allowing for a more efficient search for signals from extraterrestrial civilizations. By reducing the amount of interference, multibeam coincidence rejection increases the sensitivity of the search and makes it easier to detect weak signals that might otherwise be overlooked.”
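A toy version of the coincidence logic described above fits in a few lines – the adjacency map, function name and thresholds are our own simplifications, and the real pipeline’s geometric tests (lines of three, triangles, rhombi across the 19-beam layout) are considerably more careful:

```python
def classify_detection(beams, neighbours):
    """Coarse MBCM-style filter. `beams` is the set of beam IDs in which
    a signal appears; `neighbours[b]` gives the beams adjacent to b."""
    if len(beams) > 4:
        return "RFI"                 # covers more than four beams
    for b in beams:
        if len(beams) > 1 and not (beams - {b}) & neighbours[b]:
            return "RFI"             # a beam with no adjacent partner
    return "candidate"               # passes the coarse compactness cut

# Worked example: central beam 1 ringed by beams 2-7.
neighbours = {1: {2, 3, 4, 5, 6, 7}, 2: {1, 3, 7}, 3: {1, 2, 4},
              4: {1, 3, 5}, 5: {1, 4, 6}, 6: {1, 5, 7}, 7: {1, 2, 6}}
print(classify_detection({1, 2}, neighbours))  # adjacent pair -> candidate
print(classify_detection({2, 5}, neighbours))  # non-adjacent  -> RFI
```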
Physics
Taking advantage of a phenomenon known as emergent behavior at the microscale, MIT engineers have designed simple microparticles that can collectively generate complex behavior, much the same way that a colony of ants can dig tunnels or collect food. Working together, the microparticles can generate a beating clock that oscillates at a very low frequency. These oscillations can then be harnessed to power tiny robotic devices, the researchers showed. “In addition to being interesting from a physics point of view, this behavior can also be translated into an on-board oscillatory electrical signal, which can be very powerful in microrobotic autonomy. There are a lot of electrical components that require such an oscillatory input,” says Jingfan Yang, a recent MIT PhD recipient and one of the lead authors of the new study. The particles used to create the new oscillator perform a simple chemical reaction that allows the particles to interact with each other through the formation and bursting of tiny gas bubbles. Under the right conditions, these interactions create an oscillator that behaves like a ticking clock, beating at intervals of a few seconds. “We're trying to look for very simple rules or features that you can encode into relatively simple microrobotic machines, to get them to collectively do very sophisticated tasks,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT. Strano is the senior author of the new paper, which appears today in Nature Communications. Along with Yang, Thomas Berrueta, a Northwestern University graduate student advised by Professor Todd Murphey, is a lead author of the study. Collective behavior Demonstrations of emergent behavior can be seen throughout the natural world, where colonies of insects such as ants and bees accomplish feats that a single member of the group would never be able to achieve. “Ants have minuscule brains and they do very simple cognitive tasks, but collectively they can do amazing things. They can forage for food and build these elaborate tunnel structures,” Strano says. “Physicists and engineers like myself want to understand these rules because it means we can make tiny things that collectively do complex tasks.” In this study, the researchers wanted to design particles that could generate rhythmic movements, or oscillations, with a very low frequency. Until now, building low-frequency micro-oscillators has required sophisticated electronics that are expensive and difficult to design, or specialized materials with complex chemistries. The simple particles that the researchers designed for this study are discs as small as 100 microns in diameter. The discs, made from a polymer called SU-8, have a platinum patch that can catalyze the breakdown of hydrogen peroxide into water and oxygen. When the particles are placed at the surface of a droplet of hydrogen peroxide on a flat surface, they tend to travel to the top of the droplet. At this liquid-air interface, they interact with any other particles found there. Each particle produces its own tiny bubble of oxygen, and when two particles come close enough that their bubbles interact, the bubbles pop, propelling the particles away from each other. Then, they begin forming new bubbles, and the cycle repeats over and over. “One particle by itself stays still and doesn’t do anything interesting, but through teamwork, they can do something pretty amazing and useful, which is actually a difficult thing to achieve at the microscale,” Yang says.
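The grow-and-pop cycle described above behaves like a relaxation oscillator, and a crude model with invented numbers lands in the few-seconds regime the team reports:

```python
# Crude relaxation-oscillator picture of the bubble cycle.
# Both numbers are illustrative guesses, not values from the paper.
growth_rate = 20e-6   # bubble radius growth rate, m/s
pop_radius = 60e-6    # radius at which neighbouring bubbles burst, m

grow_time = pop_radius / growth_rate   # ~3 s to reach popping size
period = 2 * grow_time                 # allow as long again to re-approach
print(f"period ~{period:.0f} s, frequency ~{1 / period:.2f} Hz")
```

With these guesses the period comes out near 6 s, or roughly 0.2 hertz – the same order as the 0.1 to 0.3 hertz figures quoted below.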
MIT chemical engineers showed that specialized particles can oscillate together, demonstrating a phenomenon known as emergent behavior. At left, two particles oscillate together, and at right, eight particles. Video courtesy of the researchers. The researchers found that two particles could make a very reliable oscillator, but as more particles were added, the rhythm would get thrown off. However, if they added one particle that was slightly different from the others, that particle could act as a “leader” that reorganized the other particles back into a rhythmic oscillator. This leader particle is the same size as the other particles but has a slightly larger platinum patch, which enables it to create a larger oxygen bubble. This allows this particle to move to the center of the group, where it coordinates the oscillations of all of the other particles. Using this approach, the researchers found they could create oscillators containing as many as 11 particles. Depending on the number of particles, this oscillator beats at a frequency of about 0.1 to 0.3 hertz, which is on the order of the low-frequency oscillators that govern biological functions such as walking and the beating of the heart. Oscillating current The researchers also showed that they could use the rhythmic beating of these particles to generate an oscillating electric current. To do that, they swapped out the platinum catalyst for a fuel cell made of platinum and ruthenium or gold. The mechanical oscillation of the particles rhythmically alters the resistance from one end of the fuel cell to the other, which converts the voltage generated by the fuel cell into an oscillating current. “Like a dripping faucet, catalytic microdiscs floating at a liquid interface use a chemical reaction to drive the periodic growth and release of gas bubbles. The study shows how these oscillatory dynamics can be harnessed for mechanical actuation and electrochemical signaling relevant to microrobotics,” says Kyle Bishop, a professor of chemical engineering at Columbia University, who was not involved in the study. Generating an oscillating current instead of a constant one could be useful for applications such as powering tiny robots that can walk. The MIT researchers used this approach to show that they could power a microactuator, which was previously used as legs on a tiny walking robot developed by researchers at Cornell University. The original version was powered by a laser that had to be alternately pointed at each set of legs to manually oscillate the current. The MIT team showed that the on-board oscillating current generated by their particles could drive the cyclic actuation of the microrobotic leg, using a wire to transfer the current from the particles to the actuator. “It shows that this mechanical oscillation can become an electrical oscillation, and then that electrical oscillation can actually power activities that a robot would do,” Strano says. One possible application for this kind of system would be to control swarms of tiny autonomous robots that could be used as sensors to monitor water pollution. The research was funded in part by the U.S. Army Research Office, the U.S. Department of Energy, and the National Science Foundation.
Physics
We’re (hopefully) going to be able to watch the Dart spacecraft’s collision with Dimorphos live, or at least on a few minutes’ delay, thanks to what Nasa calls the mission’s own “mini-photographer”, the LICIACube (short for Light Italian CubeSat for Imaging Asteroids). The satellite craft will fly past Dimorphos about three minutes after Dart crashes, Nasa says, aiming “to confirm the spacecraft impact, observe the evolution of the ejected plume, potentially capture images of the newly formed impact crater, and image the opposite hemisphere of Dimorphos that Dart will never see”. The cameras have already been busy. Earlier this week, as part of the calibration process, LICIACube captured images of a crescent Earth, and the Pleiades star cluster, also known as the Seven Sisters. The first images taken by an Italian satellite in deep space. @ASI_spazio showed the first photographs taken by @LICIACube, which is following the DART probe to observe the impact. This image of the Pleiades open cluster was captured by its LUKE camera. pic.twitter.com/a0sTOpb4xm — Ana Julia (@anajuliabanlei) September 26, 2022 The image part of the project is managed by the Italian space agency’s robotic exploration mission office, while overall responsibility for managing Dart rests with Johns Hopkins university’s applied physics laboratory in Laurel, Maryland, for Nasa’s planetary defense coordination office. And here’s an explainer we published earlier about the Dart mission, which Nasa is running in association with scientists from Johns Hopkins university. It’s important to point out that there is no current threat to Earth from an asteroid … this test mission is taking place to assess our readiness if such a peril ever materialized. But it’s a subject that’s been in the public eye recently, notably through last year’s Netflix comedy Don’t Look Up, in which Earth faces impending doom from a menacing asteroid and barely anybody seems to care or notice. Good afternoon blog readers, space enthusiasts, and those who just want to know if humanity can be saved from the apocalypse of a giant asteroid slamming into Earth. In about two hours from now, at 7.14pm ET, Nasa will take the first steps towards finding out. The space agency will intentionally crash a spacecraft the size of a small car into Dimorphos, the moon of the asteroid Didymos, orbiting about 6.8m miles away. The aim of the Dart (double asteroid redirection test) mission is to see if the asteroid’s trajectory can be altered by the force of the impact, thereby suggesting humankind has the capability of at least attempting to avert such an Armageddon-style event. The unprecedented “planetary defense test” is a venture several years and $325m in the making, and is the first of what Nasa intends to be a series of missions to assess our readiness for the threat of a large asteroid impact. We’ll bring you all the developments as they happen over the next few hours, but before we get started, let’s take a look at the mission itself:
Physics
All guns can kill, but they do not kill equally. Compare the damage an AR-15 and a 9mm handgun can do to the human body: “One looks like a grenade went off in there,” says Peter Rhee, a trauma surgeon at the University of Arizona. “The other looks like a bad knife cut.” The AR-15 is America’s most popular rifle. It has also been the weapon of choice in mass shootings from Sandy Hook to Aurora to San Bernardino. In Orlando, the shooter used a Sig Sauer MCX, an AR-15 style rifle originally developed for special ops, to kill 49 people in the Pulse nightclub. The carnage sparked new calls to reinstate a ban on assault rifles like the AR-15, which were designed as weapons of war. It’s possible to argue about everything when it comes to the politics of guns – including about the definition of “assault rifle” itself – but it’s harder to argue about physics. So let’s consider the physics of an AR-15. A bullet with more energy can do more damage. Its total kinetic energy is equal to one-half the mass of the bullet times its velocity squared. The bullet from a handgun is – as absurd as it may sound – slow compared to that from an AR-15. It can be stopped by the thick bone of the upper leg. It might pass through the body, only to become lodged in skin, which is surprisingly elastic. The bullet from an AR-15 does an entirely different kind of violence to the human body. It’s relatively small, but it leaves the muzzle at three times the speed of a handgun bullet. It has so much energy that it can disintegrate three inches of leg bone. “It would just turn it to dust,” says Donald Jenkins, a trauma surgeon at University of Texas Health Science Center at San Antonio. If it hits the liver, “the liver looks like a jello mold that’s been dropped on the floor.” And the exit wound can be a nasty, jagged hole the size of an orange. These high-velocity bullets can damage flesh inches away from their path, either because they fragment or because they cause something called cavitation. When you trail your fingers through water, the water ripples and curls. When a high-velocity bullet pierces the body, human tissue ripples as well – but much more violently. The bullet from an AR-15 might miss the femoral artery in the leg, but cavitation may burst the artery anyway, causing death by blood loss. A swath of stretched and torn tissue around the wound may die. That’s why, says Rhee, a handgun wound might require only one surgery but an AR-15 bullet wound might require three to ten. Then, multiply the damage from a single bullet by the ease of shooting an AR-15, which doesn’t kick. “The gun barely moves. You can sit there boom boom boom and reel off shots as fast as you can move your finger,” says Ernest Moore, a trauma surgeon at Denver Health and editor of the Journal of Trauma and Acute Care Surgery, which just published an issue dedicated to gun violence. Handguns kill plenty of people too, of course, and they’re responsible for the vast majority of America’s gun deaths. But a single bullet from a handgun is not likely to be as deadly as one from an AR-15.
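Plugging representative round-number loads into the kinetic-energy formula quoted above makes the contrast concrete (the masses and muzzle velocities are our assumptions, typical of the two classes rather than any specific cartridge):

```python
def kinetic_energy(mass_kg, velocity_ms):
    """KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms**2

handgun = kinetic_energy(0.008, 360)  # ~8 g bullet at ~360 m/s -> ~520 J
rifle = kinetic_energy(0.004, 990)    # ~4 g bullet at ~990 m/s -> ~1960 J
print(f"9mm handgun: ~{handgun:.0f} J, AR-15-class rifle: ~{rifle:.0f} J")
```

The rifle bullet is half the mass, but because energy grows with the square of velocity it arrives with roughly four times the energy – which is the whole point of the surgeons’ comparison.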
Physics
Image source: NASA/ESA/CSA/STScI. Image caption: It would take years to traverse the pillars even moving at the speed of light. Why satisfy yourself with one course when you can have a double helping? The US space agency Nasa has issued a second image of the famous "Pillars of Creation" taken by the new super space telescope, James Webb. This week we get a rendering of the active star-forming region as seen by Webb's Mid-Infrared Instrument (MIRI). Last week, it was the observatory's Near-Infrared Camera (NIRCam) that was highlighting this remarkable location some 6,500 light-years from Earth. The pillars lie at the heart of what astronomers refer to as Messier 16 (M16), or the Eagle Nebula. They are the subject of intense study. Every great telescope is pointed in their direction to try to understand the physics and the chemistry in play as new stars are birthed in great clouds of gas and dust. Webb, with its 6.5m-wide mirror and high-fidelity sensors, is the latest, biggest and best space observatory to take in the scene. What's interesting about the new MIRI picture is the choice of wavelengths used to display the pillars. Ordinarily, astronomers might filter the light to make the dusty columns go virtually translucent, so that their interior, nascent stars can be seen in greater detail. This is what the NIRCam image did: it emphasised the thousands of young blue stars that are present. And MIRI is capable of taking this approach another step. But on this occasion, the filtering has selected those wavelengths at which the dust itself actually glows. "Defying expectations that mid-infrared observations let you see through dust, this stunning image shows that they're also great for studying dust and complex molecules made to glow by the intense light of nearby hot stars," explained Prof Mark McCaughrean, the Senior Advisor for Science at the European Space Agency. Image source: NASA/ESA/CSA/STScI. Image caption: NIRCam's view – the pillars are cool, dense clouds of hydrogen gas and dust. Some of the complex chemistry this helps accentuate involves polycyclic aromatic hydrocarbons, PAHs. These are very carbon-rich compounds. You find them on burnt toast and in the exhaust from motor vehicles. PAHs produced by stars are thought to enrich the carbon content throughout the Universe. MIRI was developed in a collaborative effort between scientists and engineers from 10 European countries, led by the UK, and Nasa's Jet Propulsion Laboratory. Its co-principal investigator is Prof Gillian Wright. "It's simply thrilling to see how well MIRI is performing. It's producing radically new science information - stuff we've never had before," the director of the UK Astronomy Technology Centre told BBC News. "What we see in this new image is akin to the 'skin' of the pillars, if you like. You can see filamentary structures which are where the stars are starting to burn through the dust. And you can see regions that are dark - they're so dense and cold that they're not even lighting up for MIRI." James Webb is a collaborative project of the US, European and Canadian space agencies. It was launched in December last year and is regarded as the successor to the Hubble Space Telescope.
Physics
In the vast majority of space-based science fiction, ships that can travel faster than the speed of light are pretty much a given. Most credit Gene Roddenberry with creating the first warp drive spacecraft for the 1966 premiere of his TV show Star Trek, but before too long, other space dramas like 1977’s Star Wars were using their own “hyperdrives” to power ships like the Millennium Falcon from planet to planet in a matter of hours, or even minutes. Still, these technological terrors were more deus ex machina than actual hard science concepts, used by their creators to move the story seamlessly from planet to planet or star system to star system. That all changed in 1994 when Mexican physicist Miguel Alcubierre laid out the mathematical foundation for a real-world warp drive spacecraft. Critics immediately pointed out the shortcomings of Alcubierre’s work, primarily the mammoth amounts of energy required to power his spaceship as well as the so-called, purely theoretical “exotic” matter required to build the thing. Nonetheless, his mathematics were sound, moving at least the idea of a warp-capable spaceship from science fiction closer to science fact. Since Alcubierre’s breakthrough, a number of physicists and engineers have laid out their own formulas for theoretical warp drives, with each design seemingly inching the concept closer and closer to reality. The Debrief has covered many of these concepts, including a 2013 refinement of Alcubierre’s model by former NASA engineer and warp pioneer Harold G. “Sonny” White which dramatically lowered the power requirements from a completely unfathomable amount to a slightly less impossible yet still unattainable (by present-day engineering standards) amount. More recently, The Debrief spoke to Dr. Eric Lentz, a Ph.D. physicist who offered his own take on the warp drive, as well as a pair of engineers headquartered in Sweden who put forth designs for a warp drive that requires no exotic matter. We also interviewed a Chicago area engineer who has applied for a patent on his unique approach to building a warp drive, along with a professor who envisions traveling faster than the speed of light using something called fluidic space. Then, in December of 2021, The Debrief broke the news that Dr. White and his Eagleworks team discovered what they claimed to be the first real-world warp bubble while experimenting with extremely small structures known as Casimir cavities. White’s views about the feasibility of warp drive spacecraft and their related effects have garnered a fair amount of attention from critics over the years. Among them is astrophysicist Ethan Siegel who has noted that the warp drive concept “remains an interesting possibility and one worthy of continued scientific investigation, but one that you should remain tremendously skeptical about given the current state of affairs.” Faster-than-light travel theories are very much a dark art, but for those new to this subject (or who have never watched an episode of any Star Trek series), a warp bubble is an area of localized space/time that surrounds a faster-than-light ship, allowing it to seemingly break the laws of physics without, you know, breaking the laws of physics. This significant step seemed to mark a potentially dramatic shift from theory to practice, offering future engineers and inventors a roadmap for actually constructing a working, real-world warp drive.
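For the mathematically inclined, the foundation Alcubierre laid out is a single line element. In the standard form of his 1994 paper (with factors of c restored), v_s is the bubble's coordinate speed and f(r_s) is a smooth function equal to 1 inside the bubble and 0 far from it:

```latex
ds^{2} = -c^{2}\,dt^{2} + \left[\, dx - v_{s}\, f(r_{s})\, dt \,\right]^{2} + dy^{2} + dz^{2}
```

Inside the bubble, where f = 1, the ship sits in locally flat spacetime; it is the bubble wall, where f falls from 1 to 0, that demands the enormous (and negative) energy densities the critics seized on.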
Enter Greg Hodgin, Ph.D., a chemical engineer and political scientist who has started his own company, ZC Inc., with the primary goal of building a warp-capable spaceship within his lifetime. Dr. Hodgin recently sat down with The Debrief to discuss his lofty goals and the evolving roadmap he has laid out to achieve them. And unlike the handful of theorists who have preceded him in this nascent field, Hodgin believes he has the right people and the right plan to make warp drive spacecraft a physical reality. ZC Founder and Star Trek Fan Greg Hodgin, Ph.D. The Inspiration In many of the emails between Hodgin and The Debrief leading up to that interview, the nascent warp pioneer repeatedly pointed out how the wide range of Star Trek series and movies have inspired him to try and build the thing. “I want to be the first person to fly the Phoenix,” he explained, referencing the first human-made spaceship to break the warp barrier in Star Trek lore. Hodgin even challenged The Debrief to guess the inspiration for the name of his company. And yes, we ultimately figured out he meant Zefram Cochrane, the fictional inventor of the warp drive whose first flight of The Phoenix drew the attention of some Vulcans who happened to be in the neighborhood, kicking off humanity’s membership in the collection of spacefaring species known as The Federation. But unlike the millions of folks who have dreamt of exploring strange new worlds to seek out new life and new civilizations, Hodgin decided to put his reputation on the line and start a company whose primary goal is to actually do it. “I’d love our maiden flight of a full-sized warp craft to take place, oh, somewhere around April 5th, 2063,” Hodgin told The Debrief, referencing the date of Cochrane’s fictional first flight. “Although, I doubt we’ll be able to do it from Bozeman, Montana (where Cochrane’s flight took place).” The People Like many entrepreneurs who venture into highly-speculative, high-tech endeavors, Hodgin admits he isn’t an expert in warp fields or the physics behind them and has spent the last couple of years researching quantum physics, general relativity, and warp field mechanics. “I know just enough to be dangerous,” he joked before rattling off the basics behind his warp drive concept. First, he began reaching out to folks like Dr. Lentz or the duo in Sweden, learning the ins and outs of their warp concepts and what it would take to build them. He also connected with fusion energy experts, whose power source he believes can propel his first generations of warp drive spacecraft. “We can’t create enough anti-matter in the lab to realistically drive our warp engine,” he conceded, “but we believe that fusion is close enough and powerful enough to do the job (at least) for our initial goals.” Next, Hodgin partnered up with a retired Army colonel, Emmett Spurlock, whose area of expertise is operations and logistics. Hodgin says he feels that his own experience in creating and running a UN-recognized non-profit and Spurlock’s 30+ years of experience in operating complex projects dovetail nicely with the theoretical physicists and engineers whose ideas he hopes to test and evaluate until he finds the one that can work. “Because these theoretical physicists have been pioneers in doing the math, now we know it’s possible,” Hodgin told The Debrief.
Of course, the wide-eyed, sometimes manic CEO often pointed out that there is a very small community of warp theorists and engineers for him to pick from, and if he hopes to engage them on a long-term basis on his project, he will need to do everything to keep them happy. “There’s only like 40 people doing warp research in the world,” he says. “And if I don’t treat them well, they’re going to go somewhere else.” The Energy “A lot of people, when we talk about warp drive, and even when you look at the papers, you see yes, it’s theoretical, and yes, we’ve lowered the energy requirements,” Hodgin told The Debrief, “but to build this thing you’re gonna need a ton of energy and a ton of power.” That, says Hodgin, is where fusion energy comes in. “We recently signed an MOU with Jason Cassibry, who you interviewed last year,” he said. An associate professor in the University of Alabama in Huntsville’s Department of Mechanical and Aerospace Engineering, Cassibry spoke with The Debrief last year about his work in Pulsed Fusion. Coincidentally, Cassibry’s endeavor into fusion research covered by The Debrief was funded by the Limitless Space Institute, a group that includes Dr. Sonny White as a board member. The Debrief reached out to Cassibry to ask him about ZC and their goals to use fusion energy to power their warp drive spacecraft. “Broadly speaking, what Greg is trying to do is assemble a cadre of scientists to work out the challenges of warp bubbles for propulsion, power, and other industrial applications,” Cassibry explained. “Since the world is chasing abundant sources of energy, Greg conjectured that warp-enabled fusion could be an ultimate route to inexhaustible energy.” In simple terms, nuclear fusion generates a lot of heat and radiation, most of which gets lost and wasted because it doesn’t become usable energy. The idea is that a distortion in spacetime, such as a small warp bubble, would let gravity itself keep that heat and radiation contained, much as it does with the Sun’s energy. “We also know that we can create gravity-like fields to interact with matter in a variety of ways using current, voltage, frequency, and material properties. It has been speculated since the 1800s that both gravity and electromagnetism create, or are emergent properties of, spacetime curvature,” Cassibry said. “The generation of curvature by electromagnetic fields in an efficient way has not been formalized in a general way that is accepted by the mainstream. We also have not generated a significant thrusting force or motion from pure fields yet. Greg sees the application of fusion as a strong motivator for paying for some of the critical research that needs to be done to figure these problems out.” Hodgin freely admits that fusion energy may still be years or even decades away, but for the purposes of his company ZC, he is hoping it can become a reality in the next five years. In fact, he thinks he can take the core concepts in warp theory and apply them to fusion, greatly reducing the power requirements needed to create the initial fusion reaction. “The idea would be, with several warp fields, you can basically take a confined space and just crunch it to pieces,” he explains. Hodgin says this is the idea that most attracted Cassibry and the one that his team will have to conquer first before they can make their first warp drive spacecraft.
The First Warp Drive Spacecraft Once they have perfected fusion energy (because no ‘crazy’ theory would be complete without a seemingly impossible first step), Hodgin says his team will be able to build the first warp-capable ship. But unlike the Enterprise or even the Phoenix, ZC’s first warp-capable craft will be incredibly tiny. “We’re actually looking to create a warp bubble within the next five years,” Hodgin told The Debrief. “And again, microscopic. We’re talking nanometer, angstrom level. The whole point is to show we can actually distort spacetime.” Size is the key, says Hodgin, because of the tremendous amount of power required to warp spacetime. “By shrinking the warp ship from 100 m (large enough to carry a human crew) to the nano-scale, you can dramatically reduce the energy requirements,” he explained. After talking with the various warp field theorists and engineers, Hodgin says his extremely tiny warp craft will also feature some significant advantages over previous theoretical designs. For example, his warp field will not be constant but pulsed on and off. This allows for more control and also allows for ongoing communication between the ship and the engineers. “One of the problems with Alcubierre is once you turn it on, you can’t turn it off,” he explained. “Ours mitigates that problem.” Hodgin also says that his first nano-scale craft (and the first macro-scale craft planned for somewhere around year 10 of his business plan) will use a warp drive for propulsion but will not be designed to go faster than light. This, he says, removes the need for exotic matter, allowing his engineers to work with materials already available. “Creating the microscopic warp bubble allows us to get the power to perfect that technology, and then we can go bigger,” he told The Debrief. Hodgin also notes that even a sub-light warp ship would be a huge advantage over current propulsion technology. “At .33 C (one-third the speed of light), we can get to orbit in 8 seconds,” Hodgin says as his voice gets higher and his eyes open wide. “We can get to Mars in 8 minutes!” The Next Steps Like many projects of a speculative nature, Hodgin says that ZC will need to secure some first-stage funding to get its project out of the planning stages and into implementation. “Our goal right now is to be chasing government grants,” he said. Toward that end, Hodgin and professor Cassibry have partnered on a grant application to the National Science Foundation that could help propel their fusion research component forward. He is also speaking to venture capitalists who seem to see the value in his fusion-energy-first approach. “You tell these guys you want to build a warp drive, they look at you like you’re crazy,” he explains. “But you tell them you are working on nuclear fusion, suddenly their eyes open up, and they start to see dollar signs.” When The Debrief noted the speculative nature of his endeavor, Hodgin just smiled again before pointing out how he and his COO have the hands-on experience with politics and operations that few, if any, of the warp theorists whose work he hopes to build upon possess. “A lot of scientists (chasing VC money) make the same mistake,” he said. “The idea is good. If you just see the idea, it is fine. But that’s not all that you need. There are political and social, cultural, ethical, and moral issues that come with that.
And if you don’t think about the whole box, they’ll talk to you, but they’re not going to give you (money).” In fact, he says, his recent appearance at The New Worlds Conference in October (where he was able to touch base with Dr. White, who was also on his panel) resulted in some pretty interesting connections, offering him hope. “We had several people come up and talk to us (at the New Worlds Conference), including Space Force,” he told The Debrief. “That was fascinating. And not in a bad way. It seems that a lot of people think this is less crazy than it was a year or two ago. That’s a very good place to be.” Hodgin readily admits his venture is definitely out there and that he and his team won’t really know if it can work or not until they get into the lab and try to build it, but he argues that is typical of big pharma and biotech investments, which are often years away from tangible success. “We don’t know if it’s going to work at all,” he said. “It might be beyond our technological capacity. It might be.” Furthermore, he also notes that even if they are successful in turning their warp ship on, there may be some unforeseen consequences they need to prepare for. “Let’s make damn sure that when we turn this thing on, we don’t open a hole to another dimension,” he said only half-jokingly. Hodgin also admits he hasn’t chosen one specific warp theory to pursue. Instead, he is hoping to undertake what he calls a “wheel and spoke” model where multiple warp teams are working on all of the different theories at the same time until one of them finds the magic bullet to make the whole concept work. All in all, the ZC venture is a fun idea, and Hodgin is, without a doubt, the first mover in the field of building an actual warp spacecraft. Fortunately, the starry-eyed dreamer says, the theoretical work has more or less been accomplished, and it is now his job to take those theories (and the people behind them) and make a warp-capable spacecraft a reality. “Guys like Alcubierre, White, Lentz, and others have done the physics for us,” he explains with a sly grin. “And there’s enough of a loophole here that we can drive a starship through it.” Follow and Connect with Authors MJ Banias and Christopher Plain on Twitter: @mjbanias and @plain_fiction.
Physics
By Csaba Balazs, Associate Professor in Physics, Monash University

When it comes to electrons, Higgs bosons or photons, they don’t have much going for them. They possess spin, charge, mass and … that’s about it. Sometimes they only carry a vanishing amount of some of these features at that. So the mass of a particle is an important property to understand, because it goes to the root of fundamental particle physics.

What is mass, then, in the sense of its physical meaning? Why do some particles have mass and others don’t? And, though you may not think this would be important, the biggest question is: why do particles have mass at all? To answer those questions, and go well beyond what Albert Einstein knew about mass, let’s dive into particle physics and general relativity.

The measure of it

A professor once told me that the best definition of a physical property is its way of measurement. Following this definition, let’s see how we measure mass.

When you step on a scale, like it or not, it registers your weight. This is because the Earth attracts you with the gravitational force. The force between you and the Earth exists because both you and the Earth have mass.

If you stepped on the same scale on the moon it would register a fraction of your weight on Earth. About one sixth, to be precise. (There has never been a more effective diet plan: lose 83% of your body weight just by flying to the moon.) Your moon weight is less because the mass of the moon is less than Earth’s mass, and the gravitational force between the moon and you is proportional to the mass of the moon (M) and your mass (m). This is given by the formula F = GMm/R², where R is the radius of the moon and G is Newton’s gravitational constant.

Mass is the charge of the gravitational interaction, and without it no gravitational force exists. Physicists refer to this manifestation of mass as gravitational mass.

When you open a door, you have to push it with a force, otherwise the door won’t move. This is because the door has mass manifested as inertia; that is, it counteracts you to change the state of its motion. Newton’s second law says that the force you need to change the state of motion of an object is proportional to its inertial mass (F = ma). It’s easier to push a light door than a heavy one with the same acceleration.

Mass unified

Einstein connected gravitational and inertial mass via his gravitational equivalence principle. The equivalence principle simply says that gravitational and inertial mass are one and the same thing. This simple statement, however, coupled with the mathematical idea that the equations of physics should not depend on the reference frame, leads very far. A main consequence of the equivalence principle is Einstein’s gravitational equations. These equations specify how mass curves space and warps time.

The meaning of Einstein’s gravitational equations is simple: mass warps space-time, and curved space-time moves mass around. If you have ever seen a coin spiralling down a funnel-shaped wishing well, you know what I’m talking about. According to Einstein’s geometric picture of gravity, the Earth orbits around the sun because the latter creates a funnel-shaped gravitational well in the fabric of space-time, and Earth rotates in it just as the coin rotates in the wishing well. If the sun had no mass, the gravitational well around it wouldn’t exist and Earth would fly straight away. If Earth had no mass, it wouldn’t feel the curvature of the well and would fly away in a straight line.
That’s general relativity in a funnel-shaped nutshell.

Einstein knew all this and much more. After all, he wrote the books on relativity – both special and general. He figured out how mass is connected to gravity and energy. The first relation is encapsulated by his gravitational field equations, and the second is the widely known E = mc². Unfortunately, he never had a chance to learn WHY anything has the property of mass.

There’s more to mass

Modern fundamental particle physics gave us the answer in 2012, when the Higgs boson was finally discovered. The question is fairly important because, as we saw earlier, without mass there’s no gravity. Or is there? Well, actually, there is.

Take a photon, for example. A photon is the quintessence of masslessness. According to our present understanding, one of the deepest fundamental laws of particle physics, called gauge symmetry, prevents any force carrier particles, including photons, from acquiring even the tiniest of mass. Yet a photon is attracted by the sun. Observations clearly show that light from a galaxy far far away, positioned exactly behind the sun, can be observed on either side of the sun. The fact that the sun’s gravitational field bends light was used to prove that general relativity was correct in 1919.

Light interacts with gravitational fields because of E = mc². This equation tells us that, from the gravitational perspective, energy and mass are equivalent. A photon carries a tiny bit of energy, so it is slightly attracted by the sun.

The fact that energy gravitates is important, because the bulk of mass around us is, in fact, energy. All the visible parts of galaxies and stars are known to be made mostly of hydrogen, which is just protons and electrons. Earth is made of many different atoms, but those are just made of nucleons (protons and neutrons) and electrons. Electrons are about 2,000 times lighter than nucleons, so they bring much less to the table in terms of mass. And remarkably, most of the mass of protons and neutrons is energy stored in glue. Glue (or gluon, in scientific terms) is the stuff that keeps protons and neutrons together. It is the carrier of the strong force. Binding energy stored in gluons makes up most of the mass of protons, neutrons, hydrogen and any atom for that matter.

The role of the Higgs boson

We could stop here, because we’ve understood the origin of most of the visible mass in the universe. Einstein didn’t know where the mass of macroscopic objects came from, but particle physics revealed this late in the 20th century. There is, however, one more twist in the story. Perhaps the most amazing one. If Einstein had known about it, he would certainly have loved it. It is the role of the Higgs boson in generating mass. The Higgs boson, which is the excitation of the Higgs field, is what provides mass at the fundamental level: it lends mass to the elementary particles.

The Higgs story began with a serious problem in particle physics. By the late 20th century it was evident that gauge symmetries, mentioned earlier, are fundamental laws, and they forbid force carriers from having any mass. Yet in 1983 massive force carriers, the W and Z bosons, were discovered at CERN, the laboratory that went on to build the Large Electron–Positron collider (LEP) and its successor, the Large Hadron Collider (LHC). This was a serious conundrum: one of the most fundamental laws of nature, gauge invariance, was at stake. Giving up gauge invariance would have meant starting particle physics over from scratch. Amazingly, smart theorists figured out a way to have their cake and eat it too!
They introduced the Higgs mechanism, which allows us to preserve gauge symmetries at the fundamental level but break them in such a way that, in our particular universe, massive W and Z particles are still possible. This incredible trick won Sheldon Glashow, Abdus Salam, and Steven Weinberg the 1979 Nobel Prize in Physics. Besides force carriers, the Higgs mechanism also lends mass to fundamental matter particles, explaining why electrons, neutrinos and quarks have mass.

The contribution of fundamental electron, quark or neutrino mass, however, is negligible compared to the mass generated by glue around us. So does this mean that the Higgs is negligible at the atomic level? The answer is no! Without the Higgs boson, electrons would have no mass and all atoms would fall apart. Neutrons would not decay, so even atomic nuclei would look very different. Altogether, the universe would be a very, very different place, lacking galaxies, stars and planets.

And then came the dark stuff

So, now we know everything about mass, right? Unfortunately not. Only 5% of the mass in the whole universe comes from ordinary matter (the mass of which is understood). Nearly 70% of the mass of the universe comes from dark energy and about 25% from dark matter. Not only do we not have a clue about what kind of mass that is, we don’t even know what the dark sector is composed of. So stay tuned, because the story of mass continues, well into the millennium.

Source: The Conversation
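Two of the formulas quoted above invite a quick numerical check. Here is a minimal Python sketch – the constants are standard reference values and the 550 nm photon is an arbitrary illustrative choice, none of them figures from the article – that reproduces the "about one sixth" moon-weight ratio from F = GMm/R² and puts a number on the claim that a photon's energy gravitates via E = mc²:

# Back-of-the-envelope checks of F = G*M*m/R**2 and E = m*c**2.
# All constants are standard reference values, not taken from the article.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
h = 6.626e-34   # Planck constant, J s

# Surface gravity g = G*M/R**2 for the Earth and the moon.
g_earth = G * 5.972e24 / 6.371e6**2   # ~9.8 m/s^2
g_moon = G * 7.342e22 / 1.737e6**2    # ~1.6 m/s^2
print(f"moon weight / Earth weight = {g_moon / g_earth:.2f}")   # ~0.17

# Mass equivalent of a single green photon (550 nm): E = h*c/lambda, m = E/c**2.
E_photon = h * c / 550e-9             # ~3.6e-19 J
print(f"photon mass equivalent = {E_photon / c**2:.1e} kg")     # ~4e-36 kg

The first number matches the article's "about one sixth", and the second – roughly 4 × 10⁻³⁶ kg – makes concrete just how slight the sun's pull on a single photon is.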
Physics
Electrons flowing in vortices Top: Schematic experimental layout showing samples of Au (a) and WTe2 (b) together with the device used to measure current flow. Bottom: Normalized current densities measured experimentally in c) Au and d) WTe2 showing laminar and vortical flows. (Courtesy: A Aharon) An international team of physicists has observed electrons flowing in whirlpool-like patterns known as vortices for the first time. Long predicted, but never before seen in experiments, this evidence of fluid-like behaviour could be exploited to make more efficient electronics. In ordinary materials, the flow of electrons is strongly influenced by impurities and atomic vibrations, both of which cause electrons to scatter. In ultraclean materials and at near-zero temperatures, where such classical processes are absent, the electrons move unimpeded across the material, like billiard balls. In the rare cases when the electrons interact strongly with one another, however, they are predicted to move collectively, like a fluid. In 2017, a team led by Leonid Levitov at the Massachusetts Institute of Technology in the US, together with colleagues at the University of Manchester in the UK, observed fluid-like electron behaviour in a sample of graphene (a sheet of carbon atoms just one atom thick) that contained a thin channel with several pinch points. Current sent through the channel flowed through the constrictions with hardly any resistance, implying that the electrons that make up the current could squeeze through the pinch points collectively rather than passing through them individually. Electrons behave like quantum waves In the new work, Eli Zeldov, together with Levitov and colleagues from Israel’s Weizmann Institute of Science and the University of Colorado at Denver in the US, studied electrons in tungsten ditelluride (WTe2). This material is an ultraclean type II Weyl semimetal, a recently discovered class of topological material (one that can be insulating in the bulk but has conducting surface states due to symmetry-protected topological order). WTe2 is known to have exotic electronic properties when made into two-dimensional flakes a single atom thick. Indeed, it is one of several new quantum materials in which electrons interact strongly and behave as quantum waves rather than particles, Levitov explains. To observe electrons flowing in vortices, the researchers first synthesized pure single crystals of WTe2 and shaved off thin flakes of the material. They then used electron-beam lithography and plasma etching to pattern each flake into a narrow channel and two circular chambers connected to its sides. “This geometry was designed to allow possible shear forces to steer the electron fluid in the chambers by the electric current flowing in the narrow channel,” team member Amit Aharon-Steinberg tells Physics World. “We then used an extremely sensitive scanning magnetometer, designed in our laboratory, which senses the magnetic fields generated by the flowing electric current.” Finally, the researchers reconstructed the electric current from the measured magnetic field images to explicitly highlight the vortices. The hydrodynamic regime The analyses revealed that electrons flowing through the channel caused the electrons in each side chamber to swirl in whirlpools. What is more, the vortices were only present for small apertures, with the flow being laminar (that is, without vortices) for larger ones.
Near the vortical-to-laminar transition, a single vortex in the chamber was seen splitting into two – behaviour that is only expected in the hydrodynamic (fluid-like) regime. The findings suggest that a new hydrodynamic mechanism may exist in thin pure crystals, in which the diffusion of electron momentum is enabled by small-angle scattering on the surface of the material rather than by conventional electron-electron scattering, which becomes very weak at low temperatures. This surface-induced para-hydrodynamics, as the researchers have dubbed it, shares many aspects of ordinary hydrodynamics, including vortices. According to the Weizmann-MIT-Colorado team, the findings could help researchers design and develop more efficient electronics. “We know when electrons go in a fluid state, [energy] dissipation drops, and that’s of interest in trying to design low-power electronics,” Levitov says. “This new observation is another step in that direction.” The research is detailed in Nature.
Physics
An image of the Orion Nebula as seen by the Spitzer Space Telescope and other observatories. A recent study of the nebula found new secrets of protostars "burping." (Image credit: ESA/NASA/JPL-Caltech) Infant stars in the Orion Nebula are emitting bright bursts of radiation as they frantically feed on gas and dust to grow. The surprisingly frequent feeding frenzies of the newborn stars, or "protostars", in the Orion Nebula, the closest star-forming region to Earth, were revealed by data from NASA's now-retired Spitzer Space Telescope. Outbursts from stars occur during their earliest stage of development, when they are around 100,000 years old, and repeat approximately every 400 years, the research reveals. The bright eruptions are clear signs of intense feeding as the infant stars gobble up material from the disks of gas and dust that surround them as they accumulate mass. "When you're watching star formation, clouds of gas collapse to form a star," research co-author and University of Toledo astronomer Tom Megeath said in a statement. "It's literally the process of star creation in real time." The findings could represent a significant step forward in the understanding of the physics at play during the earliest years of a star's life, including how young stars rapidly gather mass. This period of stellar evolution has been shrouded in mystery, as young stars are hidden inside the clouds of cool molecular gas and dust that make up the building blocks from which they form. Within these dense clouds, protostars younger than 100,000 years old (classified as "class 0 protostars") produce outbursts that are tough to observe with ground-based telescopes. The first outburst of this kind was detected almost 100 years ago, and since then very few have been sighted. Between 2004 and 2017, the infrared Spitzer Space Telescope broke this run of bad fortune for astronomers by peering through the thick clouds of gas and dust to spot bright flares from the young stars wrapped within the Orion Nebula. The space telescope's 16-year mission ended in 2020. Protostar peekaboo While observing 92 previously known class 0 protostars, the team discovered three outbursts, two of which were previously unknown. This data pointed to a "burst rate" from the infant stars of around one every 400 years. This is more frequent than the rate of bursts measured from older protostars that are further along in their evolution, the researchers said. The team was also able to estimate that these bursts last around 15 years. During the class 0 period, the protostars also accumulated around 50% or more of their total mass, the scientists found. This conclusion was reached by combining Spitzer data with observations made by NASA's space-based Wide-field Infrared Survey Explorer (WISE), by the retired Herschel Space Telescope, and by the airborne Stratospheric Observatory for Infrared Astronomy (SOFIA). "By cosmic standards, stars grow rapidly when they are very young," Megeath said. "It makes sense that these young stars have the most frequent bursts." Unraveling starbirth mysteries The findings could also indicate how the consumption of gas and dust from the surroundings of young protostars and the accumulation of mass could go on to influence the formation of planets around stars. "The disks around them are all raw material for planet formation," Megeath added. "Bursts can actually influence that material."
This influence could extend to triggering the appearance of molecules, grains, and crystals that can stick together to form larger structures — structures like planets. This means that there's a chance that over 4.5 billion years ago, before Earth was formed, the sun was one of these "burping baby stars." "The sun is a bit bigger than most stars, but there's no reason to think that it didn't undergo bursts," Megeath said. "It probably did. When we witness the process of star formation, it is a window into what our own solar system was doing 4.6 billion years ago." The team's research is published in the Astrophysical Journal Letters. Robert Lea is a science journalist in the U.K. whose articles have been published in Physics World, New Scientist, Astronomy Magazine, All About Space, Newsweek and ZME Science. He also writes about science communication for Elsevier and the European Journal of Physics. Rob holds a bachelor of science degree in physics and astronomy from the U.K.'s Open University. Follow him on Twitter @sciencef1rst.
Physics
Lawrence Livermore National Laboratory via AP This illustration provided by the National Ignition Facility at the Lawrence Livermore National Laboratory depicts a target pellet inside a hohlraum capsule with laser beams entering through openings on either end. The beams compress and heat the target to the necessary conditions for nuclear fusion to occur. Earlier this week, the Lawrence Livermore National Laboratory (LLNL) announced a momentous breakthrough in harnessing controlled nuclear fusion. The LLNL’s National Ignition Facility (NIF) achieved “ignition” — a fusion experiment that produced more energy than was consumed by the lasers needed to drive it. This piece of scientific news received significant publicity, even briefly capturing the front pages of major news outlets. What does it all mean? Nuclear fusion powers our sun and all other stars. In it, light nuclei of hydrogen fuse into heavier nuclei of helium and generate tremendous amounts of energy. Hydrogen used in fusion is an incredibly dense energy source, holding more than a million times as much energy per unit of mass as natural gas. As hydrogen is easily produced from water, commercial nuclear fusion would effectively offer a limitless source of energy with zero greenhouse gas emissions. Compared with that of its established relative, nuclear fission — which is used in commercial nuclear power plants and works by breaking up heavy nuclei — fusion’s radioactive waste would be shorter-lived and easier to handle. But problems abound. One of the key ones is that fusion is difficult to kick-start, requiring high temperatures comparable to those in the sun, which create an unusual state of matter known as plasma. These temperatures are achieved by extremely powerful lasers, which typically consume more energy than the fusion generates. This is the crux of NIF’s announcement: For the first time, they produced 50 percent more energy in a fusion experiment than was consumed by the lasers powering it. What does this mean for the role of fusion in our future energy supply? NIF’s discovery is undoubtedly significant, but much work remains. The amount of energy generated is still tiny, about 0.9 kilowatt-hours (kWh) from around 0.6 kWh of input. In comparison, an average American home uses about 900 kWh per month. The obvious next task is to increase both the absolute output and the ratio of output to input energy. This task will fall to the International Thermonuclear Experimental Reactor (ITER), currently under construction in the south of France (with the U.S. as one of the three dozen partner countries) and scheduled to begin operation in 2025. By the end of the decade, ITER aims to produce a power of 500 MW, similar to the output of a mid-sized coal-fired power plant, using only 50 MW of input heating power to kick-start the process. However, even ITER is just a proof-of-concept: fusion will produce heat, not usable electricity delivered to the grid. Based on ITER’s expected insights, a new generation of even larger demonstration (DEMO) reactors will be built and use fusion to produce electricity. These DEMO reactors are scheduled for operation only in the late 2040s, making this limitless source of energy about two decades away. NIF’s announcement is on track with this timeline: It is progress, but is it enough? Unfortunately, it may not be. Our energy landscape will need to change quickly and dramatically to avoid the worst consequences of climate change.
Nuclear fusion will most likely be late to the party, not entering commercial use in time to participate in this shift. Critics point out that we already have a functioning, but underutilized, fusion reactor: the sun delivers enough energy to Earth in 90 minutes to satisfy all our annual energy needs — and yet the global utilization of solar power remains minuscule. If the billions of dollars invested in fusion development were deployed to improve and subsidize solar panels, climate change woes might be solved much earlier. The dream of developing controlled fusion perhaps strikes at more than just concerns about energy supply and climate change. Humans have developed many innovative technologies, replicating and improving upon nature’s ingenuity. But never have they come close to making their own sun — that remained firmly the domain of gods. Perhaps getting closer to that dream, of moving our knowledge beyond a long-impossible limit, is the true cause for celebration. Along the way, fusion-inspired advances in physics and materials science will influence our world well beyond nuclear fusion. Ognjen Miljanić is a professor of chemistry at the University of Houston, where he teaches on energy and sustainability. He is the author of “Introduction to Energy and Sustainability,” published by Wiley. Follow him on Twitter: @MiljanicGroup
Physics
It’s been nearly 50 years since the Viking 1 lander snapped the first image from the surface of Mars. And yet, until recently, that landscape remained silent to the human ear. Now, thanks to two microphones aboard the Perseverance rover, researchers can tune in from millions of miles away to probe the Red Planet’s alien atmosphere and unique sound propagation patterns. Fields such as astronomy and astrophysics have long surveyed outer space using the electromagnetic spectrum — from gamma rays to radio waves and everything in between. By contrast, the acoustic exploration of the universe has only just begun. Although we use sound waves here on Earth to map the ocean floor, infer wind patterns, track lightning, and accomplish other tasks, NASA has only ever equipped a few missions with dedicated microphones. The first, bound for Mars in 1999, literally crashed and burned, and the second, launched in 2007, had technical issues. It wasn’t until early 2021, when Perseverance touched down, that the researchers listening in could start to piece together the Martian soundscape. (Anyone can listen to the eerie audio recordings on NASA’s site.) They shared their findings in a study published on April 1 in the journal Nature. Capturing Martian Sound It took some coaxing to convince NASA that the microphones were a worthwhile addition to Perseverance’s payload, says Baptiste Chide, one of the study’s lead authors and a postdoc at the Los Alamos National Laboratory. Chide completed his Ph.D. in 2020 at the University of Toulouse with planetary scientists Sylvestre Maurice and David Mimoun, who collaborated with a team at Los Alamos to build an instrument called SuperCam. SuperCam is mounted on Perseverance’s mast and houses one of the rover’s two microphones; the other mic is located in a camera on the rover’s side. It was Chide’s job as a Ph.D. student to devise the scientific rationale that would persuade NASA that the mics were indeed a valuable asset. And the stakes were high: Chide’s entire thesis project hinged upon getting those mics into space. “It was a risky bet,” he says — but one that paid off. (Credit: NASA/JPL-Caltech/Shutterstock) Without the bustle of humans and animals, Mars is nearly silent, save the gusty wind. So, the SuperCam team also decided to analyze the sounds coming from their own scientific equipment. Part of SuperCam’s job is to zap nearby rocks with a laser and record the acoustic and optical signals that reverberate back, in order to determine hardness and chemical composition. The mic was also able to pick up more distant sounds, including the whir of the blades on NASA’s Ingenuity helicopter — the first aircraft to make a powered, controlled flight on another planet. Interpreting the Noise Based on the noises coming from the laser and the helicopter, the researchers were able to determine that the speed of sound is much slower on Mars than it is on Earth. Moreover, different frequencies travel at different speeds. On Earth, sound typically travels at about 767 mph. On Mars, however, high-pitched sounds move at 559 mph and low-pitched ones move even slower, at 537 mph. A similar phenomenon occurs on Earth as well, but only at frequencies outside our hearing range, so we don’t usually notice the discrepancy. Chide says that this difference in speed between low and high frequencies would be most apparent at long distances.
If you were to attend a concert on Mars, for example, and stand a few hundred feet away from the stage, you would receive the high frequencies a few milliseconds before the low ones, which would lead to sound distortion. The band would also have to play quite loud in order for the music to reach you because sounds on Mars don’t carry nearly as far as they do on Earth.These otherworldly acoustic patterns are due to the peculiar atmosphere on Mars. Unlike Earth’s atmosphere, which is primarily oxygen and nitrogen, the Martian atmosphere is 96 percent carbon dioxide (CO2), and extremely cold and thin. This combination of factors causes the CO2 molecules to vibrate in such a way that they absorb higher-frequency sounds, preventing those noises from traveling long distances.Given that temperature has such a drastic effect on sound propagation, Chide and his colleagues suspect that the speed of sound on Mars will vary by season and even time of day. As the dust storm season approaches, the researchers anticipate their microphones will detect additional wind and temperature changes. SuperCam’s mic has unprecedented sensitivity, and can detect pressure fluctuations at scales 1,000 times smaller than ever before recorded on Mars. As a result, the mic can sense tiny eddies of wind called “micro-turbulence,” which whisk up dust — sculpting the planet’s surface, mixing chemicals and aerosols in the atmosphere, and controlling the temperature by absorbing solar radiation.Beyond the TheoreticalPrior to these sound recordings (which totaled just under six hours), models of the Martian soundscape were purely theoretical and offered conflicting predictions. Andi Petculescu, a physicist from the University of Louisiana at Lafayette who studies acoustic properties of planetary atmospheres and was not involved in the research, considers the new study to be “a breakthrough” that will help develop more consistent acoustic models.Given that the cold, thin, CO2-rich atmosphere renders Mars acoustically “unfriendly,” Petculescu was impressed that the researchers were able to obtain such clean signals from their microphones. He advocates equipping future spacecraft with acoustic sensors in order to learn more about the sound properties of other atmospheres. For example, Saturn’s largest moon Titan has a denser atmosphere and a wealth of background noises to record, including methane rainstorms. In 2027, NASA is scheduled to launch its Dragonfly mission, which will carry two microphones on its meteorological experiment, complementing previous recordings from the Huygens Probe.“The study of acoustics is coming of age in planetary science, and we are learning new things about the way sound propagates in different atmospheres,” explains Roger Wiens, a co-author on the recent study and principal investigator of SuperCam.The physics of sound will be important to understand in advance of humans setting foot on the Red Planet, he adds, in order to deduce information like wind direction and speed — and even gauge the health of scientific instruments by listening to the sounds they make as they operate. When humans finally get there, we’ll likely have to use devices like radios to communicate because sound won’t travel very far. As Wiens puts it: “You can't yell to somebody at the other side of the block.” Not to mention, you’d be shouting from inside your space helmet.
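The speeds quoted above can be turned into a concrete arrival-time gap. A short Python sketch — the 100 m listening distance is an arbitrary illustrative choice, not a figure from the study — estimates how far behind the high notes the low notes would arrive:

# Arrival-time gap between high- and low-pitched sound on Mars,
# using the frequency-dependent speeds quoted in the article (559 and 537 mph).
MPH_TO_MS = 0.44704            # miles per hour -> metres per second

v_high = 559 * MPH_TO_MS       # ~250 m/s, high-pitched sound
v_low = 537 * MPH_TO_MS        # ~240 m/s, low-pitched sound

distance = 100.0               # listening distance in metres (illustrative)
lag = distance / v_low - distance / v_high
print(f"low frequencies arrive {lag * 1e3:.0f} ms after high ones")  # ~16 ms

A gap on that order is enough to smear transients and make distant music sound out of sync, consistent with the concert thought experiment described above.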
Physics
Protons might be stretchier than they should be. The subatomic particles are built of smaller particles called quarks, which are bound together by a powerful interaction known as the strong force. New experiments seem to show that the quarks respond more than expected to an electric field pulling on them, physicist Nikolaos Sparveris and colleagues report October 19 in Nature. The result suggests that the strong force isn’t quite as strong as theory predicts. It’s a finding at odds with the standard model of particle physics, which describes the particles and forces that combine to make up us and everything around us. “It is certainly puzzling for the physics of the strong interaction, if this thing persists,” says Sparveris, of Temple University in Philadelphia. Such stretchiness has turned up in other labs’ experiments, but wasn’t as convincing, Sparveris says. The stretchiness that he and his colleagues measured was less extreme than in previous experiments, but also came with less experimental uncertainty. That increases the researchers’ confidence that protons are indeed stretchier than theory says they should be. At the Thomas Jefferson National Accelerator Facility in Newport News, Va., the team probed protons by firing electrons at a target of ultracold liquid hydrogen. Electrons scattering off protons in the hydrogen revealed how the protons’ quarks respond to electric fields (SN: 9/13/22). The higher the electron energy, the deeper the researchers could see into the protons, and the more the electrons revealed about how the strong force works inside protons. For the most part, the quarks moved as expected when electric interactions pulled the particles in opposite directions. But at one point, as the electron energy was ramped up, the quarks appeared to respond more strongly to an electric field than theory predicted they would. And it only happened for a small range of electron energies, leading to a bump in a plot of the proton’s stretch. “Usually, behaviors of these things are quite, let’s say, smooth and there are no bumps,” says physicist Vladimir Pascalutsa of the Johannes Gutenberg University Mainz in Germany. Pascalutsa says he’s often eager to dive into puzzling problems, but the odd stretchiness of protons is too sketchy for him to put pencil to paper at this time. “You need to be very, very inventive to come up with a whole framework which somehow finds you a new effect” to explain the bump, he says. “I don’t want to kill the buzz, but yeah, I’m quite skeptical as a theorist that this thing is going to stay.” It will take more experiments to get theorists like him excited about unusually stretchy protons, Pascalutsa says. He could get his wish if Sparveris’ hope of repeating the experiment with positrons — the antimatter version of electrons — scattered from protons is fulfilled. A different type of experiment altogether might make stretchy protons more compelling, Pascalutsa says. A forthcoming study from the Paul Scherrer Institute in Villigen, Switzerland, could do the trick. It will use hydrogen atoms that have muons in place of the electrons that usually orbit atoms’ nuclei.
Muons are about 200 times as heavy as electrons, and orbit much closer to the nucleus of an atom than electrons do — offering a closer look at the proton inside (SN: 10/5/17). The experiment would involve stimulating the “muonic hydrogen” with lasers rather than scattering electrons or positrons from it. “The precision in the muonic hydrogen experiments will be much higher than whatever can be achieved in scattering experiments,” Pascalutsa says. If the stretchiness turns up there as well, “then I would start to look at this right away.”
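The claim that muons sit far closer to the proton follows from the Bohr model, in which the orbital radius scales inversely with the orbiting particle's reduced mass. A minimal Python sketch — the particle masses and Bohr radius are standard reference values, not figures from the article — makes the scaling concrete:

# Bohr-model orbital radius scales as 1/(reduced mass): a = a0 * (mu_e / mu).
# All constants are standard reference values, not taken from the article.
a0 = 5.292e-11     # Bohr radius of ordinary hydrogen, m
m_e = 9.109e-31    # electron mass, kg
m_mu = 1.883e-28   # muon mass, kg (~207 electron masses)
m_p = 1.673e-27    # proton mass, kg

mu_e = m_e * m_p / (m_e + m_p)      # reduced mass, ordinary hydrogen
mu_mu = m_mu * m_p / (m_mu + m_p)   # reduced mass, muonic hydrogen

a_muonic = a0 * mu_e / mu_mu
print(f"muonic hydrogen radius ~ {a_muonic:.1e} m "
      f"({a0 / a_muonic:.0f}x closer than an electron)")   # ~186x

The reduced-mass correction, which matters for so heavy an orbiting particle, is why the factor comes out near 186 rather than the naive 207.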
Physics
Everybody's different Researchers aim to provide a more personalized approach to radiotherapy planning by incorporating factors related to the unique radiosensitivity of tumours and organs-at-risk. (Courtesy: Shutterstock/Mark Kostich) The goal of radiotherapy is to deliver a prescribed radiation dose to the tumour target while limiting damage to surrounding normal tissues. This is currently achieved using population-based treatment plan optimization, based on predefined dose-based objectives and organ-at-risk (OAR) constraints developed from the aggregated response to radiation of a broad patient population. Unfortunately, the effectiveness and toxicity of such standardized treatment plans vary, because patients and their tumours have individual biological characteristics. Aiming to provide a more personalized approach to radiotherapy planning, researchers at the University of Michigan have developed a novel intensity-modulated radiotherapy (IMRT) optimization strategy that directly incorporates patient-specific dose-response models into the planning process. Their technique, described in Medical Physics, is based on maximizing the predicted value of overall treatment utility – defined as the probability of local control minus the weighted sum of toxicity probabilities. The new planning method, called prioritized utility optimization (PUO), augments standard approaches by incorporating personalized factors related to the radiosensitivity of tumours and OARs. OAR radiotoxicity, for instance, can be influenced by age, smoking status, gene expression, molecular markers and pre-existing conditions such as cardiac disease. Other concurrent treatments may also impact the efficacy of radiation therapy. The researchers: Lead author Daniel Polan (left) and principal investigator Martha Matuszak. (Courtesy: D Polan) To validate their strategy, principal investigator Martha Matuszak and colleagues used the PUO method to create IMRT plans for five patients with non-small cell lung cancer (NSCLC). They report that PUO planning improved local control for all patients compared with the conventional plans that had been used for their treatments. “NSCLC patients represent a highly heterogeneous group with variability in extent and localization of disease,” explains lead author Daniel Polan. “In combination with other anatomic variability, these factors can drastically impact treatment planning, including any anticipated gains from differing optimization methods. Therefore, for initial feasibility testing of our method, we selected five cases to represent diversity in patient size, tumour size, location and laterality, in addition to diversity in dose covariates influencing predicted outcomes.” To create patient-specific IMRT plans, the researchers first used a commercial treatment planning system (TPS) to calculate dose based on an influence matrix of beamlet-dose contributions to regions-of-interest. They then solved two optimization problems to generate optimal beamlet weights that can be imported back into the TPS. The first optimization problem maximizes the overall plan utility subject to typical clinical dose constraints, by optimizing the trade-off between efficacy and toxicity based on individualized dose-response models. The second minimizes conventional dose-based objectives, subject to the same dose constraints as the first, while maintaining the optimal utility determined from the first optimization.
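As an illustration of that two-stage scheme — first maximize utility, then tidy up conventional dose objectives without surrendering any of the achieved utility — here is a minimal Python sketch. The two-beamlet influence matrix, the logistic dose-response curves and every number in it are invented for illustration; the paper's actual models, constraints and solver are considerably more elaborate:

import numpy as np
from scipy.optimize import minimize

# Toy influence matrix: rows = [target, OAR], columns = beamlets (invented numbers).
A = np.array([[1.0, 0.8],    # dose to tumour per unit beamlet weight
              [0.3, 0.6]])   # dose to organ-at-risk per unit beamlet weight

def logistic(d, d50, k):     # generic sigmoid dose-response curve
    return 1.0 / (1.0 + np.exp(-k * (d - d50)))

def utility(w):
    d_target, d_oar = A @ w
    tcp = logistic(d_target, d50=60.0, k=0.15)    # tumour control probability
    ntcp = logistic(d_oar, d50=40.0, k=0.15)      # toxicity probability
    return tcp - 0.5 * ntcp                       # weighted overall utility

oar_limit = {"type": "ineq", "fun": lambda w: 30.0 - (A @ w)[1]}  # OAR dose <= 30
bounds = [(0.0, 100.0)] * 2

# Stage 1: maximize utility subject to the clinical dose constraint.
s1 = minimize(lambda w: -utility(w), x0=[10.0, 10.0],
              method="SLSQP", bounds=bounds, constraints=[oar_limit])
u_star = utility(s1.x)

# Stage 2: minimize a conventional dose objective (here, total beam-on weight)
# while forbidding any loss of the stage-1 utility.
keep_utility = {"type": "ineq", "fun": lambda w: utility(w) - u_star + 1e-6}
s2 = minimize(lambda w: np.sum(w), x0=s1.x,
              method="SLSQP", bounds=bounds, constraints=[oar_limit, keep_utility])

print("stage-1 utility:", round(u_star, 3), "final weights:", np.round(s2.x, 2))

Freezing the stage-1 utility as a hard constraint in stage 2, rather than folding everything into one weighted objective, is what gives the method its "prioritized" character.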
For all five patients, the PUO approach successfully generated optimal beamlet weights that maximized utility while remaining within dose-based constraints. For the study, the researchers compared these PUO IMRT plans with the clinically delivered 3D conformal radiotherapy (3DCRT) plans, and with retrospectively generated dose-only optimization (DOO) IMRT and volumetric-modulated arc therapy (VMAT) plans. Dosimetry comparisons: Absolute and relative dose–volume histograms for dose-only optimization (DOO, dashed lines) and prioritized utility optimization (PUO, solid lines) plans, for two lung cancer patients. For one patient (left panels), PUO led to a large increase in dose to the planning target volume (PTV) with smaller increases in oesophagus and cord doses. For the other patient (right panels), PUO slightly improved PTV coverage and decreased lung and oesophagus dose. (Courtesy: Med. Phys. 10.1002/mp.15940) When compared with the 3DCRT, VMAT and DOO IMRT plans, the PUO method improved plan utility by an average of 40%, 32% and 31%, respectively. The PUO plans demonstrated an average 17% improvement in local control with similar toxicity to conventional planning. As anticipated, the extent of benefits from the PUO IMRT plans differed among patients. Polan reports that for one patient, PUO resulted in a utility improvement of 70% over conventional DOO. “This corresponds to a 32% absolute improvement in the predicted probability of progression-free survival, while only increasing the predicted probability of radiation-induced lung toxicity by 2%,” he says. “This substantial trade-off has the potential to greatly improve disease survivability while minimizing the impact to a patient’s post-treatment quality-of-life.” For another patient who had a large tumour, however, improvements were minimal. Polan explains that for larger tumours, treatment planning typically becomes more constrained due to increased integral dose requirements and a decreased ability to avoid bordering normal tissues. The team emphasizes that the PUO method provides a quantitative way to determine which patients may benefit from dose escalation or redistribution, based on patient-specific clinical factors and biomarkers, while also accounting for patient geometry and OAR dose limits. The researchers are currently conducting large-scale retrospective studies with the goal of developing a prospective clinical trial employing the PUO treatment planning strategy. Their research centres on integrating patient data and personalized outcome predictions directly into radiotherapy planning, with a current focus on liver, lung and head-and-neck cancers, where balancing the positive and negative effects of radiotherapy could significantly impact a patient’s overall quality-of-life.
Physics
The James Webb Space Telescope is prepared for testing at NASA's Johnson Space Center in Houston. It successfully launched into space on Dec. 25, 2021. Photo courtesy of NASA/Chris Gunn. Jan. 11, 2023. Contact: Eric Stann, 573-882-3346, StannE@missouri.edu In a new study, a team of astronomers led by Haojing Yan at the University of Missouri used data from NASA’s James Webb Space Telescope (JWST) Early Release Observations and discovered 87 galaxies that could be the earliest known galaxies in the universe. Haojing Yan The finding moves the astronomers one step closer to finding out when galaxies first appeared in the universe — about 200-400 million years after the Big Bang, said Yan, associate professor of physics and astronomy at MU and lead author on the study. “Finding such a large number of galaxies in the early parts of the universe suggests that we might need to revise our previous understanding of galaxy formation,” Yan said. “Our finding gives us the first indication that a lot of galaxies could have been formed in the universe much earlier than previously thought.” In the study, the astronomers searched for potential galaxies at “very high redshifts.” Yan said the concept of redshifts in astronomy allows astronomers to measure how far away distant objects are in the universe — like galaxies — by looking at how the colors change in the waves of light that they emit. “If a light-emitting source is moving toward us, the light is being ‘squeezed,’ and that shorter wavelength is represented by blue light, or blueshift,” Yan said. “But if that source [of light] is moving away from us, the light it produces is being ‘stretched,’ and changes to a longer wavelength that is represented by red light, or redshift.” Yan said Edwin Hubble’s discovery in the late 1920s that our universe is ever-expanding is key to understanding how redshifts are used in astronomy. “Hubble confirmed that galaxies external to our Milky Way galaxy are moving away from us, and the more distant they are, the faster they are moving away,” Yan said. “This relates to redshifts through the notion of distances — the higher the redshift an object is at, such as a galaxy, the further away it is from us.” Therefore, Yan said the search for galaxies at very high redshifts gives astronomers a way to construct the early history of the universe. “The speed of light is finite, so it takes time for light to travel over a distance to reach us,” Yan said. “For example, when we look at the sun, we aren’t looking at what it looks like in the present, but rather what it looked like some eight minutes ago. That’s because that’s how long it takes for the sun’s radiation to reach us. So, when we are looking at galaxies which are very far away, we are looking at their images from a long time ago.” Using this concept, Yan’s team analyzed the infrared light captured by the JWST to identify the galaxies. “The higher the redshift a galaxy is at, the longer it takes for the light to reach us, so a higher redshift corresponds to an earlier view of the universe,” Yan said. “Therefore, by looking at galaxies at higher redshifts, we are getting earlier snapshots of what the universe looked like a long time ago.” A pair of color composite images from the galaxy cluster SMACS 0723-73 and its surrounding area taken by NASA’s James Webb Space Telescope through its Early Release Observations (ERO).
A team of astronomers led by Haojing Yan at the University of Missouri used the data from these images to identify the objects of interest for their study. These include galaxies that could be the earliest known galaxies in the universe — about 200-400 million years after the Big Bang. The location of each object of interest is indicated by one of three different colored circles — blue, green or red — on the color images. These colors correspond with the range of redshifts where they were found — high (blue), very high (green), or extremely high (red). Graphic by Haojing Yan and Bangzheng Sun. Photos courtesy of NASA, European Space Agency, Canadian Space Agency and the Space Telescope Science Institute. The JWST was critical to this discovery because objects in space like galaxies that are located at high redshifts — 11 and above — can only be detected by infrared light, according to Yan. This is beyond what NASA’s Hubble Space Telescope can detect because the Hubble telescope only sees from ultraviolet to near-infrared light. “JWST, the most powerful infrared telescope, has the sensitivity and resolution for the job,” Yan said. “Up until these first JWST data sets were released [in mid-July 2022], most astronomers believed that the universe should have very few galaxies beyond redshift 11. At the very least, our results challenge this view. I believe this discovery is just the tip of the iceberg because the data we used only focused on a very small area of the universe. After this, I anticipate that other teams of astronomers will find similar results elsewhere in the vast reaches of space as JWST continues to provide us with a new view of the deepest parts of our universe.” “First batch of z ≈ 11–20 candidate objects revealed by the James Webb Space Telescope Early Release Observations on SMACS 0723-73” was published in The Astrophysical Journal Letters. Co-authors are Chenxiaoji Ling at MU; Zhiyuan Ma at the University of Massachusetts-Amherst; and Cheng Cheng and Jia-Sheng Huang at the Chinese Academy of Sciences South America Center for Astronomy and National Astronomical Observatories of China.
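Why only an infrared telescope can do this job falls directly out of the redshift definition: an observed wavelength is stretched by a factor of (1 + z). A small Python sketch — the 121.6 nm rest wavelength of hydrogen's Lyman-alpha line is a standard reference value, not a figure from the article — shows where light from such galaxies ends up:

# Observed wavelength of the hydrogen Lyman-alpha line (rest 121.6 nm)
# after cosmological redshift: lambda_obs = (1 + z) * lambda_rest.
LYMAN_ALPHA_NM = 121.6   # rest-frame wavelength, a standard reference value

for z in (11, 15, 20):   # the redshift range of the candidate objects
    obs_nm = (1 + z) * LYMAN_ALPHA_NM
    print(f"z = {z:2d}: Lyman-alpha observed at {obs_nm / 1000:.2f} micrometres")

At z = 11 the line lands near 1.5 micrometres, and by z = 20 it is past 2.5 micrometres — squarely in JWST's near-infrared range but at or beyond the red limit of Hubble's cameras.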
Physics
Close-up: the last complete image of asteroid moonlet Dimorphos as taken by the DRACO imager on NASA’s DART mission from about 12 kilometers from the asteroid and 2 seconds before impact. (Courtesy: NASA/Johns Hopkins APL) NASA has announced that its asteroid-deflection mission has successfully hit its target, with scientists now studying how much the body has been deflected by the impact. At 7:14 p.m. EDT yesterday, the $330m Double Asteroid Redirection Test (DART) craft – the first mission dedicated to demonstrating “kinetic impact” – hit a small asteroid with the aim of putting the body in a slightly different orbit around its companion. DART, which was launched in November 2021, was sent on a roughly 11 million kilometre journey towards a binary, near-Earth asteroid system. This system consists of a 780 m-diameter asteroid called “Didymos” and a smaller, 160 m body “Dimorphos” that orbits it. The plan was to slam into Dimorphos to see if the kinetic impact of a spacecraft could one day successfully deflect an asteroid that is on a collision course with Earth. Some 15 days ago, the mission released the Light Italian CubeSat for Imaging of Asteroids (LICIACube) – a CubeSat contributed by the Italian Space Agency that carries two optical cameras. As DART neared the asteroid system yesterday, it began taking images of Didymos and Dimorphos with a high-resolution imager called DRACO. Travelling at about 6 kilometres per second, DART then impacted Dimorphos and as it did so LICIACube flew past to image the kinetic impact itself, the resultant ejecta plume and possibly the impact crater. Ground-based observations carried out at several facilities – including the Lowell Discovery Telescope in Arizona, Las Campanas Observatory in Chile, the Las Cumbres Observatory global network, and the Magdalena Ridge Observatory in New Mexico – also tracked the impact of DART and the subsequent response by Dimorphos. Next steps Scientists will now characterize the ejecta produced and precisely measure Dimorphos’ orbital change to determine how effectively DART deflected the asteroid. Researchers expect the impact to shorten Dimorphos’ orbit by about 1%, or roughly 10 minutes. They will then compare the results of DART’s kinetic impact with computer simulations to evaluate the effectiveness of this approach and assess how best to apply it to future planetary defence scenarios. “At its core, DART represents an unprecedented success for planetary defense, but it is also a mission of unity with a real benefit for all humanity,” noted NASA administrator Bill Nelson. “As NASA studies the cosmos and our home planet, we’re also working to protect that home, and this international collaboration turned science fiction into science fact, demonstrating one way to protect Earth.” “This first-of-its-kind mission required incredible preparation and precision, and the team exceeded expectations on all counts,” says Ralph Semmel, director of the Johns Hopkins Applied Physics Laboratory in Laurel, Maryland.
“Beyond the truly exciting success of the technology demonstration, capabilities based on DART could one day be used to change the course of an asteroid to protect our planet and preserve life on Earth as we know it.” In 2024 the European Space Agency’s Hera mission will launch to the asteroid system and, once it arrives two years later, it will perform a close-up “crime-scene” investigation of DART’s impact.
Physics
Over the next nine months, Joshua Semeter will study footage of unidentified flying objects to help figure out their origin. It’s not a bird or a plane, but there are objects in the sky that we can’t quite explain. Fascination with unidentified flying objects was reignited last year with the release of the US government’s first UFO report; an updated defense-intelligence report was due last month. Despite popular images of flying saucers crashing through trees or little ETs turning up Earthside, one of the challenges with UFOs—now officially called unidentified aerial phenomena, or UAP—is that we can’t clearly see the objects in question and haven’t been able to properly study them. “One of the problems is that the instruments used to record these things were not designed for this purpose whatsoever,” says Joshua Semeter, a Boston University College of Engineering professor of electrical and computer engineering and director of BU’s Center for Space Physics. Many UAP sightings come from Navy pilots, who have the technology to shoot objects down, not take a high-resolution picture, he explains. Semeter has been appointed to a NASA team charged with studying UAPs and creating a roadmap to better observe, study, and ultimately identify the phenomena. Even though UFOs are not believed to be extraterrestrial, it’s enticing to imagine that maybe, just maybe, there’s more we can’t explain. Officials say the most likely explanations for UAP sightings are surveillance operations by foreign powers or weather balloons—but most documented accounts remain unexplained. “It excites the imagination,” says Semeter. His research focuses on the ionosphere—the layer of the atmosphere that interacts with solar wind and the magnetic field of Earth, creating phenomena like the aurora borealis. He also looks at other atmosphere and ionosphere events, such as how the ionosphere interferes with GPS signaling. Semeter’s speciality of using sensors and atmospheric signals to better understand the environment makes him well suited for the job of uncovering UAP mysteries. This 2015 image shows an unidentified object that rotated as it flew along clouds, according to the fighter pilots tracking it. Photo by Department of Defense via AP He and the NASA team of 16 aerial-space experts first convened in late October and, over the course of nine months, will figure out the best available tools and techniques for investigating the origin of UAPs. They will use existing data and declassified footage from a range of government departments, commercial data, and other sources to make recommendations. Their full report is expected by mid-2023, according to NASA. The Brink spoke to Semeter about the task ahead, how his research on the ionosphere is related to UAPs, and what he hopes the NASA-assembled team will accomplish. Q&A with Joshua Semeter The Brink: Can you explain how this group came to be? Semeter: Well, some strange video evidence leaked into the popular press around 2007 that came from conscientious fighter pilots who were seeing things they couldn’t explain. They thought their testimony wasn’t receiving proper attention, and after years of pushing, the US Department of Defense (DOD) decided to declassify some of the footage. Over the years, a lot of speculation has ensued [about what these objects are]. The DOD can’t release all of their methods and technologies to the public because there are bona fide national security issues at play—this creates fertile ground for conspiracy theories and things like that.
So, NASA decided, correctly I think, in collaboration with DOD, that they have an important role to play in helping understand how to explain what we're calling unidentified aerial phenomena.
The Brink: Why are they called UAPs, and not UFOs?
Semeter: UFO has come to refer to the source of the phenomenon as being extraterrestrial. If you ask anybody what UFO means, I'm sure they would tell you, "Oh, yeah, that's aliens visiting the planet." UFO no longer has the ability to refer to terrestrial technologies that haven't been identified yet—which, so far, with the limited data we have, is the most likely explanation. UAP is a better, unbiased term.
The Brink: What will the group be doing?
Semeter: There's no UAP expert on the panel, and that's by design. It's made up of scientists, technologists, an oceanographer, educated observers like a couple of fighter pilots, all of whom bring their own unique perspectives. We're putting a critical eye on the available data, which is very limited. Then, figuring out what recommendations we can make going forward in terms of how to direct NASA assets toward researching this problem. One of the problems in the DOD sector is that the instruments that they've used to record these things were not designed for this purpose. You can imagine that, right? If you're on a combat aircraft, your instruments are designed to detect targets and help you shoot them down. They're not designed to carry out fundamental scientific research. In the spectrum of work from NASA, all the way from cosmology and astrophysics to Earth observing, it's all sensor related. The types of sensors that are staring down at Earth from orbit may not be optimized to detect and understand small objects that appear in the fields of view, but we will be trying to understand the available data, how it might contribute to the small minority of these phenomena that are not yet explained or accounted for, and make recommendations for observational programs going forward.
The Brink: How is your research related to UAPs?
Semeter: In my work at BU's Center for Space Physics, I am primarily interested in plasma physics and the ionized layer of Earth called the ionosphere. This plasma environment [in the ionosphere] interacts with the magnetic fields of the Earth, interacts with the solar wind, and produces a lot of different phenomena that affect technologies and the planet's habitability. Atmosphere and ionosphere interactions produce phenomena that we don't understand. We try to develop physical theories, physical models, we try to develop sensing technologies that can better resolve it. So, I have a lot of familiarity with applying radar, radio, and optical sensing technologies to understanding natural phenomena. It's a good fit for trying to understand unknown technologies we may be observing.
The Brink: What goes on in the ionosphere?
Semeter: Every planet is going to have some atmosphere around it, some body of gasses that reside near the surface and maybe change in various ways as we go upward in altitude. The atmosphere becomes thinner as you go up in altitude. And at the same time, the sun is injecting energy in the form of light and heat and ionizing radiation. At some altitude, you have a combination of energy and low density that produces ionization and forms a layer of plasma that's electrically conducting. As most people see it, plasma is the fourth state of matter.
The ionosphere is evidence for a shielding mechanism that keeps ionizing radiation away from us…. The evolution of us in our habitable environment has a lot to do with what's happening at the higher altitudes where we find this plasma environment and where we find interactions with magnetic fields. And that magnetic field also helps shield out particles that could be deleterious to our health and our existence here.
The Brink: What do you hope the group will contribute to our understanding of UAPs?
Semeter: The panel is designed to figure out a roadmap, where we need to go, not to actually resolve this mystery. We want to know what NASA assets can be tuned and turned on to this problem. We want to make specific recommendations about what NASA could do to answer specific questions. We're trying to play a role that's consistent with NASA's mission, which is being rigorous about fundamental science and addressing this from a scientific perspective.
Physics
Photo credits: NASA. Up until 2015, if you wanted to drink coffee in space, you had to use freeze-dried coffee crystals and something like an airtight Capri Sun pouch. If you have ever tried freeze-dried ice cream, the kind you find at space camp or the Smithsonian, then you know that despite the very impressive advances in food & beverage technology, the product leaves something to be desired. How appealing space coffee packaging used to look: astronaut holding a "coffee beverage container" during the 9th Space Shuttle flight (Columbia, STS-9) in November-December 1983. The mission was launched on 28.11.83 from Kennedy Space Center, Florida. Making coffee in space comes with a unique set of challenges because every aspect of coffee consumption requires gravity - something in short supply on spacecraft. Gravity gives you the very popular pour-over method. It also allows the coffee to drop into your cup and... stay there. These might be obvious to the casual observer of gravity, but the earthly force also helps perform several tasks we take for granted. Gravity influences the processes that cool your cup. It is even responsible for the stimulating scent of coffee wafting up from espresso foam. According to Mark Weislogel, a fluid physicist at Portland State University, brewing espresso in microgravity would be a far less sensory experience than at your local café. "On Earth, gravity is responsible for making bubbles rise and liquids fall. Such mechanisms vanish in the weightless environment of orbiting spacecraft," he explains. Many of the human factors of drinking coffee are hampered by space conditions. It's no wonder the astronauts, who on average spend six months weightless, wanted to engineer a better way to bring the whole coffee experience into orbit with them. Not just to make coffee, but to savor it.
La macchina italiana (The Italian machine)
Enter the Italians, who became the first to send espresso to space. The coffee company Lavazza and the Italian engineering firm Argotec collaborated with the Italian Space Agency to produce the world's first extraterrestrial espresso machine. Named after the International Space Station, the ISSpresso weighs 44 pounds, is the size of a microwave, and is an engineering marvel that manages to brew coffee in the extreme environment of microgravity. It does so by using a system of special pipes that can withstand extreme pressure and the same capsules Lavazza uses on Earth. "Making coffee in space isn't easy," Argotec officials said. "This is the first capsule espresso machine that can work in the extreme conditions in space, where the principles that determine the fluid dynamic characteristics of liquids and mixtures are very different from those typically found on earth." According to their press release, the machine works by "pouring" the coffee and, through a patented new system, cleaning the final section of the hydraulic circuit. At the same time, it generates a small pressure difference inside a transparent pouch - the space "espresso cup" - so that when the straw is inserted, all the aroma of the coffee is released. The specialized clear pouch keeps the cream and coffee separate until the astronaut can draw them together by straw, a setup which makes every sip a study in fluid dynamics. Drinking coffee while orbiting 250 miles above Earth helps astronauts cope with homesickness and more. It expands our scientific knowledge.
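To make Weislogel's point about vanishing buoyancy concrete, here is a minimal back-of-envelope sketch (an illustration added here, not something from the article): the dimensionless Bond number compares gravity to surface tension for a liquid feature of a given size, and it collapses by about six orders of magnitude in orbit. The density, surface tension, and residual-gravity values below are assumed round numbers.

```python
# A rough, illustrative estimate (not from the article): the Bond number
# Bo = rho * g * L^2 / sigma compares gravity to surface tension for a
# liquid feature of size L. Bo >> 1 means gravity dominates (bubbles rise,
# coffee pours and stays in the cup); Bo << 1 means surface tension wins.

def bond_number(rho, g, length, sigma):
    """Dimensionless ratio of gravitational to capillary forces."""
    return rho * g * length**2 / sigma

RHO = 1000.0    # kg/m^3, coffee is roughly water (assumed)
SIGMA = 0.04    # N/m, assumed surface tension of brewed coffee
L = 0.05        # m, a 5 cm cup

for label, g in [("Earth", 9.81), ("ISS microgravity", 9.81e-6)]:
    print(f"{label}: Bo ~ {bond_number(RHO, g, L, SIGMA):.0e}")
# Earth: Bo ~ 6e+02 -> gravity wins; ISS: Bo ~ 6e-04 -> surface tension wins,
# which is why liquids cling to surfaces and bubbles refuse to rise in orbit.
```

The same scaling is what Weislogel's Space Cup exploits: once gravity is negligible, the cup's geometry and the liquid's surface tension take over the job of moving coffee to the drinker's lips.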
The design of ISSpresso and its espresso pouch allows researchers to apply the principles of physics and fluid dynamics that might help solve other problems of managing liquids in space at high pressure and temperature, such as rocket fuel transfers.
One Giant Leap
This brings us back to another milestone for coffee and science. On May the 3rd, 2015, just one day shy of May the 4th (be with you), the SpaceX Dragon capsule delivered ISSpresso to the attending crew. First to receive the heavenly espresso was Samantha Cristoforetti. The opportunity gave rise to several sci-fi references, all of which appeal to coffee and space geeks alike. As Cristoforetti took her first sip, astronaut Scott Kelly quipped, "That's one small step for woman, one giant leap for coffee." She then changed into a "Star Trek" commander's uniform to pose for more photos, now using another specialized vessel, the zero-g mug designed by Mark Weislogel. Italian astronaut Samantha Cristoforetti became the first person in space to sip from a freshly made cup of coffee. "We designed the Space Cup with the central objective of delivering the liquid passively to the lip of the cup. To do this we exploit surface tension … [and] the special geometry of the cup itself," Weislogel explains. Thanks to surface tension and fluid dynamics, when an astronaut connects her mouth with the lip of the cup, "a capillary connection is formed and the liquid travels up the vessel and forms sippable balls of coffee." [eater.com; cnn.com] While gravity (or the lack of it) will still greatly impact the way astronauts can smell and taste their morning coffee, Weislogel and his team are hopeful that espresso cups will improve on the overall experience. Each day on the International Space Station, the crew witnesses 15 to 16 sunrises. Since its debut mission, ISSpresso has traveled more than 650 million kilometers and seen 15,500 sunrises and sunsets. Now, thanks to the zero-g espresso cups, those sunrises will be enjoyed with an espresso.
---
https://blogs.nasa.gov/ISS_Science_Blog/2015/05/01/space-station-espresso-cups-strong-coffee-yields-stronger-science/
https://science.nasa.gov/science-news/science-at-nasa/2013/15jul_coffeecup/
https://www.sciencephoto.com/media/336300/view
http://www.collectspace.com/news/news-050415a-isspresso-coffee-space-station.html
https://www.lavazza.com/en/about-us/media-centre/isspresso-successfully-completes-the-mission-coffee-in-space.html
https://www.prnewswire.com/news-releases/espresso-coffee-conquers-space-502347221.html
Physics
Illustration of John Clauser, Alain Aspect, and Anton Zeilinger, the 2022 Nobel laureates. The Nobel Prize in physics for 2022 is being awarded to Alain Aspect, John F. Clauser and Anton Zeilinger for their work on quantum mechanics, the Royal Swedish Academy of Sciences announced on October 4, 2022. The 2022 Nobel Prize in Physics has been awarded "for experiments with entangled photons, establishing the violation of Bell inequalities, and pioneering quantum information science," the academy said. The 2022 physics laureates' development of experimental tools has laid the foundation for a new era of quantum technology. Being able to manipulate and manage quantum states and all their layers of properties gives us access to tools with unexpected potential. Intense research and development are underway to utilise the special properties of individual particle systems to construct quantum computers, improve measurements, build quantum networks and establish secure quantum encrypted communication. This year's Nobel Prize laureate John Clauser built an apparatus that emitted two entangled photons at a time, each towards a filter that tested their polarisation. The result was a clear violation of a Bell inequality and agreed with the predictions of quantum mechanics. Alain Aspect – awarded the 2022 Nobel Prize in Physics – developed a setup to close an important loophole. He was able to switch the measurement settings after an entangled pair had left its source, so the setting that existed when they were emitted could not affect the result. Anton Zeilinger, 2022 Nobel Prize laureate in physics, researched entangled quantum states. His research group has demonstrated a phenomenon called quantum teleportation, which makes it possible to move a quantum state from one particle to another at a distance. The 2022 Nobel Prize laureates in physics have conducted groundbreaking experiments using entangled quantum states, where two particles behave like a single unit even when they are separated. The results have cleared the way for new technology based upon quantum information. Anton Zeilinger, one of the winners of the prize, said, "I'm still very shocked but it is a very positive shock. I was surprised to get the call an hour ago." His area of research has demonstrated a phenomenon called quantum teleportation, which makes it possible to transfer a quantum state from one particle to another at a distance. Explaining the concept of quantum teleportation, he said that it uses the features of entanglement, which can be used to transport information, carried by the object, to another place where the object is then reconstituted.
This can be done without knowing the information, because to know the information would violate Heisenberg's uncertainty principle, which states that the position and the momentum of an object cannot both be measured exactly at the same time, even in theory. "So far this can be done on very small particles. It is fundamentally important for transferring information between quantum computers," he said. Last year, the academy honoured Syukuro Manabe, of Japan and the United States, and German Klaus Hasselmann for their research on climate models, while Italian Giorgio Parisi also won for his work on the interplay of disorder and fluctuations in physical systems. The physics prize is followed by chemistry on Wednesday, with the literature and peace prizes announced on Thursday and Friday respectively. Swedish paleogeneticist Svante Paabo, who sequenced the genome of the Neanderthal and discovered the previously unknown hominin group known as the Denisovans, on Monday won the Nobel Medicine Prize.
Physics
Astronomers from the University of Montreal recently took a closer look at the ultra-large exoplanet WASP-107b and found that these so-called 'super-puff' planets can get even weirder than they imagined. WASP-107b is not a newly discovered exoplanet, as it was first detected all the way back in 2017, orbiting a star at a distance of approximately 200 light-years away. One peculiar fact about the planet is that it orbits its host star at a distance more than ten times closer than Earth orbits the sun. WASP-107b is referred to as a 'super-puff' because it is incredibly light. The planet is the same size as Jupiter but has only one-tenth of its mass. To determine the actual mass of the planet, astronomers recently took a closer look using telescopes at the famous Keck Observatory in Hawaii. They applied the so-called radial velocity technique, which allows researchers to establish the mass of an exoplanet by observing the wobbling motion that the exoplanet's gravitational pull creates in its host star. The observations confirmed that WASP-107b is approximately thirty times as massive as Earth.
The mysterious formation of WASP-107b
Using the Keck data, the team did an analysis to identify what the internal structure of the planet might be. They came to the astonishing conclusion that there must be a solid core that is no more than four times the mass of Earth. This means that over 80% of its mass comes from its dense layer of gas. To put this in perspective, Neptune, a solar system analog when it comes to mass, derives only 10% of its mass from its surrounding gas layer! The fact that WASP-107b is so incredibly light naturally gives rise to additional questions. How is it possible that an exoplanet of such low mass formed at all? And why is it not losing its enormous layer of gas, given the proximity to its host star? Contemporary models for gas-giant formation are founded on the familiar gas giants found in our own solar system, like Saturn. The hypothesis is that a core of at least ten times the mass of Earth is necessary to collect enough gas from the early planet-forming disc surrounding a star. Professor Eve Lee, a renowned authority on super-puff exoplanets, stated in an interview that there are several hypotheses, the most plausible of which is that the exoplanet began its life further away from its host star. According to Lee, the gas in a planet-forming disc is cold enough at more considerable distances that gas buildup can happen a lot faster. WASP-107b would have migrated towards the center of its star system at a later moment, probably following interactions with other planets or with the planet-forming disc itself.
Yet another exoplanet discovery: WASP-107c
Due to the extra attention that was given to the WASP-107 system, the team discovered another exoplanet. WASP-107c is a lot more massive than 107b, at approximately 100 times the mass of Earth. It seems to be a more conventional gas giant, as it is located further out in the system, akin to our own gas giants. A WASP-107c year is approximately three times longer than an Earth year, a huge difference from a WASP-107b year, which lasts a little less than six days. The orbit of WASP-107c is more oval than circular. According to Caroline Piaulet, a Ph.D. student at UdeM's Institute for Research on Exoplanets, this eccentric orbit gives a hint of what happened in the system.
"Its great eccentricity hints at a rather chaotic past, with interactions between the planets which could have led to significant displacements, like the one suspected for WASP-107b."There are many more questions about WASP-107b scientists would like to see answered. Previous Hubble observations showed that the planet has only a minimal amount of methane in its atmosphere. This is somewhat puzzling because, according to Piaulet, these types of planets should have vast quantities of the molecule. Piaulet hopes to be able to do additional research once the James Webb Space Telescope is operational. Who knows what further mysteries will come to light. The Montreal team published their findings in the Astronomical Journal, together with colleagues from Europe, Japan, and the US. It's listed below, be sure to check it out for a more in-depth look.Further reading:WASP-107b's Density Is Even Lower: A Case Study for the Physics of Planetary Gas Envelope Accretion and Orbital MigrationJames Webb Space TelescopeSuper-Puff planets
Physics
Image: Computations for the project were performed at Princeton's High Performance Computing Research Center. Credit: Photo by Denise Applewhite, Princeton University Office of Communications. Researchers at Princeton and Rice universities have combined iron, copper, and a simple LED light to demonstrate a low-cost technique that could be key to distributing hydrogen, a fuel that packs high amounts of energy with no carbon pollution. The researchers used experiments and advanced computation to develop a technique using nanotechnology to split hydrogen from liquid ammonia, a process that until now has been expensive and energy intensive. In an article published online Nov. 24 in the journal Science, the researchers describe how they used light from a standard LED to crack the ammonia without the need for high temperatures or expensive elements typically demanded by such chemistry. The technique overcomes a critical hurdle toward realizing hydrogen's potential as a clean, low-emission fuel that could help meet energy demands without worsening climate change. "We hear a lot about hydrogen being the ultimate clean fuel, if only it was less expensive and easy to store and retrieve for use," said Naomi Halas, a professor at Rice University and one of the study's principal authors. "This result demonstrates that we are moving rapidly towards that goal, with a new, streamlined way to release hydrogen on-demand from a practical hydrogen storage medium using earth-abundant materials and the technological breakthrough of solid-state lighting." Hydrogen offers many advantages as a green fuel, including high energy density and zero carbon pollution. It is also used ubiquitously in industry, for example to make fertilizer, food, and metals. But pure hydrogen is expensive to compress for transport and is difficult to store for long periods. In recent years, scientists have sought to use intermediate chemicals to transport and store hydrogen. One of the most promising hydrogen carriers is ammonia (NH3), composed of three hydrogen atoms and one nitrogen atom. Unlike pure hydrogen gas (H2), liquid ammonia, although hazardous, has existing systems for safe transportation and storage. "This discovery paves the way for sustainable, low-cost hydrogen that could be produced locally rather than in massive centralized plants," said Peter Nordlander, a professor at Rice and another principal author. One persistent problem for advocates has been that cracking ammonia into hydrogen and nitrogen often requires high temperatures to drive the reaction. Conversion systems can require temperatures above 400 degrees Celsius (752 degrees Fahrenheit). That demands a lot of energy to convert the ammonia, as well as special equipment to handle the operation. Researchers led by Halas and Nordlander at Rice University, and Emily Carter, the Gerhard R. Andlinger Professor in Energy and the Environment and Professor of Mechanical and Aerospace Engineering and Applied and Computational Mathematics at Princeton, wanted to transform the splitting process to make ammonia a more sustainable and economically viable carrier for hydrogen fuels. Using ammonia as a hydrogen carrier has drawn considerable research interest because of its potential to drive a hydrogen economy, as a recent review by the American Chemical Society shows.
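A quick mass budget shows why chemists treat ammonia as a hydrogen "sponge" (simple textbook arithmetic added here for illustration; the numbers are standard molar masses, not figures from the paper):

```python
# Hydrogen content of ammonia via 2 NH3 -> N2 + 3 H2 (textbook arithmetic).
M_N, M_H = 14.007, 1.008      # g/mol, molar masses of nitrogen and hydrogen
m_NH3 = M_N + 3 * M_H         # about 17.03 g/mol for one ammonia molecule

h2_fraction = 3 * M_H / m_NH3
print(f"hydrogen mass fraction of ammonia: {h2_fraction:.1%}")    # ~17.8%
print(f"H2 released per kilogram of NH3: ~{1000 * h2_fraction:.0f} g")
```

In other words, every kilogram of liquid ammonia carries nearly 180 grams of hydrogen in a form that existing tanks and pipelines can already handle; the hard part, which the work described below addresses, is getting that hydrogen back out cheaply.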
Industrial operations often crack ammonia at high temperatures using a wide variety of materials as catalysts, which are materials that accelerate a chemical reaction without being changed by the reaction. Previous research has demonstrated that it is possible to lower the reaction temperature by using a ruthenium catalyst. But ruthenium, a metal in the platinum group, is expensive. The researchers believed they could use nanotechnology to allow cheaper elements like copper and iron to be used as a catalyst instead. The researchers also wanted to tackle the energy cost of cracking ammonia. Current methods use a lot of heat to break the chemical bonds that hold ammonia molecules together. The researchers believed they could harness light to sever the chemical bonds like a scalpel rather than using heat to shatter them like a hammer. To do so, they turned to nanotechnology, along with a much cheaper catalyst containing iron and copper. The combination of nanotechnology's tiny metal structures and light is a relatively new field called plasmonics. By shining light into structures smaller than a single wavelength of light, engineers can manipulate the light waves in unusual and specific ways. In this case, the Rice team wanted to use this engineered light to excite electrons in the metal nanoparticles as a way to split the ammonia into its hydrogen and nitrogen components without the need for intense heat. Because plasmonics requires certain types of metals, such as copper, silver, or gold, the researchers added the iron to copper before creating the tiny structures. When finished, the copper structures behave as antennas to manipulate the light from the LED to excite the electrons to higher energies, while the iron atoms embedded in the copper act as catalysts to accelerate the reaction carried out by excited electrons. The researchers created the structures and conducted the experiments in laboratories at Rice. They were able to adjust many variables around the reaction such as the pressure, the intensity of the light and the light's wavelength. But calibrating the exact parameters was daunting. To investigate how these variables affected the reaction, the researchers worked with principal author Carter, who specializes in detailed investigations of reactions at the molecular level. Using Princeton's high-performance computing system, the Terascale Infrastructure for Groundbreaking Research in Engineering and Science (TIGRESS), Carter and her postdoctoral fellow, Junwei Lucas Bao, ran the reactions through her specialized quantum mechanics simulator uniquely able to study excited electron catalysis. Molecular interactions of such reactions are incredibly complex, but Carter and her fellow researchers are able to use the simulator to understand which variables should be adjusted to further the reaction. "With the quantum mechanics simulations, we can determine the rate-limiting reaction steps," said Carter, who also holds appointments at Princeton's Andlinger Center for Energy and the Environment, in applied and computational mathematics, and at the Princeton Plasma Physics Laboratory. "These are the bottlenecks." By fine-tuning the process, while utilizing the atomic-scale understanding Carter and her team provided, the Rice team was able to consistently extract hydrogen from ammonia using only light from energy-efficient LEDs at room temperature with no additional heating. The researchers say the process is scalable.
In further research, they plan to investigate other possible catalysts with an eye to increasing the process efficiency and decreasing the cost. Carter, who also currently chairs the National Academies' committee on carbon utilization, said a critical next step will be to decrease the costs and carbon pollution involved with creating the ammonia that begins the transportation cycle. Currently, most ammonia is created at high temperatures and pressures using fossil fuels. The process is both energy intensive and polluting. Carter said many researchers are working to develop green techniques for the production of ammonia as well. "Hydrogen is used ubiquitously in industry and will be used increasingly as fuel as the world seeks to decarbonize its energy sources," she said. "However, today it is mostly made unsustainably from natural gas – creating carbon dioxide emissions – and is difficult to transport and store. Hydrogen needs to be made and transported sustainably where it is needed. If carbon-emission-free ammonia could be produced, for example by electrolytic reduction of nitrogen using decarbonized electricity, it could be transported, stored, and possibly serve as an on-demand source of green hydrogen using the LED-illuminated iron-copper photocatalysts reported here." The article, Earth-abundant photocatalyst for H2 generation from NH3 with light-emitting diode illumination, was published in the Nov. 25 issue of Science. Besides Carter, Halas and Nordlander, co-authors include Hossein Robatjazi, who received his doctorate at Rice and is now chief scientist of Syzygy Plasmonics; Junwei Lucas Bao, who is now a professor at Boston College; Yigao Yuan, Jingyi Zhou, Aaron Bales, Lin Yuan, Minghe Lou and Minhan Lou of Rice University; Linan Zhou of both Rice and South China University of Technology; and Suman Khatiwada of Syzygy Plasmonics. Halas and Nordlander are co-founders of Syzygy and hold equity in the company. Support for the research was provided in part by the Welch Foundation, the Air Force Office of Scientific Research, Syzygy Plasmonics, and the Department of Defense. - Rice University's Office of Public Affairs contributed to this article.
Physics
Jupiter is a gas giant. Scientists can only theorize about how many layers it has, how dense these layers are and what its core looks like. If a needle were to hit Jupiter at the speed of light, the planet might burst like a balloon and its remains fly out of the solar system. A needle travelling at extremely high speeds is expected to penetrate the gas giant easily. The concept of a needle piercing through Jupiter at high speeds is a topic of scientific discussion. While it is theorized that a needle travelling at the speed of light would reach the centre of Jupiter in less than half a second, it is essential to note that this is still just a theory and has not been proven. Additionally, the composition of Jupiter's core is still not fully understood. The most widely accepted theory is that it is composed of an extreme substance known as metallic hydrogen. However, more research is needed to fully understand the properties and structure of Jupiter's core.
Theory of a Needle Hitting Jupiter
In 2017, scientists from Harvard University, Isaac Silvera and Ranga Dias, made a groundbreaking discovery by successfully synthesizing metallic hydrogen at a pressure of 500 gigapascals, which is about 5,000,000 times greater than Earth's atmospheric pressure. In the laboratory, the substance lasted only a few seconds before evaporating. However, it is believed that due to the extreme conditions within Jupiter, large quantities of metallic hydrogen may exist at the planet's core. According to the theory of relativity, at the speed of light the kinetic energy of even a tiny amount of matter, such as 2/10 of a gram, would be virtually infinite. This means that upon collision with the first atom of Jupiter's atmosphere, the needle would transform into a powerful pulse of energy, effectively destroying the entire gas layer surrounding Jupiter. The intense pressure and heat generated by this impact would likely cause the metallic hydrogen in the core to evaporate.
Composition of Jupiter's Core
Scientists have yet to fully understand Jupiter's physics, and there is an ongoing debate about the nature of its core. Some theories suggest that it may not be solid but a liquid composed of hydrogen with metallic properties, which would generate a magnetic field. If this is the case, then a needle launched at the speed of light would likely have a destabilizing effect on all of the internal layers of Jupiter due to its impact.
How Will a Needle Hit Jupiter?
Initially, the needle would slice through the ammonia clouds on Jupiter's surface. At a depth of 250 kilometres, it would encounter a layer of gaseous hydrogen mixed with traces of helium. While the needle itself may not heat up from friction, it would undoubtedly heat the surrounding atmosphere. Since hydrogen is highly flammable, it could explode with a single spark. However, oxygen is also required for an explosive mixture to occur, and it is not present in Jupiter's atmosphere.
Impact of Needle on Jupiter
The potential impact of a needle crashing into Jupiter's core at the speed of light could be catastrophic, potentially triggering a chain reaction that leads to an explosion of immense power. It is worth considering that the weight of a typical metal needle is insignificant in comparison, weighing only 2/10 of a gram, while Jupiter's mass is measured in octillions of kilograms. This disparity in size is even greater than that between a human and a bacterium.
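To put a number on "virtually infinite", here is a short sketch of the standard relativistic kinetic energy formula, E = (gamma - 1) m c^2, applied to the article's 0.2-gram needle (an added illustration; none of these figures come from the article itself):

```python
import math

# Kinetic energy of a 0.2 g needle at speeds approaching c.
c = 2.998e8     # m/s, speed of light
m = 2e-4        # kg, the 0.2 gram needle

for beta in (0.9, 0.99, 0.999999, 1 - 1e-12):
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    E = (gamma - 1.0) * m * c * c
    print(f"v/c = {beta}  ->  E ~ {E:.1e} J")
# At 0.9 c the needle already carries ~2e13 J, several kilotons of TNT;
# the energy grows without bound as v -> c, which is why no object with
# mass can actually reach the speed of light.
```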
The impact of a needle travelling at the speed of light on the core of Jupiter could lead to significant changes in the planet's shape and gravity, potentially causing disruptions to the orbits of its satellites. Furthermore, if the core of Jupiter were composed of liquid hydrogen, the intense compression caused by the needle could result in a powerful and unpredictable event. In other words, all the conditions would be in place for a nuclear reaction like the one at the very centre of the sun. The gas giant would turn into an enormous gas cloud, quickly spreading throughout the solar system and creating a chain reaction. Jupiter currently holds thousands of asteroids of various sizes in its orbit, and if Jupiter suddenly evaporated, they would scatter randomly, crashing into all the planets in a row. Additionally, the sudden loss of the strongest gravitational influence apart from the sun would throw the entire system off balance. The orbits of inner planets like Earth would narrow, so that we could expect asteroid bombardments and unbearable heat that could be deadly to all living things.
Conclusion
In conclusion, no object with mass can travel at the speed of light. But in this hypothetical scenario, if a needle were to hit Jupiter at the speed of light, the planet would burst, and the aftermath could even destroy our solar system.
Physics
Dark energy is the enigma at the heart of modern physics: the universe is supposed to be awash with the stuff, but it has never been seen and its nature is unknown. When faced with a mystery of such epic proportions, simply eliminating certain options is considered a success. This week such an advance, using an ingeniously simple desktop experiment, was recognised by the prestigious Blavatnik award for young scientists. Prof Clare Burrage, of the University of Nottingham and recipient of the £100,000 prize, said: "We don't know what dark energy is. It's the name we give to something we don't understand so we can start talking about it. And when so little is known, even ruling things out feels like big progress." Dark energy was dreamed up to fill an enormous void in theoretical physics. Scientists had predicted that, due to the inward tug of gravity, the expansion of the universe ought to be slowing down. But observations of distant stars showed that, instead, the expansion of the universe is accelerating. Dark energy is a placeholder for whatever is propelling this expansion and, to balance the necessary equations, it needs to account for 70% of the contents of the universe. A popular theory is that dark energy is a "chameleon force", which adjusts its properties according to the local environment. "In dense environments, your force becomes very short-range, but in empty space it becomes very long-range," said Burrage. This could explain how the elusive force could be powerful enough to govern the fate of the entire universe but remain imperceptible in our own solar system. Typically, dark energy experiments involve space observatories, enormous particle accelerators or detectors buried deep underground. However, Burrage's theoretical work proved that small and light objects in a near-vacuum environment on Earth may still feel the full force of dark energy. With colleagues at Nottingham and Imperial College London, Burrage devised a "chameleon trap" that could be built in a laboratory. The setup involved dropping ultra-cold atoms into a bowling-ball-sized vacuum chamber containing a lump of aluminium. If a chameleon force existed, it should have a higher value in the empty space and be "hidden" close to the heavy lump of metal. By precisely tracking the motion of the atoms using pulsed laser light, the team were looking for any unexpected accelerations that could be due to a chameleon force. "You're looking to see if there's an extra force pulling the atoms sideways," said Burrage. "Obviously it would've been wonderful to see something." Unfortunately, no mysterious forces were uncovered, but the experiment was able to squeeze the possible values that a chameleon force could take down into a small window. "With one upgrade of the experiment we hope to close that window," Burrage said. "It's definitely technologically achievable." The findings have had a positive reception, Burrage said, despite some theorists having spent years devising hypothetical chameleon forces. "When you're a theorist, you put stuff out into the world and a lot of the time people just say 'Oh, that's nice' and move on," she said. "So people were just excited to see tests being done." Some may be deterred by the narrow odds of a breakthrough in a field where so little is known, but for Burrage this is the attraction of working on dark energy. "I'm very stubborn – it's a family trait," she said. "I'm a rock climber in my spare time.
I like a challenge and I don't give up easily." Her current work is focused on using data from the European Space Agency's (Esa) Gaia mission, which is making detailed measurements of stars in the Milky Way. Esa's Euclid mission, which is squarely focused on the dark energy question, is expected to launch this year. The mission will look at how the universe evolved over the past 10bn years to look for imprints of dark energy. "Euclid is the big one," said Burrage. "It's going to map the distribution of galaxies that we can see on the sky." The mission will observe up to 2bn galaxies using infrared and visible light to study their shape and motion. The aim is to get a more precise picture of the competing forces of gravity, which cause galaxies to clump together, and dark energy, which is driving the accelerated expansion of space. "The fact that so little is known is exciting," said Burrage. "It feels like somewhere you can make big progress."
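For a sense of the precision involved in the "chameleon trap" described above, here is a schematic sketch of how a light-pulse atom interferometer turns an acceleration into a measurable phase shift (all parameter values are assumptions chosen for illustration, not the team's actual numbers):

```python
import math

# In a light-pulse atom interferometer, an acceleration a shows up as a
# phase shift phi = k_eff * a * T**2 between laser pulses separated by T.
wavelength = 780e-9                      # m, assuming rubidium atoms
k_eff = 2 * (2 * math.pi / wavelength)   # two-photon effective wavevector
T = 0.1                                  # s, assumed pulse separation

phi_min = 1e-3                           # rad, assumed phase resolution
a_min = phi_min / (k_eff * T * T)
print(f"smallest detectable acceleration ~ {a_min:.0e} m/s^2")
# ~6e-9 m/s^2, well below a billionth of Earth's gravity: the scale needed
# to catch, or rule out, a chameleon force tugging the atoms sideways.
```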
Physics
In a recent paper accepted to Contemporary Physics, a physicist from Imperial College London uses past missions and recent findings to argue for the importance of searching for life in the atmosphere of the solar system's most inhospitable planet, Venus. This comes after a 2020 announcement claimed the discovery of phosphine in Venus' atmosphere, followed by observations from NASA's recently retired SOFIA aircraft in late 2022 that refuted it. Despite this, Dr. David Clements, who is a Reader in Astrophysics in the Department of Physics at Imperial College London, recently told Universe Today that "there is something odd going on in the atmosphere of Venus." "The phosphine detection has not gone away, and there are other anomalies, possibly joined by the presence of ammonia," Dr. Clements told Universe Today. "We don't know the origin of these anomalies, and much further work is needed, but they are persisting in spite of properly rigorous review. We also may be starting to understand why different observations have given apparently contradictory results." For the study, Dr. Clements asks what life is and how we can search for it in the universe, but with an emphasis on Venus, referring to the second planet from our Sun in the paper as "an unlikely candidate for astrobiology". He discusses Venus' current and ancient planetary conditions, along with Venus' atmosphere and the reported detection of phosphine by ground-based telescopes - the James Clerk Maxwell Telescope in Hawaii in 2017 and the Atacama Large Millimeter/Submillimeter Array in Chile in 2019 - with follow-up observations by NASA's SOFIA aircraft in 2021. This study comes as NASA's Cassini confirmed the existence of water vapor jets emanating from the south pole of Saturn's moon Enceladus in the 2000s; as NASA's Curiosity and Perseverance rovers presently scour the surface of Mars searching for signs of past life; and as NASA gears up to launch its Europa Clipper mission to examine Jupiter's water world, Europa, in 2024, and the Dragonfly mission to Saturn's largest moon, Titan, in 2027. But given all those potential targets for astrobiology, how much of a priority is searching for life in the atmosphere of Venus? "More work is needed before we can add Venus to the list of prime sites for the possibility of life," Dr. Clements told Universe Today. "That work is being done, both from the ground and from space missions. The interesting thing is that Venus is a more convenient target than (say) Europa or Enceladus, so missions there are cheaper and faster." One such upcoming NASA mission specifically designed to study Venus' atmosphere is the DAVINCI mission, which is slated to launch in 2029 and arrive at Venus in 2031. With its suite of instruments, DAVINCI will examine Venus' atmosphere like never before. This includes dropping a titanium probe through the atmosphere, where it will collect thousands of measurements during its hour-long descent to the surface. Scientists don't expect it to survive the landing due to Venus' crushing air pressure and searing heat, but they hope to squeeze out almost 20 minutes of extra science if it does. "I think DAVINCI will be very important as it will be able to provide far better 'ground truth' than we currently have," Dr. Clements told Universe Today.
"There is also the chance that some additions to the instrumentation will add to its capabilities to look for specific things like phosphine and ammonia." With a plethora of data from past observations, along with upcoming missions to Venus, Dr. Clements conveyed to Universe Today that "the phosphine on Venus story continues, more data is coming from the ground and space, and that we still don't know if the presence of phosphine is down to life or some complex abiotic chemistry that we do not currently understand." As always, keep doing science & keep looking up!
Physics
One of the more unsettling discoveries in the past half century is that the universe is not locally real. “Real,” meaning that objects have definite properties independent of observation—an apple can be red even when no one is looking; “local” means objects can only be influenced by their surroundings, and that any influence cannot travel faster than light. Investigations at the frontiers of quantum physics have found that these things cannot both be true. Instead, the evidence shows objects are not influenced solely by their surroundings and they may also lack definite properties prior to measurement. As Albert Einstein famously bemoaned to a friend, “Do you really believe the moon is not there when you are not looking at it?” This is, of course, deeply contrary to our everyday experiences. To paraphrase Douglas Adams, the demise of local realism has made a lot of people very angry and been widely regarded as a bad move. Blame for this achievement has now been laid squarely on the shoulders of three physicists: John Clauser, Alain Aspect and Anton Zeilinger. They equally split the 2022 Nobel Prize in Physics “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science.” (“Bell inequalities” refers to the pioneering work of the Northern Irish physicist John Stewart Bell, who laid the foundations for this year’s Physics Nobel in the early 1960s.) Colleagues agreed that the trio had it coming, deserving this reckoning for overthrowing reality as we know it. “It is fantastic news. It was long overdue,” says Sandu Popescu, a quantum physicist at the University of Bristol. “Without any doubt, the prize is well-deserved.” “The experiments beginning with the earliest one of Clauser and continuing along, show that this stuff isn’t just philosophical, it’s real—and like other real things, potentially useful,” says Charles Bennett, an eminent quantum researcher at IBM.  “Each year I thought, ‘oh, maybe this is the year,’” says David Kaiser, a physicist and historian at the Massachusetts Institute of Technology. “This year, it really was. It was very emotional—and very thrilling.” Quantum foundations’ journey from fringe to favor was a long one. From about 1940 until as late as 1990, the topic was often treated as philosophy at best and crackpottery at worst. Many scientific journals refused to publish papers in quantum foundations, and academic positions indulging such investigations were nearly impossible to come by. In 1985, Popescu’s advisor warned him against a Ph.D. in the subject. “He said ‘look, if you do that, you will have fun for five years, and then you will be jobless,’” Popescu says. Today, quantum information science is among the most vibrant and impactful subfields in all of physics. It links Einstein’s general theory of relativity with quantum mechanics via the still-mysterious behavior of black holes. It dictates the design and function of quantum sensors, which are increasingly being used to study everything from earthquakes to dark matter. And it clarifies the often-confusing nature of quantum entanglement, a phenomenon that is pivotal to modern materials science and that lies at the heart of quantum computing. “What even makes a quantum computer ‘quantum’?” Nicole Yunger Halpern, a National Institute of Standards and Technology physicist, asks rhetorically. 
"One of the most popular answers is entanglement, and the main reason why we understand entanglement is the grand work participated in by Bell and these Nobel Prize–winners. Without that understanding of entanglement, we probably wouldn't be able to realize quantum computers."
For Whom the Bell Tolls
The trouble with quantum mechanics was never that it made the wrong predictions—in fact, the theory described the microscopic world splendidly well right from the start when physicists devised it in the opening decades of the 20th century. What Einstein, Boris Podolsky and Nathan Rosen took issue with, laid out in their iconic 1935 paper, was the theory's uncomfortable implications for reality. Their analysis, known by their initials EPR, centered on a thought experiment meant to illustrate the absurdity of quantum mechanics; to show how under certain conditions the theory can break—or at least deliver nonsensical results that conflict with everything else we know about reality. A simplified and modernized version of EPR goes something like this: Pairs of particles are sent off in different directions from a common source, targeted for two observers, Alice and Bob, each stationed at opposite ends of the solar system. Quantum mechanics dictates that it is impossible to know the spin, a quantum property of individual particles, prior to measurement. When Alice measures one of her particles, she finds its spin to be either up or down. Her results are random, and yet, when she measures up, she instantly knows Bob's corresponding particle must be down. At first glance, this is not so odd; perhaps the particles are like a pair of socks—if Alice gets the right sock, Bob must have the left. But under quantum mechanics, particles are not like socks, and only when measured do they settle on a spin of up or down. This is EPR's key conundrum: If Alice's particles lack a spin until measurement, how then when they whiz past Neptune do they know what Bob's particles will do as they fly out of the solar system in the other direction? Each time Alice measures, she effectively quizzes her particle on what Bob will get if he flips a coin: up, or down? The odds of correctly predicting this even 200 times in a row are 1 in 10^60—a number greater than the number of atoms in the solar system. Yet despite the billions of kilometers that separate the particle pairs, quantum mechanics says Alice's particles can keep correctly predicting, as though they were telepathically connected to Bob's particles. Although intended to reveal the imperfections of quantum mechanics, when real-world versions of the EPR thought experiment are conducted the results instead reinforce the theory's most mind-boggling tenets. Under quantum mechanics, nature is not locally real—particles lack properties such as spin up or spin down prior to measurement, and seemingly talk to one another no matter the distance. Physicists skeptical of quantum mechanics proposed that there were "hidden variables," factors that existed in some imperceptible level of reality beneath the subatomic realm that contained information about a particle's future state. They hoped that, in hidden-variable theories, nature could recover the local realism denied to it by quantum mechanics. "One would have thought that the arguments of Einstein, Podolsky and Rosen would produce a revolution at that moment, and everybody would have started working on hidden variables," Popescu says.
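That 1-in-10^60 figure is just the chance of correctly calling 200 independent fair coin flips in a row; a one-line check, added here for illustration:

```python
print(f"2**200 = {2**200:.3e}")   # 1.607e+60, the "1 in 10^60" quoted above
```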
Einstein's "attack" on quantum mechanics, however, did not catch on among physicists, who by and large accepted quantum mechanics as is. This was often less a thoughtful embrace of nonlocal reality, and more a desire to not think too hard while doing physics—a head-in-the-sand sentiment later summarized by the physicist David Mermin as a demand to "shut up and calculate." The lack of interest was driven in part by the fact that John von Neumann, a highly regarded scientist, had in 1932 published a mathematical proof ruling out hidden-variable theories. (Von Neumann's proof, it must be said, was refuted just three years later by a young female mathematician, Grete Hermann, but at the time no one seemed to notice.) Quantum mechanics' problem of nonlocal realism would languish in a complacent stupor for another three decades until being decisively shattered by Bell. From the start of his career, Bell was bothered by the quantum orthodoxy and sympathetic toward hidden-variable theories. Inspiration struck him in 1952, when he learned of a viable nonlocal hidden-variable interpretation of quantum mechanics devised by fellow physicist David Bohm—something von Neumann had claimed was impossible. Bell mulled the ideas over for years, as a side project to his main job working as a particle physicist at CERN. In 1964, Bell rediscovered the same flaws in von Neumann's argument that Hermann had. And then, in a triumph of rigorous thinking, Bell concocted a theorem that dragged the question of hidden variables from its metaphysical quagmire onto the concrete ground of experiment. Normally, hidden-variable theories and quantum mechanics predict indistinguishable experimental outcomes. What Bell realized is that under precise circumstances, an empirical discrepancy between the two can emerge. In the eponymous Bell test (an evolution of the EPR thought experiment), Alice and Bob receive the same paired particles, but now they each have two different detector settings—A and a, B and b. These detector settings allow Alice and Bob to ask the particles different questions; an additional trick to throw off their apparent telepathy. In local hidden-variable theories, where their state is preordained and nothing links them, particles cannot outsmart this extra step, and they cannot always achieve the perfect correlation where Alice measures spin down when Bob measures spin up (and vice versa). But in quantum mechanics, particles remain connected and far more correlated than they could ever be in local hidden-variable theories. They are, in a word, entangled. Measuring the correlation multiple times for many particle pairs, therefore, could prove which theory was correct. If the correlation remained below a limit derived from Bell's theorem, this would suggest hidden variables were real; if it exceeded Bell's limit, then the mind-boggling tenets of quantum mechanics would reign supreme. And yet, in spite of its potential to help determine the very nature of reality, after being published in a relatively obscure journal Bell's theorem languished unnoticed for years.
The Bell Tolls for Thee
In 1967, John Clauser, then a graduate student at Columbia University, accidentally stumbled across a library copy of Bell's paper and became enthralled by the possibility of proving hidden-variable theories correct. Clauser wrote to Bell two years later, asking if anyone had actually performed the test. Clauser's letter was among the first feedback Bell had received.
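Before the story moves to the experiments, the arithmetic behind Bell's limit is worth seeing. The sketch below (an illustration using the standard CHSH form of Bell's inequality, not code from any of the experiments) uses the quantum prediction for singlet-state correlations, E(x, y) = -cos(x - y), with a standard choice of the four detector angles; the result, 2*sqrt(2) ≈ 2.83, exceeds the bound of 2 that any local hidden-variable theory must satisfy.

```python
import math

# Quantum correlation between detectors at angles x and y for the
# entangled singlet state.
E = lambda x, y: -math.cos(x - y)

a, a2 = 0.0, math.pi / 2               # Alice's two detector settings
b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two detector settings

# CHSH combination; local hidden-variable theories require |S| <= 2.
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"S = {S:.3f}")   # 2.828 = 2*sqrt(2): quantum mechanics beats the bound
```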
With Bell’s encouragement, five years later Clauser and his graduate student Stuart Freedman performed the first Bell test. Clauser had secured permission from his supervisors, but little in the way of funds, so he became, as he said in a later interview, adept at “dumpster diving” to secure equipment—some of which he and Freedman then duct-taped together. In Clauser’s setup—a kayak-sized apparatus requiring careful tuning by hand—pairs of photons were sent in opposite directions toward detectors that could measure their state, or polarization. Unfortunately for Clauser and his infatuation with hidden variables, once he and Freedman completed their analysis, they could not help but conclude that they had found strong evidence against them. Still, the result was hardly conclusive, because of various “loopholes” in the experiment that conceivably could allow the influence of hidden variables to slip through undetected. The most concerning of these was the locality loophole: if either the photon source or the detectors could have somehow shared information (a plausible feat within the confines of a kayak-sized object), the resulting measured correlations could still emerge from hidden variables. As Kaiser puts it pithily, if Alice tweets at Bob which detector setting she’s in, that interference makes ruling out hidden variables impossible. Closing the locality loophole is easier said than done. The detector setting must be quickly changed while photons are on the fly—“quickly” meaning in a matter of mere nanoseconds. In 1976, a young French expert in optics, Alain Aspect, proposed a way for doing this ultra-speedy switch. His group’s experimental results, published in 1982, only bolstered Clauser’s results: local hidden variables looked extremely unlikely. “Perhaps Nature is not so queer as quantum mechanics,” Bell wrote in response to Aspect’s initial results. “But the experimental situation is not very encouraging from this point of view.” Other loopholes, however, still remained—and, alas, Bell died in 1990 without witnessing their closure. Even Aspect’s experiment had not fully ruled out local effects because it took place over too small a distance. Similarly, as Clauser and others had realized, if Alice and Bob were not ensured to detect an unbiased representative sample of particles, they could reach the wrong conclusions. No one pounced to close these loopholes with more gusto than Anton Zeilinger, an ambitious, gregarious Austrian physicist. In 1998, he and his team improved on Aspect’s earlier work by conducting a Bell test over a then-unprecedented distance of nearly half a kilometer. The era of divining reality’s nonlocality from kayak-sized experiments had drawn to a close. Finally, in 2013, Zeilinger’s group took the next logical step, tackling multiple loopholes at the same time. “Before quantum mechanics, I actually was interested in engineering. I like building things with my hands,” says Marissa Giustina, a quantum researcher at Google who worked with Zeilinger.  “In retrospect, a loophole-free Bell experiment is a giant systems-engineering project.” One requirement for creating an experiment closing multiple loopholes was finding a perfectly straight, unoccupied 60-meter tunnel with access to fiber optic cables. As it turned out, the dungeon of Vienna’s Hofburg palace was an almost ideal setting—aside from being caked with a century’s worth of dust. 
Their results, published in 2015, coincided with similar tests from two other groups that also found quantum mechanics as flawless as ever.
Bell's Test Reaches the Stars
One great final loophole remained to be closed, or at least narrowed. Any prior physical connection between components, no matter how distant in the past, has the possibility of interfering with the validity of a Bell test's results. If Alice shakes Bob's hand prior to departing on a spaceship, they share a past. It is seemingly implausible that a local hidden-variable theory would exploit these loopholes, but still possible. In 2017, a team including Kaiser and Zeilinger performed a cosmic Bell test. Using telescopes in the Canary Islands, the team sourced its random decisions for detector settings from stars sufficiently far apart in the sky that light from one would not reach the other for hundreds of years, ensuring a centuries-spanning gap in their shared cosmic past. Yet even then, quantum mechanics again proved triumphant. One of the principal difficulties in explaining the importance of Bell tests to the public—as well as to skeptical physicists—is the perception that the veracity of quantum mechanics was a foregone conclusion. After all, researchers have measured many key aspects of quantum mechanics to a precision of greater than 10 parts in a billion. "I actually didn't want to work on it. I thought, like, 'Come on; this is old physics. We all know what's going to happen,'" Giustina says. But the accuracy of quantum mechanics could not rule out the possibility of local hidden variables; only Bell tests could do that. "What drew each of these Nobel recipients, and John Bell himself, to the topic was indeed [the question], 'Can the world work that way?'" Kaiser says. "And how do we really know with confidence?" What Bell tests allow physicists to do is remove the bias of anthropocentric aesthetic judgments from the equation; purging from their work the parts of human cognition that recoil at the possibility of eerily inexplicable entanglement, or that scoff at hidden-variable theories as just more debates over how many angels may dance on the head of a pin. The award honors Clauser, Aspect and Zeilinger, but it is testament to all the researchers who were unsatisfied with superficial explanations about quantum mechanics, and who asked their questions even when doing so was unpopular. "Bell tests," Giustina concludes, "are a very useful way of looking at reality."
Physics
The James Webb Space Telescope prior to launch. The telescope is designed to peer back so far that scientists will get a glimpse of the dawn of the universe about 13.7 billion years ago and zoom in on closer cosmic objects with sharper focus. (Laura Betz/AP file)

Allison Strom sent a note on her astronomer text chain reading “Holy Jesus! Did you see that?” after the first images taken by the James Webb Space Telescope were revealed this week. Strom, an assistant professor of physics and astronomy at Northwestern University, is pumped up. And for good reason. She’s basically next in a long line of astronomers who got permission to use the telescope. She’ll get 40 hours to focus the device on a tiny patch of the cosmos to study what galaxies were made of billions of years ago, when the universe was like a teenager. “You know, extremely messy, trying to figure out what they want to do with their lives,” Strom joked. Her project has been dubbed CECILIA (Chemical Evolution Constrained using Ionized Lines in Interstellar Aurorae), an acronym designed to fit the name of Cecilia Payne-Gaposchkin, one of the first women to earn a doctorate in astronomy. Emails from Strom to her team occasionally include a doctored photo of the pioneering astronomer wearing a little party hat. At some point in the next three weeks, the Webb telescope will do Strom’s bidding for a window of 40 hours, and then the fun begins for her and her colleagues, who will analyze the data. In the meantime, Strom, 33, hopes to find a reasonably priced condo or rent an apartment somewhere in Ravenswood or Lake View. She’s a new hire, moving from New Jersey, where she worked as a professor at Princeton University. She’s one of several astronomers at Northwestern and the University of Chicago who have been granted coveted time with the telescope — a $10 billion device that launched Christmas Day and traveled a million miles before sticking the landing in a cosmic parking lot known as L2, where it will orbit the sun. The risky process was the source of countless hours of lost sleep for scientists the world over. But it was a success. “The telescope works, and it works brilliantly. Our knowledge about the universe is about to take a giant leap forward,” said University of Chicago astronomy professor Jacob Bean, who viewed the first images revealed Monday by President Joe Biden from the comfort of his couch in Hyde Park. “I just had the biggest grin on my face as I sat there,” said Bean, 42. “It’s just fantastic. It’s the moment of a lifetime, the moment of my professional career to use the telescope and analyze the data.” He is a co-leader of a team of about 150 scientists who are in the process of receiving data from Webb that will shed light on exoplanets, or planets that orbit stars outside of Earth’s solar system. Their goal is to look for clues about the planets’ composition, how cold or warm they are, and whether they are habitable. Bean, who grew up in rural Georgia, said the exoplanet field of research was viewed as a bit of a kooky sideshow as recently as 10 years ago. “But now to reach the point where it’s been chosen for use by NASA’s flagship telescope, it shows how far our small group of people have come,” he said. “Everything we’ve planned and worked for has become a reality.”
Physics
December 19, 2022 • Physics 15, s173

The film surrounding a soap bubble can be up to 8 °C cooler than the environment, a finding that has implications for bubble stability. (F. Boulogne et al. [1])

Bubbles are ubiquitous, existing in everything from the foam on a beer to party toys for children. Despite this pervasiveness, there are open questions about the behavior of bubbles, such as why some bubbles are more resistant to bursting than others. Now François Boulogne and colleagues from the University of Paris-Saclay have taken a step toward answering that question by measuring the temperature of the film surrounding a soap bubble, finding that it can be significantly lower than that of its local environment [1]. The team says that the result could help industrial manufacturers of bubbles better control the stability of their products. On a sunny day, our bodies cool down by releasing energy into the environment through the evaporation of sweat. Soap films also release energy by losing liquid via evaporation. Researchers studying bubbles have tracked the evaporation of a soap film’s liquid content under different conditions. But those experiments all assumed that the film’s temperature matched that of the environment, an assumption the results of Boulogne and his colleagues challenge. In their experiments, Boulogne and colleagues created a soap bubble from a mixture of dishwashing liquid, water, and glycerol. They then measured the soap film’s temperature under a variety of environmental conditions. They found that the film could be up to 8 °C colder than the surrounding air. They also found that the glycerol content of the soap film affected this temperature difference, with films containing more glycerol having higher temperatures. Boulogne says that such a large temperature difference could impact bubble stability. But, he adds, further experiments are needed to corroborate that idea.

–Anna Napolitano

Anna Napolitano is a freelance science journalist based in London, UK.

References
[1] F. Boulogne et al., “Measurement of the temperature decrease in evaporating soap films,” Phys. Rev. Lett. 129, 268001 (2022).
Physics
Abstract
We derive the general dispersion relation for interfacial waves along a planar viscoelastic boundary that separates two viscoelastic bulk media, including the effect of gravity. Our unified theory contains Rayleigh waves, capillary-gravity-flexural waves, Lucassen waves, bending waves in elastic plates, and the standard dispersion-free sound waves as limiting cases. To illustrate our results, we consider waves at a viscoelastic interface immersed in water and at an air-water interface. We furthermore investigate waves at a viscoelastic interface separating two identical viscoelastic bulk media, for which we consider both Kelvin-Voigt and Maxwell materials, as applicable to polymer gels and solutions. For all cases, we study how material properties determine the crossovers, scaling, and existence regimes of the various interfacial waves. Since we include viscoelastic effects for all media involved, our theory allows us to model waveguiding phenomena in biology, such as pressure pulses in axon membranes, which are possibly relevant for acoustic nerve pulse propagation phenomena.

Received 6 May 2021; accepted 28 September 2022. DOI: https://doi.org/10.1103/PhysRevFluids.7.114801. ©2022 American Physical Society.

Authors: Sina Zendehroud (1), Roland R. Netz (1,*) and Julian Kappler (1,2)
(1) Freie Universität Berlin, Department of Physics, Arnimallee 14, 14195 Berlin, Germany
(2) University of Cambridge, DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, United Kingdom
(*) Corresponding author: rnetz@physik.fu-berlin.de
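As context for the limiting cases listed in the abstract, it may help to recall the textbook relation they generalize. For a single deep, inviscid liquid of density ρ under gravity g, with surface tension γ and a bending rigidity κ assigned to the interface (our notation, not the paper’s), capillary-gravity-flexural waves obey

$$\omega^{2} = g\,k + \frac{\gamma}{\rho}\,k^{3} + \frac{\kappa}{\rho}\,k^{5},$$

so gravity dominates at long wavelengths, surface tension at intermediate ones, and bending at short ones; crossover scales such as the capillary length $\ell_c=\sqrt{\gamma/(\rho g)}$ mark the regime boundaries. The paper’s full viscoelastic theory maps out this kind of crossover structure for the general two-media case.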
Physics
The facility’s antenna array includes 180 antennas spread across 33 acres. (Photo: HAARP)

An antenna field in Alaska that’s spawned no shortage of conspiracy theories has been carrying out a series of experiments that include sending radio signals to the Moon and Jupiter and waiting for pings back. The High Frequency Active Auroral Research Program (HAARP) kicked off a 10-day science campaign that ran through October 28. On the agenda were 13 experiments that are pushing the limits of what the facility can do. “The October research campaign is our largest and most diverse to date, with researchers and citizen scientists collaborating from across the globe,” Jessica Matthews, HAARP’s program manager, said in a release. HAARP is made up of 180 high-frequency antennas, each standing 72 feet tall, stretched across 33 acres near Gakona, Alaska. The research facility transmits radio beams toward Earth’s ionosphere, the ionized part of the atmosphere located about 50 to 400 miles (80 to 600 kilometers) above Earth’s surface. The ionosphere is filled with electrically charged particles, a result of being blasted by solar energy. HAARP sends radio signals to the ionosphere and waits to see how they return, in an effort to measure the disturbances caused by the Sun, among other things. In one recent experiment, known as the “Moon Bounce,” a group of researchers from NASA’s Jet Propulsion Laboratory, Owens Valley Radio Observatory, and the University of New Mexico transmitted a signal from the HAARP antennas in Alaska to the Moon and then waited to receive a reflected signal back at the observatory sites in California and New Mexico. The purpose of the experiment is to study how the three facilities in Alaska, California, and New Mexico can work together for future observations of near-Earth asteroids. The facility may be able to transmit a signal to an asteroid flying by Earth and receive a signal back that will hint at the space rock’s composition. Another experiment sent a radio beam to Jupiter, currently located about 374 million miles (600 million kilometers) from Earth. The hope is that the beam would reflect off Jupiter’s ionosphere and then be received at the New Mexico site. The Jupiter experiment is run by the Johns Hopkins Applied Physics Laboratory and aims to provide a new way of observing the ionospheres of other planets. Considering how far Jupiter is from Earth, this experiment is a true test of HAARP’s signal-transmitting capabilities. Another experiment is more on the artsy side: “Ghosts in the Air Glow” beamed video, images, spoken word, and sound art to the ionosphere and waited for the signal to bounce back to test the transitional boundary of the atmosphere. HAARP was originally a project of the U.S. Air Force to study solar flares, which can disrupt Earth’s communications and electric grid. But in 2015, the Air Force decided it was no longer interested in maintaining HAARP, and ownership transferred to the University of Alaska. While it was under the purview of the Air Force, HAARP inspired some wild conspiracy theories, including that its antennas were being used to alter the weather, create deadly hurricanes, and even control minds.
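For a sense of the timing involved in these bounce experiments, the round-trip light-travel delays follow directly from the distances (the Jupiter figure is the one quoted above; the Earth-Moon distance is an assumed mean value):

```python
# Round-trip radio travel times for the bounce experiments described above.
# The Earth-Moon distance (~384,400 km) is an assumed mean value; the
# Jupiter distance comes from the article (~600 million km at the time).
C = 299_792_458            # speed of light, m/s

moon_km = 384_400          # assumed mean Earth-Moon distance
jupiter_km = 600_000_000   # Earth-Jupiter distance quoted in the article

for name, km in [("Moon", moon_km), ("Jupiter", jupiter_km)]:
    round_trip_s = 2 * km * 1000 / C
    print(f"{name}: {round_trip_s:,.1f} s round trip")

# Moon: ~2.6 s. Jupiter: ~4,003 s, i.e. more than half an hour each way,
# which is why the Jupiter bounce is such a stringent test of the system.
```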
Physics
Ordinarily, light transmits the same in both directions: if I can see you, you can see me. Now, however, researchers have created a device that uses travelling sound waves to break this symmetry, thereby reducing unwanted optical phenomena such as backscattering. The new device is the first to produce this beneficial effect for selected optical vortices, which are used in optical communications, and it could also have applications for optical tweezers and vortex-based lasers. Vortices are ubiquitous in nature – in gases, fluids, plasma and DNA, for example. In optical vortices, the wavefront of a beam of light spirals around the beam’s central propagation axis, taking on a helical shape with zero intensity at the core. This spiralling effect comes about because light carries orbital angular momentum (OAM). This form of angular momentum is distinct from the more familiar spin angular momentum, which manifests itself in polarization, and was only discovered in 1992. Because information can be encoded in OAM, optical vortices show much promise for multiplexing, which is the process of sending multiple optical signals down a single fibre with minimal interference or other detrimental effects. As yet, however, it has been challenging to make devices in which certain vortex modes propagate in one direction only. This is due to a fundamental principle of optics known as reciprocity, which implies that light signals will propagate freely in both directions through an optical fibre. Such two-way traffic can cause problems like backscattering that reduce the strength of the transmitted signal.

Sound waves manipulate optical waves

A team led by Xinglin Zeng, Philip Russell and Birgit Stiller of the Max Planck Institute for the Science of Light has now used propagating sound waves to break this light-transmission reciprocity for chosen vortex modes. In their work, they used the sound waves to manipulate optical waves in a chiral photonic crystal fibre via an interaction known as topology-selective stimulated Brillouin-Mandelstam scattering. The researchers explain that as the sound waves travel in one direction, they naturally enable non-reciprocal behaviour for the optoacoustic interaction. In this way, OAM modes can be either strongly suppressed or amplified, preventing random backscattering and thus minimizing signal degradation. Stiller and colleagues report that their new device can be reconfigured as an amplifier or as an optical vortex isolator by adjusting the frequency of the control signal. Indeed, they demonstrated a vortex isolation of 22 decibels, which compares well with the best fundamental-mode isolators that use stimulated Brillouin-Mandelstam scattering. According to Stiller, potential applications of the device include OAM-based quantum communication and entanglement schemes as well as classical optical communications that use OAM modes (both fundamental and higher order) to increase the capacity of the communication channels. “The possibility of selective manipulation of vortex modes by light and sound waves [is] a very fascinating concept,” Stiller says. The researchers, who detail their work in Science Advances, now plan to study more exotic sound waves that have unusual structures. “We want to see how these waves interact with light in chiral optical fibres,” Stiller tells Physics World.
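The helical wavefront described here is easy to construct numerically. A minimal sketch (not the team’s code): an OAM mode is a ring-shaped amplitude multiplied by the phase factor exp(ilφ), whose phase winds l times around the dark core; the topological charge l below is an assumed illustrative value.

```python
import numpy as np

l = 2                                     # assumed topological charge (OAM index)
x = np.linspace(-1.0, 1.0, 512)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)

# Ring-shaped amplitude (zero intensity on the axis) times a helical phase:
field = (r * np.exp(-r**2)) * np.exp(1j * l * phi)

# Sample the field's phase on a circle of radius 0.5 around the axis and
# count how many times it wraps; the winding number equals l.
theta = np.linspace(0.0, 2.0 * np.pi, 2000)
ix = np.searchsorted(x, 0.5 * np.cos(theta))   # column index (x coordinate)
iy = np.searchsorted(x, 0.5 * np.sin(theta))   # row index (y coordinate)
phase = np.angle(field[iy, ix])
print(round(np.unwrap(phase)[-1] / (2 * np.pi)))   # -> 2
```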
Physics
Researchers measure the phase and amplitude of the complex electron wavefunctions (a,b), represented by color (or hue) for phase and brightness (or value) for amplitude (plotted in logarithmic scale), in the hue-saturation-value (HSV) color map, as shown in (c). Credit: Hiromichi Niikura from Waseda University

The early 20th century saw the advent of quantum mechanics to describe the properties of small particles, such as electrons or atoms. Schrödinger's equation in quantum mechanics can successfully predict the electronic structure of atoms or molecules. However, the "duality" of matter, referring to the dual "particle" and "wave" nature of electrons, remained a controversial issue. Physicists use a complex wavefunction to represent the wave nature of an electron. "Complex" numbers are those that have both "real" and "imaginary" parts—the ratio of which is referred to as the "phase." However, all directly measurable quantities must be "real". This leads to the following challenge: when the electron hits a detector, the "complex" phase information of the wavefunction disappears, leaving only the square of the amplitude of the wavefunction (a "real" value) to be recorded. This means that electrons are detected only as particles, which makes it difficult to explain their dual properties in atoms. The ensuing century witnessed a new, evolving era of physics, namely attosecond physics. The attosecond is a very short timescale: a billionth of a billionth of a second. "Attosecond physics opens a way to measure the phase of electrons. Achieving attosecond time-resolution, electron dynamics can be observed while freezing molecular motion," explains Professor Hiromichi Niikura from the Department of Applied Physics, Waseda University, Japan, who, along with Professor D. M. Villeneuve—a principal research scientist at the Joint Attosecond Science Laboratory, National Research Council, and adjunct professor at the University of Ottawa—pioneered the field of attosecond physics. Niikura and Villeneuve had previously developed a breakthrough method, attosecond re-collision, and also demonstrated the imaging of a molecular orbital, or electron wavefunction, in a molecule. In a recent study published in Physical Review A, these researchers employed another approach involving attosecond physics, using an attosecond laser pulse generated via high-harmonic generation to visualize a complex wavefunction. The attosecond laser pulse consists of coherent light with a wavelength much shorter than ultraviolet, referred to as extreme ultraviolet (EUV) light. When this pulse irradiates a gas, an electron is ejected. This process is referred to as photoionization. The attosecond pulse consists of a set of "harmonics," or different colors of light. By controlling the generation of the attosecond pulse, the researchers isolated two photoionization pathways—one consisting of a particular harmonic, and the other consisting of another harmonic along with an infrared pulse—to ionize neon. The electron wavefunctions produced by both pathways can interfere with each other. The interference pattern varies with the attosecond delay between the harmonics and the IR pulses. The team determined the phase and amplitude distributions of the photoelectron from the interference pattern and visualized its complex wavefunction. As the energy resolution is smaller than the bandwidth of the attosecond pulses, the researchers were successful in visualizing the detailed wavefunction structure.
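The hue-saturation-value encoding in the figure caption, phase as hue and log-scaled amplitude as brightness, is a standard "domain coloring" trick and is easy to reproduce. A minimal sketch with a stand-in wavefunction (a superposition of two interfering Gaussian wave packets; nothing here uses the study's data):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

# Stand-in complex wavefunction: two Gaussian wave packets with opposite
# momenta, so that phase and interference structure are both visible.
x = np.linspace(-10, 10, 500)
X, Y = np.meshgrid(x, x)
psi = (np.exp(-((X - 3) ** 2 + Y ** 2) / 4 + 2j * X)
       + np.exp(-((X + 3) ** 2 + Y ** 2) / 4 - 2j * X))

hue = (np.angle(psi) + np.pi) / (2 * np.pi)      # phase mapped to [0, 1)
value = np.log10(np.abs(psi) + 1e-6)             # log-scaled amplitude
value = (value - value.min()) / (value.max() - value.min())
rgb = hsv_to_rgb(np.dstack([hue, np.ones_like(hue), value]))

plt.imshow(rgb, extent=[x[0], x[-1], x[0], x[-1]])
plt.title("Domain coloring: hue = phase, brightness = log |psi|")
plt.show()
```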
Furthermore, the researchers developed a method of disentangling the measured wavefunction into the wavefunctions produced by the individual ionization pathways. Now that the researchers have successfully visualized the complex wavefunction of an electron—something that cannot be seen through conventional photoelectron spectroscopy—there's so much more they can achieve. Niikura says, "Nowadays, photoelectron spectroscopy using EUV and X-rays has become a basic tool for investigating the structures and dynamics of materials. The present method will provide a way to elucidate the quantum properties of electrons." Visualizing the complete, detailed, complex electron wavefunction will have a significant impact in the fields of nanotechnology, chemistry, and molecular biology.

More information:
Takashi Nakajima et al, "High-resolution attosecond imaging of an atomic electron wave function in momentum space," Physical Review A (2022). DOI: 10.1103/PhysRevA.106.063513
Andrew S. Maxwell et al, "Entanglement of orbital angular momentum in non-sequential double ionization," Nature Communications (2022). DOI: 10.1038/s41467-022-32128-z
J. Itatani et al, "Tomographic imaging of molecular orbitals," Nature (2004). DOI: 10.1038/nature03183
Physics
In a month or two, NASA will launch its massive Space Launch System rocket from the Kennedy Space Center. While the spacecraft atop it will travel around the moon—the farthest from Earth a crew-capable craft will have ever gone—the rocket will also deploy a bunch of little CubeSats, including one called NEA Scout that will be propelled by a solar sail toward a nearby asteroid. That project has come to fruition thanks to Les Johnson, head of that mission's technology team at NASA's Marshall Space Flight Center in Huntsville, Alabama. It's a milestone for Johnson, who has been working on solar sails and other advanced propulsion systems for years. Outside his day job at NASA, Johnson also writes nonfiction and science fiction books for popular audiences, many of which envision future interstellar voyages. His latest, A Traveler's Guide to the Stars, explores the kinds of propulsion systems that could one day make these deep-space expeditions a reality. This conversation has been edited for length and clarity.

WIRED: What inspired you to study space propulsion systems?

Johnson: Star Trek, if you go way back. I've been a science fiction fan and an advocate for space exploration and space travel since I was in elementary school. I was 7 years old when I watched Neil Armstrong walk on the moon. I was asleep probably, and I was in footie pajamas, and my parents woke me up to come watch this. And later, my older sister allowed me to stay up with her late to watch Star Trek reruns, and Lost in Space, so I was kind of hooked. I decided at that age that I wanted to study physics and be a scientist. I always had bad vision and had been a scrawny kid, so I knew I wouldn't be an astronaut—but I wanted to work for NASA. One of the first projects I was assigned was to work on something called a space tether. Those are long wires that are deployed on spacecraft, and they can be used for scientific measurements. But there was a secondary effect in test flights: You could actually get propulsion in low Earth orbit using these wires, without electricity or fuel. So I got really excited: "Hey, this is a way to travel through space, at least in Earth orbit, where you may not ever run out of gas." So that's what got me interested in advanced propulsion. From there it spread out to solar sails, and to nuclear propulsion. As a result of that, I got involved with some groups outside of NASA, people thinking about how we might go to the stars. They'd ask me, "What's a viable method to go to Proxima Centauri?" So things kind of snowballed from there.

How does a solar sail work?

It's not the solar wind—that's an unfortunate naming problem. A solar sail is propelled only by light. Light is made up of photons, and those photons don't have mass. But they do have momentum, like a molecule of air in the wind. And just like a sailboat on a lake or the ocean, when the wind blows against the sail, some of the momentum of the air particles is absorbed by the sail, which causes it to recoil, which is pushing on the sail. And through the mast, it pulls the boat with it. Out in space, as photons of light reflect from the sail, the light gives up a little of its energy and momentum, and that momentum goes into the motion of the sail and it pushes it.

How far from the sun can you go while still getting a significant amount of energy from it?

This is why solar sails are really cool, and this is why I like them for interstellar travel. Let's go out the Earth's distance from the sun, 1 AU, 93 million miles.
When you unfurl a sail of any size, say it's 100 square meters, the sunlight falling on it pushes on it. As you move away from the sun, the intensity of sunlight falls off pretty rapidly, and so does the thrust. But if you deploy a sail closer to the sun, the thrust level goes up dramatically. If you have a light enough sail, you can get a really big acceleration. If you get well inside the orbit of Mercury and you have a sail that only weighs 1 or 2 grams per square meter—which is about 20 times better than we can do today—and you have a sail that's like a square kilometer, if you add a laser to boost it, you can get enough thrust to go out of the solar system at a significant fraction of the speed of light, like 10 percent. It's unbelievable. That's where you can get a trip that will get you to Alpha Centauri in hundreds of years, as opposed to thousands or tens of thousands with chemical rockets. When I first saw these numbers, I thought, "That's great, but we have no material that can stand those loads that's that lightweight. That material is 'unobtainium.'" That was pure science fiction. Then in 2004, graphene was found. The discoverers of that got a Nobel Prize for it in 2010. That's a single layer of carbon. It has all the thermal and mechanical properties you need to build this huge sail; you just have to put something on it to make it reflective, like a layer of aluminum. And suddenly, this looks possible. We don't know how to engineer anything that big yet. But we've gone from a material that doesn't exist to one that does exist in the last two decades. And if you augment that with a high-power laser, like the folks at Breakthrough Starshot want to do, it's like a lot more suns falling on it, which means you can accelerate it to much higher speeds, potentially up to 5, 10, 20 percent the speed of light. And all of this without violating the laws of physics. The only laws you're violating are known engineering. Nobody knows how to build these things, but we will! We'll figure it out.

How did you get involved with the NEA Scout's solar sail?

I have been working on solar sails since the early 2000s. It was one technology of many, in a portfolio of advanced propulsion that I was working on at my day job at NASA. It involved electric propulsion, nuclear propulsion, sail propulsion, some chemical work, and solar sails were a part of that. That was about the time little CubeSats were being flown, small, bread-loaf-sized spacecraft that a lot of universities now fly in low Earth orbit. NASA was trying to figure out, "Hey, can we do useful things with these? Does anybody have a payload?" We said, "We have some solar sail hardware. Let's test a sail deployment in Earth orbit." So in 2010, we flew a 10-square-meter sail called Nanosail-D. And that was successful. Then the Space Launch System was starting to move forward, and someone at NASA said, "This rocket's going into deep space. It will have extra payload capability; we can take some of these CubeSats." So I led a team and we wrote the proposal for NEA Scout using a scaled-up version of the Nanosail-D.

Tell me about some speculative propulsions you've explored, such as pulsed fusion and antimatter.

Oh, it's all cool! I could talk for hours! I'll start with the things I think are possible within the known laws of physics. I don't want to be arrogant here: Scientists throughout history have made the mistake of saying, "Oh, that's impossible," and then 50 years later somebody proves them wrong. There are a few ways to get to the stars.
One is sails—light sails, solar sails. Chemical rockets just don't have the energy density to do it. Nuclear-thermal rockets basically use a small version of the reactor that produces electrical power in a power station near you. You miniaturize it and put it on a rocket, and the fuel is superheated by the nuclear reactor. That's an improvement in performance over a chemical rocket, and it's something I think we ought to be doing for the exploration of our solar system, but it won't take you to the stars. You can't carry enough fuel in the mass you have available to make it work. Its descendant, fusion, which people are working on to try to have a cleaner source of power on Earth, is: Instead of splitting atoms, you're combining them, the way the sun produces energy. You're squeezing hydrogen atoms together so tightly that they become helium, and then they give off energy. If you can do that in a controlled reaction, you get a lot more energy out than you put in. You could use that as a propulsion system to build a rocket. It would have to be a really big rocket, because you'd have to carry a lot of fuel: Think of a rocket bigger than the Empire State Building. But it would work. You could get to the nearest few stars, like maybe Proxima Centauri, but not Ross 248, which is 10 light-years away. One of my favorites after that is antimatter. People hear that and think, "That's out of Star Trek." Which it was. But it's real. In high-energy reactions, like at the CERN collider in Europe and other particle accelerators, when we smash atoms together at high speed, lots of things break apart and fly off. But a curious thing people discovered is that there are things that look like a proton, have the mass of a proton, but have a negative charge. And then they discovered these lighter-weight things that look like electrons, but they have a positive charge. So scientists have taken these antiprotons, combined them with positrons, and made anti-hydrogen. That's in small quantities, because when these anti-particles encounter their normal-matter counterparts, they undergo—in physics terms—annihilation. That mass gets turned into energy. They explode and give off gamma rays, all kinds of secondary particles—it's a very energetic explosion. A tablespoon of antimatter would basically destroy a city—that's how much energy is packed into antimatter. You could take a lot of this antimatter, store it in a perfect vacuum, and then as you need it for your reaction mass to propel your spaceship, you have a stream of it that goes in and annihilates with normal matter, and you use that energy. We don't know how to do that, but nature says it's possible. Now, I don't think I want to build this on Earth, because you're going to need tons of antimatter. If you lost control of it, that would be a disaster. Buried in there is another pretty interesting idea that is not as good as antimatter or fusion, but it's really close. That's something called a fission pulse. You may have heard of Project Orion. That was a really cool project in the Cold War, in the late '50s and into the '60s, where some scientists, including the late Freeman Dyson, said, "Maybe instead of using a rocket to put a spacecraft into space, what would happen if we used a series of controlled explosions under a big steel plate?" It's like, if you put a rock on top of a firecracker, the rock gets launched, right? Imagine a series of explosions under a steel plate.
It'll start getting off the ground—"Boom, boom, boom!"—to higher and higher speeds as you keep detonating these explosions. You could potentially get this plate, or whatever's on it—a spacecraft—moving to really high speeds. These scientists figured out that if you have a spacecraft the size of an aircraft carrier, and you put extremely large plates under it that are big enough to shield it from the radiation from the bomb going off, and you started exploding atomic bombs every three seconds under it, you could get tremendous speeds, and you could use this to send a spacecraft, with a trip time of a few hundred years, to the nearest star. Of course you destroy the ecosystem while you're launching it. But in theory, yeah, that ought to work!

According to a figure in your book, it looks like it's hard to strike a balance to achieve both efficiency and thrust—and to also not have something cost a gazillion dollars.

Unfortunately, if we're talking about building something at the scale to send a reasonably sized spacecraft to the nearest star, it's going to be—with today's capabilities—a really expensive endeavor. But over time, the capability evolves. That curve you're talking about limits rockets. It applies to any rockets that have fuel on board: chemical rockets, electric rockets, nuclear-thermal, fusion, and even antimatter. You've got the mass of your spacecraft, and to get it moving requires a certain amount of fuel at a certain thrust level. To keep it going faster, you have to load more fuel on it, which increases the weight, which means you need more fuel to move it initially. Eventually it gets to a point where you get diminishing returns. That's why I like sails, where the energy is not on the ship; it comes from somewhere else, so you don't have to worry about that efficiency curve getting you. That's a beautiful way to get around that problem. For very long interstellar trips—things that are farther than the closest star—continuous fusion, antimatter, and sails are the only things that will let you get there. But the better the thrust performance, the worse the efficiency, with every system we've looked at.

What motivated you to write this book, A Traveler's Guide to the Stars?

I go back to what motivated me to study science: It was our achievements in space, going to the moon. It was the dreamers, science fiction writers, and television shows, and this notion that in this big universe, as we look out and we discover exoplanets and we find that some of these exoplanets live in regions around their star where there might be liquid water, there might be a place where life could go and exist. I am a believer that life is good and that it's a morally good thing to try to preserve and protect and spread life. We as a species, as humans, should strive to use space resources to make life better on Earth and expand our presence in the solar system, and eventually start sending our children to spread life into the rest of the universe, which sure looks like it's a cold, dead universe. If it is, then let's go fill it up with people who have hopes, dreams, aspirations, to create art and be human.

How long will it take humanity to design and send a robotic probe to another star system?

Part of that's going to be a function of how hard we try. If we keep going on the path we're going—which isn't a bad path, but it's taking longer than we thought it would to get the costs of launch down—I think it'll be 300 years. But if someone were to come along and say, "Here's a blank check.
Let's go figure this out," we could do it probably in less than 100 years. It's a challenge limited by engineering knowledge, but interest, enthusiasm, and funding could accelerate it. Now, if it's the public purse, politicians have to balance that with all the other things: health care, police. I'm just thankful our society places a value on science and exploration at any level. So it's a balance of priorities.

What might a crewed space journey to another star system look like?

Let's assume we're not going to fundamentally change our own biology through genetic engineering, that 100 years from now, people are still people as we'd recognize them today, but maybe living longer, maybe with better health care. I think it would be a voyage of hundreds of years, in a ship where there would be generations that are born and die before you ever reach the nearest star. It would be a concept like in the movie Passengers, but not with suspended animation, because I'm really skeptical of that. Now, if we have breakthroughs in medical research that allow us to engineer ourselves to be adapted to spaceflight, perhaps engineer ourselves to be like bears, where we could go into hibernation, and then you combine that with rocket science and propulsion science, a voyage of hundreds of years might still be the case, but wouldn't necessarily be generations. It might open the possibility of the people who get on the ship being the ones who get off the ship. But that's two levels of revolutionary breakthroughs.

What are your thoughts about sending robots versus people into space? That seems to be the eternal debate—with the moon, asteroids, and Mars.

It's going to be both. I think that's what history has shown. Before we sent people into space, we sent Sputnik and Explorer 1 and other robotic spacecraft. Before we went to the moon, there were the Surveyor missions that we sent, and the Soviets sent spacecraft, and then we sent people. For decades we've been sending robotic spacecraft to Mars. I think we will send people to Mars. I'm hoping that will be in my lifetime. When I look at that debate, I think it's a false dichotomy. And I've got a story in the book: I went to a meeting probably eight to 10 years ago on new strategies for exploring Mars. There was a debate going on there, with panelists on stage, about whether we should send people to Mars. Is it really worth it? There was this reserved chair in the first row that was empty. And then in walks Buzz Aldrin. Buzz, the second man to walk on the moon, makes his entrance and sits down. And he's there for like five minutes. He stands up, and raises his hand. He looked at all of us and said, "OK, let's suppose we had a way to do this tomorrow. How many of you would sign up for a one-way trip to Mars?" I was stunned. I want to go as a tourist, but I want to go back home. But it was over half the people, and a lot of them who raised their hands were those who had been arguing we should only send robots. But as soon as they were given the thought, "Oh, we could send people—then of course I'd go." That moment crystallized in my head that if the capability exists, we're going to do both. It will first be the robots, then we'll send people.
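Johnson's sailboat analogy translates into a one-line force law: a perfectly reflecting sail of area A receiving solar intensity I feels a force F = 2IA/c, and the intensity falls off as the inverse square of the distance from the sun. A rough numerical sketch (the sail area matches the 100-square-meter figure from the interview; the craft mass and everything else are illustrative assumptions, not NEA Scout's specs):

```python
# Radiation-pressure acceleration of a perfectly reflecting solar sail,
# per the photon-momentum argument above. Sail mass is an assumed value.
C = 299_792_458        # speed of light, m/s
I_1AU = 1361.0         # solar intensity at 1 AU, W/m^2

def sail_acceleration(area_m2, mass_kg, r_au):
    intensity = I_1AU / r_au**2          # sunlight falls off as 1/r^2
    force = 2 * intensity * area_m2 / C  # factor 2: photons reflect, not absorb
    return force / mass_kg               # m/s^2

# 100 m^2 sail with an assumed 5 kg craft:
print(sail_acceleration(100, 5.0, 1.0))  # ~1.8e-4 m/s^2 at 1 AU
print(sail_acceleration(100, 5.0, 0.3))  # >10x more inside Mercury's orbit

# Tiny accelerations, but they never run out of gas: this is the loophole
# around the fuel-mass curve discussed above, since the energy source
# (sunlight or a laser) stays off the ship.
```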
Physics
A ring of superconducting qubits can host “bound states” of microwave photons, where the photons tend to clump on neighboring qubit sites. Credit: Google Quantum AI

Using a quantum processor, researchers made microwave photons uncharacteristically sticky. After coaxing them to clump together into bound states, they discovered that these photon clusters survived in a regime where they were expected to dissolve into their usual, solitary states. As the finding was first made on a quantum processor, it marks the growing role that these platforms are playing in studying quantum dynamics. Photons — quantum packets of electromagnetic radiation like light or microwaves — usually don’t interact with one another. For example, two crossed flashlight beams pass through one another undisturbed. However, microwave photons can be made to interact in an array of superconducting qubits. Researchers at Google Quantum AI describe how they engineered this unusual situation in “Formation of robust bound states of interacting microwave photons,” which was published on December 7 in the journal Nature. They investigated a ring of 24 superconducting qubits that could host microwave photons. By applying quantum gates to pairs of neighboring qubits, photons could travel around by hopping between neighboring sites and interacting with nearby photons. The interactions between the photons affected their so-called “phase,” which keeps track of the oscillation of the photon’s wavefunction. When the photons are non-interacting, their phase accumulation is rather uninteresting. Like a well-rehearsed choir, they’re all in sync with one another. In this case, a photon that was initially next to another photon can hop away from its neighbor without getting out of sync. Just as every person in the choir contributes to the song, every possible path the photon can take contributes to the photon’s overall wavefunction. A group of photons initially clustered on neighboring sites will evolve into a superposition of all possible paths each photon might have taken. When photons interact with their neighbors, this is no longer the case. If one photon hops away from its neighbor, its rate of phase accumulation changes, putting it out of sync with its neighbors. All the paths in which the photons split apart then interfere destructively. It would be like each choir member singing at their own pace — the song itself gets washed out, becoming impossible to discern through the din of the individual singers. The only scenario that survives is the configuration in which all photons remain clustered together in a bound state. This is how interactions can lead to the formation of a bound state: by suppressing all other possibilities, in which the photons are not bound together. To show rigorously that the bound states indeed behaved just as particles do, with well-defined quantities such as energy and momentum, the researchers developed new techniques to measure how the energy of the particles changed with momentum. By analyzing how the correlations between photons varied with time and space, they were able to reconstruct the so-called “energy-momentum dispersion relation,” confirming the particle-like nature of the bound states. The existence of the bound states in itself was not new — in a regime called the “integrable regime,” where the dynamics is much less complicated, the bound states were already predicted and observed ten years ago. But beyond integrability, chaos reigns.
Before this experiment, it was reasonably assumed that the bound states would fall apart in the midst of chaos. To test this, the researchers pushed beyond integrability by adjusting the simple ring geometry to a more complex, gear-shaped network of connected qubits. They were surprised to find that bound states persisted well into the chaotic regime.  The team at Google Quantum AI is still unsure where these bound states derive their unexpected resilience, but it could have something to do with a phenomenon called “prethermalization,” where incompatible energy scales in the system can prevent a system from reaching thermal equilibrium as quickly as it otherwise would.  Researchers anticipate that studying this system will provide fresh insights into many-body quantum dynamics and inspire more fundamental physics discoveries using quantum processors. Reference: “Formation of robust bound states of interacting microwave photons” by A. Morvan, T. I. Andersen, X. Mi, C. Neill, A. Petukhov, K. Kechedzhi, D. A. Abanin, A. Michailidis, R. Acharya, F. Arute, K. Arya, A. Asfaw, J. Atalaya, J. C. Bardin, J. Basso, A. Bengtsson, G. Bortoli, A. Bourassa, J. Bovaird, L. Brill, M. Broughton, B. B. Buckley, D. A. Buell, T. Burger, B. Burkett, N. Bushnell, Z. Chen, B. Chiaro, R. Collins, P. Conner, W. Courtney, A. L. Crook, B. Curtin, D. M. Debroy, A. Del Toro Barba, S. Demura, A. Dunsworth, D. Eppens, C. Erickson, L. Faoro, E. Farhi, R. Fatemi, L. Flores Burgos, E. Forati, A. G. Fowler, B. Foxen, W. Giang, C. Gidney, D. Gilboa, M. Giustina, A. Grajales Dau, J. A. Gross, S. Habegger, M. C. Hamilton, M. P. Harrigan, S. D. Harrington, M. Hoffmann, S. Hong, T. Huang, A. Huff, W. J. Huggins, S. V. Isakov, J. Iveland, E. Jeffrey, Z. Jiang, C. Jones, P. Juhas, D. Kafri, T. Khattar, M. Khezri, M. Kieferová, S. Kim, A. Y. Kitaev, P. V. Klimov, A. R. Klots, A. N. Korotkov, F. Kostritsa, J. M. Kreikebaum, D. Landhuis, P. Laptev, K.-M. Lau, L. Laws, J. Lee, K. W. Lee, B. J. Lester, A. T. Lill, W. Liu, A. Locharla, F. Malone, O. Martin, J. R. McClean, M. McEwen, B. Meurer Costa, K. C. Miao, M. Mohseni, S. Montazeri, E. Mount, W. Mruczkiewicz, O. Naaman, M. Neeley, A. Nersisyan, M. Newman, A. Nguyen, M. Nguyen, M. Y. Niu, T. E. O’Brien, R. Olenewa, A. Opremcak, R. Potter, C. Quintana, N. C. Rubin, N. Saei, D. Sank, K. Sankaragomathi, K. J. Satzinger, H. F. Schurkus, C. Schuster, M. J. Shearn, A. Shorter, V. Shvarts, J. Skruzny, W. C. Smith, D. Strain, G. Sterling, Y. Su, M. Szalay, A. Torres, G. Vidal, B. Villalonga, C. Vollgraff-Heidweiller, T. White, C. Xing, Z. Yao, P. Yeh, J. Yoo, A. Zalcman, Y. Zhang, N. Zhu, H. Neven, D. Bacon, J. Hilton, E. Lucero, R. Babbush, S. Boixo, A. Megrant, J. Kelly, Y. Chen, V. Smelyanskiy, I. Aleiner, L. B. Ioffe and P. Roushan, 7 December 2022, Nature. DOI: 10.1038/s41586-022-05348-y
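The clump-and-stay-in-sync picture can be reproduced in a few lines. Below is a minimal exact-diagonalization sketch of a toy model in the same spirit: two hard-core photons hopping on a ring with a nearest-neighbor interaction (the model and all parameters are illustrative choices, not the paper's circuit). For sufficiently strong interaction U, a band of L eigenstates splits off from the two-photon continuum and carries most of its weight on adjacent-site configurations: the bound states.

```python
import numpy as np
from itertools import combinations

L, J, U = 24, 1.0, 4.0                         # ring size, hopping, interaction
basis = list(combinations(range(L), 2))        # positions of the two photons
index = {pair: k for k, pair in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for (i, j), k in index.items():
    if (j - i) % L in (1, L - 1):              # photons on neighboring sites
        H[k, k] += U
    for site, other in ((i, j), (j, i)):       # hop each photon left/right
        for step in (-1, 1):
            new = (site + step) % L
            if new != other:                   # hard-core: no double occupancy
                H[index[tuple(sorted((new, other)))], k] += -J

energies, states = np.linalg.eigh(H)

# Weight of each eigenstate on adjacent-site (clumped) configurations:
adjacent = np.array([(j - i) % L in (1, L - 1) for i, j in basis])
weight = (states[adjacent, :] ** 2).sum(axis=0)

order = np.argsort(energies)
print(weight[order[-L:]].min())   # bound band: most weight on adjacent sites
print(weight[order[:-L]].max())   # scattering states: spread around the ring
```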
Physics
Walter Cunningham, the last surviving member of the legendary first crewed Apollo mission in the earliest days of NASA, died on Tuesday at 90 years old. NASA Administrator Bill Nelson said in a statement that Cunningham "was a fighter pilot, physicist, and an entrepreneur – but, above all, he was an explorer." "On Apollo 7, the first launch of a crewed Apollo mission, Walt and his crewmates made history, paving the way for the Artemis Generation we see today," Nelson said, referencing NASA's current moon exploration program. Cunningham was born in Iowa and eventually graduated from UCLA with multiple physics degrees. Following his stint with NASA, he worked in various business ventures. A year after Cunningham's Apollo 7 flight, the Apollo program would go on to launch the Apollo 11 spaceflight that brought astronauts to the moon, the first time humans had ever set foot on another astronomical body. Cunningham's family, in a statement on his death, expressed their "immense pride in the life that he lived, and our deep gratitude for the man that he was – a patriot, an explorer, pilot, astronaut, husband, brother, and father. The world has lost another true hero, and we will miss him dearly."
Physics
LEAD, S.D. (AP) — In a former gold mine a mile underground, inside a titanium tank filled with a rare liquified gas, scientists have begun the search for what so far has been unfindable: dark matter. Scientists are pretty sure the invisible stuff makes up most of the universe's mass and say we wouldn't be here without it — but they don't know what it is. The race to solve this enormous mystery has brought one team to the depths under Lead, South Dakota. The question for scientists is basic, says Kevin Lesko, a physicist at Lawrence Berkeley National Laboratory: "What is this great place I live in? Right now, 95% of it is a mystery." The idea is that a mile of dirt and rock, a giant tank, a second tank and the purest titanium in the world will block nearly all the cosmic rays and particles that zip around — and through — all of us every day. But dark matter particles, scientists think, can avoid all those obstacles. They hope one will fly into the vat of liquid xenon in the inner tank and smash into a xenon nucleus like two balls in a game of pool, revealing its existence in a flash of light seen by a device called "the time projection chamber." Scientists announced Thursday that the five-year, $60 million search finally got underway two months ago after a delay caused by the COVID-19 pandemic. So far the device has found ... nothing. At least no dark matter. That's OK, they say. The equipment appears to be working to filter out most of the background radiation they hoped to block. "To search for this very rare type of interaction, job number one is to first get rid of all of the ordinary sources of radiation, which would overwhelm the experiment," said University of Maryland physicist Carter Hall. And if all their calculations and theories are right, they figure they'll see only a couple of fleeting signs of dark matter a year. The team of 250 scientists estimates they'll get 20 times more data over the next couple of years. By the time the experiment finishes, the chance of finding dark matter with this device is "probably less than 50% but more than 10%," said Hugh Lippincott, a physicist and spokesman for the experiment, in a Thursday news conference. While that's far from a sure thing, "you need a little enthusiasm," Lawrence Berkeley's Lesko said. "You don't go into rare search physics without some hope of finding something." Two hulking Depression-era hoists run an elevator that brings scientists to what's called the LUX-ZEPLIN experiment in the Sanford Underground Research Facility. A 10-minute descent ends in a tunnel with cool-to-the-touch walls lined with netting. But the old, musty mine soon leads to a high-tech lab where dirt and contamination are the enemy. Helmets are exchanged for new, cleaner ones, and a double layer of baby-blue booties goes over steel-toed safety boots. The heart of the experiment is the giant tank called the cryostat, lead engineer Jeff Cherwinka said in a December 2019 tour before the device was closed and filled. He described it as "like a thermos," made of "perhaps the purest titanium in the world," designed to keep the liquid xenon cold and keep background radiation to a minimum. Xenon is special, explained experiment physics coordinator Aaron Manalaysay, because it allows researchers to see whether a collision is with one of its electrons or with its nucleus. If something hits the nucleus, it is more likely to be the dark matter that everyone is looking for, he said. These scientists tried a similar, smaller experiment here years ago.
After coming up empty, they figured they had to go much bigger. Another large-scale experiment is underway in Italy, run by a rival team, but no results have been announced so far. The scientists are trying to understand why the universe is not what it seems. One part of the mystery is dark matter, which accounts for by far most of the mass in the cosmos. Astronomers know it's there because when they measure the stars and other regular matter in galaxies, they find that there is not nearly enough gravity to hold these clusters together. If nothing else was out there, galaxies would be "quickly flying apart," Manalaysay said. "It is essentially impossible to understand our observation of history, of the evolutionary cosmos, without dark matter," Manalaysay said. Lippincott, a University of California, Santa Barbara, physicist, said "we would not be here without dark matter." So while there's little doubt that dark matter exists, there's lots of doubt about what it is. The leading theory is that it involves things called WIMPs — weakly interacting massive particles. If that's the case, LUX-ZEPLIN could be able to detect them. We want to find "where the WIMPs can be hiding," Lippincott said.
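The pool-ball picture maps onto elementary collision kinematics: in an elastic hit, the largest energy a particle of mass m can transfer to a nucleus of mass M is the fraction 4mM/(m+M)² of its kinetic energy. A back-of-the-envelope sketch (the WIMP mass and speed below are illustrative assumptions, not measurements):

```python
# Elastic-collision kinematics for a hypothetical WIMP hitting a xenon
# nucleus. WIMP mass and speed are assumed, illustrative values.
GEV_KG = 1.783e-27          # 1 GeV/c^2 in kilograms
KEV_J = 1.602e-16           # 1 keV in joules

m_wimp = 100 * GEV_KG       # hypothetical 100 GeV/c^2 WIMP
m_xe = 122 * GEV_KG         # xenon nucleus: ~131 u, about 122 GeV/c^2
v = 230e3                   # typical galactic-halo speed, m/s (assumed)

e_kin = 0.5 * m_wimp * v**2
# Maximum fractional energy transfer in a head-on elastic collision:
frac = 4 * m_wimp * m_xe / (m_wimp + m_xe) ** 2
print(e_kin * frac / KEV_J)   # ~30 keV: the tens-of-keV nuclear recoils
                              # such detectors are built to see
```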
Physics
STOCKHOLM -- This year's Nobel Prize in physics has been awarded to Alain Aspect, John F. Clauser and Anton Zeilinger for their work on quantum information science. Hans Ellegren, secretary general of the Royal Swedish Academy of Sciences, announced the winners Tuesday in Stockholm. While physicists often tackle problems that appear at first glance to be far removed from everyday concerns — tiny particles and the vast mysteries of space and time — their research provides the foundations for many practical applications of science. Last year the prize was awarded to three scientists — Syukuro Manabe, Klaus Hasselmann and Giorgio Parisi — whose work has helped to explain and predict complex forces of nature, thereby expanding our understanding of climate change. A week of Nobel Prize announcements kicked off Monday with Swedish scientist Svante Paabo receiving the award in medicine for unlocking secrets of Neanderthal DNA that provided key insights into our immune system. The announcements continue with chemistry on Wednesday and literature on Thursday. The 2022 Nobel Peace Prize will be announced on Friday and the economics award on Oct. 10. The prizes carry a cash award of 10 million Swedish kronor (nearly $900,000) and will be handed out on Dec. 10. The money comes from a bequest left by the prize's creator, Swedish inventor Alfred Nobel, who died in 1895.
Physics
Article Published: 30 November 2022 Alexander Zlokapa  ORCID: orcid.org/0000-0002-4153-86462,3,4,5 na1, Joseph D. Lykken  ORCID: orcid.org/0000-0002-0090-94396, David K. Kolchmeyer1, Samantha I. Davis3,4, Nikolai Lauk3,4, Hartmut Neven  ORCID: orcid.org/0000-0002-9681-67465 & …Maria Spiropulu  ORCID: orcid.org/0000-0001-8172-70813,4  Nature volume 612, pages 51–55 (2022)Cite this article AbstractThe holographic principle, theorized to be a property of quantum gravity, postulates that the description of a volume of space can be encoded on a lower-dimensional boundary. The anti-de Sitter (AdS)/conformal field theory correspondence or duality1 is the principal example of holography. The Sachdev–Ye–Kitaev (SYK) model of N ≫ 1 Majorana fermions2,3 has features suggesting the existence of a gravitational dual in AdS2, and is a new realization of holography4,5,6. We invoke the holographic correspondence of the SYK many-body system and gravity to probe the conjectured ER=EPR relation between entanglement and spacetime geometry7,8 through the traversable wormhole mechanism as implemented in the SYK model9,10. A qubit can be used to probe the SYK traversable wormhole dynamics through the corresponding teleportation protocol9. This can be realized as a quantum circuit, equivalent to the gravitational picture in the semiclassical limit of an infinite number of qubits9. Here we use learning techniques to construct a sparsified SYK model that we experimentally realize with 164 two-qubit gates on a nine-qubit circuit and observe the corresponding traversable wormhole dynamics. Despite its approximate nature, the sparsified SYK model preserves key properties of the traversable wormhole physics: perfect size winding11,12,13, coupling on either side of the wormhole that is consistent with a negative energy shockwave14, a Shapiro time delay15, causal time-order of signals emerging from the wormhole, and scrambling and thermalization dynamics16,17. Our experiment was run on the Google Sycamore processor. By interrogating a two-dimensional gravity dual system, our work represents a step towards a program for studying quantum gravity in the laboratory. Future developments will require improved hardware scalability and performance as well as theoretical developments including higher-dimensional quantum gravity duals18 and other SYK-like models19. This is a preview of subscription content, access via your institution Access options Subscribe to Nature+Get immediate online access to Nature and 55 other Nature journalSubscribe to JournalGet full journal access for 1 year$199.00only $3.90 per issueAll prices are NET prices. VAT will be added later in the checkout.Tax calculation will be finalised during checkout.Buy articleGet time limited or full article access on ReadCube.$32.00All prices are NET prices. Additional access options: Log in Learn about institutional subscriptions Data availabilityData from this work are available upon request.Code availabilityCode from this work is available upon request.ReferencesMaldacena, J. The large-N limit of superconformal field theories and supergravity. Int. J. Theor. Phys. 38, 1113–1133 (1999).Article  MathSciNet  MATH  Google Scholar  Sachdev, S. & Ye, J. Gapless spin-fluid ground state in a random quantum Heisenberg magnet. Phys. Rev. Lett. 70, 3339–3342 (1993).Article  CAS  PubMed  Google Scholar  Kitaev, A. A simple model of quantum holography. In Proc. KITP: Entanglement in Strongly-Correlated Quantum Matter 12 (eds Grover, T. et al.) 26 (Univ. 
California, Santa Barbara, 2015).Maldacena, J. & Stanford, D. Remarks on the Sachdev-Ye-Kitaev model. Phys. Rev. D 94, 106002 (2016).Article  MathSciNet  Google Scholar  Almheiri, A. & Polchinski, J. Models of AdS2 backreaction and holography. J. High Energy Phys. 11, 014 (2015).Article  MATH  Google Scholar  Gross, D. J. & Rosenhaus, V. The bulk dual of SYK: cubic couplings. J. High Energy Phys. 05, 092 (2017).Article  MathSciNet  MATH  Google Scholar  Maldacena, J. & Susskind, L. Cool horizons for entangled black holes. Fortschr. Phys. 61, 781–811 (2013).Article  MathSciNet  MATH  Google Scholar  Susskind, L. Dear qubitzers, GR=QM. Preprint at https://doi.org/10.48550/arXiv.1708.03040 (2017).Gao, P. & Jafferis, D. L. A traversable wormhole teleportation protocol in the SYK model. J. High Energy Phys. 2021, 97 (2021).Maldacena, J., Stanford, D. & Yang, Z. Diving into traversable wormholes. Fortschr. Phys. 65, 1700034 (2017).Article  MathSciNet  Google Scholar  Brown, A. R. et al. Quantum gravity in the lab: teleportation by size and traversable wormholes. Preprint at https://doi.org/10.48550/arXiv.1911.06314 (2021).Nezami, S. et al. Quantum gravity in the lab: teleportation by size and traversable wormholes, part II. Preprint at https://doi.org/10.48550/arXiv.2102.01064 (2021).Schuster, T. et al. Many-body quantum teleportation via operator spreading in the traversable wormhole protocol. Phys. Rev. X 12, 031013 (2022).Gao, P., Jafferis, D. L. & Wall, A. C. Traversable wormholes via a double trace deformation. J. High Energy Phys. 2017, 151 (2017).Article  MathSciNet  MATH  Google Scholar  Maldacena, J. & Qi, X.-L. Eternal traversable wormhole. Preprint at https://doi.org/10.48550/arXiv.1804.00491 (2018).Cotler, J. S. et al. Black holes and random matrices. J. High Energy Phys. 2017, 118 (2017).Article  MathSciNet  MATH  Google Scholar  Kitaev, A. & Suh, S. J. The soft mode in the Sachdev-Ye-Kitaev model and its gravity dual. J. High Energy Phys. 2018, 183 (2018).Article  MathSciNet  MATH  Google Scholar  Berkooz, M., Narayan, P., Rozali, M. & Simón, J. Higher dimensional generalizations of the SYK model. J. High Energy Phys. 01, 138 (2017).Article  MathSciNet  MATH  Google Scholar  Witten, E. An SYK-like model without disorder. J. Phys. A. 52, 474002 (2019).Article  MathSciNet  CAS  Google Scholar  Witten, E. Anti-de Sitter space and holography. Adv. Theor. Math. Phys. 2, 253–291 (1998).Article  MathSciNet  MATH  Google Scholar  Gubser, S., Klebanov, I. & Polyakov, A. Gauge theory correlators from non-critical string theory. Phys. Lett. B. 428, 105–114 (1998).Article  MathSciNet  CAS  MATH  Google Scholar  Hochberg, D. & Visser, M. The null energy condition in dynamic wormholes. Phys. Rev. Lett. 81, 746–749 (1998).Article  MathSciNet  CAS  MATH  Google Scholar  Morris, M. S., Thorne, K. S. & Yurtsever, U. Wormholes, time machines, and the weak energy condition. Phys. Rev. Lett. 61, 1446–1449 (1988).Article  CAS  PubMed  Google Scholar  Visser, M., Kar, S. & Dadhich, N. Traversable wormholes with arbitrarily small energy condition violations. Phys. Rev. Lett. 90, 201102 (2003).Article  MathSciNet  PubMed  MATH  Google Scholar  Visser, M. Lorentzian Wormholes: From Einstein to Hawking. Computational and Mathematical Physics (American Institute of Physics, 1995).Graham, N. & Olum, K. D. Achronal averaged null energy condition. Phys. Rev. D 76, 064001 (2007).Article  Google Scholar  Arute, F. et al. Quantum supremacy using a programmable superconducting processor. 
28. Maldacena, J., Stanford, D. & Yang, Z. Conformal symmetry and its breaking in two dimensional nearly anti-de-Sitter space. Prog. Theor. Exp. Phys. 2016, 12C104 (2016).
29. Maldacena, J. Eternal black holes in anti-de Sitter. J. High Energy Phys. 2003, 021 (2003).
30. Hayden, P. & Preskill, J. Black holes as mirrors: quantum information in random subsystems. J. High Energy Phys. 2007, 120 (2007).
31. Susskind, L. & Zhao, Y. Teleportation through the wormhole. Phys. Rev. D 98, 046016 (2018).
32. Gao, P. & Liu, H. Regenesis and quantum traversable wormholes. J. High Energy Phys. 10, 048 (2019).
33. Yoshida, B. & Yao, N. Y. Disentangling scrambling and decoherence via quantum teleportation. Phys. Rev. X 9, 011006 (2019).
34. Landsman, K. A. et al. Verified quantum information scrambling. Nature 567, 61–65 (2019).
35. Berkooz, M., Isachenkov, M., Narovlansky, V. & Torrents, G. Towards a full solution of the large N double-scaled SYK model. J. High Energy Phys. 03, 079 (2019).
36. García-García, A. M. & Verbaarschot, J. J. M. Spectral and thermodynamic properties of the Sachdev-Ye-Kitaev model. Phys. Rev. D 94, 126010 (2016).
37. García-García, A. M. & Verbaarschot, J. J. M. Analytical spectral density of the Sachdev-Ye-Kitaev model at finite N. Phys. Rev. D 96, 066012 (2017).
38. Xu, S., Susskind, L., Su, Y. & Swingle, B. A sparse model of quantum holography. Preprint at https://doi.org/10.48550/arXiv.2008.02303 (2020).
39. García-García, A. M., Jia, Y., Rosa, D. & Verbaarschot, J. J. M. Sparse Sachdev-Ye-Kitaev model, quantum chaos, and gravity duals. Phys. Rev. D 103, 106002 (2021).
40. Caceres, E., Misobuchi, A. & Pimentel, R. Sparse SYK and traversable wormholes. J. High Energy Phys. 11, 015 (2021).
41. Kandala, A. et al. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature 549, 242–246 (2017).
42. Cottrell, W., Freivogel, B., Hofman, D. M. & Lokhande, S. F. How to build the thermofield double state. J. High Energy Phys. 2019, 58 (2019).
43. Huggins, W. J. et al. Virtual distillation for quantum error mitigation. Phys. Rev. X 11, 041036 (2021).
44. O'Brien, T. E. et al. Error mitigation via verified phase estimation. PRX Quantum 2, 020317 (2021).
45. Temme, K., Bravyi, S. & Gambetta, J. M. Error mitigation for short-depth quantum circuits. Phys. Rev. Lett. 119, 180509 (2017).
46. Li, Y. & Benjamin, S. C. Efficient variational quantum simulator incorporating active error minimization. Phys. Rev. X 7, 021050 (2017).
47. Kolchmeyer, D. K. Toy Models of Quantum Gravity. PhD thesis, Harvard Univ. (2022); https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37372099.
48. Zlokapa, A. Quantum Computing for Machine Learning and Physics Simulation. BSc thesis, California Institute of Technology (2021); https://doi.org/10.7907/q75q-zm20.

Acknowledgements
The experiment was performed in collaboration with the Google Quantum AI hardware team, under the direction of A. Megrant, J. Kelly and Y. Chen.
We acknowledge the work of the team in fabricating and packaging the processor; building and outfitting the cryogenic and control systems; executing baseline calibrations; optimizing processor performance; and providing the tools to execute the experiment. Specialized device calibration methods were developed by the physics team led by V. Smelyanskiy. We in particular thank X. Mi and P. Roushan for their technical support in carrying out the experiment and are grateful to B. Kobrin for useful discussions and validation studies. This work is supported by the Department of Energy Office of High Energy Physics QuantISED programme grant no. SC0019219 on Quantum Communication Channels for Fundamental Physics. Furthermore, A.Z. acknowledges support from the Hertz Foundation, the Department of Defense through the National Defense Science and Engineering Graduate Fellowship Program, and Caltech's Intelligent Quantum Networks and Technologies research programme. S.I.D. is partially supported by the Brinson Foundation. Fermilab is operated by Fermi Research Alliance, LLC under contract number DE-AC02-07CH11359 with the United States Department of Energy. We are grateful to A. Kitaev, J. Preskill, L. Susskind, P. Hayden, A. Brown, S. Nezami, J. Maldacena, N. Yao, K. Thorne and D. Gross for insightful discussions and comments that helped us improve the manuscript. We are also grateful to graduate student O. Cerri for the error analysis of the experimental data. M.S. thanks the members of the QCCFP (Quantum Communication Channels for Fundamental Physics) QuantISED Consortium and acknowledges P. Dieterle for the thorough inspection of the manuscript.

Author information
These authors contributed equally: Daniel Jafferis, Alexander Zlokapa.

Affiliations: Center for the Fundamental Laws of Nature, Harvard University, Cambridge, MA, USA (Daniel Jafferis & David K. Kolchmeyer); Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA, USA (Alexander Zlokapa); Division of Physics, Mathematics and Astronomy, Caltech, Pasadena, CA, USA (Alexander Zlokapa, Samantha I. Davis, Nikolai Lauk & Maria Spiropulu); Alliance for Quantum Technologies (AQT), California Institute of Technology, Pasadena, CA, USA (Alexander Zlokapa, Samantha I. Davis, Nikolai Lauk & Maria Spiropulu); Google Quantum AI, Venice, CA, USA (Alexander Zlokapa & Hartmut Neven); Fermilab Quantum Institute and Theoretical Physics Department, Fermi National Accelerator Laboratory, Batavia, IL, USA (Joseph D. Lykken).

Contributions
J.D.L. and D.J. are senior co-principal investigators of the QCCFP Consortium. J.D.L. worked on the conception of the research program, theoretical calculations, computation aspects, simulations and validations. D.J. is one of the inventors of the SYK traversable wormhole protocol. He worked on all theoretical aspects of the research and the validation of the wormhole dynamics.
Graduate student D.K.K. [47] worked on theoretical aspects and calculations of the chord diagrams. Graduate student S.I.D. worked on computation and simulation aspects. Graduate student A.Z. [48] worked on all theory and computation aspects, the learning methods that solved the sparsification challenge, the coding of the protocol on the Sycamore and the coordination with the Google Quantum AI team. Postdoctoral scholar N.L. worked on the working group coordination aspects, meetings and workshops, and follow-up on all outstanding challenges. Google's VP Engineering, Quantum AI, H.N. coordinated project resources on behalf of the Google Quantum AI team. M.S. is the lead principal investigator of the QCCFP Consortium Project. She conceived and proposed the on-chip traversable wormhole research program in 2018, assembled the group with the appropriate areas of expertise and worked on all aspects of the research and the manuscript together with all authors.

Corresponding author: Maria Spiropulu.

Competing interests: The authors declare no competing interests.

Peer review: Nature thanks the anonymous reviewers for their contribution to the peer review of this work.

Supplementary information: Supplementary Sections 1–7, including Figs. 1–36 and References; see the Contents for details.

Cite this article: Jafferis, D., Zlokapa, A., Lykken, J.D. et al. Traversable wormhole dynamics on a quantum processor. Nature 612, 51–55 (2022). https://doi.org/10.1038/s41586-022-05424-3. Received: 22 February 2022; Accepted: 07 October 2022; Published: 30 November 2022; Issue Date: 01 December 2022.
Physics
NASA's unmanned Artemis mission to the moon last month represented a small step toward the eventual dream of getting people to Mars and beyond, a goal that will require a giant leap in finding ways to settle and exploit the resources of Earth's lone satellite. In two years, the Artemis project — joined by more than a dozen countries, including Israel — will fly astronauts around the moon, and if all goes according to plan, 2025 will see the first crewed lunar landing since Apollo 17 in 1972. By the middle of the next decade, the US National Aeronautics and Space Administration plans to populate its first permanent base camp for rotating research teams. To make this possible, a key challenge will be mining and separating the metals and oxygen bound together in the stony deposits called regolith that cover the lunar surface, and generating the energy to power that process. NASA and the US Energy Department are working to advance space nuclear technologies, according to the agency's website. It notes that "fission systems are reliable and could enable continuous power regardless of location, available sunlight, and other natural environmental conditions. A demonstration of such systems on the moon would pave the way for long-duration missions on the Moon and Mars." As an alternative, a US-born Israeli academic has designed a conceptual plan to rig the moon with solar panels. Emeritus Professor Jeffrey Gordon of Ben-Gurion University's Solar Energy and Environmental Physics Department has calculated that this would require one-sixth the mass of the best nuclear option to provide the same amount of electricity. He says that his proposal would provide an uninterrupted electricity supply for oxygen-producing facilities 100% of the time, with a sufficient number of panels always exposed to the sun. Gordon published his vision in the academic journal Renewable Energy earlier this year and was subsequently invited to lecture at NASA's John H. Glenn Research Center in Cleveland, Ohio. "We discussed it and it was stimulating," Gordon said, explaining that solar researchers at the Glenn campus were competing with other scientists pushing for a nuclear solution. "NASA wants a reliable, long-lifetime, minimum mass system," he said. "Reliability comes even before cost." At the initial stage of human colonization, only small amounts of energy will be needed, and NASA has already selected six companies to come up with proposals, three based on solar energy and three employing nuclear fission. With the longer term in mind, though, NASA will need larger amounts of energy to extract water — which exists on the moon in various states — and mine the lunar surface for metals to be used for lunar construction, and separate those metals from oxygen, which makes up around 45% of the stony deposits. Gordon's research started when he was approached a couple of years ago by an Israeli startup, The Helios Project, which is designing a lunar oxygen-producing reactor with a technology that requires very high temperatures. A joint approach for funding from the Israeli Innovation Authority didn't bear fruit, and the partnership stopped — but not before Gordon had penned his conceptual plan for a belt of solar panels on the moon. Emeritus Professor Jeffrey Gordon in his laboratory at Ben Gurion University's Solar Energy and Environmental Physics Department. (Courtesy)
Oxygen extracted from the lunar regolith will serve human needs but be used mainly to fuel and refuel rockets and orbiting satellites. Today, rockets must be loaded with enough liquid oxygen and hydrogen to provide them with the propulsion to get to space and back to Earth. With around $1 million currently needed for every kilogram of payload, costs could be slashed were it possible to provide oxygen at lunar filling stations. NASA's new moon rocket lifts off from Launch Pad 39B at the Kennedy Space Center in Cape Canaveral, Florida, November 16, 2022. This launch is the first flight test of the Artemis program. (AP Photo/Terry Renna) Before he began, Gordon reviewed three options, one of which was nuclear, though as an expert on solar energy he was looking to develop a solar alternative. The benchmark was producing energy around the clock. The two solar options — generating solar energy while the sun shone and storing it in batteries during periods of darkness, or building twice as many solar plants as needed and operating each one only half of the time — turned out to be prohibitively expensive. "I developed a concept and performed all the quantitative estimates that an engineering staff at a space agency would want to review," Gordon explained. His plan would see a ring of solar panels installed close to one of the lunar poles; he used the north pole for illustration. They would be located no lower (or higher, in the case of the south pole) than the 88th parallel of latitude, to balance the advantage of a relatively short lunar circumference in these regions against the need to make sure that the shortest daylight periods still satisfied the power demands. The oxygen-making factories would be located around 10 kilometers (six miles) closer to the pole. This would maintain a sufficient distance to prevent lunar dust generated during mining from covering the photovoltaic panels, but still keep transmission lines relatively short. The transmission lines themselves would not require any insulation, Gordon pointed out, because the lunar ground provides natural electrical insulation.
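The advantage of hugging the pole is easy to quantify. A back-of-the-envelope sketch (assuming a mean lunar radius of about 1,737 km, a figure the article does not state) of the ring length implied by Gordon's 88th-parallel choice:

```python
import math

LUNAR_RADIUS_KM = 1737.4  # mean lunar radius (assumed; not stated in the article)

def parallel_length_km(latitude_deg: float) -> float:
    """Circumference of a circle of constant latitude on a sphere."""
    return 2 * math.pi * LUNAR_RADIUS_KM * math.cos(math.radians(latitude_deg))

# The 88th parallel is dramatically shorter than the equator, which is why a
# near-polar ring keeps the mass of panels and transmission lines down.
print(f"Equator: {parallel_length_km(0):,.0f} km")         # ~10,916 km
print(f"88th parallel: {parallel_length_km(88):,.0f} km")  # ~381 km
```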
Experiments testing the strength of photovoltaic panels in the face of cosmic radiation looked promising, Gordon added. "PV should be able to survive cosmic radiation long enough to satisfy what's needed," he said. But the greatest concern — and the one preoccupying NASA — was how to sufficiently protect humans operating the oxygen factories and carrying out other tasks. "There's no answer to that yet," he said. Gordon said he had "no opinion" about potential risks of building nuclear reactors on the moon, and noted that nuclear fuel could easily last for 100,000 years, although turbines and generators would degrade within decades. Dealing with the nuclear waste was a "good question," he conceded, adding, "There would be nuclear pollution." He went on, "At this time, my impression is that NASA is planning nuclear reactors on the moon in the long term and that the solar people are trying to persuade them otherwise, or at least to have the two technologies." His own plan was still "on the far horizon." NASA did not provide any response by press time. A prototype lunar oxygen production unit that works under vacuum conditions in the laboratory of The Helios Project. (Courtesy, The Helios Project) The Helios Project, which last year signed a memorandum of understanding to cooperate with Ispace Inc of Japan, hopes to fly a small prototype of its oxygen factory to the moon in 2025. The plan is to produce a few dozen grams of oxygen to show proof of concept, according to Jonathan Geifman, Helios co-founder and CEO. To do this, a battery will probably be used. Jonathan Geifman, Helios co-founder and CEO. (Courtesy) Geifman said the eventual aim was to produce 1,000 tons of oxygen per month — enough to refuel the SpaceX Starship. Powered by liquid oxygen and liquid methane, the Starship would be "the main workhorse for all activity in the near future." Israel launched its own lunar landing capsule, Beresheet, in 2019. Due to a technical glitch, the vessel crashed on landing. Earlier this year, former Israeli fighter pilot Eytan Stibbe became the second Israeli in space, paying the privately owned Axiom Space to join three others in flying to the International Space Station. Israel's first astronaut, Ilan Ramon, was killed in 2003 when the space shuttle Columbia disintegrated while reentering the atmosphere, killing all seven crew members on board.
Physics
Posted on 11 January 2023

Researchers have uncovered how some bacteria use electrical spikes to overcome antibacterial drugs, potentially leading to 'superbugs' that are resistant to antibiotics. The study, led by a team at the University of York and Peking University, reveals how bacteria – many of which cause debilitating diseases – exhibit short-lived electrical spikes very similar to those found in nerve cells, and use these to help evade the killing effects of antibiotics. The team of scientists created new types of indicator dyes that could be directly spliced into the genetic code of bacteria, whose fluorescence could then be used to measure the electrical voltage across the membranes of individual cells. Antimicrobial resistance, or AMR, is one of the world's most urgent health problems, killing over one million people worldwide each year. It occurs because germs such as bacteria and fungi are becoming increasingly resistant to the antibiotic drugs designed to kill them.

Drug tolerance
It is still not clear exactly how different germs become tolerant to these drugs, so efforts to understand how this occurs are important in paving the way to new approaches to eradicating germs. The research is an important step forward in understanding how actively growing bacteria exhibit transient electrical spikes across their cell membranes, and how these spikes are associated with an increased ability to survive the killing effects of antibiotics, the authors of the study say.

Electrical voltage
Co-lead author of the study, Professor Mark Leake, from the Physics of Life group at the University of York, said: "Our study suggests that when bacteria are actively growing, such as during an infection, they exhibit short-lived spikes in the electrical voltage across their cell membranes. These spikes look remarkably similar to those seen in nerve cells during sensory stimulation. Their size and frequency can be 'tuned' by changing the mixture of chemical ions surrounding the cells in a way that suggests that tiny channels in the cell membrane are dynamically opening and closing. We find that cells which have larger and more frequent spikes can literally spit out antibiotics via these channels before they have a chance to kill the cell."

Infectious colonies
The study may solve the puzzle of how some bacteria known as 'persisters' can in effect resuscitate themselves after a treatment of antibiotics is stopped and go on to grow new infectious colonies. The new fluorescent dyes act as high-precision voltage sensors inserted directly into the bacteria's genetic code. Using laser fluorescence microscopy on these cells allowed the team to observe these voltage spikes directly for the first time on individual cells.

Superbugs
Professor Leake said: "New scientific studies, such as ours, that help us to understand at the scale of single cells how electrical signals can be used by bacteria to help them survive antibiotics may help pave the way to completely new treatments that focus on disrupting the bacteria's 'electrical circuitry' to combat the emerging global threat of infections from superbugs."

The research, in collaboration with National Central University, Taiwan, and Peking University, China, is published in the journal Proceedings of the National Academy of Sciences.
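The paper itself should be consulted for the actual biophysics, but the idea that a membrane voltage can be "tuned" by the surrounding ion mixture is conventionally captured by the Goldman-Hodgkin-Katz equation. A rough sketch with purely illustrative permeabilities and concentrations (not values from the study):

```python
import math

def ghk_voltage_mV(P_K, P_Na, P_Cl, K_out, K_in, Na_out, Na_in, Cl_out, Cl_in,
                   temperature_K=310.0):
    """Goldman-Hodgkin-Katz membrane potential in millivolts.

    P_*: relative permeabilities; concentrations in mM. The chloride terms
    are inverted because Cl- carries negative charge.
    """
    RT_F = 8.314 * temperature_K / 96485.0  # volts
    num = P_K * K_out + P_Na * Na_out + P_Cl * Cl_in
    den = P_K * K_in + P_Na * Na_in + P_Cl * Cl_out
    return 1000.0 * RT_F * math.log(num / den)

# Illustrative values only: raising external K+ depolarizes the membrane,
# the kind of ion-mixture dependence the quoted experiments exploit.
print(ghk_voltage_mV(1.0, 0.05, 0.45, K_out=5, K_in=140, Na_out=145, Na_in=10,
                     Cl_out=110, Cl_in=10))   # ~ -65 mV
print(ghk_voltage_mV(1.0, 0.05, 0.45, K_out=50, K_in=140, Na_out=145, Na_in=10,
                     Cl_out=110, Cl_in=10))   # ~ -30 mV (depolarized)
```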
Physics
NASA's Double Asteroid Redirection Test (DART) spacecraft prior to impact at the Didymos binary asteroid system, shown in this undated illustration handout. NASA/Johns Hopkins/Handout via REUTERS

Oct 11 (Reuters) - The spacecraft that NASA deliberately crashed into an asteroid last month succeeded in nudging the rocky moonlet out of its natural orbit - the first time humanity has altered the motion of a celestial body - NASA officials announced on Tuesday. The $330 million proof-of-concept mission, which was seven years in development, also marked the world's first test of a planetary defense system designed to prevent a potential doomsday meteorite collision with Earth. Findings of telescope observations unveiled at a NASA news briefing in Washington confirmed that the suicide test flight of the DART spacecraft on Sept. 26 achieved its primary objective: changing the direction of an asteroid through sheer kinetic force. Astronomical measurements over the past two weeks showed that the target asteroid was bumped slightly closer to the larger parent asteroid it orbits in space and that its orbital period was shortened by 32 minutes, NASA scientists said. "This is a watershed moment for planetary defense and a watershed moment for humanity," NASA chief Bill Nelson told reporters in announcing the results. "It felt like a movie plot, but this was not Hollywood." Last month's impact, 6.8 million miles (10.9 million km) from Earth, was monitored in real time from the mission operations center at the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Maryland, where the spacecraft was designed and built for NASA. The celestial target of the DART flight was an egg-shaped asteroid named Dimorphos, roughly the size of a football stadium, that was orbiting a parent asteroid about five times bigger called Didymos once every 11 hours, 55 minutes. The aim was to fly the DART impactor vehicle - no bigger than a refrigerator - directly into Dimorphos at about 14,000 miles per hour (22,531 kph), creating enough force to shift the moonlet's orbital track closer to its larger companion. Comparison of pre- and post-impact measurements of the Dimorphos-Didymos pair showed the orbital period was shortened to 11 hours, 23 minutes.

POSSIBLE WOBBLE
Tom Statler, DART program scientist for NASA, said the collision also left Dimorphos "wobbling a bit," but additional observations would be necessary to confirm that. The outcome "demonstrated we are capable of deflecting a potentially hazardous asteroid of this size," if it were discovered well enough in advance, said Lori Glaze, director of NASA's planetary science division. "The key is early detection." Neither of the two asteroids involved, nor DART itself - short for Double Asteroid Redirection Test - posed any actual threat to Earth, NASA scientists said. But Nancy Chabot, DART's coordination lead at APL, said Dimorphos "is a size of asteroid that is a priority for planetary defense." A Dimorphos-sized asteroid, while not capable of posing a planet-wide threat, could level a major city with a direct hit. Scientists had predicted the DART impact would shorten Dimorphos' orbital path by at least 10 minutes but would have considered a change as small as 73 seconds a success.
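The headline figures are easy to check against one another; a quick sketch using only numbers reported in this story:

```python
# Orbital periods reported by NASA, converted to minutes
pre_impact_min = 11 * 60 + 55   # 715 minutes (11 h 55 min)
post_impact_min = 11 * 60 + 23  # 683 minutes (11 h 23 min)

change_min = pre_impact_min - post_impact_min
print(f"Orbital period shortened by {change_min} minutes")  # 32

# The success threshold was 73 seconds; the prediction was at least 10 minutes.
print(f"Margin over success threshold: {change_min * 60 / 73:.0f}x")  # ~26x
print(f"Margin over prediction: {change_min / 10:.1f}x")              # 3.2x
```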
So the actual change of more than a half hour exceeded expectations. Launched by a SpaceX rocket in November 2021, DART made most of its voyage under the guidance of flight directors on the ground, with control handed over to the craft's autonomous on-board navigation system in the final hours of the journey. Dimorphos and Didymos are both tiny compared with the cataclysmic Chicxulub asteroid that struck Earth some 66 million years ago, wiping out about three-quarters of the world's plant and animal species, including the dinosaurs. Smaller asteroids are far more common and present a greater theoretical concern in the near term, making the Didymos pair suitable test subjects for their size, according to NASA scientists and planetary defense experts. Also, the two asteroids' relative proximity to Earth and dual configuration made them ideal for the DART mission. The Dimorphos moonlet is one of the smallest astronomical objects to receive a permanent name and is one of 27,500 known near-Earth asteroids of all sizes tracked by NASA. Although none are known to pose a foreseeable hazard to humankind, NASA estimates that many more asteroids remain undetected in the near-Earth vicinity.

Reporting by Steve Gorman in Los Angeles; editing by Jonathan Oatis and Sandra Maler
Physics
The 2022 Nobel Prize in Physics was awarded equally on Tuesday to Alain Aspect, John F. Clauser and Anton Zeilinger for their experiments with entangled photons and for pioneering quantum information science. The three scientists are from France, the U.S. and Austria, respectively. The prize is awarded each year by the Royal Swedish Academy of Sciences. The 2021 Nobel Prize in Physics was awarded "for groundbreaking contributions to our understanding of complex physical systems" to scientists from Japan, Germany and Italy for their climate discoveries. Syukuro Manabe, 90, and Klaus Hasselmann, 89, were cited for their work in "the physical modeling of Earth's climate, quantifying variability and reliably predicting global warming." The second half of the prize was awarded to Giorgio Parisi, 73, for "the discovery of the interplay of disorder and fluctuations in physical systems from atomic to planetary scales." This is a developing story. More to come...
Physics
CBS News, updated December 13, 2022 / 11:46 AM

The U.S. Department of Energy announced Tuesday a monumental milestone in nuclear fusion research: a "net energy gain" was achieved for the first time in history by scientists from the Lawrence Livermore National Laboratory in California. "Simply put, this is one of the most impressive scientific feats of the 21st century," Jennifer Granholm, U.S. energy secretary, said at a press conference, adding that researchers have been working on this for decades. "It strengthens our national security, and ignition allows us to replicate certain conditions only found in the stars and in the sun," she said. "This milestone moves us one significant step closer to the possibility of zero-carbon, abundant fusion energy powering our society." The impact of the scientists' work will assist U.S. industries nationwide, Granholm said. "Today, we tell the world that America has achieved a significant scientific breakthrough," said Granholm. The hope is that it could be used to develop a clean source of power that would end reliance on fossil fuels. "The day you get more energy out than you put in, the sky's the limit," American astrophysicist Neil deGrasse Tyson told CBS News. Nuclear fusion has been considered the holy grail of energy creation that some say could save humans from extinction. It combines two hydrogen atoms, which then makes helium and a whole lot of energy. It's how stars, like our sun, generate power. "We've known how to fuse atoms and generate energy. We just haven't been able to control it," said deGrasse Tyson, author of "Starry Messenger: Cosmic Perspectives on Civilization." Nuclear fusion technology has been around since the creation of the hydrogen bomb, but using that technology to harness energy has required decades of research. "They took 200 laser beams, some of the most powerful on the planet Earth, converged that energy down to a pellet, a pellet the size of a BB," said Dr. Michio Kaku, a professor of theoretical physics at the City College of New York. "And just remember, fusion power has no nuclear waste to speak of, no meltdowns to worry about." Scientists believe fusion plants would be much safer than today's nuclear fission plants — if the process can be mastered. That's the goal of a multinational, multibillion-dollar project called the International Thermonuclear Experimental Reactor, or ITER, which is under construction in southern France. U.S. Energy Secretary Jennifer Granholm (C) is joined by (L-R) Lawrence Livermore National Laboratories Director Dr. Kim Budil, National Nuclear Security Administration head Jill Hruby, White House Office of Science and Technology Policy Director Dr. Arati Prabhakar and NNSA Deputy Administrator for Defense Programs Dr. Marvin Adams for a news conference at the Department of Energy headquarters to announce a breakthrough in fusion research on Dec. 13, 2022 in Washington, DC. Getty Images. Currently, nuclear power plants use fission, which breaks atoms apart to make energy.
Even though it doesn't burn fossil fuel, meltdowns like Chernobyl and Fukushima are evidence that nuclear fission can still harm humans — and our environment. But now, fusion's moment appears to finally be here. "We're long overdue to have converted something so destructive that finally it could be used for a peaceful purpose in the service of civilization," deGrasse Tyson said. Granholm said scientists have achieved a milestone that will reach far beyond Tuesday's announcement. "This is a landmark achievement for the researchers and staff at the National Ignition Facility who have dedicated their careers to seeing fusion ignition become a reality, and this milestone will undoubtedly spark even more discovery," Granholm said, adding that the breakthrough "will go down in the history books."

Lilia Luciano is an award-winning journalist and CBS News correspondent based in Los Angeles.
Physics
The stunning aftermath of NASA's mission to deliberately smash a spacecraft into an asteroid at 14,000mph has been caught on camera by two of the world's most powerful space telescopes. Hubble and the US space agency's new super space observatory, James Webb, both captured views of the first ever planetary defence experiment, which saw the Double Asteroid Redirection Test (DART) attempt to knock a space rock off course. It was the world's first test of a kinetic impact mitigation technique, using a spacecraft to deflect an asteroid that poses no threat to Earth and modifying the object's orbit. On Monday, at 19:14 ET (00:14 BST Tuesday), DART intentionally crashed into Dimorphos, the asteroid moonlet in the double-asteroid system of Didymos. Hubble's image of the aftermath in visible light is pictured left and Webb's in infrared is shown right; Hubble also snapped the aftermath at 22 minutes, five hours and eight hours after impact.

WHAT IS THE NASA DART MISSION?
DART is the world's first planetary defence test mission. It headed for the small moonlet asteroid Dimorphos, which orbits a larger companion asteroid called Didymos, to intentionally crash into the asteroid and slightly change its orbit. While neither asteroid poses a threat to Earth, DART's kinetic impact will prove that a spacecraft can autonomously navigate to a target asteroid and kinetically impact it. Then, using Earth-based telescopes to measure the effects of the impact on the asteroid system, the mission will enhance modelling and predictive capabilities to help us better prepare for an actual asteroid threat should one ever be discovered.

Both Webb and Hubble simultaneously observed the same celestial target from afar, and NASA has now released timelapse footage and images of the impact. Although this asteroid posed no threat to Earth, the hope is that if the mission is a success – as believed – then it could work as a strategy for defending our planet against future threats from space. 'Webb and Hubble show what we've always known to be true at NASA: We learn more when we work together,' said NASA Administrator Bill Nelson. 'For the first time, Webb and Hubble have simultaneously captured imagery from the same target in the cosmos: an asteroid that was impacted by a spacecraft after a seven-million-mile journey. All of humanity eagerly awaits the discoveries to come from Webb, Hubble, and our ground-based telescopes – about the DART mission and beyond.' The coordinated Hubble and Webb observations are more than just an operational milestone for each telescope – there are also key science questions relating to the makeup and history of our solar system that researchers can explore when combining the capabilities of these observatories. Observations from Webb and Hubble together will allow scientists to gain knowledge about the nature of the surface of Dimorphos, how much material was ejected by the collision, and how fast it was ejected.
The pair captured the impact in different wavelengths of light — Webb in infrared and Hubble in visible. Observing the impact across a wide array of wavelengths will reveal the distribution of particle sizes in the expanding dust cloud, helping to determine whether it threw off lots of big chunks or mostly fine dust. Combining this information, along with ground-based telescope observations, will help scientists to understand how effectively a kinetic impact can modify an asteroid's orbit. The last complete image of asteroid moonlet Dimorphos, taken by the DRACO imager on the DART mission from 7 miles (12 kilometers) from the asteroid and two seconds before impact. The Double Asteroid Redirection Test was launched last November ahead of a year-long journey to crash into the small asteroid Dimorphos, which orbits a larger one called Didymos. Webb took one observation of the impact location before the collision took place, then several observations over the next few hours. Images from Webb's Near-Infrared Camera (NIRCam) show a tight, compact core, with plumes of material appearing as wisps streaming away from the center of where the impact took place. Observing the impact with Webb presented the flight operations, planning, and science teams with unique challenges, because of the asteroid's speed of travel across the sky. As DART approached its target, the teams performed additional work in the weeks leading up to the impact to enable and test a method of tracking asteroids moving over three times faster than the original speed limit set for Webb. 'I have nothing but tremendous admiration for the Webb Mission Operations folks that made this a reality,' said principal investigator Cristina Thomas of Northern Arizona University in Flagstaff, Arizona. 'We have been planning these observations for years, then in detail for weeks, and I'm tremendously happy this has come to fruition.' Scientists also plan to observe the asteroid system in the coming months using Webb's Mid-Infrared Instrument (MIRI) and Webb's Near-Infrared Spectrograph (NIRSpec). Spectroscopic data will provide researchers with insight into the asteroid's chemical composition. Webb observed the impact over five hours total and captured 10 images. The data was collected as part of Webb's Cycle 1 Guaranteed Time Observation Program 1245 led by Heidi Hammel of the Association of Universities for Research in Astronomy (AURA). Hubble also captured observations of the binary system ahead of the impact, then again 15 minutes after DART hit the surface of Dimorphos. James Webb (pictured) observed the impact over five hours total and captured 10 images. Hubble, a joint project of NASA and the European Space Agency (ESA), has been observing the universe for over 30 years. Images from Hubble's Wide Field Camera 3 show the impact in visible light. Ejecta from the impact appear as rays stretching out from the body of the asteroid. The bolder, fanned-out spike of ejecta to the left of the asteroid is in the general direction from which DART approached. Some of the rays appear to be curved slightly, but astronomers need to take a closer look to determine what this could mean. In the Hubble images, astronomers estimate that the brightness of the system increased by three times after impact, and saw that brightness hold steady, even eight hours after impact. Hubble plans to monitor the Didymos-Dimorphos system 10 more times over the next three weeks.
These regular, relatively long-term observations as the ejecta cloud expands and fades over time will paint a more complete picture of the cloud's expansion from the ejection to its disappearance. 'When I saw the data, I was literally speechless, stunned by the amazing detail of the ejecta that Hubble captured,' said Jian-Yang Li of the Planetary Science Institute in Tucson, Arizona, who led the Hubble observations. 'I feel lucky to witness this moment and be part of the team that made this happen.' Hubble captured 45 images in the time immediately before and following DART's impact with Dimorphos. The Hubble data was collected as part of Cycle 29 General Observers Program 16674. 'This is an unprecedented view of an unprecedented event,' said Andy Rivkin, DART investigation team lead of the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland.

The James Webb Telescope: NASA's $10 billion telescope is designed to detect light from the earliest stars and galaxies
The James Webb telescope has been described as a 'time machine' that could help unravel the secrets of our universe. The telescope will be used to look back to the first galaxies born in the early universe more than 13.5 billion years ago, and observe the sources of stars, exoplanets, and even the moons and planets of our solar system. The vast telescope, which has already cost more than $7 billion (£5 billion), is considered a successor to the orbiting Hubble Space Telescope. The James Webb Telescope and most of its instruments have an operating temperature of roughly 40 Kelvin – about minus 387 Fahrenheit (minus 233 Celsius). It is the world's biggest and most powerful orbital space telescope, capable of peering back to 100-200 million years after the Big Bang. The orbiting infrared observatory is designed to be about 100 times more powerful than its predecessor, the Hubble Space Telescope. NASA likes to think of James Webb as a successor to Hubble rather than a replacement, as the two will work in tandem for a while. The Hubble telescope was launched on April 24, 1990, via the space shuttle Discovery from Kennedy Space Center in Florida. It circles the Earth at a speed of about 17,000mph (27,300kph) in low Earth orbit at about 340 miles in altitude.
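For scale, the threefold brightening Hubble measured corresponds to a change of roughly 1.2 astronomical magnitudes, since magnitudes are logarithmic. A one-line check using the standard conversion (not a calculation from the article):

```python
import math

brightness_ratio = 3.0  # Hubble's post-impact estimate, from the article
delta_mag = -2.5 * math.log10(brightness_ratio)
print(f"Brightness change: {delta_mag:.2f} mag")  # about -1.19 (negative = brighter)
```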
Physics
FILE - This undated image provided by the National Ignition Facility at the Lawrence Livermore National Laboratory shows the NIF Target Bay in Livermore, Calif. The system uses 192 laser beams converging at the center of this giant sphere to make a tiny hydrogen fuel pellet implode. (Damien Jemison/Lawrence Livermore National Laboratory via AP, File)

WASHINGTON (AP) — Energy Secretary Jennifer Granholm was set to announce a "major scientific breakthrough" Tuesday in the decades-long quest to harness fusion, the energy that powers the sun and stars. Researchers at the Lawrence Livermore National Laboratory in California for the first time produced more energy in a fusion reaction than was used to ignite it, something called net energy gain, according to one government official and one scientist familiar with the research. Both spoke on the condition of anonymity because they were not authorized to discuss the breakthrough ahead of the announcement. Granholm was scheduled to appear alongside Livermore researchers at a morning event in Washington. The Department of Energy declined to give details ahead of time. The news was first reported by the Financial Times. Proponents of fusion hope that it could one day produce nearly limitless, carbon-free energy, displacing fossil fuels and other traditional energy sources. Producing energy that powers homes and businesses from fusion is still decades away. But researchers said it was a significant step nonetheless. "It's almost like it's a starting gun going off," said Professor Dennis Whyte, director of the Plasma Science and Fusion Center at the Massachusetts Institute of Technology and a leader in fusion research. "We should be pushing towards making fusion energy systems available to tackle climate change and energy security." Net energy gain has been an elusive goal because fusion happens at such high temperatures and pressures that it is incredibly difficult to control. Fusion works by pressing hydrogen atoms into each other with such force that they combine into helium, releasing enormous amounts of energy and heat. Unlike other nuclear reactions, it doesn't create radioactive waste. Billions of dollars and decades of work have gone into fusion research that has produced exhilarating results — for fractions of a second. Previously, researchers at the National Ignition Facility, the division of Lawrence Livermore where the success took place, used 192 lasers and temperatures multiple times hotter than the center of the sun to create an extremely brief fusion reaction. The lasers focus an enormous amount of heat on a small metal can. The result is a superheated plasma environment where fusion may occur. Riccardo Betti, a professor at the University of Rochester and expert in laser fusion, said an announcement that net energy had been gained in a fusion reaction would be significant. But he said there's a long road ahead before the result generates sustainable electricity. He likened the breakthrough to when humans first learned that refining oil into gasoline and igniting it could produce an explosion. "You still don't have the engine and you still don't have the tires," Betti said.
"You can't say that you have a car." The net energy gain achievement applied to the fusion reaction itself, not the total amount of power it took to operate the lasers and run the project. For fusion to be viable, it will need to produce significantly more power and for longer. It is incredibly difficult to control the physics of stars. Whyte said it has been challenging to reach this point because the fuel has to be hotter than the center of the sun. The fuel does not want to stay hot — it wants to leak out and get cold. Containing it is an incredible challenge, he said. Net energy gain isn't a huge surprise from the California lab because of the progress it had already made, according to Jeremy Chittenden, a professor at Imperial College London specializing in plasma physics. "That doesn't take away from the fact that this is a significant milestone," he said. It takes enormous resources and effort to advance fusion research. One approach turns hydrogen into plasma, an electrically charged gas, which is then controlled by humongous magnets. This method is being explored in France in a collaboration among 35 countries called the International Thermonuclear Experimental Reactor, as well as by researchers at the Massachusetts Institute of Technology and a private company. Last year the teams working on those projects on two continents announced significant advancements in the vital magnets needed for their work.

Mathew Daly reported from Washington. Maddie Burakoff reported from New York, Michael Phillis from St. Louis and Jennifer McDermott from Providence, R.I.
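The distinction between the reaction's gain and the facility's gain can be made concrete with the widely reported figures for this shot: roughly 2.05 megajoules of laser energy delivered to the target, about 3.15 megajoules of fusion yield, and on the order of 300 megajoules drawn from the grid to fire the lasers. These are context numbers, not figures quoted in this article:

```python
laser_energy_MJ = 2.05   # energy delivered to the target (widely reported)
fusion_yield_MJ = 3.15   # fusion energy released (widely reported)
wall_plug_MJ = 300.0     # rough grid energy to charge the lasers (order of magnitude)

target_gain = fusion_yield_MJ / laser_energy_MJ
facility_gain = fusion_yield_MJ / wall_plug_MJ

print(f"Target gain Q = {target_gain:.2f}")    # ~1.54: the 'net energy gain'
print(f"Facility gain = {facility_gain:.3f}")  # ~0.01: far from net electricity
```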
Physics
Researchers in Brazil have achieved a quantum breakthrough, creating a source of illumination that produces two separate entangled beams of light, according to new research. The achievement was announced by a team of physicists with Brazil's Laboratory for Coherent Manipulation of Atoms and Light (LMCAL), located at the University of São Paulo's Physics Institute. Quantum entanglement is among the most perplexing phenomena observed in modern physics. It involves particles that are linked in such a way that when changes affect the quantum state of one, the other to which it is "entangled" will also be affected. Strangely, such effects even occur over significant distances, a phenomenon famously dubbed "spooky action at a distance" following its discussion in a landmark 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen. Entanglement is also one of the best representations of how the quantum mechanical world does not always conform to the predictable concepts of classical mechanics. Despite this apparent paradox, in recent years several studies have made significant headway toward better understanding the phenomenon, and even potentially finding useful applications for it in areas that range from quantum computing to encryption and communication technologies. In their recent study, the Brazilian team relied on the use of an optical parametric oscillator (OPO), a device that generates oscillations at optical frequencies by converting a single laser wave into a pair of output waves of lower frequency. The output wave with the higher frequency is known as the "signal," while the wave with the lower frequency is called the "idler." Hans Marin Florez, one of the coauthors of a recent paper detailing the research, said the OPO his team used was composed of "a non-linear optical response crystal between two mirrors forming an optical cavity," upon which a green laser is fired. "When a bright green beam shines on the apparatus, the crystal-mirror dynamics produce two light beams with quantum correlations," Florez said in a statement. One problem with crystal-based OPO setups is that the wavelength of light they produce makes them incompatible with other quantum information systems. However, in past studies, the Brazilian team managed to demonstrate that atoms could be used as the medium instead of a crystal. In their new research, Florez and his team produced the first known OPO of this kind using rubidium atoms, yielding a pair of beams he says were "intensely quantum correlated," and with them a source that can interact with other varieties of quantum systems. According to Florez, the entangled beams his team was able to produce "could interact with other systems with the potential to serve as quantum memory, such as cold atoms." Showing that the beams they produced were indeed entangled was more challenging. To do this, the researchers said they repeated the experiment with the addition of elements that helped them assess quantum correlations in the fields generated during subsequent attempts, using detection techniques that they say succeeded in demonstrating that entanglement had been achieved.
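The article does not spell out how entanglement between two bright beams is certified, but a standard witness in this continuous-variable setting is the Duan criterion: for joint quadratures of the two modes, any separable state obeys Var(X1 − X2) + Var(P1 + P2) ≥ 2 in vacuum units, so a measured sum below 2 certifies entanglement. A schematic check for an idealized two-mode squeezed state (an illustration of the criterion, not the paper's actual analysis):

```python
import math

def duan_sum(squeezing_r: float) -> float:
    """Var(X1 - X2) + Var(P1 + P2) for an ideal two-mode squeezed state.

    In units where each quadrature's vacuum variance is 1/2, both joint
    variances equal exp(-2r), and the separability bound is 2.
    """
    return 2.0 * math.exp(-2.0 * squeezing_r)

for r in (0.0, 0.35, 0.7):
    s = duan_sum(r)
    verdict = "entangled" if s < 2.0 else "no violation"
    print(f"r = {r:.2f}: Duan sum = {s:.2f} -> {verdict}")
```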
According to Florez, "the detection technique enabled us to observe that the entanglement structure was richer than would typically be characterized," adding that an entire system "comprising four entangled spectral bands" was attained, rather than being limited to a single pair of spectral bands. Significantly, the entanglement occurred between the phases and amplitudes of the waves produced by the OPO, meaning that the results may have applications in fields that include the transmission of quantum-coded data, or detailed scientific measurements like those achieved by atomic magnetometers in studies of alpha waves produced by the human brain. Florez and his team published the results of their research in a paper, "Continuous Variable Entanglement in an Optical Parametric Oscillator Based on a Nondegenerate Four Wave Mixing Process in Hot Alkali Atoms," which appeared in Physical Review Letters.
Physics
We've probably all heard the phrase "you can't make something from nothing." But in reality, the physics of our universe isn't that cut and dried. In fact, scientists have spent decades trying to force matter from absolutely nothing. And now, they've managed to prove that a theory first shared 70 years ago was correct, and we really can create matter from nothing. The universe is governed by several conservation laws. These laws constrain energy, charge, momentum, and so on down the list. In the quest to fully understand these laws, scientists have spent decades trying to figure out how to create matter, a feat that is far more complex than it even sounds. We've previously turned matter invisible, but creating it out of nothing is another thing altogether. There are many theories on how to create matter from nothing, especially as quantum physicists have tried to better understand the Big Bang and what could have caused it. We know that colliding two particles in empty space can sometimes cause additional particles to emerge. There are even theories that a strong enough electromagnetic field could create matter and antimatter out of nothing itself. Scientists have long tried to understand how the Big Bang created the universe out of nothing. Image source: NASA. But managing to do any of these things has always seemed impossible. Still, that hasn't stopped scientists from trying, and now, that research seems to have paid off. As Big Think reports, in early 2022, a group of researchers created strong enough electric fields in their laboratory to leverage the unique properties of a material known as graphene. With these fields, the researchers were able to enable the spontaneous creation of particle-antiparticle pairs from nothing at all. This proved that creating matter from nothing is indeed possible, confirming a theory first proposed by Julian Schwinger, one of the founders of quantum field theory. And with that knowledge, we can hopefully better understand how the universe makes something from nothing.
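The catch in Schwinger's original prediction is the field strength required: spontaneous pair creation only becomes appreciable near the Schwinger limit, E_c = m_e²c³/(eħ). A quick evaluation with standard constants shows why the effect was never seen with ordinary electrons in vacuum, and why graphene, whose charge carriers behave like massless particles, reportedly offers an accessible analogue:

```python
# Schwinger critical field E_c = m_e^2 * c^3 / (e * hbar)
m_e = 9.1093837e-31     # electron mass, kg
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # elementary charge, C
hbar = 1.054571817e-34  # reduced Planck constant, J*s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field: {E_c:.2e} V/m")  # ~1.3e18 V/m, far beyond any lab laser
```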
Physics
Bright spark: a study finds that the "disruptiveness" fostered by new ideas has plummeted in science in recent years (Courtesy: iStock/Gajus)

Big jumps forward in science often come in the form of surprising discoveries that break with preceding work, but this kind of disruptive activity is declining over time. That is according to a new analysis, which suggests that creativity is being stymied by the "publish or perish" culture in science and by the sheer amount of work needed to reach the frontier of knowledge. Led by sociologist Russell Funk from the University of Minnesota, the study analysed 25 million papers from the Web of Science database that were published between 1945 and 2010. His team also examined 3.9 million patents published between 1976 and 2010. Each work was assigned a score indicating whether it was "consolidating" – in other words, refining existing work and filling in the details – or more disruptive, such as introducing something unexpected that propels a field in new directions. Citation records were used to quantify the disruptiveness of a piece of work. A disruptive study is likely to make prior research obsolete, so subsequent papers are less likely to cite the earlier research. If a work is consolidating, on the other hand, then later papers citing it are also likely to cite the sources that the work itself cites. With scores ranging from -1 (maximum consolidation) to +1 (maximum disruption), the researchers found that the average scores across all fields of science have plummeted in recent years. The physical sciences saw the largest percentage decline, with the average score dropping from 0.36 in 1945 to essentially zero today.

A long way to go
This slowing innovation has led to speculation that the "low-hanging fruit" in science has already been taken and that scientific theories are getting closer to being "correct". But the authors are not convinced this can account for the steep decline seen in their work. Funk emphasizes, for example, that there are many phenomena that science has yet to get to grips with, such as the emergence of consciousness. Another possible explanation for the drop in innovation is the greater mountain of accumulated knowledge, which means that researchers spend longer training to reach the boundary of their field, with their expertise tending to be deeper and narrower. This could inhibit innovation, the researchers say, as innovation often springs from making cross-field connections. The authors also note that the "publish or perish" culture in science could be deterring scholars from embarking on riskier, longer-term projects. Academic institutes and funders, they suggest, could tackle this by giving researchers more time to expand their breadth of knowledge, rather than focusing on the quantity of publications. "A healthy ecosystem of science and technology is likely to require a balance of different types of contributions," Funk told Physics World. "The dramatic declines that we observe in disruptive work suggest that this balance may be off, and that encouraging more disruptive work could help to push scientific understanding forward."
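The citation-based score described above is known in the bibliometrics literature as the CD index, introduced by Funk and Owen-Smith. A minimal sketch of the computation on toy data (schematic, not the study's actual pipeline):

```python
def cd_index(focal_refs: set, citing_papers: list) -> float:
    """CD-style disruption score in [-1, 1].

    focal_refs: the papers the focal work cites.
    citing_papers: for each later paper citing the focal work or its
    references, the set of works it cites ('FOCAL' marks the focal paper).
    """
    n_i = n_j = n_k = 0
    for refs in citing_papers:
        cites_focal = "FOCAL" in refs
        cites_predecessors = bool(refs & focal_refs)
        if cites_focal and not cites_predecessors:
            n_i += 1  # cites only the focal paper: disruptive signal
        elif cites_focal and cites_predecessors:
            n_j += 1  # cites focal work and its sources: consolidating signal
        elif cites_predecessors:
            n_k += 1  # bypasses the focal work but cites its sources
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# Toy example: three later papers cite the focal work alone, one cites it
# together with a reference, and one ignores it entirely.
focal_refs = {"A", "B"}
later = [{"FOCAL"}, {"FOCAL"}, {"FOCAL"}, {"FOCAL", "A"}, {"B"}]
print(cd_index(focal_refs, later))  # (3 - 1) / 5 = 0.4
```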
Physics
By Matt Williams

Since the "Golden Age of General Relativity" in the 1960s, scientists have held that much of the Universe consists of a mysterious invisible mass known as "Dark Matter". Since then, scientists have attempted to resolve this mystery with a double-pronged approach. On the one hand, astrophysicists have attempted to find a candidate particle that could account for this mass. On the other, they have tried to find a theoretical basis that could explain Dark Matter's behavior. So far, the debate has centered on the question of whether it is "hot" or "cold", with cold enjoying an edge because of its relative simplicity. However, a new study led by the Harvard-Smithsonian Center for Astrophysics (CfA) revisits the idea that Dark Matter might actually be "warm". This was based on cosmological simulations of galaxy formation using a model of a Universe that included interactive Dark Matter. The simulations were conducted by an international team of researchers from the CfA, MIT's Kavli Institute for Astrophysics and Space Research, the Leibniz Institute for Astrophysics Potsdam, and multiple universities. The study recently appeared in the Monthly Notices of the Royal Astronomical Society. When it comes right down to it, Dark Matter is appropriately named. For starters, it makes up about 84% of the mass of the Universe but neither emits, absorbs nor reflects light or any other known form of radiation. Second, it has no electromagnetic charge and does not interact with other matter except through gravity, the weakest of the four fundamental forces. Third, it is not composed of atoms or their usual building blocks (i.e. electrons, protons, and neutrons), which contributes to its mysterious nature. As a result, scientists theorize that it must be made up of some new kind of matter that is consistent with the laws of the Universe but does not show up in conventional particle physics research. Regardless of its true nature, Dark Matter has had a profound influence on the evolution of the cosmos from about 1 billion years after the Big Bang onward. In fact, it is believed to have played a key role in everything from the formation of galaxies to the distribution of the Cosmic Microwave Background (CMB) radiation. What's more, cosmological models that take into account the role played by Dark Matter are backed up by observations of these two very different types of cosmic structures. Also, they are consistent with cosmic parameters like the rate at which the Universe is expanding, which is itself influenced by a mysterious, invisible force (known as "Dark Energy"). Currently, the most widely accepted models of Dark Matter presume that it does not interact with any other kinds of matter or radiation (including itself) beyond the influence of gravity – i.e. that it is "cold". This is what is known as the Cold Dark Matter (CDM) scenario, which is often combined with the theory of Dark Energy (represented by Lambda) in the form of the LCDM cosmological model. This theoretical form of Dark Matter is also referred to as non-interactive, since it is incapable of interacting with normal matter through anything other than the weakest of the fundamental forces. As Dr. Sownak Bose, an astronomer with the CfA and the lead author on the study, explained to Universe Today via email: "[CDM] is the most well-tested and the preferred model.
This is primarily because over the last four decades or so, people have been working hard to make predictions using cold Dark Matter as the standard paradigm – these are then compared to real data – with the finding that, in general, this model is able to reproduce a wide range of observed phenomena across a wide range of scales.” As he describes it, the cold Dark Matter scenario became the front-runner after numerical simulations of cosmic evolution were conducted using “hot Dark Matter” – in this case, the neutrino. These are subatomic particles that are very similar to an electron, but have no electrical charge. They are also so light that they travel throughout the Universe at nearly the speed of light (in other words, they are kinematically ‘hot’). “These simulations showed that the predicted distributions looked nothing like the Universe does today,” Bose added. “For that reason, the opposite limit began to be considered: particles that have barely any velocity when they are born (aka. ‘cold’). Simulations that included this candidate fit modern observations of the Universe much more closely. After performing the same galaxy clustering tests as before, astronomers found a startling agreement between the simulated and observed universes. In the subsequent decades, the cold particle has been tested through more rigorous, non-trivial tests than simply galaxy clustering, and it has generally passed each of these with flying colors.” Another source of appeal is the fact that cold Dark Matter (at least theoretically) ought to be detectable either directly or indirectly. However, this is where CDM runs into trouble, since all attempts at detecting a single particle so far have failed. As such, cosmologists have begun to consider other possible candidates that would have even smaller levels of interaction with other matter. This is what Dr. Bose and his team of researchers sought to determine. For the sake of their study, they focused on a “warm” Dark Matter candidate. This type of particle would have the ability to subtly interact with very light particles that move close to the speed of light, though less so than the more interactive “hot” variety. In particular, it could be capable of interacting with neutrinos, the former front-runner of the HDM scenario. Neutrinos are thought to have been very prevalent during the hot early Universe, so the presence of interacting Dark Matter would have had a strong influence. [Image: Visible light (left) and infrared (right) views of the Whirlpool Galaxy, taken by NASA’s Hubble Space Telescope. Credits: NASA/ESA/M. Regan & B. Whitmore (STScI), R. Chandar (U. Toledo), S. Beckwith (STScI), and the Hubble Heritage Team (STScI/AURA)] “In this class of models, the Dark Matter particle is allowed to have a finite (but weak) interaction with a radiative species like photons or neutrinos,” said Dr. Bose. “This coupling leaves a rather unique imprint in the ‘lumpiness’ of the Universe at early times, which is quite a lot different to what might be expected if the Dark Matter was a cold particle.” To test this, the team ran state-of-the-art cosmological simulations in the supercomputing facilities at Harvard and the University of Iceland. These simulations considered how galaxy formation would be affected by the presence of both warm and cold Dark Matter from about 1 billion years after the Big Bang to 14 billion years (roughly the present). As Dr.
Bose indicated: “[W]e ran computer simulations to generate realizations of what this Universe might look like after 14 billion years of evolution. In addition to modeling the Dark Matter component, we also included state-of-the-art prescriptions for star formation, the effects of supernovae and black holes, the formation of metals, etc.” The team then compared the results to each other to identify characteristic signatures that would distinguish one from the other. What they found was that for many of the simulations, the effects of this interactive Dark Matter were too small to be noticeable. However, they were present in some distinct ways, particularly in the way that distant galaxies are distributed throughout space. This observation is especially interesting because it can be tested in the future using next-generation instruments. “The way to do this is to map the lumpiness of the Universe at these early times by looking at the distribution of hydrogen gas,” Dr. Bose explained. “Observationally, this is a well-established technique: we can probe neutral hydrogen in the early universe by looking at the spectra of distant galaxies (usually quasars).” In short, light traveling to us from distant galaxies has to pass through the intergalactic medium. If there is a lot of neutral hydrogen in the intervening medium, the emission lines from the galaxy will be partially absorbed, whereas they will be unimpeded if there is little. If Dark Matter is truly cold, it will show up in the form of a much “lumpier” distribution of hydrogen gas, whereas a WDM scenario will result in oscillating lumps. Currently, astronomical instruments do not have the necessary resolution to measure hydrogen gas oscillations in the early Universe. But as Dr. Bose indicated, this research could provide impetus for new experiments and new facilities that would be capable of making these observations. For instance, an IR instrument like the James Webb Space Telescope (JWST) could be used to create new maps of the distribution of hydrogen gas absorption. These maps would be able to either confirm the influence of interactive Dark Matter or rule it out as a candidate. It is also hoped that this research will inspire people to think of candidates beyond those that have already been considered. In the end, Dr. Bose said, the real value comes from the fact that these kinds of theoretical predictions can spur observations into new frontiers and test the limits of what we think we know. “And that’s all that science is really,” he added, “making a prediction, proposing a method for testing it, performing the experiment and then constraining/ruling out the theory!” Source: Universe Today - Further Reading: CfA, MNRAS
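The absorption technique Bose describes rests on one line of standard radiative-transfer arithmetic: the transmitted flux falls off exponentially with the optical depth of neutral hydrogen along the sightline, F = F0·e^(−τ), so “lumpier” gas shows up as deeper, more clustered absorption troughs in a quasar spectrum. A minimal sketch (the τ values below are invented purely to illustrate the trend):

```python
import math

# Transmitted flux through the intergalactic medium: F = F0 * exp(-tau),
# where tau is the optical depth of neutral hydrogen along the sightline.
def transmitted_fraction(tau):
    return math.exp(-tau)

for tau in (0.1, 1.0, 3.0):
    print(f"tau = {tau:>3}: {transmitted_fraction(tau):.1%} of the light gets through")
# tau = 0.1: 90.5% ... a nearly empty sightline leaves lines almost unimpeded
# tau = 1.0: 36.8% ... moderate absorption
# tau = 3.0:  5.0% ... a dense clump of neutral hydrogen nearly blanks the line
```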
Physics
In the quest to measure the fundamental constant that governs the strength of gravity, scientists are getting a wiggle on. Using a pair of meter-long, vibrating metal beams, researchers have made a new measurement of “Big G,” also known as Newton’s gravitational constant, they report July 11 in Nature Physics. The technique could help physicists get a better handle on the poorly measured constant. Big G is notoriously difficult to determine (SN: 9/12/13). Previous estimates of the constant disagree with one another, leaving scientists in a muddle over its true value. It is the least precisely known of the fundamental constants, a group of numbers that commonly show up in equations, making them a prime target for precise measurements. Because the vibrating beam test is a new type of experiment, “it might help to understand what’s really going on,” says engineer and physicist Jürg Dual of ETH Zurich. The researchers repeatedly bent one of the beams back and forth and used lasers to measure how the second beam responded to the first beam’s varying gravitational pull. To help maintain a stable temperature and avoid external vibrations that could stymie the experiment, the researchers performed their work 80 meters underground, in what was once a military fortress in the Swiss Alps. Big G, according to the new measurement, is approximately 6.82 × 10^-11 meters cubed per kilogram per second squared. But the estimate has an uncertainty of about 1.6 percent, which is large compared with other measurements (SN: 8/29/18). So the number is not yet precise enough to sway the debate over Big G’s value. But the team now plans to improve their measurement, for example by adding a modified version of the test with rotating bars. That might help cut down on Big G’s wiggle room.
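For scale, here is the arithmetic behind “not yet precise enough”. The CODATA 2018 recommended value of G is a fact about the constant rather than something reported in this article, and treating the 1.6 percent as one standard deviation is a simplifying assumption:

```python
# How the new measurement's error bar compares with the consensus value.
# CODATA 2018 recommends G = 6.67430e-11 m^3 kg^-1 s^-2 (relative
# uncertainty ~2.2e-5); the comparison below is illustrative only.
G_new, rel_unc = 6.82e-11, 0.016
G_codata = 6.67430e-11

abs_unc = G_new * rel_unc                    # ~1.1e-12 m^3 kg^-1 s^-2
offset = (G_new - G_codata) / G_codata       # ~2.2% above the CODATA value
print(f"absolute uncertainty: {abs_unc:.2e}")
print(f"offset from CODATA:   {offset:.1%}  (~{offset / rel_unc:.1f} sigma)")
```

The new value sits only about 1.4 of its own (large) standard deviations above the consensus number, which is why it cannot yet discriminate between the competing, far more precise measurements.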
Physics
The far side of the moon has a certain mystique about it. It’s eternally out of view, never facing the Earth—which has earned it a misleading nickname, “the dark side,” as if sunlight never reaches its surface (it does). It’s the section of the moon we’ll never see for ourselves, not unless we hop on a spaceship and fly over there. But the really mysterious parts of the moon aren’t on the far side. They’re at the poles, where the sun always hovers near the horizon. The lighting conditions create special circumstances: Hundreds of craters at the north and south poles never, ever receive direct sunlight, and so never feel the warmth of our star. They are, in astronomy parlance, permanently shadowed regions, and they’ve been that way, dark and frigid, for as long as billions of years. Astronauts have experienced the powdery surface of the moon up close, and space probes have mapped nearly every bit of the terrain from above—but none have peered into the depths of those pitch-black craters. With the right tools, astronomers hope, they’ll be able to peek inside and find something spectacular: water. Not flowing water, of course—that’s not possible on the lunar surface—but ice crystals. Scientists believe that water has lurked on the moon for ages, delivered in its early history by comets and asteroids. (The same process is thought to have sprinkled our own planet with water.) The bombardment would have scattered icy particles across the surface, and particles that were exposed to the sun wouldn’t have lasted. But any bits of ice that might have tumbled into permanently shadowed regions would have been left untouched, sparkling in their freezing surroundings ever since. The environment is perfect; some sunless spots are colder than Pluto, says Prasun Mahanti, an Arizona State research scientist. Scientists and engineers have probed permanently shadowed regions with several spacecraft missions over the years, Parvathy Prem, a planetary scientist at the Johns Hopkins Applied Physics Laboratory who studies the moon, told me. They’ve bounced radar waves off the surface to discern whether the hidden landscape was made of icy or rocky material. They’ve done the same with lasers, to get a sense of the hidden topography. In 2009, a spacecraft fired a projectile into the moon’s south pole and then detected, in the resulting plume of excavated material, the distinct signature of H2O. But no mission had targeted permanently shadowed regions with a photographer like ShadowCam, a new NASA camera that started orbiting the moon in December, aboard a Korean spacecraft. Each day, Mariah Heck, a research analyst assistant at Arizona State University, programs the aptly named ShadowCam to capture those sunless places. In the coming years, the team at ASU plans to photograph every known permanently shadowed region on the moon, revealing their interiors for the first time. The instrument has already provided a glimpse inside a crater near the moon’s south pole, which included a curious, previously unknown little furrow in the otherwise smooth soil—the path of a boulder that had rolled down the slope. It’s the kind of picture that even non-astronomers can immediately appreciate. “It’s powerful to finally see permanently shadowed regions at the wavelength that our eyes are sensitive to,” Prem said. How do you illuminate a pitch-black hole on the moon? Not by dangling a flashlight from a spacecraft, as I first imagined.
Like other cameras that have documented the lunar surface, ShadowCam relies on sunlight reflected by landscape features, such as crater walls. “It’s like you’re standing in the shadow of a tree, but you can still see what’s on the ground because of all the light reflecting off of the stuff around you,” Heck told me. ShadowCam is more than 200 times more sensitive than its predecessors, which means it’s better at basking in dim, ambient light to reveal details cloaked in darkness. “We’re seeing the moon in a way no one’s ever seen the moon before,” they said. Scientists weren’t expecting to spot signs of water ice in ShadowCam’s first photo shoot, which covered a region that isn’t cold enough to sustain it. But there are many places left to check. Prem says that scientists are doing laboratory experiments on Earth to determine how much water ice must be present in lunar soil to be visible from space. “The quantities of ice that we’re likely to be able to see at the surface in visible light are likely to be very small,” she said—no skating rinks—but “if there’s enough ice, we should be able to see it.” The camera may even detect signs of other kinds of ice, such as nitrogen, ammonia, and methane. Or it could reveal that there’s no water ice at all. Scientists are hopeful that’s not the case, but “that’s always a possibility,” Mahanti, ShadowCam’s deputy principal investigator, said. “We really do not know what to expect.” NASA is keen on getting close to permanently shadowed regions soon. The agency plans to dispatch a new lunar rover to the south pole next year, and a new generation of moonwalking astronauts later this decade, under the modern-day Artemis program, the successor to Apollo. The Apollo astronauts landed at sites along the moon’s sun-dappled equator, which were deemed easier and safer for their short missions. But the next generation of astronauts will arrive at the south pole. And if they can get their gloved hands on water ice, people could eventually return to the lunar surface with technology designed to extract its oxygen and hydrogen for use in life-support systems and even fuel, which would allow us to take up residence on the moon for weeks or months at a time. The prospect of mining moon water is still closer to sci-fi than reality. For now, missions like ShadowCam will keep exploring from afar, adding texture to scientists’ daydreams about the moon’s most mysterious shadows. To Prem, the thought of illuminating them at last feels almost transcendent, as if we’re craning to experience something that wasn’t meant to be illuminated. “There does seem something kind of hallowed about somewhere that’s been dark and cold and unseen by human eyes for billions of years,” she said.
Physics
A dormant black hole nine times the mass of the Sun has been found outside the Milky Way for the first time, in what researchers have called a “very exciting discovery”. Though not the first contender, a researcher from the University of Sheffield says this black hole is “the first to be unambiguously detected outside our galaxy”. The researchers had been looking for black hole binary systems for more than two years before finding what’s become known as VFTS 243. Paul Crowther, professor of astrophysics at the university, described it as a “very exciting discovery” that arrives after “a number of dormant black hole candidates have been proposed”. Stellar-mass black holes are formed when massive stars reach the end of their lives and collapse under their own gravity. In a system of two stars revolving around each other, this process leaves behind a black hole in orbit with a luminous companion star. This dormant black hole is at least nine times the mass of the Sun, and orbits a hot, blue star 25 times the mass of the Sun. It has been observed in a neighbouring galaxy by a team of international scientists; their study – published in Nature Astronomy – suggests that the star that gave rise to VFTS 243 vanished without any sign of an associated supernova explosion. As part of the international research team, Crowther has been working with Tomer Shenar from the Institute of Physics and Astronomy, who started the study at KU Leuven in Belgium and is now a Marie Curie fellow at Amsterdam University in the Netherlands. Confirming the likelihood of what he termed a “direct-collapse scenario”, i.e. a collapse without an explosion, Shenar believes this has “enormous implications for the origin of black hole mergers in the cosmos”. A black hole is considered dormant if it does not emit high levels of X-ray radiation, which is how such black holes are typically detected. Dormant black holes are hard to spot as they do not interact much with their surroundings. VFTS 243 was found using six years of observations of the Tarantula Nebula by the Fibre Large Array Multi Element Spectrograph (FLAMES) instrument on the European Southern Observatory’s Very Large Telescope.
Physics
While the most obvious application would be to scan for bombs and other dangerous items and substances at airports, the findings, described in Nature Communications today, could also help detect cracks and rust in buildings, and eventually the technique could be used to identify early-stage tumors. The team of researchers, from UCL in London, hid small quantities of explosives, including Semtex and C4, inside electrical items such as laptops, hair dryers, and mobile phones. The items were placed inside bags with toothbrushes, chargers, and other everyday objects to closely replicate a traveler’s bag. While standard x-ray machines hit objects with a uniform field of x-rays, the team scanned the bags using a custom-built machine containing masks—sheets of metal with holes punched into them, which separate the beams into an array of smaller beamlets. [Image: Scans inside a bag; top is conventional, bottom is microradian scatter. Credit: UCL] As the beamlets passed through the bag and its contents, they were scattered at angles as small as a microradian (around one 20,000th of a degree). The scattering was analyzed by an AI trained to recognize the texture of specific materials from a particular pattern of angle changes. The AI is exceptionally good at picking up these materials even when they’re hidden inside other objects, says lead author Sandro Olivo, from the UCL Department of Medical Physics and Biomedical Engineering. “Even if we hide a small quantity of explosive somewhere, because there will be a little bit of texture in the middle of many other things, the algorithm will find it.” [Image: Conventional method (left) vs the scattering technique (right). Credit: UCL] The algorithm was able to correctly identify explosives in every experiment carried out under test conditions, although the team acknowledged that it would be unrealistic to expect such a high level of accuracy in larger studies that resembled real-world conditions more closely. The technique could also be used in medical applications, particularly cancer screening, the team believes. Although the researchers have yet to test whether the technique can successfully differentiate the texture of a tumor from surrounding healthy breast tissue, for example, Olivo is excited by the possibility of detecting very small tumors that could previously have gone undetected behind a patient’s rib cage. “I’d love to do it one day,” he adds. “If we get a similar hit rate in detecting texture in tumors, the potential for early diagnosis is huge.” “This latest work from the UCL teams presented here looks extremely promising. It combines novel X-ray imaging with AI and has major potential for the extremely challenging tasks of threat detection in hand baggage, and NDT applications such as crack detection,” says Kevin Wells, associate professor at the University of Surrey. “Cancer detection involves its own set of challenges and we look forward to seeing the work progress in this area in due course.”
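The angular scale here is easy to sanity-check: a microradian converts to degrees via the usual radian-to-degree factor, putting it at roughly one seventeen-thousandth of a degree, consistent with the article’s loose “one 20,000th” figure:

```python
import math

# One microradian expressed in degrees: 1e-6 rad * (180 / pi) deg/rad.
microradian_in_degrees = math.degrees(1e-6)
print(f"1 microradian = {microradian_in_degrees:.2e} degrees")        # ~5.73e-05
print(f"1 degree = {1 / microradian_in_degrees:,.0f} microradians")   # ~17,453
```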
Physics
The first planet ever spotted by the Kepler space telescope is falling into its star. Kepler launched in 2009 on a mission to find exoplanets by watching them cross in front of their stars. The first potential planet the telescope spotted was initially dismissed as a false alarm, but in 2019 astronomer Ashley Chontos and colleagues proved it was real (SN: 3/5/19). The planet was officially named Kepler 1658b. Now, Chontos and others have determined Kepler 1658b’s fate. “It is tragically spiraling into its host star,” says Chontos, now at Princeton University. The planet has roughly 2.5 million years left before it faces a fiery death. “It will ultimately end up being engulfed. Death by star.” The roughly Jupiter-sized planet is searingly hot, orbiting its star once every three days. In follow-up observations from 2019 to 2022, the planet kept transiting the star earlier than expected. Combined data from Kepler and other telescopes show that the planet is inching closer to the star, Chontos and colleagues report December 19 in the Astrophysical Journal Letters. “You can see the interval between the transits is shrinking, really slowly but really consistently, at a rate of 131 milliseconds per year,” says astrophysicist Shreyas Vissapragada of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. That doesn’t sound like much. But if this trend continues, the planet has only 2 million or 3 million years left to live. “For something that’s been around for 2 to 3 billion years, that’s pretty short,” Vissapragada says. If the planet’s lifetime were a more human 100 years, it would have a little more than a month left. Studying Kepler 1658b as it dies will help explain the life cycles of similar planets. “Learning something about the actual physics of how orbits shrink over time, we can get a better handle on the fates of all of these planets,” Vissapragada says.
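The quoted lifetime follows from a naive linear extrapolation: divide the current orbital period by the rate at which it shrinks. A rough check using the article’s own numbers (this is an order-of-magnitude sketch only; real tidal decay accelerates as the orbit tightens):

```python
# Naive inspiral timescale: time for the orbital period to shrink to zero
# at the currently observed decay rate.
period_ms = 3 * 24 * 3600 * 1000      # ~3-day orbit, in milliseconds
decay_ms_per_year = 131               # observed transit-interval shrinkage

years_left = period_ms / decay_ms_per_year
print(f"~{years_left:.1e} years")     # ~2.0e6, matching the quoted 2-3 million

# The "little more than a month out of 100 years" comparison:
fraction = 2.5e6 / 2.5e9              # remaining lifetime / age so far
print(f"{fraction * 100:.1f} years out of 100 = ~{fraction * 100 * 365:.1f} days")
```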
Physics
Anton Zeilinger shares the 2010 Wolf Prize in Physics The 2010 Wolf Prize in Physics has been awarded to Alain Aspect, John Clauser and Anton Zeilinger “for their fundamental conceptual and experimental contributions to the foundations of quantum physics, specifically an increasingly sophisticated series of tests of Bell’s inequalities, or extensions thereof, using entangled quantum states”. The trio will share the $100,000 prize, which will be presented by the President of Israel at the Israeli parliament (Knesset) on 13 May 2010. Zeilinger, 64, is at the University of Vienna, Austria; Aspect, 62, is at the Institut d’Optique in Palaiseau, France; and Clauser, 67, is at J F Clauser and Associates in Walnut Creek, California. The winners were involved in three pioneering experiments that established the quantum property of entanglement – whereby two or more particles display much stronger correlations than are possible in classical physics. Entanglement plays an important role in quantum computers, which in principle could outperform conventional computers at some tasks. Violating Bell’s inequality All three experiments measured violations of Bell’s inequality, which places a limit on the correlations that can be observed in a classical system. The first was done in 1972 at the University of California at Berkeley by Clauser and Stuart Freedman, who measured the correlations between the polarizations of pairs of photons that are created in an atomic transition. They showed that Bell’s inequality was violated – which meant that the photon pairs were entangled. There were, however, several “loopholes” in this experiment, making it inconclusive. It is possible, for example, that the photons detected were not a fair sample of all photons emitted by the source (the detection loophole) or that elements of the experiment thought to be independent were somehow causally connected (the locality loophole). In 1982 Aspect and colleagues at the Université Paris-Sud in Orsay, France, improved on Clauser and Freedman’s experiment by using a two-channel detection scheme to avoid making assumptions about photons that were detected. They also varied the orientation of the polarizing filters during their measurements – and in both cases Bell’s inequality was violated. Closing the locality loophole The locality loophole was closed in 1998 by Zeilinger and colleagues at the University of Innsbruck, who used two fully independent quantum random-number generators to set the directions of the photon measurements. This meant that the direction along which the polarization of each photon was measured was decided at the last instant, such that no signal (which cannot travel faster than the speed of light) would be able to transfer information to the other side before that photon was registered. The Wolf Prize is awarded by the Wolf Foundation in Israel and is often thought to be the most prestigious prize in physics after the Nobel prize. The foundation was created in 1975 by Ricardo Wolf, a German-born inventor and diplomat.
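The flavor of these tests can be seen in the CHSH form of Bell’s inequality, the variant used in experiments of this kind. For polarization-entangled photon pairs, quantum mechanics predicts a correlation E(a, b) = cos 2(a − b) between analyzers at angles a and b; any local hidden-variable model obeys |S| ≤ 2, but at the standard angle choices the quantum prediction reaches 2√2. A minimal sketch of that arithmetic:

```python
import math

# Quantum correlation for polarization-entangled photons with analyzers
# at angles a and b (in degrees): E(a, b) = cos(2 * (a - b)).
def E(a, b):
    return math.cos(math.radians(2 * (a - b)))

# Standard CHSH angle choices (degrees).
a, a2, b, b2 = 0, 45, 22.5, 67.5
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(f"quantum prediction: S = {S:.3f}")   # 2.828... = 2 * sqrt(2)
print("local hidden-variable bound: |S| <= 2")
```

The measured violations of the classical bound, with successive loopholes closed, are what the three experiments described above established.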
Physics
The expanse of space before 91-year-old Russell Craig seems endless as he gazes at the swirling galaxies and constellations. Thousands of miles above Earth, he turns his head only to be greeted by satellites and stars. Yet in reality, Craig is seated in a chair, his feet planted firmly on the ground. While space may be far away, for residents of The Preston of the Park Cities, it’s just a click of a button away. The Dallas senior living community introduced virtual reality headsets into their weekly activities nearly two years ago. Residents can do practically anything within the technology’s limitations, from exercising, to flying through mountains, to walking through the wreckage of the Titanic. Beyond its recreational aspects, staff at the community say it’s helpful in stimulating memories in residents with dementia. “We had a resident in memory care who was in the military and had severe behavioral issues,” said Lana Francois, who assists with recreational activities at the facility. “Every time we put him into the virtual reality to fly, it calmed him. You could see it with the snap of a finger – you could see him change.” The Oculus Quest technology was meant to engage residents regardless of their physical capabilities, but it has also proven to help with dexterity, focus and concentration. A study led by Dr. Chee Siang Ang found that the virtual experience provided a “soothing effect” to the dementia patients involved, caused by the sense of immersion within a virtual environment. At the same time, the active involvement demanded by the VR technology helped promote more meaningful engagement, even for short periods of time. “We used to do Wii games on it, though it took the residents a minute to get used to that, so it’ll be about the same with any new games we introduce,” community life director Debbie Dickerson said. “Then they’ll be arguing to try and use it when they realize they are able to move and do whatever they would like in it.” While Craig may be new to seeing space up close and personal, he was eager to do it again. “I have a flight simulator on my TV and it’s kind of like virtual reality,” Craig said. “It’s not easy because I haven’t been briefed on how to use the controls. But I suppose kids do it. I can do it if they can,” he added. Mark Denzin, executive director of the Alzheimer’s Association’s Dallas and Northeast Texas chapter, believes VR is similar to music therapy in stimulating the brain in engaging ways. While he has seen positive results from the technology, he explains that it is only a way to help patients deal with dementia, not cure it. “The brain being as complex as it is, I think the standard thinking behind what we can do is probably limitless,” Denzin said. “I believe that as science and technology collide, there could be a really unique partnership to help individuals with dementia or Alzheimer’s.” VR’s benefit for dementia patients is a newer concept in the world of medicine, but studies show positive results. An article by Dr. Lora Appel and Dr. Jennifer Campos in 2021 reviewed 18 different studies regarding VR for dementia patients, with 89.5% of the included studies citing a positive emotional response and 73.7% targeting improved quality of life for the patients involved. Dr. Jin Ryong Kim, assistant professor of computer science at The University of Texas at Dallas, agrees this technology has the potential to benefit similar treatments. “I think people are looking to haptics and VR because it has the power to make patients feel that it’s real,” Kim said.
“Realism, immersion and interactiveness – these kinds of things can make change. Technology is going that way and one application is this medical field.” Looking at more than just its potential for dementia residents, Dickerson says staff at The Preston of the Park Cities are trying to turn VR usage into a group activity to keep residents engaged, while also exploring ways to help those with behavioral issues. She appreciates its ability to be geared toward residents individually by way of unique simulators and games. Francois sees its uplifting effect on dementia residents dealing with depression, bringing a smile to their faces. “It can be a chore,” Dickerson said, “but once they get used to it, they like it and they’re excited to have a chance to do it again.” Claire Tweedie wrote this story as part of her participation in High School Journalism 101, The Dallas Morning News’ high school journalism student mentor program.
Gaming & VR