diff --git "a/raw_rss_feeds/https___www_livescience_com_feeds_all.xml" "b/raw_rss_feeds/https___www_livescience_com_feeds_all.xml" --- "a/raw_rss_feeds/https___www_livescience_com_feeds_all.xml" +++ "b/raw_rss_feeds/https___www_livescience_com_feeds_all.xml" @@ -10,27 +10,438 @@
People with typical recognition capabilities are worse than chance: more often than not, they think AI-generated faces are real.
That's according to research published Nov. 12 in the journal Royal Society Open Science. However, the study also found that receiving just five minutes of training on common AI rendering errors greatly improves individuals' ability to spot the fakes.
"I think it was encouraging that our kind of quite short training procedure increased performance in both groups quite a lot," lead study author Katie Gray, an associate professor in psychology at the University of Reading in the U.K., told Live Science.
Surprisingly, the training increased accuracy by similar amounts in super recognizers and typical recognizers, Gray said. Because super recognizers are better at spotting fake faces at baseline, this suggests that they are relying on another set of clues, not simply rendering errors, to identify fake faces.
Gray hopes that scientists will be able to harness super recognizers' enhanced detection skills to better spot AI-generated images in the future.
"To best detect synthetic faces, it may be possible to use AI detection algorithms with a human-in-the-loop approach — where that human is a trained SR [super recognizer]," the authors wrote in the study.
In recent years, there has been an onslaught of AI-generated images online. Deepfake faces are created using a two-part AI architecture called a generative adversarial network (GAN). First, a generator produces a fake image based on real-world training images; the result is then scrutinized by a second network, a discriminator, which judges whether it is real or fake. With iteration, the fake images become realistic enough to get past the discriminator.
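To make that generator-discriminator tug-of-war concrete, here is a minimal, illustrative training loop in PyTorch. The toy network sizes and the random stand-in "photos" are our assumptions for the sketch; production deepfake systems are vastly larger, and this is not the code behind any system mentioned in the study.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator.
# Toy sizes and random stand-in data; illustrative only.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # hypothetical sizes, far below face resolution

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),        # produces a fake "image"
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),             # outputs 1 = real, 0 = fake
)

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, image_dim)          # stand-in for real photos

for step in range(1000):
    # 1) Train the discriminator to separate real photos from generated ones.
    fakes = generator(torch.randn(32, latent_dim))
    d_loss = (loss(discriminator(real_images), torch.ones(32, 1)) +
              loss(discriminator(fakes.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to produce images the discriminator labels "real".
    fakes = generator(torch.randn(32, latent_dim))
    g_loss = loss(discriminator(fakes), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As the loop repeats, the two networks improve in tandem, which is exactly the iterative arms race described above.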
These algorithms have now improved to such an extent that individuals are often duped into thinking fake faces are more "real" than real faces — a phenomenon known as "hyperrealism."
As a result, researchers are now trying to design training regimens that can improve individuals' abilities to detect AI faces. These trainings point out common rendering errors in AI-generated faces, such as the face having a middle tooth, an odd-looking hairline or unnatural-looking skin texture. They also highlight that fake faces tend to be more proportional than real ones.
In theory, so-called super recognizers should be better at spotting fakes than the average person. These super recognizers are individuals who excel in facial perception and recognition tasks, in which they might be shown two photographs of unfamiliar individuals and asked to determine whether they show the same person. But to date, few studies have examined super recognizers' abilities to detect fake faces, and whether training can improve their performance.
To fill this gap, Gray and her team ran a series of online experiments comparing the performance of a group of super recognizers to typical recognizers. The super recognizers were recruited from the Greenwich Face and Voice Recognition Laboratory volunteer database; they had performed in the top 2% of individuals in tasks where they were shown unfamiliar faces and had to remember them.
In the first experiment, an image of a face appeared onscreen and was either real or computer-generated. Participants had 10 seconds to decide if the face was real or not. Super recognizers performed no better than if they had randomly guessed, spotting only 41% of AI faces. Typical recognizers correctly identified only about 30% of fakes.
The two cohorts also differed in how often they judged real faces to be fake: this occurred in 39% of cases for super recognizers and in around 46% for typical recognizers.
The next experiment was identical, but included a new set of participants who received a five-minute training session in which they were shown examples of errors in AI-generated faces. They were then tested on 10 faces and provided with real-time feedback on their accuracy at detecting fakes. The final stage of the training involved a recap of rendering errors to look out for. The participants then repeated the original task from the first experiment.
Training greatly improved detection accuracy, with super recognizers spotting 64% of fake faces and typical recognizers noticing 51%. The rate that each group inaccurately called real faces fake was about the same as the first experiment, with super recognizers and typical recognizers rating real faces as "not real" in 37% and 49% of cases, respectively.
Trained participants tended to take longer to scrutinize the images than the untrained participants had — typical recognizers slowed by about 1.9 seconds and super recognizers by 1.2 seconds. Gray said this is a key message for anyone trying to determine whether a face they see is real or fake: slow down and really inspect the features.
It is worth noting, however, that the test was conducted immediately after participants completed the training, so it is unclear how long the effect lasts.
"The training cannot be considered a lasting, effective intervention, since it was not re-tested," Meike Ramon, a professor of applied data science and expert in face processing at the Bern University of Applied Sciences in Switzerland, wrote in a review of the study conducted before it went to print.
And since separate participants were used in the two experiments, we cannot be sure how much training improves an individual's detection skills, Ramon added. That would require testing the same set of people twice, before and after training.
Three decades later, another new technology has unleashed another wave of exuberance. Investors are pouring billions into any company with "AI" in its name. But there is a crucial difference between these two bubbles, which isn't always recognised. The World Wide Web existed. It was real. Artificial general intelligence does not exist, and no one knows if or when it ever will.
In February, the CEO of OpenAI, Sam Altman, wrote on his blog that the very latest systems have only just started to "point towards" AI in its "general" sense. OpenAI may market its products as "AIs," but they are merely statistical data-crunchers, rather than "intelligences" in the sense that human beings are intelligent.
So why are investors so keen to give money to the people selling AI systems? One reason might be that AI is a mythical technology. I don't mean it is a lie. I mean it evokes a powerful, foundational story of Western culture about human powers of creation.
Perhaps investors are willing to believe AI is just around the corner because it taps into myths that are deeply ingrained in their imaginations.
The most relevant myth for AI is the Ancient Greek myth of Prometheus.
There are many versions of this myth, but the most famous are found in Hesiod's poems Theogony and Works and Days, and in the play Prometheus Bound, traditionally attributed to Aeschylus.
Prometheus was a Titan, a god in the Ancient Greek pantheon. He was also a criminal who stole fire from Hephaestus, the blacksmith god. Hiding the fire in a stalk of fennel, Prometheus came to earth and gave it to humankind. As punishment, he was chained to a mountain, where an eagle visited every day to eat his liver.
Prometheus' gift was not simply the gift of fire; it was the gift of intelligence. In Prometheus Bound, he declares that before his gift humans saw without seeing and heard without hearing. After his gift, humans could write, build houses, read the stars, perform mathematics, domesticate animals, construct ships, invent medicines, interpret dreams and give proper offerings to the gods.
The myth of Prometheus is a creation story with a difference. In the Hebrew Bible, God does not give Adam the power to create life. But Prometheus gives (some of) the gods' creative power to humankind.
Hesiod indicates this aspect of the myth in Theogony. In that poem, Zeus not only punishes Prometheus for the theft of fire; he punishes humankind as well. He orders Hephaestus to fire up his forge and construct the first woman, Pandora, who unleashes evil on the world.
The fire that Hephaestus uses to make Pandora is the same fire that Prometheus has given humankind.

The Greeks proposed the idea that humans are a form of artificial intelligence. Prometheus and Hephaestus use technology to manufacture men and women. As historian Adrienne Mayor reveals in her book Gods and Robots, the ancients often depicted Prometheus as a craftsman, using ordinary tools to create human beings in an ordinary workshop.
If Prometheus gave us the fire of the gods, it would seem to follow that we can use this fire to make our own intelligent beings. Such stories abound in Ancient Greek literature, from the inventor Daedalus, who created statues that came to life, to the witch Medea, who could restore youth and potency with her cunning drugs. Greek inventors also constructed mechanical computers for astronomy and remarkable moving figures powered by gravity, water and air.
Some 2,700 years have passed since Hesiod first wrote down the story of Prometheus. In the ensuing centuries, the myth has been endlessly retold, especially since the publication of Mary Shelley's Frankenstein; or, The Modern Prometheus in 1818.
But the myth is not always told as fiction. Here are two historical examples where the myth of Prometheus seemed to come true.
Gerbert of Aurillac was the Prometheus of the 10th century. He was born in the early 940s CE, went to school at Aurillac Abbey, and became a monk himself. He proceeded to master every known branch of learning. In the year 999, he was elected Pope. He died in 1003 under his pontifical name, Sylvester II.
Rumours about Gerbert spread wildly across Europe. Within a century of his death, his life had already become legend. One of the most famous legends, and the most pertinent in our age of AI hype, is that of Gerbert's "brazen head." The legend was told in the 1120s by the English historian William of Malmesbury, in his well-researched and highly regarded book, Deeds of the English Kings.
Gerbert was deeply learned in astronomy, a science of prediction. Astronomers could use the astrolabe to predict the position of the stars and foresee cosmological events such as eclipses. According to William, Gerbert used his knowledge of astronomy to construct a talking head. After inspecting the movements of the stars and planets, he cast a head in bronze that could answer yes-or-no questions.
First Gerbert asked the head: "Will I become Pope?"
"Yes," answered the head.
Then Gerbert asked: "Will I die before I sing mass in Jerusalem?"
"No," the head replied.
In both cases, the head was correct, though not as Gerbert anticipated. He did become Pope, and he sensibly avoided going on pilgrimage to Jerusalem. One day, however, he sang mass at Santa Croce in Gerusalemme in Rome. Unfortunately for Gerbert, Santa Croce in Gerusalemme was known in those days simply as "Jerusalem."
Gerbert sickened and died. On his deathbed, he asked his attendants to cut up his body and cast away the pieces, so he could go to his true master, Satan. In this way, he was, like Prometheus, punished for his theft of fire.

It is a thrilling story. It is not clear whether William of Malmesbury actually believed it. But he does try to persuade his readers that it is plausible. Why did this great historian with a devotion to the truth insert some fanciful legends about a French pope into his history of England? Good question!
Is it so fanciful to believe that an advanced astronomer might build a general-purpose prediction machine? In those days, astronomy was the most powerful science of prediction. The sober and scholarly William was at least willing to entertain the idea that brilliant advances in astronomy might make it possible for a Pope to build an intelligent chatbot.
Today, that same possibility is credited to machine-learning algorithms, which can predict which ad you will click, which movie you will watch, which word you will type next. We can be forgiven for falling under the same spell.
The Prometheus of the 18th century was Jacques de Vaucanson, at least according to Voltaire:
Bold Vaucanson, rival of Prometheus,
Seems, imitating the springs of nature,
To steal the fire of heaven to animate the body.

Vaucanson was a great machinist, famous for his automata. These were clockwork devices that realistically simulated human or animal anatomy. Philosophers of the time believed that the body was a machine — so why couldn't a machinist build one?
Sometimes Vaucanson's automata were scientifically significant. He constructed a piper, for example, that had lips and lungs and fingers, and blew the pipe in much the same way a human would. Historian Jessica Riskin explains in her book The Restless Clock that Vaucanson had to make significant discoveries in acoustics in order to make his piper play in tune.
Sometimes his automata were less scientific. His digesting duck was hugely famous, but turned out to be fraudulent. It appeared to eat and digest food, but its poos were in fact prefabricated pellets hidden inside the mechanism.
Vaucanson spent decades working on what he called a "moving anatomy." In 1741, he presented a plan to the Lyons Academy to build an "imitation of all animal operations." Twenty years later, he was at it again. He secured support from King Louis XV to build a simulation of the circulatory system. He claimed he could build a complete, living artificial body.

There is no evidence that Vaucanson ever completed a whole body. In the end, he couldn't live up to the hype. But many of his contemporaries believed he could do it. They wanted to believe in his magical mechanisms. They wished he would seize the fire of life.
If Vaucanson could manufacture a new human body, couldn't he also repair an existing one? This is the promise of some AI companies today. According to Dario Amodei, CEO of Anthropic, AI will soon allow people "to live as long as they want." Immortality seems like an attractive investment.
Sylvester II and Vaucanson were great technologists, but neither was a Prometheus. They stole no fire from the gods. Will the aspiring Prometheans of Silicon Valley succeed where their predecessors have failed? If only we had Sylvester II's brazen head, we could ask it.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.
]]>Continental drying is a long-term decline in fresh water availability across large land masses. It is caused by accelerated snow and ice melt, permafrost thaw, water evaporation and groundwater extraction. (The report's definition excludes meltwater from Greenland and Antarctica, the authors noted.)
"We always think that the water issue is a local issue," lead author Fan Zhang, global lead for Water, Economy and Climate Change at the World Bank, told Live Science in a joint interview with co-author Jay Famiglietti, a satellite hydrologist and professor of sustainability at Arizona State University. "But what we show in the report is that ... local water problems could quickly ripple through national borders and become an international challenge."
Continents have now surpassed ice sheets as the biggest contributor to global sea level rise, because regardless of its origin, the lost fresh water eventually ends up in the ocean. The new report found this contribution is roughly 11.4 trillion cubic feet (324 billion cubic meters) of water each year — enough to meet the annual water needs of 280 million people.
"Every second you lose four Olympic-size swimming pools," Zhang said.
The report was published Nov. 4 by the World Bank. Its results are based on 22 years of data from NASA's GRACE mission, which measures small changes in Earth's gravity resulting from shifting water. The authors also compiled two decades' worth of economic and land use data, which they fed into a hydrological model and a crop-growth model.
The average amount of fresh water lost from continents each year is equivalent to 3% of the world's annual net "income" from precipitation, the report found. This loss jumps to 10% in arid and semi-arid regions, meaning that continental drying hits dry areas such as South Asia the hardest, Zhang said.
This is a growing problem. In a study published earlier this year, Zhang, Famiglietti and their colleagues showed that separate dry areas are rapidly merging into "mega-drying" regions.
"The impact is already being felt," Zhang said. Regions where agriculture is the biggest economic sector and employs the most people, such as sub-Saharan Africa and South Asia, are especially vulnerable. "In sub-Saharan Africa, dry shocks reduce the number of jobs by 600,000 to 900,000 a year. If you look at who are the people being affected, those most hard hit are the most vulnerable groups, like landless farmers."
Countries that don't have a large agricultural sector are also indirectly affected, because most of them import food and goods from drying regions.
The consequences for ecosystems are dramatic, too. Continental drying increases the likelihood and severity of wildfires, and this is especially true in biodiversity hotspots, the report found. At least 17 of the 36 globally recognized biodiversity hotspots — including Madagascar and parts of Southeast Asia and Brazil — show a trend of declining freshwater availability and have a heightened risk of wildfires.
"The implications are so profound," Famiglietti told Live Science.
Currently, the biggest cause of continental drying is groundwater extraction. Groundwater is poorly protected and undermanaged in most parts of the world, meaning the past decades have been a pumping "free-for-all," Famiglietti said. And the warmer and drier the world gets due to climate change, the more groundwater will likely be extracted, because soil moisture and glacial water sources will start to dwindle.
However, better regulations and incentives could reduce groundwater overpumping. According to the report, agriculture is responsible for 98% of the global water footprint, so "if agriculture water use efficiency is improved to a certain benchmark, the total amount of the water that can be saved is huge," Zhang said.
Globally, if water use efficiency for 35 key crops, such as wheat and rice, improved to median levels, enough water would be saved to meet the annual needs of 118 million people, the researchers found. There are many ways to improve water use efficiency in agriculture; for example, countries could change where they grow certain crops to match freshwater availability in different regions, or adopt technologies like artificial intelligence to optimize the timing and amount of irrigation.
Countries can also set groundwater extraction limits, incentivize farmers through subsidies and raise the price of water for agriculture. Additionally, the report showed that countries with higher energy prices had slower drying rates because it costs more to pump groundwater, which boosts water use efficiency.
Overall, water management at the national scale works well, according to the report. Countries with good water management plans depleted their freshwater resources two to three times more slowly than countries with poor water management.
On the global scale, virtual water trade is one of the best solutions to conserve water if it is done right, Zhang said. Virtual water trade occurs when countries exchange fresh water in the form of agricultural products and other water-intensive goods.
Global water use increased by 25% between 2000 and 2019. One-third of that increase occurred in regions that were already drying out — including Central America, northern China, Eastern Europe and the U.S. Southwest — and a big share of the water was used to irrigate water-intensive crops with inefficient methods, according to the report.
There has also been a global shift toward more water-intensive crops, including wheat, rice, cotton, maize and sugar-cane. Out of 101 drying countries, 37 have increased cultivation of these crops.
Virtual water trade can save huge amounts of water by relocating some of these crops to countries that aren't drying out. For example, between 1996 and 2005, Jordan saved 250 billion cubic feet (7 billion cubic meters) of water by importing wheat from the U.S. and maize from Argentina, among other products.
Globally, from 2000 to 2019 virtual water trade saved 16.8 trillion cubic feet (475 billion cubic meters) of water each year, or about 9% of the water used to grow the world's 35 most important crops.
"When water-scarce countries import water-intensive products, they are actually importing water, and that helps them to preserve their own water supply," Zhang said.
However, virtual water trade isn't always so straightforward. It might benefit one water-scarce country but severely deplete the resources of another country. One example is the production of alfalfa, a water-intensive legume used in livestock feed, in dry regions of the U.S. for export to Saudi Arabia, Famiglietti said. Saudi Arabia benefits from this exchange because the country isn't using its water to grow alfalfa, but aquifers in Arizona are being sucked dry, he said.
The solutions identified in the report fall into three broad categories: manage water demand, expand water supply through recycling and desalination, and ensure fair and effective water allocation.
If we can make those changes, sustainable fresh water use is "definitely possible," Zhang said. "We do have reason to be optimistic."
Famiglietti agreed that small changes could go a long way.
"It's complicated, because the population is growing and we're going to need to grow more food," he said. "I don't know that we're going to 'tech' our way out of it, but when we start thinking on decadal time scales, changes in policy, changes in financial innovations, changes in technology — I think there is some reason for optimism. And in those decades we can keep thinking about how to improve our lot."
Some of the views expressed in this article are not included in the World Bank report. They should not be interpreted as having been endorsed by the World Bank or by its representatives.
]]>January saw the abrupt suspension of key operations across the National Institutes of Health, not only disrupting clinical trials and other in-progress studies but stalling grant reviews and other activities necessary to conduct research. Around the same time, the Trump administration issued executive orders declaring there are only two sexes and ending DEI programs. The Trump administration also removed public data and analysis tools related to health disparities, climate change and environmental justice, among other databases.
February and March saw a steep undercutting of federal support for the infrastructure crucial to conducting research as well as the withholding of federal funding from several universities.
And over the course of the following months, billions of dollars of grants supporting research projects across disciplines, institutions and states were terminated. These include grants already partially spent on in-progress studies, which have been forced to end before completion. Federal agencies, including NASA, the Environmental Protection Agency, the National Oceanic and Atmospheric Administration and the U.S. Agency for International Development, have been downsized or dismantled altogether.
The Conversation asked researchers from a range of fields to share how the Trump administration’s science funding cuts have affected them. All describe the significant losses they and their communities have experienced. But many also voice their determination to continue doing work they believe is crucial to a healthier, safer and fairer society.
Carrie McDonough, Associate Professor of Chemistry, Carnegie Mellon University
People are exposed to thousands of synthetic chemicals every day, but the health risks those chemicals pose are poorly understood. I was a co-investigator on a US $1.5 million grant from the EPA to develop machine-learning techniques for rapid chemical safety assessment. My lab was two months into our project when it was terminated in May because it no longer aligned with agency priorities, despite the administration’s Make America Healthy Again report specifically highlighting the use of AI to rapidly assess childhood chemical exposures as a focus area.
Labs like mine are usually pipelines for early-career scientists to enter federal research labs, but the uncertain future of federal research agencies has disrupted this process. I’m seeing recent graduates lose federal jobs, and countless opportunities disappear. Students who would have been the next generation of scientists helping to shape environmental regulations to protect Americans have had their careers altered forever.

I’ve been splitting my time between research, teaching and advocating for academic freedom and the economic importance of science funding because I care deeply about the scientific and academic excellence of this country and its effects on the world. I owe it to my students and the next generation to make sure people know what’s at stake.
Cara Poland, Associate Professor of Obstetrics, Gynecology and Reproductive Biology, Michigan State University
I run a program that has trained 20,000 health care practitioners across the U.S. on how to effectively and compassionately treat addiction in their communities. Most doctors aren’t trained to treat addiction, leaving patients without lifesaving care and leading to preventable deaths.
This work is personal: My brother died from substance use disorder. Behind every statistic is a family like mine, hoping for care that could save their loved one’s life.
With our federal funding cut by 60%, my team and I are unable to continue developing our addiction medicine curriculum and enrolling medical schools and clinicians into our program.
Meanwhile, addiction-related deaths continue to rise as the U.S. health system loses its capacity to deliver effective treatment. These setbacks ripple through hospitals and communities, perpetuating treatment gaps and deepening the addiction crisis.
Brian G. Henning, Professor of Philosophy and Environmental Studies and Sciences, Gonzaga University
In 2021, a heat dome settled over the Northwest, shattering temperature records and claiming lives. Since that devastating summer, my team and I have been working with the City of Spokane to prepare for the climate challenges ahead.
We and the city were awarded a $19.9 million grant from the EPA to support projects that reduce pollution, increase community climate resilience and build capacity to address environmental and climate justice challenges.

As our work was about to begin, the Trump administration rescinded our funding in May. As a result, the five public facilities that were set to serve as hubs for community members to gather during extreme weather will be less equipped to handle power failures. Around 300 low-income households will miss out on efficient HVAC system updates. And our local economy will lose the jobs and investments these projects would have generated.
Despite this setback, the work will continue. My team and I care about our neighbors, and we remain focused on helping our community become more resilient to extreme heat and wildfires. This includes pursuing new funding to support this work. It will be smaller, slower and with fewer resources than planned, but we are not deterred.
Nathaniel M. Tran, Assistant Professor of Health Policy and Administration, University of Illinois Chicago
This year nearly broke me as a scientist.
Shortly after coming into office, the Trump administration began targeting research projects focusing on LGBTQ+ health for early termination. I felt demoralized after receiving termination letters from the NIH for my own project examining access to preventive services and home-based care among LGBTQ+ older adults. The disruption of publicly funded research projects wastes millions of dollars from existing contracts.
Then, news broke that the Centers for Disease Control and Prevention would no longer process or make publicly available the LGBTQ+ demographic data that public health researchers like me rely on.
But instead of becoming demoralized, I grew emboldened: I will not be erased, and I will not let the LGBTQ+ community be erased. These setbacks renewed my commitment to advancing the public’s health, guided by rigorous science, collaboration and equity.

Rachael Sirianni, Professor of Neurological Surgery, UMass Chan Medical School
My lab designs new cancer treatments. We are one of only a few groups in the nation focused on treating pediatric cancer that has spread across the brain and spinal cord. This research is being crushed by the broad, destabilizing impacts of federal cuts to the NIH.
Compared to last year, I am working with around 25% of our funding and less than 50% of our staff. We cannot finish our studies, publish results or pursue new ideas. We have lost technology in development. Students and colleagues are leaving as training opportunities and hope for the future of science dry up.
I’m faced with impossible questions about what to do next. Do I use my dwindling research funds to maintain personnel who took years to train? Keep equipment running? Bet it all on one final, risky study? There are simply no good choices remaining.
Stephanie Nawyn, Associate Professor of Sociology, Michigan State University
Many people have asked me how the termination of my National Science Foundation grant to improve work cultures in university departments has affected me, but I believe that is the wrong question. Certainly it has meant the loss of publications, summer funding for faculty and graduate students, and opportunities to make working conditions at my and my colleagues’ institutions more equitable and inclusive.
But the greatest effects will come from the widespread terminations across science as a whole, including the elimination of NSF programs dedicated to improving gender equity in science and technology. These terminations are part of a broader dismantling of science and higher education that will have cascading negative effects lasting decades.
Infrastructure for knowledge production that took years to build cannot be rebuilt overnight.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.
]]>It took more than a century for researchers to prove that Neanderthals were actually quite intelligent and that they interbred with modern humans (Homo sapiens). The number of discoveries related to Neanderthals' biology and culture has skyrocketed in recent years — and 2025 was a noteworthy year. While we learned that Neanderthals had biological features that were strikingly different from modern humans', this year's discoveries also showed that some aspects of their behavior and culture were similar to ours.
Here are 10 major Neanderthal findings from 2025 — and what they teach us about our own evolution.

The hottest — but also somewhat controversial — Neanderthal discovery of the year was that the first humans to make and control fire were Neanderthals living in England more than 400,000 years ago.
In December, researchers announced that they had found reddened clay and heat-shattered flint hand axes at an archaeological site in Suffolk. But the smoking gun was the discovery of tiny flakes of pyrite, a mineral that produces sparks when struck against flint.
Experts have debated for decades whether early human ancestors deliberately made fire or whether they opportunistically used wildfires that sprang up. The combination of flakes of pyrite and charred soil and tools points to Neanderthals' purposeful creation of fire.
The discovery, however, does not tell us whether Neanderthals invented this technology or they learned it from even earlier ancestors, such as Homo erectus. Regardless, the fire evidence shows that Neanderthals were smart enough to figure out how to survive in cold and dark European climates.

Around 45,000 years ago — very close to when Neanderthals disappeared forever — six members of a Neanderthal group were cannibalized, according to a study published in November. Their remains were discovered in the Goyet cave system in Belgium with butchery marks similar to those on animal bones.
This isn't the first time archaeologists have found evidence of cannibalism in Neanderthals. But it is the best evidence experts have to suggest one group — probably Neanderthals but possibly modern humans — deliberately targeted the women and children of another group, perhaps as a way to eliminate the group's reproductive potential.

A curious-looking rock found in Spain contains the world's oldest known fingerprint, and it was probably made by a Neanderthal using ocher 43,000 years ago, researchers announced in May.
The team investigating the rock, which is the size of a large potato, thinks that it has face-like features and that the red dot may be a nose. If they're correct, it would mean Neanderthals were creating symbolic art, which could settle a decades-long debate in paleoanthropology.
Not all experts agree that the rock is an early version of Mr. Potato Head, but they do think the fingerprint and its characteristic whorl pattern represent a clear example of Neanderthals' use of red ocher pigment.

Scientists in Crimea found three pointy chunks of red and yellow ocher that Neanderthals may have used as early "crayons" 100,000 years ago, according to research published in November.
The hunks of mineral appear to have been repeatedly sharpened, which suggested to the researchers that the ocher was used for culturally meaningful purposes rather than in practical tasks, such as tanning hides.
Although ocher has been found at other Neanderthal sites, not all experts are convinced of the crayon interpretation. Instead, they suggest Neanderthals may have scraped powder from the ocher chunks for another purpose, such as to leave a fingerprint.

In July, researchers discovered that a key Neanderthal gene variant that is still found in some humans today could be detrimental to athletic performance because it limits the body's ability to produce energy during intense exercise.
Researchers found that the Neanderthal version of an enzyme called AMPD1 was different from the one in most modern humans. The Neanderthal enzyme variant allowed adenosine monophosphate (AMP) to build up in their muscles rather than being quickly removed. This AMP buildup is problematic because it makes it harder to produce adenosine triphosphate (ATP), a molecule that the body uses to store energy.
Modern humans who carry the Neanderthal variant of the gene have a lower probability of achieving elite athletic status, the researchers found. But while the Neanderthal variant may have affected their muscle metabolism slightly, it may not have contributed to their extinction.

In a study published in October, researchers examined 51 teeth from H. sapiens, Neanderthals and other ancestors for evidence of lead exposure. Lead occurs naturally in our environment, but it is known to be toxic at high levels, causing damage to the brain and other organs. Researchers discovered that human ancestors were affected by episodic lead exposure for nearly 2 million years — and that human brains may have evolved some protection against lead poisoning.
Humans living today have a unique version of a gene called NOVA1 that is important for brain development and language skills. The gene also appears to confer greater resistance to lead than other versions of the gene do, such as the one in our Neanderthal cousins.
Therefore, researchers propose, the modern-human version of NOVA1 may have given us a slight advantage over Neanderthals and may have contributed to the demise of the Neanderthals.

Neanderthals primarily ate meat (and maggots), which put them at risk of developing protein poisoning, a lethal condition that results from eating too much protein and too few fats and carbohydrates.
But in July, researchers announced their discovery of a "fat factory" that Neanderthals may have used to stave off this condition 125,000 years ago. Their survey of nearly 200 animal bones revealed that Neanderthals smashed the bones to get at the marrow inside, which they boiled to extract the fat.
Fat is high in calories, and Neanderthals may have saved it to eat during food shortages. This innovative food-collection method is similar to what some ancient modern-human foraging groups did, suggesting that, in at least one way, Neanderthals were similar to us.

In August, researchers investigating the enzyme adenylosuccinate lyase (ADSL) found that the version in Neanderthals was more active than the one in humans. ADSL helps synthesize purine, which is one of the fundamental building blocks of DNA, and an ADSL deficiency is known to result in intellectual disability in modern humans. So researchers modified mice to have a modern-human-like ADSL gene and found that they were better at completing a task to get water.
But even though ADSL deficiency can cause intellectual and behavioral problems in modern-day people, it's not yet clear whether the Neanderthal variant impaired them.

Even before Neanderthals disappeared forever, their numbers were dwindling because of a population bottleneck, according to research published in February.
Scientists looked at the tiny inner-ear bones of Neanderthals from various time periods and noticed that, around 110,000 years ago, there was an abrupt decline in the diversity of bone shapes. This decline suggests a bottleneck event, when a species undergoes a sudden reduction in variation due to factors such as genocide or climate change.
While the dwindling diversity of ear bones didn't itself cause the Neanderthals' downfall, the bottleneck it reveals may have been the beginning of the end.

Biologically, Neanderthals had distinct blood variants that separated them from modern humans — and two of those variants we learned about this year may have hastened our ancient cousins' extinction.
In January, researchers discovered that Neanderthals had a rare blood type that may have been fatal to their offspring when they mated with Denisovans or early H. sapiens.
Neanderthals carried a variation of the blood antigen Rh, which gives blood types their positive and negative signs. Before modern medical interventions, if someone who was Rh-negative was pregnant with an Rh-positive fetus, the incompatibility could cause a miscarriage or stillbirth. The researchers found that, if a Neanderthal female mated with an H. sapiens or Denisovan male, there would have been a high risk of anemia, brain damage and infant death. And that might have spelled the end of the line for Neanderthals.
Another study published in October suggested that a fatal red blood cell incompatibility between Neanderthals and humans also contributed to our ancient cousins' extinction. Researchers focused on the PIEZO1 gene that affects oxygen transportation in red blood cells. Neanderthals' version of this gene essentially let their blood cells trap oxygen efficiently, while the modern-human version more efficiently released oxygen to tissues. When maternal oxygen isn't passed on to the fetus, it can restrict the growth of the fetus or lead to miscarriage. So, if a hybrid Neanderthal-human mother mated with a modern-human father or with a hybrid Neanderthal-human father, their offspring would be more likely to die than the offspring of non-hybrids.
Although Neanderthals' extinction likely did not hinge on any one specific gene variant, the new research into red blood cells and maternal-fetal incompatibility is providing key insight into the demise of our archaic cousins around 35,000 years ago.
The shift has largely been attributed to the reintroduction of wolves to the park — as predators, they helped control the elk numbers. But their return may not have reshaped the entire ecosystem in the way that scientists thought, and has sparked a fierce debate among scientists over exactly why and how Yellowstone has rebounded.
According to a study published in January, the reintroduction of gray wolves (Canis lupus) in the 1990s created a trophic cascade — a chain reaction in the food web — that benefitted the entire ecosystem. The study linked wolves in the area to a reduction in the elk population, which in turn reduced browsing and allowed willow trees to grow. Between 2001 and 2020, this led to a 1,500% increase in crown volume, the total space filled by upper branches of the willows.
But now, scientists have written a response letter to the editor, published Oct. 13 in the journal Global Ecology and Conservation, in which they argue that the original study's methodology was flawed and that Yellowstone wolves' effect on willow shrubs is not so clear.
Large predators were targeted in Yellowstone from the end of the 1800s, and by the 1920s, wolves had been all but eliminated from the park. Their disappearance created an ecological imbalance: the elk population exploded, which decimated plant populations and in turn threatened beavers, among other impacts. This is known as a trophic cascade, where the removal of one species causes ripples throughout the food web.
While the reintroduction of wolves to Yellowstone has led to changes within the park, the authors of the response letter claim the original study reinterpreted existing data to fit an oversimplified story.
The study converted willow height measurements, collected and published by another research group, into a metric called crown volume, response author Daniel MacNulty, a wildlife ecologist at Utah State University, told Live Science in an email. Crown volume was used as a proxy for willow size, meant to capture the shrub’s entire three-dimensional growth rather than simply its height.
"Because crown volume was built directly from height, [the study] only showed that height predicts height," MacNulty said. "They did not reveal anything new about how willow growth changed after wolf reintroduction."
The response letter points to other inconsistencies in the data analysis, such as comparing willow measurements from different locations across years, which can produce a misleading time series of willow growth. MacNulty's research group has previously published research noting sampling biases in other studies supporting the same trophic cascade theory.
"There is substantial scientific evidence of a definitive effect of wolf recovery on the rest of the Yellowstone ecosystem," MacNulty said, like wolves increasing the supply of carrion to bears, coyotes, eagles and other meat-eating species. But the effect of wolves on vegetation is less clear because it operates through the decline of elk populations, which wolves were likely not solely responsible for. As MacNulty points out, humans, grizzly bears and cougars also hunt elk. "A major problem with the simple trophic cascade story is that it ignores the role of these other predators."
William Ripple, an Oregon State University wildlife ecologist and author of the original paper, stands by its conclusions, maintaining that a trophic cascade linking large carnivores, elk and willows occurred in Yellowstone. "Our methods are sound, the modeling approach is standard," Ripple told Live Science in an email. "So we reject the idea that there are fatal flaws."
The debate about Yellowstone wolves and the impact of their reintroduction goes beyond this study and the latest response. While scientists widely agree that there is a trophic cascade in Yellowstone, its strength — and which predators are most responsible for it — form the center of the disagreement, MacNulty said.
Some scientists argue the story is more complex. "There are reasons other than trophic cascades by which carnivores and plants can be positively associated," Jake Goheen, a wildlife ecologist at Iowa State University, told Live Science in an email. Goheen, who was not involved in the research or the response, said he doesn't believe the authors of the original study provided enough evidence to support their conclusion that reintroducing wolves in Yellowstone caused a strong trophic cascade that affected willows.
"There is a growing body of literature at this point that has scrutinized the hypothesized cascade in Yellowstone," Goheen said. He adds that this does not mean there's no wolf-to-elk-to-willow trophic cascade in Yellowstone, only that the evidence presented so far is not clear enough.
To establish a clear trophic cascade from Yellowstone wolf reintroduction to willows, researchers would need to account for other predators and herbivores, said MacNulty. The ideal study would then analyze how much more total willow biomass there is now compared with before wolf introduction, to identify the strength of the effect; then calculate how much of that increase can be attributed solely to wolves, to identify its cause.
Ripple and his research team are now preparing a detailed reply, which explains that criticisms of the original study come from misunderstandings of what they did, Ripple said. "The basic scientific logic of the paper is solid," Ripple said.
Conservation priorities might be fueling the controversy over large carnivores' beneficial effects on ecosystems, said Goheen, adding that even if wolves are not definitively causing a trophic cascade to willows, they are still important to conserve.
]]>But the urgency with which Muller, a climate scientist at the Institute of Science and Technology Austria in Klosterneuburg, considers such atmospheric puzzles has surged in recent years. As our planet swelters with global warming, storms are becoming more intense, sometimes dumping two or even three times more rain than expected. Such was the case in Bahía Blanca, Argentina, in March 2025: Almost half the city’s yearly average rainfall fell in less than 12 hours, causing deadly floods.
Atmospheric scientists have long used computer simulations to track how the dynamics of air and moisture might produce varieties of storms. But existing models hadn’t fully explained the emergence of these fiercer storms. A roughly 200-year-old theory describes how warmer air holds more moisture than cooler air: an extra 7 percent for every degree Celsius of warming. But in models and weather observations, climate scientists have seen rainfall events far exceeding this expected increase. And those storms can lead to severe flooding when heavy rain falls on already saturated soils or follows humid heatwaves.
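The theory in question is the Clausius-Clapeyron relation, which governs how the saturation vapor pressure of air grows with temperature. A standard back-of-the-envelope using textbook values (latent heat of vaporization L_v of about 2.5 × 10^6 J/kg, water vapor gas constant R_v of about 461 J/(kg K), near-surface temperature T of about 288 K) recovers the roughly 7 percent figure:

$$\frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} \;=\; \frac{L_v}{R_v T^2} \;\approx\; \frac{2.5\times10^{6}}{461 \times (288)^2} \;\approx\; 0.065\ \mathrm{K}^{-1},$$

that is, about 6 to 7 percent more moisture-holding capacity per degree of warming, which is why rainfall extremes well beyond that rate demand additional explanations.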
Clouds, and the way that they cluster, could help explain what’s going on.
A growing body of research, set in motion by Muller over a decade ago, is revealing several small-scale processes that climate models had previously overlooked. These processes influence how clouds form, congregate and persist in ways that may amplify heavy downpours and fuel larger, long-lasting storms. Clouds have an “internal life,” Muller says, “that can strengthen them or may help them stay alive longer.”
Other scientists need more convincing, because the computer simulations researchers use to study clouds reduce planet Earth to its simplest and smoothest form, retaining its essential physics but otherwise barely resembling the real world.
Now, though, a deeper understanding beckons. Higher-resolution global climate models can finally simulate clouds and the destructive storms they form on a planetary scale — giving scientists a more realistic picture. By better understanding clouds, researchers hope to improve their predictions of extreme rainfall, especially in the tropics where some of the most ferocious thunderstorms hit and where future rainfall projections are the most uncertain.
All clouds form in moist, rising air. A mountain can propel air upwards; so, too, can a cold front. Clouds can also form through a process known as convection: the overturning of air in the atmosphere that starts when sunlight, warm land or balmy water heats air from below. As warm air rises, it cools, condensing the water vapor it carried upwards into raindrops. This condensation process also releases heat, which fuels churning storms.
But clouds remain one of the weakest links in climate models. That’s because the global climate models scientists use to simulate scenarios of future warming are far too coarse to capture the updrafts that give rise to clouds or to describe how they swirl in a storm — let alone to explain the microphysical processes controlling how much rain falls from them to Earth.
To try to resolve this problem, Muller and other like-minded scientists turned to simpler simulations of Earth’s climate that are able to model convection. In these artificial worlds, each the shape of a shallow box typically a few hundred kilometers across and tens of kilometers deep, the researchers tinkered with replica atmospheres to see if they could figure out how clouds behaved under different conditions.
Intriguingly, when researchers ran these models, the clouds spontaneously clumped together, even though the models had none of the features that usually push clouds together — no mountains, no wind, no Earthly spin or seasonal variations in sunlight. “Nobody knew why this was happening,” says Daniel Hernández Deckers, an atmospheric scientist at the National University of Colombia in Bogotá.
In 2012, Muller discovered a first clue: a process known as radiative cooling. The Sun’s heat that bounces off Earth’s surface radiates back into space, and where there are few clouds, more of that radiation escapes — cooling the air. The cool spots set up atmospheric flows that drive air toward cloudier regions — trapping more heat and forming more clouds. A follow-up study in 2018 showed that in these simulations, radiative cooling accelerated the formation of tropical cyclones. “That made us realize that to understand clouds, you have to look at the neighborhood as well — outside clouds,” Muller says.
Once scientists started looking not just outside clouds, but also underneath them and at their edges, they found other small-scale processes that help to explain why clouds flock together. The various processes, described by Muller and colleagues in the Annual Review of Fluid Mechanics, all bring or hold together pockets of warm, moist air so more clouds form in already-cloudy regions. These small-scale processes hadn’t been understood much before because they are often obscured by larger weather patterns.
Hernández Deckers has been studying one of the processes, called entrainment — the turbulent mixing of air at the edges of clouds. Most climate models represent clouds as a steady plume of rising air, but in reality “clouds are like a cauliflower,” he says. “You have a lot of turbulence, and you have these bubbles [of air] inside the clouds.” This mixing at the edges affects how clouds evolve and thunderstorms develop; it can weaken or strengthen storms in various ways, but, like radiative cooling, it encourages more clouds to form as a clump in regions that are already moist.
Such processes are likely to be most important in storms in Earth’s tropical regions, where there’s the most uncertainty about future rainfall. (That’s why Hernández Deckers, Muller and others tend to focus their studies there.) The tropics lack the cold fronts, jet streams and spiraling high- and low-pressure systems that dominate air flows at higher latitudes.

There are other microscopic processes happening inside clouds that affect extreme rainfall, especially on shorter timescales. Moisture matters: Condensed droplets falling through moist, cloudy air don’t evaporate as much on their descent, so more water falls to the ground. Temperature matters too: When clouds form in warmer atmospheres, they produce less snow and more rain. Since raindrops fall faster than snowflakes, they evaporate less on their descent — producing, once again, more rain.
These factors also help explain why more rain can get squeezed from a cloud than the 7 percent rise per degree of warming predicted by the 200-year-old theory. “Essentially you get an extra kick … in our simulations, it was almost a doubling,” says Martin Singh, a climate scientist at Monash University in Melbourne, Australia.
Cloud clustering adds to this effect by holding warm, moist air together, so more rain droplets fall. One study by Muller and her collaborators found that clumping clouds intensify short-duration rainfall extremes by 30 to 70 percent, largely because raindrops evaporate less inside sodden clouds.
Other research, including a study led by Jiawei Bao, a postdoctoral researcher in Muller’s group, has likewise found that the microphysical processes going on inside clouds have a strong influence over fast, heavy downpours. These sudden downpours are intensifying much faster with climate change than protracted deluges, and often cause flash flooding.
Scientists who study the clumping of clouds want to know how that behavior will change as the planet heats up — and what that will mean for incidences of heavy rainfall and flooding.
Some models suggest that clouds (and the convection that gives rise to them) will clump together more with global warming — and produce more rainfall extremes that often far exceed what theory predicts. But other simulations suggest that clouds will congregate less. “There seems to be still possibly a range of answers,” says Allison Wing, a climate scientist at Florida State University in Tallahassee who has compared various models.

Scientists are beginning to try to reconcile some of these inconsistencies using powerful types of computer simulations called global storm-resolving models. These can capture the fine structures of clouds, thunderstorms and cyclones while also simulating the global climate. They bring a 50-fold leap in realism beyond the global climate models scientists generally use — but demand 30,000 times more computational power.
Using one such model in a paper published in 2024, Bao, Muller and their collaborators found that clouds in the tropics congregated more as temperatures increased — leading to less frequent storms but ones that were larger, lasted longer and, over the course of a day, dumped more rain than expected from theory.
But that work relied on just one model and simulated conditions from around one future timepoint — the year 2070. Scientists need to run longer simulations using more storm-resolving models, Bao says, but very few research teams can afford to run them. They are so computationally intensive that they are typically run at large centralized hubs, and scientists occasionally host “hackathons” to crunch through and share data.
Researchers also need more real-world observations to get at some of the biggest unknowns about clouds. Although a flurry of recent studies using satellite data linked the clustering of clouds to heavier rainfall in the tropics, there are large data gaps in many tropical regions. This weakens climate projections and leaves many countries ill-prepared. In June of 2025, floods and landslides in Venezuela and Colombia swept away buildings and killed at least a dozen people, but scientists don’t know what factors worsened these storms because the data are so paltry. “Nobody really knows, still, what triggered this,” Hernández Deckers says.
New, granular data are on their way. Wing is analyzing rainfall measurements from a German research vessel that traversed the tropical Atlantic Ocean for six weeks in 2024. The ship’s radar mapped clusters of convection associated with the storms it passed through, so the work should help researchers see how clouds organize over vast tracts of the ocean.
And an even more global view is on the horizon. The European Space Agency plans to launch two satellites in 2029 that will measure, among other things, near-surface winds that ruffle Earth’s oceans and skim mountaintops. Perhaps, scientists hope, the data these satellites beam back will finally provide a better grasp of clumping clouds and the heaviest rains that fall from them.
Research and interviews for this article were partly supported through a journalism residency funded by the Institute of Science & Technology Austria (ISTA). ISTA had no input into the story.
This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.
]]>We’ve taken it to a nature reserve, photographed birds from our window and zoomed in on the moon to assess its performance in all light conditions, with both static and moving subjects, emulating real-world shooting to test its mettle.

There’s no beating around the bush here — this lens is big, and it’s heavy. Weighing about 4.5 lbs (just over 2 kilograms), it makes itself known both in your camera bag and out in the field. It grew tiring to hold after a while, even when resting in a hide, but it feels solid and well-built and is dust- and weather-resistant, although we never got caught in the rain to fully test this.
We found it frustrating that it didn’t have a zoom lock: the lens crept an annoying amount whenever we held it vertically, which meant we couldn’t carry the camera around our neck (as if its weight didn’t already see to that). The zoom ring was also a little on the stiff side, and, to be picky, the lens looked quite ugly when zoomed all the way in on a subject.

Focal length: 200-800 mm
Maximum aperture: f/6.3-9
Weight: 4.5 pounds (2.05 kg)
Image stabilization: 5.5 stops
Filter thread: 95 mm
Dimensions (in): ⌀4.03 x 12.37
Dimensions (mm): ⌀102.3 x 314.1
In addition, it has a control ring, an AF/MF switch, an image-stabilizer switch and two custom buttons, although we found the buttons hard to press, as they aren’t within easy reach while you’re supporting the lens’s hefty weight. Taking a hand away to press either of them threw the entire weight distribution off.
It has a nice big lens hood, although we’d have liked it to include a filter door so we could adjust a polarizer, particularly when photographing waterfowl.

For wildlife photography in generally favorable conditions, this lens performed very well overall. Its obvious drawback is the slow maximum aperture — f/6.3 was just fine during the daytime, but as light levels fell at dusk, or when we moved into heavily wooded areas, we had to push the ISO higher than we’d have liked.
Luckily, we were shooting with the Canon EOS R6 II, which has excellent noise handling, so we were able to save a lot of our images. But if you often shoot at dawn or dusk, we’d recommend investing in a telephoto lens with a wider maximum aperture so you won’t need to rely on denoise software.
The autofocus also generally performed very well, although at longer focal lengths it’s at the mercy of how steady your hand is, and it suffered in harsh conditions or when distractions or foliage sat in front of our subject.
Overall, though, its performance is very good for the price. Images are sharp and it captures color very nicely — certainly more than well enough for wildlife or moon photography.

As much as it suffers from a fairly slow maximum aperture, the 200-800mm focal length offers versatility that many other lenses don’t. There’s a Sony super-telephoto with a 400-800mm range, but you’d be stuck if a subject came too close to you — with the Canon, you’d be able to zoom out easily. We never found ourselves wishing we had multiple lenses, as the 200-800mm can cover a lot of subjects, near or far.
Plus, although it doesn’t have the close focusing capabilities of a true macro lens, it can focus as close as 2.6 feet (0.8 meters) at 200mm, which is great for photographing butterflies and insects at a fairly close range.
The 5.5 stops of image stabilization were a lifesaver, and pretty much crucial at such a long focal length. Even with stabilization, we still struggled on occasion to follow subjects at the full 800mm; without it, we’d have had no chance.
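To put those 5.5 stops in perspective, here's a rough calculation using the old 1/focal-length handholding rule of thumb — an approximation of ours, not a manufacturer specification:

```python
# Rough illustration of what 5.5 stops of stabilization buys you.
# Assumptions (ours, not the review's): the classic rule of thumb that
# the slowest safe handheld shutter speed is about 1/focal-length
# seconds, and that each stop doubles the usable exposure time.
focal_length_mm = 800
stops = 5.5

unstabilized = 1 / focal_length_mm          # ~1/800 s without stabilization
stabilized = unstabilized * 2 ** stops      # each stop doubles shutter time

print(f"Unstabilized: ~1/{focal_length_mm} s")
print(f"Stabilized:   ~1/{round(1 / stabilized)} s")  # roughly 1/18 s
```

In other words, stabilization notionally lets you hand-hold exposures around 45 times longer at 800mm, which matches our experience that unassisted framing alone was a struggle.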
Overall, this lens provides excellent value for money. You get a lot of lens for the price, and although it’s not a low-light champion, it still produces beautifully sharp, contrasty images, while the versatility of the focal length is hard to beat.
Considering the very best wildlife lenses are telephoto primes costing upwards of $10,000, it’s one of the best you can buy for most wildlife photographers — that is, for anyone who’s not a serious pro.
If you don't need 800mm
Another great wildlife lens with a little less reach, but a little more aperture. This lens would be better in low light if you don't need a huge zoom.
If you'd prefer a prime lens
This 800mm prime lens is perfect for bird photography or capturing distant animals on a budget — but the f/11 aperture means good lighting is essential.
If you're a professional
If you're a pro photographer and have serious cash to spend, this 400mm prime lens with an f/2.8 aperture will see you through any light conditions.
Scientists are developing a real-life tractor beam, dubbed an electrostatic tractor. This tractor beam wouldn't suck in helpless starship pilots, however. Instead, it would use electrostatic attraction to nudge hazardous space junk safely out of Earth orbit.
The stakes are high: With the commercial space industry booming, the number of satellites in Earth's orbit is forecast to rise sharply. These new satellites will eventually wear out, turning the space around Earth into a giant junkyard of debris that could smash into working spacecraft, plummet to Earth, pollute our atmosphere with metals and obscure our view of the cosmos. And if left unchecked, the growing space junk problem could hobble the space exploration industry, experts warn.
The electrostatic tractor beam could potentially alleviate that problem by safely moving dead satellites far out of Earth orbit, where they would drift harmlessly for eternity.
While the tractor beam wouldn't completely solve the space junk problem, the concept has several advantages over other proposed space debris removal methods, which could make it a valuable tool for tackling the issue, experts told Live Science.
A prototype could cost millions, and an operational, full-scale version even more. But if the financial hurdles can be overcome, the tractor beam could be operational within a decade, its builders say.
"The science is pretty much there, but the funding is not," project researcher Kaylee Champion, a doctoral student in the Department of Aerospace Engineering Sciences at the University of Colorado Boulder (CU Boulder), told Live Science.

The tractor beams depicted in "Star Wars" and "Star Trek" suck up spacecraft via artificial gravity or an ambiguous "energy field." Such technology is likely beyond anything humans will ever achieve. But the concept inspired Hanspeter Schaub, an aerospace engineering professor at CU Boulder, to conceptualize a more realistic version.
Schaub first got the idea after the first major satellite collision in 2009, when an active communications satellite, Iridium 33, smashed into a defunct Russian military spacecraft, Kosmos 2251, scattering more than 1,800 pieces of debris into Earth's orbit.

In the wake of this disaster, Schaub wanted a way to prevent such collisions from happening again. He realized that spacecraft could be pulled out of harm's way using the attraction between positively and negatively charged objects, which makes them "stick" together.
Over the next decade, Schaub and colleagues refined the concept. Now, they hope it can someday be used to move dead satellites out of geostationary orbit (GEO) — an orbit around Earth's equator where an object's speed matches the planet's rotation, making it seem like the object is fixed in place above a certain point on Earth. This would then free up space for other objects in GEO, which is considered "prime real estate" for satellites, Schaub said.

The electrostatic tractor would use a servicer spacecraft equipped with an electron gun that would fire negatively charged electrons at a dead target satellite, Champion told Live Science. The electrons would give the target a negative charge while leaving the servicer with a positive charge. The electrostatic attraction between the two would keep them locked together despite being separated by 65 to 100 feet (20 to 30 meters) of empty space, she said.
Once the servicer and target are "stuck together," the servicer would be able to pull the target out of orbit without touching it. Ideally, the defunct satellite would be pulled into a "graveyard orbit" more distant from Earth, where it could safely drift forever, Champion said.
The electrostatic attraction between the two spacecraft would be extremely weak, due to limitations in electron gun technology and the separation the two craft would need to maintain to prevent collisions, project researcher Julian Hammerl, a doctoral student at CU Boulder, told Live Science. So the servicer would have to move very slowly, and it could take more than a month to fully move a single satellite out of GEO, he added.
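Just how weak is easy to estimate with Coulomb's law. In the back-of-the-envelope sketch below, the sphere size, charging voltage and separation are our own illustrative assumptions, not figures from the CU Boulder team:

```python
# Back-of-the-envelope Coulomb force between the two spacecraft.
# All numbers are illustrative assumptions: each craft is modeled as a
# 1-meter conducting sphere charged to +/-20 kV, separated by 25 m.
import math

EPS0 = 8.854e-12                  # vacuum permittivity, F/m
K = 1 / (4 * math.pi * EPS0)      # Coulomb constant, ~8.99e9 N m^2/C^2

radius = 1.0                      # sphere radius, m (assumed)
potential = 20e3                  # surface potential, V (assumed)
separation = 25.0                 # distance between craft, m

capacitance = 4 * math.pi * EPS0 * radius   # isolated sphere: C = 4*pi*eps0*r
charge = capacitance * potential            # q = C * V, ~2.2e-6 coulombs

force = K * charge**2 / separation**2       # point-charge approximation
print(f"Attractive force: {force * 1e6:.0f} micronewtons")  # ~70 uN
```

That works out to tens of micronewtons, roughly the weight of a few grains of sand, which is why nudging a bus-sized satellite would take weeks rather than minutes.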
That's a far cry from movie tractor beams, which are inescapable and rapidly reel in their prey. This is the "main difference between sci-fi and reality," Hammerl said.

The electrostatic tractor would have one big advantage over other proposed space junk removal methods, such as harpoons, giant nets and physical docking systems: It would be completely touchless.
"You have these large, dead spacecraft about the size of a school bus rotating really fast," Hammerl said. "If you shoot a harpoon, use a big net or try to dock with them, then the physical contact can damage the spacecraft and then you are only making the [space junk] problem worse."
Scientists have proposed other touchless methods, such as using powerful magnets, but enormous magnets are both expensive to produce and would likely interfere with a servicer's controls, Champion said.
The main limitation of the electrostatic tractor is how slowly it would work. More than 550 satellites currently orbit Earth in GEO, but that number is expected to rise sharply in the coming decades.
If satellites were moved one at a time, a single electrostatic tractor couldn't keep pace with the number of satellites winking out of operation. The tractor would also be too slow to be practical for clearing smaller pieces of space junk, so it couldn't keep GEO completely free of debris.
Cost is the other big obstacle. The team has not yet done a full cost analysis for the electrostatic tractor, Schaub said, but it would likely cost tens of millions of dollars. However, once the servicer were in space, it would be relatively cost-effective to operate it, he added.

The researchers are currently working on a series of experiments in their Electrostatic Charging Laboratory for Interactions between Plasma and Spacecraft (ECLIPS) machine at CU Boulder. The bathtub-sized, metallic vacuum chamber, which is equipped with an electron gun, allows the team to "do unique experiments that almost no one else can currently do" in order to simulate the effects of an electrostatic tractor on a smaller scale, Hammerl said.
Once the team is ready, the final and most challenging hurdle will be to secure funding for the first mission, a process they have not yet started.
Most of the mission cost would come from building and launching the servicer. However, the researchers would ideally like to launch two satellites for the first tests, a servicer and a target that they can maneuver, which would give them more control over their experiments but also double the cost.
If they can somehow wrangle that funding, a prototype tractor beam could be operational in around 10 years, the team previously estimated.

While tractor beams may sound like a pipe dream, experts are optimistic about the technology.
"Their technology is still in the infancy stage," John Crassidis, an aerospace scientist at the University at Buffalo in New York, who is not involved in the research, told Live Science in an email. "But I am fairly confident it will work."
Removing space junk without touching it would also be much safer than any current alternative method, Crassidis added.
The electrostatic tractor "should be able to produce the forces necessary to move a defunct satellite" and "certainly has a high potential to work in practice," Carolin Frueh, an associate professor of aeronautics and astronautics at Purdue University in Indiana, told Live Science in an email. "But there are still several engineering challenges to be solved along the way to make it real-world-ready."
Scientists should continue to research other possible solutions, Crassidis said. Even if the CU Boulder team doesn't create a "final product" to remove nonfunctional satellites, their research will provide a stepping stone for other scientists, he added.
If they are successful, it wouldn't be the first time scientists turned fiction into fact.
"What is today's science fiction could be tomorrow's reality," Crassidis said.
This is the vision for the Autonomous Closed-Loop Intervention System (ACIS), a device being developed by scientists at NTT Research, an arm of global technology company NTT. The device has been tested in animal experiments but not in human patients yet.
The researchers' eventual goal is to allow the heart to rest and minimize its oxygen use in that critical recovery window after a patient experiences a cardiac emergency. The jobs that would be handled by ACIS are usually done by medical providers — but the idea is that the device could standardize and optimize the process to deliver better outcomes while relieving strain on doctors' already-limited resources.
"We think that this system will outperform the standard of care," said Dr. Joe Alexander, director of NTT Research's Medical and Health Informatics (MEI) lab.
ACIS stemmed from a larger effort spearheaded by the MEI Lab known as the Bio Digital Twin program. Its aim is to construct advanced virtual models of organ systems that can be personalized with an individual patient's data, providing a detailed and dynamic representation of their medical status and a testable model for developing treatment plans.
Live Science spoke with Alexander about Digital Twins, ACIS and his vision for how they might transform health care.
Nicoletta Lanese: When we're talking about a Bio Digital Twin, is it fair to say it's a virtual copy of the patient?
Dr. Joe Alexander: Probably the layperson would think of a Bio Digital Twin as a copy of the person. But actually, it's just a system of equations, modeling and simulation to represent a person to the extent that is relevant for the disease. It's a very specific application, so there's no single Bio Digital Twin representing the [whole] person.
In our case, although we set out to build a family of Bio Digital Twins to represent different organ systems for different types of important diseases, we're starting with the cardiovascular system. So when I talk about a Cardiovascular Bio Digital Twin, I'm not talking about even a copy of the heart; I'm talking about a mathematical representation of all of the systems necessary for looking at the cardiovascular system in a particular patient.
In the case of ACIS, we're looking at acute heart failure and acute myocardial infarction [colloquially known as a heart attack].

NL: Could you talk about what kind of data goes into the model?
JA: This Cardiovascular Bio Digital Twin is representing pressures and flows throughout the cardiovascular system, including pressures and flows generated by all four chambers of the heart. … We are able to represent the cardiovascular system dynamics in pressures, flows and volumes.
NL: And how do you make that actionable for an individual patient?
JA: We're in the early stages of it, but we have a road map for how to do it. Basically, we first go after representing the "normal" cardiovascular system for patients. So, if we can get data around "normal," then that's very good. [Editor's note: The MEI Lab is working with partners such as the National Cerebral and Cardiovascular Center in Japan to get access to this kind of data.]
But probably what's most important is finding populations that are relevant to the particular patient — so, in this case, patients with cardiovascular disease or patients with heart failure. So we go after that population-level data; let's say for heart failure. Then, from that data, we can estimate parameters for our cardiovascular model that represent the general population of patients with heart failure.
Within that population, as you know, there's a lot of variability. So are there other characteristics specific to our patient that we can use? Maybe results from an echocardiogram [an ultrasound scan of the heart]; maybe age; maybe comorbidities [other medical conditions]; sex, male or female; or environment. And if there is genetic information available, then we can find a subpopulation that's even more relevant to the patient.
Now, with ACIS, we [would] actually hook up a patient to the "first guess" of our Cardiovascular Bio Digital Twin for what would match that patient based on population-level data. Since it's a feedback control system, the feedback will automatically adjust the parameter values to deliver the necessary drugs or device therapies that that particular patient needs for some prespecified cardiac output. In that way, we can further fine-tune the Digital Twin for that patient.
NL: Can you describe how ACIS and its feedback loop work?
JA: The idea is that it's a "self-driving" therapeutic, just like a self-driving car. But in this case, "self-driving" is delivering the appropriate drugs or, in severe cases, medical-device therapies that a patient may need.
We have a system where we specify — just type in the keyboard — the desired cardiac output, heart rate, left atrial pressure, arterial pressure that we want the patient to achieve. Then, syringes that are filled with the appropriate drugs to create those changes are driven by our model, or "best guess" for that particular patient. This is all after a patient has had the primary lesion [like a blood vessel blockage] treated in the cath lab.
Let's say they had a vessel that was occluded; it's already been opened up or a stent has been placed, and they go to the ICU [intensive care unit] or CCU [coronary care unit] in order to recover. Recovery means that the heart needs an opportunity to rest. That means letting the heart work as little as possible to maintain the desired cardiac output.
We have a certain regimen of drugs that are given. Catecholamines improve the ability of the heart to contract. Nitrates reduce the afterload of the heart so it doesn't have to work against such a high load when it tries to eject into the arterial system. Diuretics decrease the circulating blood volume and remove blood from the lungs, which has built up due to the acute failure.
These drugs are typically given by a physician; they'll give one drug and look at the response, give another drug, the response, and manage that patient over several days. When our system achieves proper function — and we're almost there, I think — all those drugs can be given at once if we know how the system will respond. That saves us a lot of time in treating the patient.
The drugs are delivered by these autonomously controlled syringes; then the patient responds to them, and that response is fed back in this system. Those values are compared to the ones that we typed in the keyboard, and if there's a difference, then feedback systems work to reduce that difference. It also gives information to our Digital Twin for that patient, so that in the future, we have better representations of those resistors and capacitors in the model.
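For readers who want the gist of that loop, here is a toy sketch in Python. To be clear, this is our illustration of generic proportional feedback, not the MEI Lab's algorithm: the one-line "patient" dose-response and the gain are invented, and the real system adjusts multiple drugs against a full Digital Twin.

```python
# Toy closed-loop controller of the kind Alexander describes: compare a
# measured value to a typed-in target and nudge an infusion rate to
# shrink the difference. The "patient model" and gain are invented.

TARGET_CARDIAC_OUTPUT = 5.0   # L/min, the value a clinician would type in
GAIN = 0.4                    # proportional gain (illustrative)

def patient_response(infusion_rate: float) -> float:
    """Stand-in for the patient: cardiac output rises with the drug dose."""
    return 3.0 + 1.5 * infusion_rate   # L/min (invented dose-response)

infusion_rate = 0.0  # arbitrary units
for step in range(10):
    measured = patient_response(infusion_rate)   # feedback measurement
    error = TARGET_CARDIAC_OUTPUT - measured     # difference from target
    infusion_rate += GAIN * error                # act to reduce the difference
    print(f"step {step}: output {measured:.2f} L/min, rate {infusion_rate:.2f}")
```

Run it and the simulated cardiac output settles on the 5 L/min target within a few steps; in ACIS, each such correction would also refine the patient's Digital Twin.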

NL: What stage of development has ACIS reached at this point?
JA: So, in animal experiments in dogs, last year for the first time, we experimentally induced acute heart failure, and we were able to let this autonomous system correct the cardiac output and arterial pressure on its own, while minimizing myocardial [heart muscle] oxygen consumption.
Since that first successful experiment about a year ago, we've had several other successful [animal] experiments, all the while improving our feedback system to be more complex, making it so that it can operate based on intermittent data, so you don't have to be continuously sampling. You can do it episodically.
We have several more years of work in optimizing this system, we think, in animal experimentation — probably about three years more. And then we'll be ready for first-in-human studies where ACIS will be used but with a clinician in the loop [for the initial human tests]. What ACIS would do is tell the physician what doses of these various drugs to deliver, and the physician would then make a decision whether to do it or not, as a safety measure.
Now, what I've been describing so far has mostly been about drugs, but the same algorithms work for medical devices, such as left ventricular assist devices [LVAD, a type of mechanical pump] or extracorporeal membrane oxygenation devices [ECMO, which circulates the blood to let the heart and lungs rest]. This is all within the scope of what we expect to achieve in experimental animals within the next three years before going to first-in-human studies.
NL: What are the next steps toward getting ACIS approved? What might the trials look like?
JA: It would be kind of like [testing] an autonomous or self-driving vehicle, moving through levels 1 to 4, the degrees, or stages, of autonomy.
In other words, allowing the system to have increasing responsibility and watching the performance until settling into acceptance of an autonomous system where then, still, probably a specialist would monitor it — like someone sitting in the seat of a self-driving car, ready to take over if things go wrong. I see that kind of progression, similar to the self-driving vehicle.
NL: And in the long run, would ACIS always have some kind of clinician supervision?
JA: I still hold to the concept of "autonomous," but I suspect that there will be a cardiologist somewhere roaming around, monitoring, perhaps, a number of patients at once.
I'm very committed to the idea that the device that we conceive of can actually outperform the cardiologist. And I know that we'll rub some cardiologists the wrong way. But we expect to demonstrate that point, or strongly suggest that that's true, by doing experiments in animals where we compare the ACIS system to clinically trained cardiologists. We expect reduced infarct size [degree of heart tissue death] from ACIS compared to the standard of care from cardiologists.
NL: Assuming this device gets approved in the future, where do you see it having the most benefit?
JA: There's the so-called Quintuple Aim of Health Care, which says to improve the patient experience, improve the physician experience, improve population health, reduce the cost of care, and improve health equity. These aims, I think, are all addressed by ACIS.
The patient would have more attention and minute-to-minute care — you wouldn't have a resident trying to juggle many patients at once. You could have a less-specialized clinical caretaker who is watching the behavior of the device, and so that would improve not only the patient experience and quality of the patient's care but also the health care provider's experience. They wouldn't have to be overworked to such an extent.
We think that this system will outperform the standard of care because [on paper] you more rapidly converge on the minimization of myocardial oxygen consumption and have better recovery during the hospital stay. So the patients have fewer readmissions and complications after being released. There's always some injury to the heart [with these cardiac events], and there may be some infarction of the heart. So we think that this level of care could reduce infarct size during treatment, so you preserve more of the heart.
NL: And when you eventually hand off ACIS for clinical testing, what would the next project be?
JA: For us, the natural progression within the next 10 years, probably within the next five years, would be chronic heart failure. In chronic heart failure, you have to deal with more complexity, such as [tissue] remodeling, where the ventricles get thicker or get dilated. That kind of remodeling changes the mechanics.
You also have to deal with data from patients who are not in the hospital. We plan on building registries of patients [with Digital Twins] who would have been acutely ill to have access to that data for treating them outside. But then we have to also rely on things like wearable technologies, and we've been working on that as well. We have collaborations with folks at the Technical University of Munich who are developing special biosensors and biomaterials and implantable sensors and so forth that could help provide the data that would be important to doing predictive health maintenance in patients with chronic heart failure.
And in chronic heart failure, we have to deal with comorbidities and complications like kidney failure … and anemia. The combination of fluid overload and anemia all due to renal failure really makes the heart suffer from a lack of oxygen and causes slow deterioration.
I'm sure that complexity alone will keep me busy for the rest of my life. We have a lot of work to do with chronic heart failure; that would be next for sure.
Editor's note: This interview has been lightly edited for length and clarity.
The product, called Atlas Eon 100, will store humanity's "irreplaceable archives" for thousands of years, the company claims. These include family photos, scientific data, corporate records, cultural artifacts and the master versions of digital artworks, movies, manuscripts and music.
"This is the culmination of more than ten years of product development and innovation across multiple disciplines," Bill Banyai, Founder of Atlas Data Storage, said in a statement. “We intend to offer new solutions for long-term archiving, data preservation for AI models, and the safeguarding of heritage and high-value content."
Fundamentally, all digital data is just a series of 1s and 0s in a defined sequence. DNA is similar in that it is made up of defined sequences of the chemical bases adenine (A), cytosine (C), guanine (G) and thymine (T).
DNA data storage works by mapping the binary code to these bases; for example, an encoding scheme might assign A as 00, C as 01, G as 10, and T as 11. Artificial DNA can then be synthesized with the bases arranged in the corresponding order.
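As a concrete illustration, here is a minimal Python sketch of that two-bit scheme. The mapping is the generic example given above, not necessarily the encoding Atlas uses, and real systems also add error correction and avoid sequences that are difficult to synthesize:

```python
# Minimal sketch of a two-bit DNA encoding (A=00, C=01, G=10, T=11).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map every byte to four DNA bases, two bits per base."""
    bitstring = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bitstring[i:i + 2]]
                   for i in range(0, len(bitstring), 2))

def decode(strand: str) -> bytes:
    """Invert the mapping: four bases back to one byte."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)                  # CAGACGGC ('H' = 01001000 -> C, A, G, A)
assert decode(strand) == b"Hi"
```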
For Atlas Eon 100, the DNA is then dehydrated and stored as a powder in 0.7-inch-tall (1.8 cm) ruggedized steel capsules. It is rehydrated only when it needs to be sequenced and its bases translated back to binary.

Just one quart (about 1 liter) of the DNA solution can hold 60 petabytes of data — the equivalent of 10 billion songs or 12 million HD movies. This makes Atlas Eon 100, which was announced on Dec. 2, 1,000 times more storage-dense than magnetic tape.
For context, about 15,500 miles (25,000 km) of 0.5-inch-wide (12.7 mm) LTO-10 tape, a standard high-capacity storage medium, would be needed to hold that same amount of data.
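Those equivalences check out with round numbers; here's the quick arithmetic (ours, not Atlas's), taking 1 petabyte as 10^15 bytes:

```python
# Sanity-checking the quoted equivalences for a 60-petabyte quart.
capacity_bytes = 60e15                     # 60 PB, at 1e15 bytes per PB

per_song = capacity_bytes / 10e9           # spread across 10 billion songs
per_movie = capacity_bytes / 12e6          # or across 12 million HD movies

print(f"~{per_song / 1e6:.0f} MB per song")    # ~6 MB, a typical audio file
print(f"~{per_movie / 1e9:.0f} GB per movie")  # ~5 GB, a typical HD film
```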
This storage density will make transporting large quantities of data easier than it would be with typical hard drives or tape reels. DNA is also known to keep its form for centuries, making it a remarkably stable medium for preserving data over very long periods.
Atlas Data Storage says its product is stable in an office environment with 99.99999999999% reliability, and the capsules can endure temperatures as high as 104°F (40°C). Magnetic tape, on the other hand, decays in about a decade, even with temperature and humidity controls.
Optical media, such as CDs and DVDs, typically degrade within 30 years, while hard drives last about six or seven years before showing signs of deterioration. In less than three hours at 158°F (70°C), a flash memory cell can ‘age’ as much as it normally would in a month.
Atlas also argues that its DNA storage service offers an easier way to make backups of its customers’ data than other media do. Indeed, once one strand is encoded, enzymes can be used to make more than a billion copies in just a few hours.
According to Atlas, society generates 280 PB of data every minute. It presents its DNA data storage as a potential solution to the proliferation of digital data, which has been exacerbated massively by the generative artificial intelligence (AI) boom.
However, the biotech company faces a key scaling challenge: synthesizing encoded artificial DNA is still quite a long process compared with, say, saving a photo to an existing hard drive. Twist Bioscience, Atlas’s former parent company, from which it inherited its DNA synthesis process, currently quotes lead times of two to eight business days on gene and oligo (long and short DNA strands, respectively) orders.

Sequencing is notoriously expensive, too; it costs about $30 to read one gigabase of DNA, the equivalent of about 250 GB of data. It is also slow, with another recent DNA storage system reportedly taking 25 minutes to recover a single file. Nevertheless, Atlas Data Storage claims that modern DNA sequencers are “improving throughput and cutting costs 1,000× faster than Moore’s Law.”
That said, given the time required to synthesize and sequence DNA, the DNA Data Storage Alliance noted in 2025 that it does not expect DNA to be used for archival data storage at scale for another three to five years.
Thomas Heinis, a computer science professor at Imperial College London who researches DNA-based data storage, is skeptical, given how little concrete data Atlas has published about the performance of Atlas Eon 100. He pointed to the fact that Catalog DNA, which made similar promises about its Shannon storage solution, went bust a few months ago.
"I have no doubt that they have built an impressive device, but it’s difficult to appreciate without concrete information," he told Live Science, adding that the major challenge to commercialising DNA storage is synthesis, not sequencing.
"It sounds banal, but if the write/synthesis cost is not competitive, then there is no point in reading/sequencing cost efficiently. You cannot read (cheaply) what you cannot afford to write. Currently, synthesis is orders of magnitude too expensive while sequencing is closer to tape but still more expensive. Despite being a firm believer in DNA storage, a lot of technological progress is needed and I have not seen anyone with an economically viable solution yet."
Where is it? Los Glaciares National Park, Argentina [-50.469690266, -73.03391046]
What's in the photo? The point where a non-retreating glacier, a turquoise lake and a murky river meet
Who took the photo? An unnamed astronaut onboard the International Space Station (ISS)
When was it taken? March 2, 2021
This incredible astronaut photo shows the unusual point where a hefty non-retreating glacier, a pristine turquoise lake and a murky green "river" perfectly converge at the intersection of three valleys in Argentina.
The trio of hydrological features — the Perito Moreno Glacier, Lago Argentino and Brazo Rico — lie at the heart of Los Glaciares National Park, which covers an area of around 2,300 square miles (6,000 square kilometers) in the Santa Cruz province of southern Argentina, near the country's border with Chile.
The aerial photo doesn’t just show off these three aqueous entities in a single frame; if you look closely, it also reveals the point where the trio touch in a slim channel along the western edge of the Magallanes Peninsula — the rocky outcrop that lies between the lake and the river, according to NASA's Earth Observatory.
In this photo, the waters of Lago Argentino and Brazo Rico are likely in direct contact with each other. But their waters do not readily mix, because they have different densities due to their respective concentrations of suspended particulate matter, according to a 2022 study.
But every four to five years, the glacier's tongue juts forward, colliding with the Magallanes Peninsula and temporarily damming the Brazo Rico. When this happens, the surface of the murky body of water rises by up to 100 feet (30 meters) until a pressure build-up causes the icy dam to spectacularly "rupture," the Earth Observatory previously reported.

Perito Moreno is the largest glacier in Patagonia, which includes parts of Argentina and Chile. It is approximately 19 miles (30 km) long with ice up to 200 feet (60 m) thick. In total, the glacier holds roughly the same amount of water as 360,000 Olympic swimming pools, according to back-of-the-envelope calculations.
The glacier is "non-retreating," meaning that it is not shrinking despite rising atmospheric temperatures triggered by human-caused climate change. This is extremely rare nowadays, and Perito Moreno is frequently cited as one of the "world's last major non-retreating glaciers." However, a recent study hints that it may finally be starting to shrink.
Lago Argentino is the largest freshwater lake in Argentina, covering a total area of around 550 square miles (1,425 square km). The section visible in the astronaut photo is the lake's southernmost arm. It contains glacial meltwater filled with rocky particles released by the glaciers' constant movements, collectively known as "glacier milk," which gives the water its striking turquoise color.
The lake's northernmost arm also connects to the Upsala Glacier, which is currently in full retreat.

Brazo Rico, meaning "rich arm" in Spanish, is also technically part of Lago Argentino. However, it has become increasingly isolated from the rest of the lake due to repeated damming by the Perito Moreno glacier, making it behave more like a river than part of a lake.
The frequent icy obstruction is also responsible for Brazo Rico's murky color, which is the result of sediment dislodged by the glacier's movements. The repeated rising and falling of the river's surface has also carved out a border around its edges where no trees can grow.
Eagle-eyed viewers may have also spotted the narrow road winding across the Magallanes Peninsula and along the Brazo Rico's northern edge (just above the tree line): One can only imagine the extraordinary views you'd get to experience driving along there.
In recent months, orcas (Orcinus orca) have also been spotted abducting baby pilot whales and tearing open sharks to feast on their livers. And off the coast of Spain and Portugal, a small population of orcas has begun ramming and sinking boats.
All of these incidents show just how clever these apex predators are.
"These are animals with an incredibly complex and highly evolved brain," Deborah Giles, an orca researcher at the University of Washington and the nonprofit Wild Orca, told Live Science. "They've got parts of their brain that are associated with memory and emotion that are significantly more developed than even in the human brain."
But the scale and novelty of recent attacks have raised a question: Are orcas getting smarter? And if so, what's driving this shift?
It's not likely that orcas' brains are changing on an anatomical level, said Josh McInnes, a marine ecologist who studies orcas at the University of British Columbia. "Behavioral change can influence anatomical change in an animal or a population" — but only over thousands of years of evolution, McInnes told Live Science.
But orcas are fast learners, which means they can and do teach each other some terrifying tricks, and thus become "smarter" as a group. Still, some of these seemingly new tricks may in fact be age-old behaviors that humans are only documenting now. And just like in humans, some of these learned behaviors become trends, ebbing and flowing in social waves.
Frequent interactions with humans through boat traffic and fishing activities may also drive orcas to learn new behaviors. And the more their environment shifts, the faster orcas must respond and rely on social learning to persist.

There's no question that orcas learn from each other. Many of the skills these animals teach and share relate to their role as highly evolved apex predators.
A study published last year described orcas killing and eating blue whales (Balaenoptera musculus) for the first time. In the months and years that followed the first attack, in March 2019, orcas preyed on a blue whale calf and a juvenile in two additional incidents, pushing the young whales below the surface to suffocate them.
This newly documented hunting behavior is an example of social learning, with strategies being shared and passed on from adult orcas to their young, Robert Pitman, a marine ecologist at Oregon State University's Marine Mammal Institute, told Live Science in an email. "Anything the adults learn will be passed along" from the dominant female in a pod to her offspring, he said.
Taking down a blue whale "requires cooperation and coordination," Pitman said. Orcas may have learned and refined the skills needed to tackle such enormous prey in response to the recovery of whale populations from whaling. This know-how was then passed on, until the orcas became highly skilled at hunting even the largest animal on Earth, Pitman said.

Some of the gory behaviors researchers have observed recently may actually be long-standing habits.
For instance, during the blue whale attacks, observers noted that the orcas inserted their heads inside live whales' mouths to feed on their tongues. But this is probably not a new behavior — just a case of humans finally seeing it up close.
"Killer whales are like humans in that they have their 'preferred cuts of meat,'" Pitman said. "When preying on large whales, they almost always take the tongue first, and sometimes that is all they will feed on."
Tongue is not the only delicacy orcas seek out. Off the coast of South Africa, two males — nicknamed Port and Starboard — have, for several years, been killing sharks to extract their livers.
Although the behavior surprised researchers at first, it's unlikely that orcas picked up liver-eating recently due to social learning, Michael Weiss, a behavioral ecologist and research director at the Center for Whale Research in Washington state, told Live Science.
That's because, this year, scientists also captured footage of orcas slurping down the liver of a whale shark off the coast of Baja California, Mexico. The likelihood that Port and Starboard transferred their know-how across thousands of miles of ocean is vanishingly small, meaning liver-eating is probably a widespread and established behavior.
"Because there are more cameras and more boats, we're starting to see these behaviors that we hadn't seen before," Weiss said.

Orcas master and share more than hunting secrets. Several populations worldwide have learned to poach fish caught for human consumption from the longlines used in commercial fisheries and have passed on this information.
In the southern Indian Ocean, around the Crozet Islands, two orca populations have increasingly scavenged off longlines since fishing in the region expanded in the 1990s. By 2018, the entire population of orcas in these waters had taught one another to feast on longline buffets, with whole groups that previously foraged on seals and penguins developing a taste for human-caught toothfish.
Sometimes, orcas' ability to quickly learn new behaviors can have fatal consequences. In Alaska, orcas recently started dining on groundfish caught by bottom trawlers, but many end up entangled and dead in fishing gear.
"This behavior may be being shared between individuals, and that's maybe why we're seeing an increase in some of these mortality events," McInnes said.

Orcas' impressive cognitive abilities also extend to playtime.
Giles and her colleagues study an endangered population of salmon-eating orcas off the North Pacific coast. Called the Southern Resident population, these killer whales don't eat mammals. But over the past 60 years, they have developed a unique game in which they seek out young porpoises, with the umbilical cords sometimes still attached, and play with them to death.
There are 78 recorded incidents of these orcas tossing porpoises to one another like a ball, but not a single documented case of them eating the small mammals, Giles said. "In some cases, you'll see teeth marks where the [killer] whale was clearly gently holding the animal, but the animal was trying to swim away, so it's scraping the skin."
The researchers think these games could be a lesson for young orcas on how to hunt salmon, which are roughly the same size as baby porpoises. "Sometimes they'll let the porpoise swim off, pause, and then go after it," Giles said.

Humans may indirectly be driving orcas to become smarter, by changing ocean conditions, McInnes said. Orca raids on longline and trawl fisheries show, for example, that they innovate and learn new tricks in response to human presence in the sea.
Human-caused climate change may also force orcas to rely more heavily on one another for learning.
In Antarctica, for instance, a population of orcas typically preys on Weddell seals (Leptonychotes weddellii) by washing them off ice floes. But as the ice melts, they are adapting their hunting techniques to catch leopard seals (Hydrurga leptonyx) and crabeater seals (Lobodon carcinophaga) — two species that don't rely on ice floes as much and are "a little bit more feisty," requiring orcas to develop new skills, McInnes said.
While human behaviors can catalyze new learning in orcas, in some cases we have also damaged the bonds that underpin social learning. Overfishing of salmon off the coast of Washington, for example, has dissolved the social glue that keeps orca populations together.
"Their social bonds get weaker because you can't be in a big partying killer-whale group if you're all hungry and trying to search for food," Weiss said. As orca groups splinter and shrink, so does the chance to learn from one another and adapt to their rapidly changing ecosystem, Weiss said.
And while orcas probably don't know that humans are to blame for changes in their ocean habitat, they are "acutely aware that humans are there," McInnes said.
Luckily for us, he added, orcas don't seem interested in training their deadly skills on us.
The initial trials will focus on assessing the safety of the vaccine, which was originally developed with funding from the U.S. Department of Defense. The shot was previously tested in rats and showed promising results. Now, it has been licensed by the startup ARMR Sciences, which will begin enrolling patients for Phase I clinical trials in the Netherlands in early 2026, starting in either January or February.
"Our goal as a company is to eliminate the lethality of the drug supply," said Colin Gage, co-founder and CEO of ARMR. "We want to go about doing that by attacking the root cause of not only addiction, but also, obviously, overdose."
The vaccine works by keeping fentanyl out of the brain, which it does by making the molecule a target of the immune system.
Fentanyl is a synthetic opioid with effects 50 times stronger than heroin. Opioids, also called narcotics, broadly work by binding to opioid receptors in the brain and spinal cord, triggering changes in nerve cell signaling that prevent pain and can create a euphoric high.
But these opioid receptors are also found in the part of the brain that controls breathing, so fentanyl can also reduce respiration to a deadly degree if used in excess. A 2-milligram dose of fentanyl — similar in volume to about a dozen grains of salt — can be fatal, according to the Drug Enforcement Administration (DEA).
If a person overdosing on fentanyl is treated with naloxone (better known by the brand name Narcan) quickly enough, these effects can be reversed. This antidote also binds to opioid receptors, thus blocking the effects of fentanyl.
ARMR's vaccine takes a different approach: It works in the circulatory system, before the drug can reach the brain.
"This would be the first-ever treatment that does not work on the [opioid] receptor," Gage told Live Science.
To keep fentanyl from reaching the brain, the immune system must first recognize the drug. But fentanyl is a tiny molecule, not a pathogen like a virus, and immune cells don't naturally react to its presence.
To spur an immune response to fentanyl, the University of Houston's Colin Haile, an ARMR co-founder and scientific adviser, and his colleagues had to tie the opioid to something else.
They chose a deactivated diphtheria toxin called CRM197, a compound already used in vaccines on the market; once deactivated, the toxin is no longer toxic and instead helps rouse an immune response. To boost this immune response even further, they also added dmLT, a compound derived from a toxin produced by the Escherichia coli bacterium. This modified compound is not toxic itself, and it has also been tested in humans in trials of other, not-yet-approved vaccines.
These two components are attached to a synthetic piece of the fentanyl molecule, which in and of itself cannot cause a high or pain relief.
When the immune system meets this combo of fentanyl fragments, CRM197 and dmLT, it builds antibodies that react to real fentanyl. These antibodies bind to the opioid, keeping it from crossing the brain's protective membrane — the blood-brain barrier — and then clearing it from the body.
In rat studies, the vaccine blocked fentanyl from entering the rodents' brains and also kept the drug from depressing respiration and causing overdose.
So far, the studies of the vaccine have been in rodents, though dmLT has been tested in humans to some extent and CRM197 is already used in approved vaccines. The protocol in rats is to give an initial dose of the fentanyl vaccine and then boosters three and six weeks after the first dose, Haile told Live Science.
"The longest we've followed the animals in our studies is about six months and we saw complete blockade of fentanyl effects at six months post the initial vaccination," Haile said. It remains to be seen how that will translate to "human years," he noted, but lab rats live a couple of years in total, so the researchers think the vaccine will work for a long time in humans.
The initial human trials that will begin in early 2026 will enroll 40 people and will focus on detecting any safety issues with the vaccine, such as unwanted or dangerous side effects. Researchers will also draw blood samples from participants to make sure the vaccine is spurring the creation of anti-fentanyl antibodies.
If these Phase I trials are successful, the next step will be Phase II trials to test the vaccine's efficacy — how well the vaccine blocks fentanyl's effects. In these trials, not only will antibody levels be tracked over time, but some participants will also be dosed with safe levels of fentanyl used for pain relief in medical procedures. This will be done under close supervision, to check that the vaccine works in the presence of the drug.

Fentanyl has legitimate medical uses as a painkiller, especially in emergency situations. One concern about the vaccine is that people who take it will lose this option for pain relief.
However, the antibodies created by vaccination do not bind to other opioids — such as morphine, oxycodone or methadone — or to other pain-relief options, Haile said. That means there are alternatives if people who get the vaccine need pain relief down the line.
The vaccine also does not interfere with buprenorphine, a drug used to treat opioid use disorder by reducing withdrawal symptoms and cravings. Haile said he and his team are currently testing the vaccine in combination with naltrexone, a non-opioid medication also used to block the effects of opioids in the treatment of substance use disorders.
In theory, it might be possible to take enough fentanyl to override the body's supply of anti-fentanyl antibodies, Haile said. However, given that the vaccine blocks fentanyl's euphoric effects, he expects people who want to quit will not be motivated to try to work around it.
"We want people who want to quit, want to not use the drug," he said. "That will give them a chance to realize that they won’t get high from this drug and there is no use in taking it any longer."
Gage suggested that one market for the vaccine could be first responders concerned about accidental fentanyl exposure. (That concern has risen in recent years with the spread of misinformation about fentanyl.)
For clarity: if fentanyl gets on your skin via casual exposure — for example, if you touch an object that's been exposed to the drug — it will not be absorbed through the skin. Meaningful absorption through the skin requires direct contact with the drug over hours or days. That said, if an EMT or police officer gets the drug on their hands and then touches their mouth or eyes, they could feel some of the drug's analgesic, or pain-relieving, effects, Haile said.
The vaccine could also be "an extra tool in the toolset" for people with opioid use disorder, Gage said. Combining the vaccine with "robust" cognitive behavioral therapy, a type of talk therapy, and communal support could be "incredibly beneficial to people who are just looking for another lifeline to help themselves get better," he said.
Finally, the vaccine could be beneficial for people who use less-deadly drugs — such as cocaine, stimulants or painkillers — that they buy on the black market. That's because these drugs are increasingly cut with fentanyl, meaning people may overdose without even knowing they are taking the opioid.
"I had two close childhood friends who passed away from fentanyl overdose," Gage said. "Neither of them were seeking it out."
Over 48,000 people are estimated to have died of opioid overdoses in 2024 in the U.S., according to provisional data. Perhaps due to this high death toll, early research suggests that people with personal experience with opioid use disorder and the general public alike view a possible anti-fentanyl vaccine positively. Time will tell how the new vaccine will perform in human trials, but if eventually approved, it could be a first-of-its-kind tool against overdose deaths.
This article is for informational purposes only and is not meant to offer medical advice.
The idea, dubbed "Project Suncatcher" and outlined in a study uploaded Nov. 22 to the preprint arXiv database, explores whether future AI workloads could be run on constellations of satellites equipped with specialized accelerators and powered primarily by solar energy.
In certain low Earth or sun-synchronous orbits, the argument goes, solar panels can operate for much of the time, avoiding many of the night-day cycles, atmospheric losses and grid constraints that limit terrestrial data centers. Heat, meanwhile, would be rejected into space via radiative cooling rather than relying on water-intensive cooling systems on Earth.
The push to look beyond Earth for AI infrastructure isn’t coming out of nowhere. Data centers already consume a non-trivial slice of the world’s power supply: recent estimates put global data-center electricity use at roughly 415 terawatt-hours in 2024, or about 1.5% of total global electricity consumption, with projections suggesting this could more than double by 2030 as AI workloads surge.
Utilities in the U.S. are already planning for data centers, driven largely by AI workloads, to account for between 6.7% and 12% of total electricity demand in some regions by 2028, prompting some executives to warn that there simply “isn’t enough energy on the grid” to support unchecked AI growth without significant new generation capacity.
In that context, proposals like space-based data centers start to read less like sci-fi indulgence and more like a symptom of an industry confronting the physical limits of Earth-bound energy and cooling. On paper, space-based data centers sound like an elegant solution. In practice, some experts are unconvinced.
Joe Morgan, COO of data center infrastructure firm Patmos, is blunt about the near-term prospects. "What won’t happen in 2026 is the whole ‘data centers in space’ thing," he told Live Science. "One of the tech billionaires might actually get close to doing it, but aside from bragging rights, why?"
Morgan points out that the industry has repeatedly flirted with extreme cooling concepts, from mineral-oil immersion to subsea facilities, only to abandon them once operational realities bite. "There is still hype about building data centers under the ocean, but any thermal benefits are far outweighed by the problem of replacing components," he said, noting that hardware churn is fundamental to modern computing.
That churn is central to the skepticism around orbital AI. GPUs and specialized accelerators depreciate quickly as new architectures deliver step-change improvements every few years. On Earth, racks can be swapped, boards replaced and systems upgraded continuously. In orbit, every repair requires launches, docking or robotic servicing — none of which scale easily or cheaply.
“Who wants to take a spaceship to update the orbital infrastructure every year or two?” Morgan asked. “What if a vital component breaks? Actually, forget that, what about the latency?”
Latency is not a footnote. Most AI workloads depend on tightly coupled systems with extremely fast interconnects, both within data centers and between them. Google’s proposal leans heavily on laser-based inter-satellite links to mimic those connections, but the physics remains unforgiving. Even at low Earth orbit, round-trip latency to ground stations is unavoidable.
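The floor on that latency comes straight from the speed of light. A quick sketch, assuming a typical constellation altitude of about 550 kilometers (our assumption, not a figure from Google's paper):

```python
# Why latency "is not a footnote": even light-speed links impose a floor.
C = 299_792_458          # speed of light in vacuum, m/s
altitude_m = 550e3       # assumed LEO altitude, ~550 km

one_way_s = altitude_m / C
round_trip_ms = 2 * one_way_s * 1e3
print(f"Best-case round trip to a ground station: {round_trip_ms:.1f} ms")
# ~3.7 ms straight down, and more at slant angles, versus the
# microseconds that separate accelerators inside a single data center.
```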
"Putting the servers in orbit is a stupid idea, unless your customers are also in orbit," Morgan said. But not everyone agrees it should be dismissed so quickly. Paul Kostek, a senior member of IEEE and systems engineer at Air Direct Solutions, said the interest reflects genuine physical pressures on terrestrial infrastructure.
"The interest in placing data centers in space has grown as the cost of building centers on earth keeps increasing," Kostek said. "There are several advantages to space-based or Moon-based centers. First, access to 24 hours a day of solar power… and second, the ability to cool the centers by radiating excess heat into space versus using water."
From a purely thermodynamic standpoint, those arguments are sound. Heat rejection is one of the hardest limits on computation, and Earth-based data centers are increasingly constrained by water availability, grid capacity and local environmental opposition.
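The physics of that heat rejection is set by the Stefan-Boltzmann law, P = εσAT⁴. A rough sketch, with the emissivity and radiator temperature chosen as illustrative assumptions:

```python
# How much heat a passive radiator can reject into space.
# This ignores absorbed sunlight and the radiator's view of Earth,
# both of which reduce the net figure.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
emissivity = 0.9      # typical for radiator coatings (assumed)
temp_k = 300.0        # radiator temperature, ~27 C (assumed)

watts_per_m2 = emissivity * SIGMA * temp_k**4
print(f"~{watts_per_m2:.0f} W per square meter")   # ~410 W/m^2
# A rack drawing tens of kilowatts would therefore need tens of square
# meters of radiator area before accounting for sunlight falling on it.
```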
The backlash against terrestrial AI infrastructure isn’t limited to energy and water issues; health fears are increasingly part of the narrative. In Memphis, residents near xAI’s massive Colossus data center have voiced concern about air quality and long-term respiratory impacts, with community members reporting worsened symptoms and fear of pollution-linked illnesses since the facility began operating. In other states, opponents of proposed hyperscale data center projects have framed their resistance around potential health and environmental harms, arguing that large facilities could degrade local air and water quality and exacerbate existing public health burdens.
Putting data centers into orbit would remove some constraints, but replace them with others.
"The technology questions that need to be answered include: Can the current processors used in data centers on Earth survive in space?” Kostek said. "Will the processors be able to survive solar storms or exposure to higher radiation on the Moon?"
Google researchers have already begun probing some of those questions through early work on Project Suncatcher. The team describes radiation testing of its Tensor Processing Units (TPUs) and modeling of how tightly clustered satellite formations could support the high-bandwidth inter-satellite links needed for distributed computing. Even so, Kostek stresses that the work remains exploratory.
"Initial testing is being done to determine the viability of space-based data centers," he said. "While significant technical hurdles remain and implementation is still several years away, this approach could eventually offer an effective way to achieve expansion."
That word — expansion — may be the real clue. For some researchers, the most compelling rationale for off-world computing has little to do with serving Earth-based users at all. Christophe Bosquillon, co-chair of the Moon Village Association’s working group for Disruptive Technology & Lunar Governance, argues that space-based data centers make more sense as infrastructure for space itself.
"With humanity on track to soon establish a permanent lunar presence, an infrastructure backbone for a future data-driven lunar industry and the cis-lunar economy is warranted," he told Live Science.
From this perspective, space-based data centers aren’t substitutes for Earth’s infrastructure so much as tools for enabling space activity, handling everything from lunar sensor data to autonomous systems and navigation.
"Affordable energy is a key issue for all activities and will include a nuclear component next to solar power and arrays of fuel cells and batteries," Bosquillon said, adding that the challenges extend well beyond engineering to governance, law and international coordination.
Crucially, space-based computing could offload non-latency-sensitive workloads from Earth altogether. "Solving the energy problem in space and taking that burden off the Earth to process Earth-related non-latency-sensitive data… has merit," Bosquillon said, even extending to the idea of space and the Moon as a secure vault for "civilisational" data.
Seen this way, Google’s proposal looks less like a solution to today’s data center shortages and more like a probe into the long-term physics of computation. As AI approaches planetary-scale energy consumption, the question may not be whether Earth has enough capacity, but whether researchers can afford to ignore environments where energy is abundant but everything else is hard.
For now, space-based AI remains strictly experimental. Whether it ever escapes Earth’s gravity may depend less on solar panels and lasers than on how desperate the energy race becomes.
]]>Now, experts plan to test these newfound bugs inside a "planetary simulation chamber" that could reveal whether these microbes, or ones with similar adaptations, could survive a trip through space to Mars, possibly contaminating the alien world on arrival.
Earlier this year, scientists identified more than two dozen previously unknown bacterial species lurking in the Kennedy Space Center cleanrooms in Florida, where NASA assembled its Phoenix Mars Lander in 2007. The discovery showed that despite constant scrubbing, harsh cleaning chemicals and extreme nutrient scarcity, some microbes evolved a suite of genetic tricks that allowed them to persist in these punishing environments.
"It was a genuine 'stop and re-check everything' moment," study co-author Alexandre Rosado, a professor of Bioscience at King Abdullah University of Science and Technology in Saudi Arabia, told Live Science about the findings, which were described in a paper published in May in the journal Microbiome. While there were relatively few of these microbes, they persisted for a long time and in multiple cleanroom environments, he added.
Identifying these unusually hardy organisms and studying their survival strategies matters, the researchers say, because any microbe capable of slipping through standard cleanroom controls could also evade the planetary-protection safeguards meant to prevent Earth life from contaminating other worlds.
When asked whether any of these microbes might, in theory, tolerate conditions during a journey to Mars' northern polar cap, where Phoenix landed in 2008, Rosado said several species do carry genes that may help them adapt to the stresses of spaceflight, such as DNA repair and dormancy-related resilience. But he cautioned that their survival would depend on how they handle harsh conditions a microbe would face both during space travel and on Mars — factors the team didn't test — including exposure to vacuum, intense radiation, deep cold and high levels of UV at the Martian surface.
To explore that question, the researchers are now building a planetary simulation chamber at the university to expose the bacteria to Mars-like and space-like conditions, Rosado said. The chamber, now in its final assembly phase with pilot experiments expected to begin in early 2026, is engineered to mimic stresses such as Mars' low, carbon-dioxide-rich atmospheric pressure, high radiation and the extreme temperature swings the microbes would face during spaceflight. These controlled conditions will let scientists study how hardy microbes adapt and survive under combinations of stresses like those encountered in transit or on the Martian surface.

NASA's spacecraft-assembly cleanrooms are engineered to be hostile to microbes — a cornerstone of the agency's efforts to prevent Earth organisms from hitchhiking to worlds beyond Earth — through continuously filtered air, strict humidity control and repeated treatments using chemical detergents and UV light, among other measures.
Even so, "cleanrooms don't contain 'no life,'" said Rosado. "Our results show these new species are usually rare but can be found, which fits with long-term, low-level persistence in cleanrooms."
During the Phoenix lander's assembly at the Kennedy Space Center's Payload Hazardous Servicing Facility, a team led by study co-author Kasthuri Venkateswaran, a senior research scientist at NASA's Jet Propulsion Laboratory, collected and preserved 215 bacterial strains from the cleanroom floors. Some samples were gathered before the spacecraft arrived in April 2007, again during assembly and testing in June, and once more after the spacecraft moved to the launch pad in August, according to the study.
At the time, researchers lacked the technology to classify new species precisely or in large numbers. But DNA technology has advanced dramatically in the 17 years since that mission, and today scientists can sequence almost every gene these microbes carry and compare their DNA to broad genetic surveys of microbes collected from cleanrooms in later years. This allows scientists "to study how often and for how long these microbes appear in different places and times, which wasn't possible in 2007," said Rosado.
Further analysis revealed a suite of survival strategies. Many of the newly identified species carry genes that help them resist cleaning chemicals, form sticky biofilms that anchor them to surfaces, repair radiation-damaged DNA or produce tough, dormant spores — adaptations that help them survive in tucked-away corners or microscopic cracks, the study reports. This makes the microbes "excellent test organisms" for validating the decontamination protocols and detection systems that space agencies rely on to keep spacecraft sterile, Rosado said.
From a broader research standpoint, Rosado said, the next step is coordinated, long-term sampling across multiple cleanrooms using standardized methods, paired with controlled experiments that measure microbes' survival limits and stress responses.
"This would give us a much clearer picture of which traits truly matter for planetary protection and which might have translational value in biotechnology or astrobiology," he said.
]]>Name: Lchashen wagon
What it is: An oak wagon
Where it is from: Lchashen village, Armenia
When it was made: Circa 1500 B.C.
Covered wagons are often associated with the Old West. But the best-preserved example of an ancient covered wagon was actually found in a Bronze Age grave in Armenia, where it had been buried for 3,500 years.
The remains of six oak wagons were excavated from an elite cemetery in Lchashen, Armenia, and were dated to the 15th to 14th centuries B.C., or the Late Bronze Age. Each wagon had four wheels arranged on two axles. But while two of the wagons were open, the other four had a complex frame structure on top. One of these wagons is considered the best-preserved example of an early covered wagon.
On display at the History Museum of Armenia in Yerevan, the Lchashen wagon was made of at least 70 parts joined together by a mortise-and-tenon system involving slotted pieces of wood and bronze fittings. The frame of the canopy required at least 600 mortise holes, archaeologist Stuart Piggott wrote in a 1968 study, indicating the precise workmanship that went into creating the wagon.
The wagon measures approximately 6.5 feet (2 meters) in length. Each wooden wheel was made of two slabs of wood joined together and measured a whopping 63 inches (160 centimeters) tall, historian Christoph Baumer wrote in "History of the Caucasus" (Bloomsbury, 2021).
The Lchashen wagon was discovered in the 1950s, when Soviet construction workers drained part of Lake Sevan in Armenia to help irrigate a nearby plain. They found a Late Bronze Age cemetery that contained more than 500 burials, along with hundreds of grave goods. One distinctive feature of the Lchashen necropolis is the presence of two- and four-wheeled wagons, as well as bronze models of war chariots, archaeologist L.A. Petrosyan wrote in a 2016 study.
Although some claim the Lchashen wagon is the "oldest in the world," there is abundant evidence of both wagon technology and covered wagons that predate this example. The exact invention dates are still being debated, but humans likely first invented the wheel and wheeled vehicles in Mesopotamia in the Copper Age, between about 4500 and 3300 B.C.
But the Lchashen wagon is a very early — as well as the best-preserved — example of a covered wagon with spoked wheels on axles, demonstrating innovation in early wheeled vehicles. Whether this technology was invented in Armenia or came from Mesopotamia to the south or the Russian steppe to the north is still being investigated.
According to the History Museum of Armenia, burials with wheeled vehicles arose in the Middle Bronze Age (2400 to 1500 B.C.) in Armenia but became most popular in the Late Bronze Age, when they were used as vehicles for physically and metaphorically transporting the remains of a deceased leader into the next life.
For more stunning archaeological discoveries, check out our Astonishing Artifacts archives.
]]>Milestone: Vision of nanotechnology laid out
Date: Dec. 29, 1959
Where: Pasadena, California
Who: Richard Feynman
On a December day, Richard Feynman gave a fun little lecture at Caltech — and dreamed up an entirely new field of physics.
During the talk, entitled "There's Plenty of Room at the Bottom," he described the enormous potential that could be realized if scientists could manipulate and control things at a "small scale."
How small? Feynman went on to discount advances of the time, such as writing the Lord's Prayer on the head of a pin, as trivial.
"But that's nothing; that's the most primitive, halting step in the direction I intend to discuss. It is a staggeringly small world that is below," Feynman said in his lecture. Rather, he suggested, people could write the entire 24-volume encyclopedia on the head of a pin, and elegantly showed that there's enough space there to write it legibly and read it out.
He then explored the possibility of a number of then-futuristic ideas: electron microscopes capable of manipulating individual atoms, ultracompact data storage, miniaturized computers, and powerful, ingestible biological machines that travel into organs like the heart, find defects, and repair them with tiny knives. He proposed a number of ways to create these small-scale innovations, including manipulating light and ions.
He ended the lecture by offering a reward of $1,000 to anyone who could shrink the text on a book page by 25,000 times in linear scale, such that it could be read using an electron microscope. He offered another $1,000 to anyone who could build a working motor that fit inside a cube measuring just 1/64th of an inch on each side.

The latter of these prizes was scooped up the following year by engineer William McLellan, who created a 250-microgram motor composed of 13 parts. In his award letter, Feynman congratulated McLellan on the feat but joked that he shouldn't "start writing small," lest he solve the first challenge, too, and expect to receive the other $1,000 prize.
"I don't intend to make good on the other one. Since writing the article I've gotten married and bought a house!" Feynman wrote.The former challenge was eventually solved in 1985, when Stanford graduate Thomas Newman miniaturized the first page of the Dickens classic "A Tale of Two Cities." Feynman did, ultimately, pay up for the second prize.
Feynman's Caltech talk is now mythologized as having ushered in the field of nanotechnology. And yet, the term "nanotechnology" itself was not coined until 15 years after his talk, when scientist Norio Taniguchi penned a paper about manipulating material at the atomic scale.
In that 1974 paper, Taniguchi described nanotechnology as "the processing of separation, consolidation, and deformation of materials by one atom or one molecule." Many science historians now argue that the field was following its own trajectory, and that Feynman's talk, while prescient, wasn't the actual driver of future innovations. Prior to 1980, his talk was cited fewer than 10 times.
Whether it drove innovation or not, since Feynman's famous lecture, many of his predictions have proven true. The scanning tunneling microscope manipulated individual xenon atoms in 1990. Computers more powerful than he described now sit in our pockets, rather than taking up whole rooms. And indeed, tiny nanobots have been designed that can repair damaged blood vessels.
Primates don't just live in lots of places; there are also hundreds of species and subspecies. In fact, the order Primates is the fourth most biodiverse mammal order in the animal kingdom — yet the majority (62.6%) of primates are threatened with extinction.
Scientists researching primates, called "primatologists," have learned a lot over the years about our closest evolutionary relatives. For example, did you know that chimps have opposable big toes, or that not all monkeys can swing through the trees? Or even that there are some primates that are neither monkeys nor apes?
Fancy yourself a primatologist? Put your knowledge to the test below!
Remember to log in to put your name on the leaderboard; hints are available if you click the yellow button. Good luck!

Maria Branyas Morera, once the world's oldest woman, died in 2024 at age 117. Live Science took a deep look at a study that examined Branyas' biology and uncovered key traits that may have protected her from disease in old age. Could lessons from the study help others lead longer, healthier lives?
Many consider the brain to be a central feature of what makes us human — but how did the remarkable organ come to be? In an interview, science communicator Jim Al-Khalili discussed what he learned from shooting the new BBC show "Horizon: Secrets of the Brain," which tells the story of how the human brain evolved. And in a book excerpt and interview with Live Science, neuroscientist Nikolay Kukushkin described the evolutionary forces he believes were key to the formation of the human brain and consciousness as we know it.
Miniature models of the human brain can be grown from stem cells in the lab, and they're getting more and more advanced. Some scientists have raised concerns that these "minibrains" could become conscious and feel pain. We investigated experts' concerns and hopes for future regulation of the research.
mRNA may be best known for forming the basis of the first COVID-19 vaccines, but it could also be used in revolutionary cancer therapeutics, immune-reprogramming treatments and gene therapies. The promise of these emerging mRNA medicines is staggering, but due to the politicization of COVID-19 shots in the U.S., mRNA research and development — even unrelated to vaccines — now hangs in precarious uncertainty. A Science Spotlight feature described emerging mRNA technologies and their wobbly status under the second Trump administration.

You may have heard that more young people are being diagnosed with cancer. But which types of cancer are driving this trend? And why are the rates going up in the first place? We looked at what may be driving this pattern, from underlying cancer triggers to better techniques for early detection.
Is there really a difference between male and female brains? And do we even have the data required to answer that question? A Science Spotlight explored the existing research on sex differences in the brain, finding the results murkier than one might expect. Headlines often proclaim that male and female brains are "wired differently," and that may be true in some subtle ways. But the biological consequences of those differences remain unclear, even to experts in the field.
Artificial intelligence can now be used to design brand-new viruses. Scientists hope to use these viruses for good — for example, to treat drug-resistant bacterial infections. But could the technology usher in the next generation of bioweapons? An analysis probed this dual-use problem and what can be done to safeguard our biosecurity.
In a book excerpt, epidemiologist Dr. Seth Berkley explained how he and other health leaders orchestrated a massive vaccine rollout to poor countries during the COVID-19 pandemic, so that the shots wouldn't exclusively be hoarded by wealthy nations. Live Science also spoke with Berkley about the lessons learned from the pandemic and the ongoing fight for vaccine equity.

The United States Agency for International Development (USAID), once the world's largest foreign aid agency, was hit by massive funding cuts under the second Trump administration. A few of its functions will reportedly continue, under the control of the Department of State. We looked at the predicted and devastating effects that the loss of USAID will likely have on HIV care worldwide. And in an interview with author John Green, who published a book on tuberculosis (TB) this year, we explored what the cuts could mean for TB patients.
A study went viral after suggesting that healthy human brains may contain a similar amount of plastic as the average plastic spoon. But should we really be concerned? Our analysis broke down what we know and what we don't about microplastics in the brain.
A man genetically guaranteed to develop early Alzheimer's disease is still disease-free in his 70s. We explored the details of the man's case, digging into his genetic profile and the broader lessons it could teach scientists about dementia.
Weight-loss surgeries often come with improvements in mental health — but research revealed that this effect is less tied to the weight loss itself and more connected to the relief from stigma that people often experience post-procedure. We examined this finding and what it can tell us about the profound impact of weight stigma on people's health and well-being.

In 2000, the United States hit a public health milestone by eliminating measles. But now, there's been a sustained resurgence of the highly infectious disease, putting the country on the brink of losing that precious elimination status. This story explained how we got here and what's at stake. And in an opinion piece, several experts called out the anti-vaccine movement that drove down measles vaccination rates — a movement that health secretary Robert F. Kennedy Jr. has been spearheading for years.
In a book excerpt, Nafis Hasan argued that the United States has been employing the wrong strategies to fight cancer for decades. While hyperfocusing on finding treatments for individuals with cancer, America has largely ignored population-level strategies that could help drive down cancer rates and cancer deaths across the board, he argued.
The U.S. federal government is threatening to restrict research conducted with human fetal tissue. In an opinion piece, cell biologist, geneticist and neuroscientist Lawrence Goldstein dispelled widespread myths and misinformation about this type of research.
Epidemiologist Michael Osterholm predicts that the next pandemic could be even worse than COVID-19. In a book excerpt and interview with Live Science, Osterholm described the lessons we should have taken away from the coronavirus pandemic, and how recent changes in U.S. policy may have destroyed our capacity to handle serious outbreaks.
As the planet warms, a dangerous condition called hyponatremia may be on the rise. The condition causes a dramatic decline in sodium in the body, which can potentially cause seizures, coma and death. A Live Science exclusive looked at the emerging trend.
A viral story suggested that researchers in China were working on a "pregnancy robot" that could gestate a human baby from conception to birth. It turns out that the story was complete fiction — but, in theory, could such a technology be realized? Experts weighed in on the sci-fi-sounding idea and discussed whether, eventually, it could be feasible to build a bona fide pregnancy robot.
]]>If your first few sessions have been more frustrating than awe-inspiring, you're not alone. Here are five of the most common mistakes, plus how to avoid them so you can spend less time fiddling and more time actually enjoying the view.

Many beginners grab their telescope on a whim, head outside and hope for magic. The problem is that astronomy doesn't work on impulse — it works on timing. Moon phases affect how bright the sky is, and local light pollution can wash out fainter objects. Even the time of year dictates what's actually visible.
Before heading out, take a moment to observe what's above the horizon, when the moon rises and whether your sky conditions are cooperating. Free apps make this easy — Stellarium is a favorite of ours — and a quick look at a cloud forecast can save you a wasted session.
Planning isn't a chore; it's the difference between hunting blindly and having a solid target list. When you know when and where to look, observing the sky with a telescope becomes far more rewarding.

It's completely normal to hope for swirling nebulas and razor-sharp galaxies like the images you see online. Unfortunately, those are long-exposure photographs taken by spacecraft or huge professional observatories. A backyard telescope shows the real sky, and it's much more subtle.
But that doesn't mean it's disappointing. The moon looks incredible through even a small telescope, Jupiter and Saturn show details and star clusters sparkle beautifully. What tends to trip people up is expecting colors and drama rather than appreciating the delicate, natural brightness of what can be seen with the eye.
Think of visual observation as seeing the universe with your own eyes, and once you adjust your expectations, you start to notice far more. If you do want to experiment with imaging space, you can mount one of the best astrophotography cameras directly onto your telescope, or invest in one of the best smart telescopes.
Another thing that often catches beginners out is that not all telescopes excel at the same targets. Not only are there different types of telescopes, but some designs are better suited to deep-space objects like galaxies and nebulas, while others excel at crisp planetary and lunar viewing.
Wide aperture, low focal-ratio scopes (like Dobsonians) gather lots of light, making faint objects easier to spot. On the other hand, longer focal length telescopes naturally deliver higher magnification, which is perfect for observing the details on Jupiter, Saturn or the moon's craters.

One of the least glamorous but most important steps is simply letting your telescope cool down (or warm up) to match the outdoor temperature. If you take a scope from a warm living room out into the cold night, turbulent air currents swirl inside the tube, softening the view. The result looks like your optics suddenly went blurry.
Give your telescope 20-40 minutes outside before you start observing — maybe even a bit longer for bigger scopes. During this time, you can align your finderscope, set up a star chart or choose your targets.
Once the air settles inside the tube, things improve dramatically. Planets snap into focus, double stars separate cleanly and lunar details show the crisp edges they're meant to. Acclimation isn't sexy or exciting, but it's one of the easiest ways to upgrade your observing without spending a dime.

A common assumption is that more magnification automatically means better views. In reality, pushing the zoom too high will result in a dim, wobbly image.
Every telescope has a highest useful magnification. This is essentially the upper limit where the view will still look sharp, and it's determined by the scope's aperture and the viewing conditions. The general rule of thumb is that the highest useful magnification is roughly 50x its aperture in inches, although this does depend on the overall quality of your telescope. For example, a 6-inch telescope will have a highest useful magnification of around 300x.
Start with a low-power eyepiece, like the 20mm one that typically comes with beginner telescopes. This will give you a wider field, making objects a lot easier to find and track. Only once you've centered your target should you then switch to a higher-power eyepiece — and even then, it's best to increase in small steps. On nights with poor viewing conditions, high magnification will just make objects look blurrier.
To determine the magnification of an eyepiece, divide the telescope's focal length by the eyepiece's focal length. For example, on a 1,000mm scope, a 20mm eyepiece will provide 50x magnification. Over time, you'll instinctively know which eyepiece works best for the moon, planets and deep-sky objects.
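Both rules of thumb reduce to one-line arithmetic. A minimal sketch in Python, using the example figures from the text (real limits also depend on optical quality and the night's seeing):

    # Two rules of thumb from the text, expressed as arithmetic.
    def magnification(focal_length_mm: float, eyepiece_mm: float) -> float:
        return focal_length_mm / eyepiece_mm

    def highest_useful_magnification(aperture_inches: float) -> float:
        return 50 * aperture_inches      # ~50x per inch of aperture

    print(magnification(1000, 20))            # 50x from a 20mm eyepiece
    print(highest_useful_magnification(6))    # ~300x for a 6-inch scope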
When magnification is chosen well, everything suddenly becomes sharp, steady and a lot more impressive.

Modern telescopes can be surprisingly smart — some align themselves, some slew automatically to targets and others use your phone to guide you around the night sky. These features are awesome, especially for beginners, but they can create a false expectation that the telescope will do all the work.
In reality, even the most automated systems will still need some input and understanding from the user. Motorized GoTo mounts, for example, won't magically know where they are. They need accurate setup, which requires a level tripod, the correct date and time and a proper alignment on a couple of bright stars. If any of that is off, the telescope will miss every target.
Smart telescopes and app-driven models make navigation easier, but they're not a substitute for knowing what's actually visible or why certain objects won't appear on a bright, hazy night. Plus, smart telescopes often produce the best view by stacking images over a longer period, so they're better suited to photographing the cosmos as opposed to observing it.
]]>
What it is: Reflection nebula NGC 1333 and binary star system SVS 13
Where it is: 1,000 light-years away in the constellation Perseus
When it was shared: Dec. 16, 2025
Go outside after dark this winter and look to the southeast, and you'll see some of the brightest stars in the night sky — Orion's Belt, Betelgeuse, Sirius, Aldebaran and Capella. Just above this melee is the quieter constellation Perseus, which lacks bright stars but hosts something extraordinary that the naked eye can't see — the explosive birth of new stars.
Lurking within the Perseus Molecular Cloud is NGC 1333, nicknamed the Embryo Nebula because it contains many young, hot stars that are teaching astronomers just what goes on when a star is born. NGC 1333 is a reflection nebula, meaning a cloud of gas and dust illuminated by the intense light coming from newly forged stars, some of which appear to be regularly spewing jets of matter. It's one of the closest star-forming regions to our solar system. On Dec. 16, astronomers published the most detailed images ever of a jet launched by a newborn star in the binary system SVS 13, which revealed a sequence of nested, ring-like structures. The finding is evidence that the star has been undergoing an outburst — releasing an immense amount of energy — for decades.
The discovery, which the researchers described in the journal Nature Astronomy, marks the first direct observational confirmation of a long-standing theoretical model of how young stars feed on, and then explosively expel, surrounding material.
The researchers captured the high-resolution, 3D view of a fast-moving jet emitted from one of SVS 13's young stars using the Atacama Large Millimeter/submillimeter Array (ALMA) radio telescope array in Chile. Within the image, they identified more than 400 ultra-thin, bow-shaped molecular rings. Like tree rings that mark the passage of time, each ring records the aftermath of an energetic outburst from the young star's early history. Remarkably, the youngest ring matches a bright outburst seen in the SVS 13 system in the early 1990s, allowing researchers to directly connect a specific burst of activity in a forming star with a change in the speed of its jet. It's thought that sudden bursts in jet activity are caused by large amounts of gas falling onto a young star.
"These images give us a completely new way of reading a young star's history," said study co-author Gary Fuller, a professor at the University of Manchester. "Each group of rings is effectively a time-stamp of a past eruption. It gives us an important new insight into how young stars grow and how their developing planetary systems are shaped."
For more sublime space images, check out our Space Photo of the Week archives.
]]>
What it is: Sagittarius B2 molecular cloud
Where it is: Roughly 26,000 light-years from Earth, in the constellation Sagittarius
When it was shared: Sept. 24, 2025
Stars form in molecular clouds — regions that are cold, dense, rich in molecules and filled with dust. One enormous cloud responsible for forming half of the stars in the Milky Way's central region is the Sagittarius B2 (Sgr B2) molecular cloud, located a few hundred light-years from our central supermassive black hole.
Boasting a total mass between 3 million and 10 million times that of the sun and stretching 150 light-years across, it is one of the largest molecular clouds in the galaxy. It lies roughly 26,000 light-years from Earth, in the constellation Sagittarius. It is also chemically rich, with several complex molecules discovered so far.
But this giant star-forming region is shrouded in a mystery: how it has managed to produce 50% of the stars in the region, despite containing just 10% of the galactic center's gas.
Astronomers observed this super-efficient stellar factory using the James Webb Space Telescope (JWST), in the hope of finding some clues about its unusual productivity. This spectacular image is the telescope's mid-infrared view, captured by JWST's Mid-Infrared Instrument (MIRI).
In the image, the clumps of dust and gas in the molecular complex glow in shades of pink, purple and red. These clumps are surrounded by dark areas. Dark does not mean that these regions are empty or emit nothing; rather, light from these areas is blocked by dust so dense that the instrument cannot see through it.

In star-forming regions like this one, warm dust and gas and only the brightest stars emit in the mid-infrared. This contrasts with the near-infrared image captured simultaneously by JWST's Near-Infrared Camera (NIRCam), which reveals an abundance of stars because stars emit more strongly in near-infrared light.
In this MIRI image, the clumps on the right that appear redder than the rest of the cloud complex correspond to one of the most chemically complex areas known, as revealed by previous observations using other telescopes. Astronomers think this unique region may hold clues to why Sgr B2 is more efficient at star formation than the rest of the galactic center.
Additionally, an in-depth analysis of the masses and ages of the stars in this stellar factory could reveal further insight into the star-forming mechanisms in the Milky Way's center.
For more sublime space images, check out our Space Photo of the Week archives.
]]>It sounds like a simple enough question to answer — list the openings and add them up. But it's not quite that easy once you start considering questions like: "What exactly is a hole?" "Does any opening count?" And "Why don't mathematicians know the difference between a straw and a doughnut?"
Before we start counting, we need to agree on what constitutes a "hole." Katie Steckles, a lecturer in mathematics at Manchester Metropolitan University in the U.K. and a freelance mathematics communicator, told Live Science that mathematicians "use the term 'hole' to mean one like the hole in a donut: one that goes all the way through a shape and out the other side."
But if you dig a "hole" at the beach, your aim is probably not to dig right through to the other side of the world. Many people would think of a hole as a depression in a solid object. But "this isn't a true hole, as it has an end," Steckles said.
Similarly, mathematical communicator James Arthur, who is based in the U.K., told Live Science that "in topology, a 'hole' is a through hole, that is you can put your finger through the object."
When digging a tunnel under the sea, like the Channel Tunnel that connects the U.K. and France, engineers started off by digging two openings. But as soon as those two digging projects joined up, the Channel Tunnel became a fundamentally different object (what Arthur and engineers would call a "through hole") — like a straw, or a tube with an opening at either end.
And if you ask people how many holes a straw has, you will get a range of different answers: one, two or even zero. This is a result of our colloquial understanding of what constitutes a hole.
To find a consistent answer, we can turn to mathematics. And the problem of classifying how many holes there are in an object falls squarely within the realm of topology.

Sign up for our weekly Life's Little Mysteries newsletter to get the latest mysteries before they appear online.
To a topologist, the actual shapes of objects are not important. Instead, "topology is more concerned with the fundamental properties of shapes and how things connect together in space," Steckles said.
In topology, objects can be grouped together by the number of holes they possess. For example, a topologist sees no difference between a golf ball, a baseball or even a Frisbee. If they were all made of plasticine, or putty, they could theoretically be squashed, stretched or otherwise manipulated to look like each other without making or closing any holes in the plasticine or sticking different parts together, Steckles argued.
However, to a topologist, these objects are fundamentally different to a bagel, a doughnut or a basketball hoop, which each have a hole through the middle of them. A figure of eight with two holes and a pretzel with three are different topological objects again.

A useful way to get into the mathematicians' way of thinking about the straw problem is to "imagine our straw is made of play dough," Arthur said. "Let's take this straw and slowly squish the top down and down and down towards the bottom, making sure the hole in the middle stays open. We will squish it until we are in a shape that looks like a doughnut." Mathematicians, Arthur said, would say that "the straw is homeomorphic to a doughnut."
The long, thin aspect ratio of the straw, and the fact that the two openings are relatively far apart, are perhaps what gives rise to the suggestion of two holes. But to a topologist, bagels, basketball hoops and doughnuts are all topologically equivalent to a straw with a single hole. "The hole in a straw goes all the way through it, and the opening at the other end is just the back of that same hole," Steckles said.
Armed with the topologists' definition of a hole, we can tackle the original question: How many holes does the human body have? Let's first try to list all the openings we have. The obvious ones are probably our mouths, our urethras (the ones we pee out of) and our anuses, as well as the openings in our nostrils and our ears. For some of us, there are also milk ducts in nipples and vaginas.
There are also four less-obvious openings that we all have in the corners of eyelids closest to our nose — the four lacrimal puncta, which drain tears from our eyes into our nasal cavities. At an even smaller scale there are the pores that enable sweat to escape our bodies and sebum to lubricate our skin. In total there are potentially millions of these openings in our bodies, but do they all count as holes?

To make the question interesting, think about whether we could pass a very thin string into one hole and out of another. If we set the size of this string to be about 60 microns (60 millionths of a meter), then it's possible that the string could enter an opening as small as a pore. However — and this is key — it wouldn't be able to come out anywhere else. It would be blocked by the cells at the bottom of the pore, which are packed too tightly for it to pass through into the vasculature that supplies the pore.
"They're not actually holes in the topological sense, as they don't go all the way through," Steckles said. "They're just blind pits."
By this definition we can rule out all the pores, milk ducts and urethras. We couldn't thread a string in one of these openings and out of another. Even the ear canals have to go, as they are separated from the rest of the sinuses by the eardrums.
"We have our mouth, our anus, and then our nostrils. They are four of the … openings that form a hole," Arthur said. "But we actually have eight. The remaining four come from the tear ducts, we each have two in each eye, an upper and a lower."
But this doesn't mean eight holes. Steckles pointed out: "When the holes that pass through a shape connect together inside the shape, it makes it harder to count how many there are."
A pair of underwear, for example, has three openings (one for the waist and one for each of the two legs), but it's not immediately clear how many holes a topologist would say it has. "A useful trick is to think about flattening it out," Steckles said. "If we were to stretch the waistband of the pants out onto a big hula hoop, we'd see the two trouser legs sticking down, each being one hole."

So despite having three openings, the pair of underwear has only two holes. "So when the holes connect together in the middle, there's one fewer hole than there are openings," Steckles argued. Correspondingly, topology tells us that, despite eight interconnected openings, the human body has seven different holes.
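The counting rule can be stated compactly. Assuming all of a shape's tunnels join into one connected, tree-like network (an assumption that fits the straw, the underwear and the body as described here, though not every imaginable shape), the number of topological holes, written $b_1$ and known as the first Betti number, relates to the number of openings $n$ as

    b_1 = n - 1

so a straw ($n = 2$) has one hole, the underwear ($n = 3$) has two, and a body with eight connected openings has seven.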
But there might be one more. Although often counted as a blind hole, the vagina leads to the uterus, which then leads to one of two fallopian tubes. These tubes are open at the far end and lead to the peritoneal cavity near the ovary. It is the job of the finger-like projections of the funnel-shaped infundibulum at the end of the fallopian tube to catch the egg when it is released from the nearest ovary. However, it has been demonstrated that eggs released from one ovary can be captured by the fallopian tube on the other side, so that passage between the two open ends of the fallopian tubes is possible. Our tiny string could therefore be threaded all the way through the female reproductive tract and back out, counting as one more hole.
So the mathematician's answer is that humans have either seven or eight holes.
In the end, the question is not just about counting openings but about understanding connections. Topologically speaking, our bodies are less like Swiss cheese and more like a carefully constructed onesie for an octopus.
People with typical recognition capabilities are worse than chance: more often than not, they think AI-generated faces are real.
That's according to research published Nov. 12 in the journal Royal Society Open Science. However, the study also found that receiving just five minutes of training on common AI rendering errors greatly improves individuals' ability to spot the fakes.
"I think it was encouraging that our kind of quite short training procedure increased performance in both groups quite a lot," lead study author Katie Gray, an associate professor in psychology at the University of Reading in the U.K., told Live Science.
Surprisingly, the training increased accuracy by similar amounts in super recognizers and typical recognizers, Gray said. Because super recognizers are better at spotting fake faces at baseline, this suggests that they are relying on another set of clues, not simply rendering errors, to identify fake faces.
Gray hopes that scientists will be able to harness super recognizers' enhanced detection skills to better spot AI-generated images in the future.
"To best detect synthetic faces, it may be possible to use AI detection algorithms with a human-in-the-loop approach — where that human is a trained SR [super recognizer]," the authors wrote in the study.
In recent years, there has been an onslaught of AI-generated images online. Deepfake faces are created using a two-stage AI algorithm called generative adversarial networks. First, a fake image is generated based on real-world images, and the resulting image is then scrutinized by a discriminator that determines whether it is real or fake. With iteration, the fake images become realistic enough to get past the discriminator.
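To make that loop concrete, here is a minimal sketch in PyTorch, shrunk to two-dimensional points instead of face images; the layer sizes, learning rates and toy training data are illustrative assumptions, not the setup of any real face generator.

    # Toy generative adversarial loop: a generator learns to mimic a cluster
    # of 2-D points, while a discriminator learns to tell real from fake.
    import torch
    import torch.nn as nn

    gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    disc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 2) * 0.3 + torch.tensor([2.0, 2.0])  # "real" data
        fake = gen(torch.randn(64, 8))

        # Stage 1: the discriminator scrutinizes real and generated samples.
        d_opt.zero_grad()
        d_loss = loss_fn(disc(real), torch.ones(64, 1)) + \
                 loss_fn(disc(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        d_opt.step()

        # Stage 2: the generator updates to get its fakes past the discriminator.
        g_opt.zero_grad()
        g_loss = loss_fn(disc(fake), torch.ones(64, 1))
        g_loss.backward()
        g_opt.step()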
These algorithms have now improved to such an extent that individuals are often duped into thinking fake faces are more "real" than real faces — a phenomenon known as "hyperrealism."
As a result, researchers are now trying to design training regimens that can improve individuals' abilities to detect AI faces. These trainings point out common rendering errors in AI-generated faces, such as the face having a middle tooth, an odd-looking hairline or unnatural-looking skin texture. They also highlight that fake faces tend to be more proportional than real ones.
In theory, so-called super recognizers should be better at spotting fakes than the average person. These super recognizers are individuals who excel in facial perception and recognition tasks, in which they might be shown two photographs of unfamiliar individuals and asked to identify if they are the same person or not. But to date, few studies have examined super recognizers' abilities to detect fake faces, and whether training can improve their performance.
To fill this gap, Gray and her team ran a series of online experiments comparing the performance of a group of super recognizers to typical recognizers. The super recognizers were recruited from the Greenwich Face and Voice Recognition Laboratory volunteer database; they had performed in the top 2% of individuals in tasks where they were shown unfamiliar faces and had to remember them.
In the first experiment, an image of a face appeared onscreen and was either real or computer-generated. Participants had 10 seconds to decide if the face was real or not. Super recognizers performed no better than if they had randomly guessed, spotting only 41% of AI faces. Typical recognizers correctly identified only about 30% of fakes.
Each cohort also differed in how often they thought real faces were fake. This occurred in 39% of cases for super recognizers and in around 46% for typical recognizers.
The next experiment was identical, but included a new set of participants who received a five-minute training session in which they were shown examples of errors in AI-generated faces. They were then tested on 10 faces and provided with real-time feedback on their accuracy at detecting fakes. The final stage of the training involved a recap of rendering errors to look out for. The participants then repeated the original task from the first experiment.
Training greatly improved detection accuracy, with super recognizers spotting 64% of fake faces and typical recognizers noticing 51%. The rate that each group inaccurately called real faces fake was about the same as the first experiment, with super recognizers and typical recognizers rating real faces as "not real" in 37% and 49% of cases, respectively.
Trained participants tended to take longer to scrutinize the images than the untrained participants had, with typical recognizers slowing by about 1.9 seconds and super recognizers by 1.2 seconds. Gray said this is a key message for anyone trying to determine whether a face is real or fake: slow down and really inspect the features.
It is worth noting, however, that the test was conducted immediately after participants completed the training, so it is unclear how long the effect lasts.
"The training cannot be considered a lasting, effective intervention, since it was not re-tested," Meike Ramon, a professor of applied data science and expert in face processing at the Bern University of Applied Sciences in Switzerland, wrote in a review of the study conducted before it went to print.
And since separate participants were used in the two experiments, we cannot be sure how much training improves an individual's detection skills, Ramon added. That would require testing the same set of people twice, before and after training.
We know these cities exist because ancient texts describe them, but their locations have been lost to time.
In a few cases, looters have found these cities and plundered large numbers of artifacts from them, but they have not come forward to reveal the sites' locations. In this countdown, Live Science takes a look at six ancient cities whose whereabouts are unknown.

Not long after the 2003 U.S. invasion of Iraq, thousands of ancient tablets from a city called "Irisagrig" began appearing on the antiquities market. From the tablets, scholars could tell that Irisagrig was in Iraq and flourished around 4,000 years ago.
Those tablets reveal that the rulers of the ancient city lived in palaces that housed many dogs. They also kept lions, which were fed cattle. Those who took care of the lions, referred to as "lion shepherds," received rations of beer and bread. The inscriptions also mention a temple dedicated to Enki, a god of mischief and wisdom, and say that festivals were sometimes held within the temple.
Scholars think that looters found and pillaged Irisagrig around the time of the 2003 U.S. invasion. Archaeologists have yet to locate the city, and the looters who found it have not come forward to identify where it is.

Egyptian pharaoh Amenemhat I (reign circa 1981 to 1952 B.C.) ordered a new capital city built. This capital was known as "Itjtawy" and the name can be translated as "the seizer of the Two Lands" or "Amenemhat is the seizer of the Two Lands." As the name suggests, Amenemhat faced a considerable amount of turmoil. His reign ended with his assassination.
Despite Amenemhat's assassination, Itjtawy would remain the capital of Egypt until around 1640 B.C., when the northern part of Egypt was taken over by a group known as the "Hyksos," and the kingdom fell apart.
While Itjtawy has not been found, archaeologists think it is located somewhere near the site of Lisht, in central Egypt. This is partly because many elite burials, including a pyramid belonging to Amenemhat I, are located at Lisht.

The city of Akkad (also called Agade) was the capital of the Akkadian Empire, which flourished between 2350 and 2150 B.C. At its peak the empire stretched from the Persian Gulf to Anatolia. Many of its conquests occurred during the reign of "Sargon of Akkad," who lived sometime around 2300 B.C. One of the most important structures in Akkad itself was the "Eulmash," a temple dedicated to Ishtar, a goddess associated with war, beauty and fertility.
Akkad has never been found, but it is thought to have been built somewhere in Iraq. Ancient records indicate that the city was destroyed or abandoned when the Akkadian Empire ended around 2150 B.C.

Al-Yahudu, a name which means "town" or "city" of Judah, was a place in the Babylonian empire where Jews lived after the kingdom of Judah was conquered by the Babylonian king Nebuchadnezzar II in 587 B.C. He sent part of the population into exile, a practice the Babylonians often engaged in after conquering a region.
About 200 tablets from the settlement are known to exist, and they indicate that the exiled people who lived there kept their faith and used Yahweh, the name of God, in their names. Al-Yahudu's location has not been identified by archaeologists but, like many of these lost cities, it was likely located in what is now Iraq. Given that the tablets showed up on the antiquities market, and there is no record of them being found in an archaeological excavation, it appears that at some point looters succeeded in finding the site.

Waššukanni was the capital city of the Mitanni empire, which existed between roughly 1550 B.C. and 1300 B.C. and included parts of northeastern Syria, southern Anatolia and northern Iraq. It faced intense competition from the Hittite empire in the north and the Assyrian empire in the south, and its territory was gradually lost to them.
Waššukanni has never been found and some scholars think that it may be located in northeastern Syria. The people who lived in the capital, and indeed throughout much of its empire, were known as the "Hurrians" and they had their own language which is known today from ancient texts.

Thinis (also known as Tjenu) was an ancient city in southern Egypt that flourished early in the ancient civilization's history. According to the ancient writer Manetho, it was where some of the early kings of Egypt ruled from around 5,000 years ago, when Egypt was being unified. Egypt's capital was moved to Memphis shortly after unification, and Thinis became the capital of a nome (a province of Egypt) during the Old Kingdom period (circa 2649 to 2150 B.C.), Ali Seddik Othman, an inspector with the Egyptian Ministry of Tourism and Antiquities, noted in an article published in the Journal of Abydos.
Thinis has never been identified, although it is believed to be near Abydos in southern Egypt. This is partly because many elite members of society, including royalty, were buried near Abydos around 5,000 years ago.
]]>How many Diagnostic Dilemmas have you read — and can you guess the diagnosis? Take our quiz that draws from the cases we highlighted in 2025 and see if you can figure out each patient’s ailment. Tell us how you got on in the comments below.
These advances have revealed that Neanderthals and modern humans interbred — something that wasn't previously thought to have happened. It has allowed researchers to disentangle the various migrations that shaped modern people. It has also allowed teams to sequence the genomes of extinct animals, such as the mammoth, and extinct agents of disease, such as defunct strains of plague.
While much of this work has been carried out by analyzing the physical remains of humans or animals, there is another way to obtain ancient DNA from the environment. Researchers can now extract and sequence DNA (determine the order of "letters" in the molecule) directly from cave sediments rather than relying on bones. This is transforming the field, known as paleogenetics.
Caves can preserve tens of thousands of years of genetic history, providing ideal archives for studying long-term human–ecosystem interactions. The deposits beneath our feet become biological time capsules.
It is something we are exploring here at the Geogenomic Archaeology Campus Tübingen (GACT) in Germany. Analyzing DNA from cave sediments allows us to reconstruct who lived in ice age Europe, how ecosystems changed and what role humans played. For example, did modern humans and Neanderthals overlap in the same caves? It's also possible to obtain genetic material from faeces left in caves. At the moment we are analyzing DNA from the droppings of a cave hyena that lived in Europe around 40,000 years ago.
The oldest sediment DNA discovered so far comes from Greenland and is 2 million years old.
Paleogenetics has come a long way since the first genome of an extinct animal, the quagga, a close relative of modern zebras, was sequenced in 1984. Over the past two decades, next-generation genetic sequencing machines, laboratory robotics and bioinformatics (the ability to analyze large, complex biological datasets) have turned ancient DNA from a fragile curiosity into a high-throughput scientific tool.

Today, sequencing machines can decode up to a hundred million times more DNA than their early predecessors. Where the first human genome took over a decade to complete, modern laboratories can now sequence hundreds of full human genomes in a single day.
In 2022, the Nobel prize in physiology or medicine was awarded to Svante Pääbo, a leading light in this field. It highlighted the global significance of this research. Ancient DNA now regularly makes headlines, from attempts to recreate mammoth-like elephants, to tracing hundreds of thousands of years of human presence in parts of the world. Crucially, advances in robotics and computing have allowed us to recover DNA from sediments as well as bones.
GACT is a growing research network based in Tübingen, Germany, where three institutions collaborate across disciplines to establish new methods for finding DNA in sediments. Archaeologists, geoscientists, bioinformaticians, microbiologists and ancient-DNA specialists combine their expertise to uncover insights that no single field could achieve alone — a collaboration in which the whole genuinely becomes greater than the sum of its parts.
The network extends well beyond Germany. International partners enable fieldwork in archaeological cave sites and natural caves all over the world. This summer, for example, the team investigated cave sites in Serbia, collecting several hundred sediment samples for ancient DNA and related ecological analyses. Future work is planned in South Africa and the western United States to test the limits of ancient DNA preservation in sediments from different environments and time periods.

Recovering DNA from sediments sounds simple: take a scoop, extract, sequence. In reality, it is far more complex. The molecules are scarce, degraded and fragmented, and mixed with modern contamination from cave visitors and wildlife. Detecting authentic ice age molecules relies on subtle chemical damage patterns to the DNA itself, ultra-clean laboratories, robotic extraction, and specialized bioinformatics. Every positive identification is a small triumph, revealing patterns invisible to conventional archaeology.
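Those subtle chemical damage patterns are chiefly cytosine deamination, which shows up as an excess of C-to-T mismatches concentrated at the ends of ancient fragments. Below is a minimal sketch of how that signal is tallied, using plain (read, reference) string pairs as a stand-in for the aligned sequencing data that real pipelines, such as mapDamage-style tools, process from BAM files.

    # Ancient DNA authentication sketch: count C->T mismatches by position
    # from the 5' end of each read. The paired strings below are illustrative
    # stand-ins for real aligned reads.
    from collections import Counter

    def ct_rate_by_position(pairs, max_pos=10):
        ct = Counter()       # C->T mismatches seen at each position
        c_total = Counter()  # reference C bases seen at each position
        for read, ref in pairs:
            for i, (q, r) in enumerate(zip(read, ref)):
                if i >= max_pos:
                    break
                if r == "C":
                    c_total[i] += 1
                    if q == "T":
                        ct[i] += 1
        return {i: ct[i] / c_total[i] for i in c_total}

    # Ancient fragments show elevated C->T rates at the first positions,
    # decaying toward the middle of the read.
    pairs = [("TTGAC", "CTGAC"), ("TAGCC", "CAGCC"), ("CCGTA", "CCGTA")]
    print(ct_rate_by_position(pairs))   # position 0 rate ~0.67, interior ~0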
Much of GACT's work takes place in the caves of the Swabian Jura within Unesco World Heritage sites such as Hohle Fels, home to the world's oldest musical instruments and figurative art. Neanderthals and Homo sapiens left behind stone artifacts, bones, ivory and sediments that accumulated over tens of millennia. Caves are natural DNA archives, where stable conditions preserve fragile biomolecules, enabling researchers to build up a genetic history of ice age Europe.
One of the most exciting aspects of sediment DNA research is its ability to detect species long gone, even when no bones or artifacts remain. A particular focus lies on humans: who lived in the cave, and when? How did modern humans and Neanderthals use the caves, and, as mentioned, were they there at the same time? Did cave bears and humans compete for shelter and resources? And what might the microbes that lived alongside them reveal about the impact humans had on past ecosystems?
Sediment DNA also traces life outside the cave. Predators dragged prey into sheltered chambers, and humans left waste behind. By following changes in human, animal and microbial DNA over time, researchers can examine ancient extinctions and ecosystem shifts, offering insights relevant to today's biodiversity crisis.
The work is ambitious: using sedimentary DNA to reconstruct ice age ecosystems and to understand the ecological consequences of human presence. Only two years into GACT, every dataset generates new questions. Every cave layer adds another twist to the story.
With hundreds of samples now being processed, major discoveries lie ahead. Researchers expect soon to detect the first cave bear genomes, the earliest human traces, and complex microbial communities that once thrived in darkness. Will the sediments reveal all their secrets? Time will tell — but the prospects are exhilarating.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.
]]>That's the question posed by a study published in November in the journal Cancer. Conducted by researchers at the cancer detection-test company Exact Sciences, the paper models how cancer care for five million U.S. adults might be changed by easy access to blood tests designed to spot many cancers — known as multicancer early detection (MCED) liquid biopsy tests.
The study suggests the tests could save lives by catching cancer at earlier stages, when it's more easily treated. It predicted that, over a decade of use, MCEDs could reduce stage IV cancer diagnoses in the U.S. by 45%, compared to the current standard of care.
But for now, liquid biopsies that test for multiple cancers still have unacceptably high false-positive rates. And even when they don't, there aren't clear guidelines for how to integrate them into the standard of care. That means they aren't going to make their way into the clinic in the near term, experts told Live Science.
Before the transformative effect predicted by the Cancer paper can be borne out, doctors will need to figure out how to best use these tests in the clinic.
The idea behind liquid biopsy tests is that they enable clinicians to look for cancer without going anywhere near the tumor itself, Dr. Carolina Reduzzi, an oncologist and director of the liquid-biopsy platform at Weill Cornell Medicine, told Live Science.
"It's like translating a tissue biopsy into the blood," said Reduzzi, who was not involved in the Cancer report. These tests can detect various signs of cancer, including individual circulating tumor cells (CTCs), chunks of tumor genetic material floating in the bloodstream, and even tiny fragments of tumor cells.
Because they do not require direct tumor sampling, liquid biopsy tests are simpler and less invasive. Additionally, the hope is that if clinicians regularly repeat liquid biopsies, they could build up a picture of how a tumor changes in response to treatment.
If a patient's tumors contain many genetically distinct cells, tissue biopsies that only sample one portion of the tissue may provide a biased view of their disease, said Reduzzi. Liquid biopsy, in contrast, should provide a broader picture of a patient's cancer by making it easier to analyze cells from multiple tumor sites.
To date, the U.S. Food and Drug Administration (FDA) has approved five liquid biopsy diagnostic tests, each for single types of cancer. These tests have been validated via assays that compared their ability to detect signs of cancer against tests that sample tumor tissue.
No MCED tests are currently approved or available through routine clinical care, although some, like Exact's Cancerguard and GRAIL's Galleri, are available in the United States as "laboratory-developed tests" (LDTs). LDTs exist in a regulatory gray area in which they are not formally approved by the FDA but are available to patients through their clinicians or independent telemedicine providers.
Dr. Iseult Browne, a clinical oncologist based at the Royal Marsden Hospital in London and the U.K. Institute of Cancer Research, said that progress in Europe is patchier. The U.K.'s National Health Service is conducting a trial of Galleri, called PATHFINDER 2, based on data from 140,000 participants. That data will be released next year.
Browne and Reduzzi noted that inertia in the field of oncology could delay the further rollout of liquid biopsies. Oncologists have, for decades, built diagnostic and treatment plans based on data from tissue-biopsy analyses. Shaking up these entrenched practices, even with data showing the utility of liquid biopsies, is difficult.
Even with single-cancer tests, Browne said, standardization is an issue. "Everyone is using a different assay," so making head-to-head comparisons to decide which test is best can be confusing. Different trials have been analyzing different markers of cancer, at various timepoints in disease progression, she said.

Ruth Etzioni, a biostatistician at the Fred Hutchinson Cancer Center in Seattle, leads a multi-institute effort to review emerging cancer treatments and diagnostics. Similar to the Cancer report, Etzioni's team has modeled the impact of MCEDs and predicts that they would allow cancers to be detected at earlier stages.
However, she added, "our numbers are a little less optimistic."
The tests' helpfulness varies by cancer type, because it hinges on how long different cancers remain in each stage of progression. If a cancer lingers in stages I and II, then MCED tests would be well-placed to diagnose it early. But if a cancer rapidly progresses to stage IV, then the test will be less useful, Etzioni explained.
The question of how long different cancers stay in each stage is still a matter of debate. The "dwell times" used in the recent Cancer report leaned optimistic, assuming that cancers would progress slowly enough for an annual MCED test to make a difference.
Another reason MCED tests are not ready to replace existing diagnostics is that some analyses will always require a tissue biopsy, and current medical guidelines advise doctors to make some clinical decisions based on tumor-tissue samples.
"Immunotherapy is given in some cases based on how much your tumor has leukocytes [immune cells] infiltrating the tumor," Reduzzi said. "You cannot get that in the blood." All the researchers interviewed for this article agreed that a positive on a liquid biopsy test would need to be followed up with further testing before any cancer treatment was initiated.
So, multicancer tests may diagnose cancer earlier, but whether that early diagnosis will lead to lower death rates will depend on whether those confirmation tests happen quickly, Etzioni said. And those follow-up tests also have to be up to the task of identifying early-stage cancer, she noted.
Emerging MCEDs also have issues with false positives, Browne said. Early, non-peer-reviewed data from the PATHFINDER 2 trial shows that Galleri was extremely good at ruling out cancer, correctly identifying people without the disease 99.6% of the time. Yet roughly 40% of the patients that the test flagged as having cancer were actually cancer-free.
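Those two numbers can coexist because cancer is rare among the people screened, so even a tiny false-positive rate produces many false alarms. A minimal back-of-envelope sketch illustrates this; the prevalence and sensitivity values below are illustrative assumptions, not trial data, and only the 99.6% specificity comes from the reported figures:

```python
# Positive predictive value (PPV): the fraction of positive results
# that are true positives. At low disease prevalence, even a highly
# specific test yields many false positives among those it flags.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed for illustration: 1% of those screened have cancer, and the
# test catches 60% of those cancers. Only the 99.6% specificity is
# taken from the early PATHFINDER 2 figures reported above.
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.60, specificity=0.996)
print(f"True positives among those flagged: {ppv:.0%}")       # ~60%
print(f"False positives among those flagged: {1 - ppv:.0%}")  # ~40%
```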
A false-positive rate of that magnitude causes patients unnecessary worry, Browne said, and each false positive could trigger follow-up tests that people would not have gotten otherwise. Scaled up to the millions of people at elevated risk of cancer, that would significantly burden any health system that adopted the tests.
To reduce false-positive rates, future studies will need to find more reliable signals of cancer to detect. Incorporating signals from other cell types, like immune cells, has been shown to improve test specificity. Improvements to laboratory standardization could also help cut the false-positive rate.
Browne hopes that liquid biopsy could someday help patients avoid the sapping side effects that were once unavoidable parts of cancer treatment. For instance, an ongoing trial at the Royal Marsden Hospital is assessing whether a test could identify breast cancer patients who don't need post-operative chemotherapy. The test enables the doctors to assess a patient's risk even after their tumor has been removed because it looks for tumor DNA in the blood.
Reduzzi believes that optimized multicancer tests — which would identify a large fraction of people who have cancer while having a low false-positive rate — will transform cancer diagnostics, and that such tests are on the horizon.
"I don't think we have a test that is there," she said. "But I think we will. With time."
This article is for informational purposes only and is not meant to offer medical advice.
]]>Naked-eye stargazing in winter is a joy, but lift a pair of binoculars to your eyes and the whole experience changes. The sky stops being a flat backdrop and suddenly has depth. It’s layered with stars, open clusters and nebulas that you never knew were there. Galactic immersion is yours.
That's the magic of binocular astronomy. Sweeping the sky with both eyes open feels natural and relaxed, yet you're seeing so much more than with the unaided eye. It's also easy and affordable — all you need is a warm coat, a dark corner and a steady pair of hands.
Choose a good pair of stargazing binoculars — something like a 7x50, 8x42 or 10x50 — and you'll unlock a second layer of the winter night sky with almost no effort. Here's what to look at through binoculars from the Northern Hemisphere this season.
If you want to get even closer to the night sky, the best telescopes will give you that extra bit of power.

It's the brightest star in the night sky, but Sirius in the constellation Canis Major also appears to be one of the most colorful. Although it’s a blue-white star, Sirius shows a rainbow of colors as it twinkles.
Its high brightness and the fact that it is low in the sky during the Northern Hemisphere winter make Sirius shimmer in multiple colors as its starlight is refracted by Earth’s atmosphere. Put your binoculars on Sirius and you will see a kaleidoscope of colors.

The best time to look at an outer planet is when it is at opposition. At that moment, the Earth is between the planet and the sun, making the planet both closest to Earth and fully illuminated by the sun.
On Jan. 10, 2026, Jupiter will come to opposition, something that happens roughly once every 13 months. For a few weeks on either side of this date, put a pair of 8x42, 10x42 or 10x50 binoculars on Jupiter and you will see its four Galilean moons — Europa, Callisto, Ganymede and Io — as dots on either side of the giant planet.
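For the curious, the roughly 13-month spacing between oppositions falls out of a standard synodic-period calculation. Here is a quick sketch; the value for Jupiter's orbital period is the textbook figure, not something from this guide:

```python
# Successive oppositions recur on the synodic period S, where
# 1/S = 1/P_earth - 1/P_jupiter (orbital periods in years).
P_EARTH = 1.0      # years
P_JUPITER = 11.86  # years (standard value)
S = 1 / (1 / P_EARTH - 1 / P_JUPITER)
print(f"{S:.2f} years, or about {S * 12:.1f} months")  # 1.09 years, ~13.1 months
```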

Ask someone when the best time to look at the moon is, and they will almost always say when it's a full moon — but that's bad advice. Through binoculars, the moon looks better at almost any other time of the month, with perhaps the most intriguing (and convenient) view coming at the first quarter moon, when dramatic shadows can be seen along the terminator — the line between lunar night and day.
Use any pair of 10x binoculars and you'll get a spectacular close-up of shadows cast by the craters, valleys and mountains on the moon. As a bonus, a first-quarter moon is up from dusk until midnight.

A particularly bright open star cluster in the constellation Cassiopeia, the Owl Cluster (or NGC 457, if you prefer) is over 9,000 light-years from the solar system and contains almost 100 stars.
Its name comes from its yellow and blue stars, which are said to resemble the eyes of an owl. If you see Cassiopeia as a ‘W’ shape, NGC 457 is just beneath the first ‘V’.

As we've already said, the full moon phase is not the best time to look at the moon through binoculars — with one very specific exception.
If you can catch the full moon as it rises in the east during dusk, there are few better sights than the lunar surface cast in an orange light. It looks that way because the sunlight being reflected into your eyes is traveling through the thickest part of Earth's atmosphere, which scatters away short-wavelength blue light, while the longer wavelengths of red and orange light pass through easily.
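The strength of that effect can be roughed out with the Rayleigh-scattering rule of thumb: scattering strength scales with the inverse fourth power of wavelength. A small sketch, using representative (assumed) wavelengths for blue and red light:

```python
# Rayleigh scattering strength ~ 1 / wavelength^4, so shorter (bluer)
# wavelengths are scattered out of the moonlight far more than red.
blue_nm = 450  # representative blue wavelength (assumed)
red_nm = 650   # representative red wavelength (assumed)
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered roughly {ratio:.1f}x more than red")  # ~4.4x
```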
See the full moon rise on Dec. 4 (Cold Supermoon), Jan. 3 (Wolf Supermoon) and Feb. 1 (Snow Moon), researching the exact time of moonrise for your location and looking east a few minutes after.

The constellation of Auriga dominates the autumn and winter sky, but tends to get overshadowed by the rising stars in the constellation Orion below. Auriga’s brightest star is Capella, the goat star — the brightest in a rough pentagon of five stars.
However, within the constellation, there are some deep sky delights in the form of three star clusters — M36, M37 and M38. Find M36, and all three will fit in the field of view of most 10x50 binoculars.

Stargazers and astrophotographers rave about capturing the Milky Way during the Northern Hemisphere summer months, but the dense star fields of our galaxy's spiral arms can easily be seen in winter. All you need to do is scan your binoculars between the constellations of Orion in the south and Cassiopeia high in the north, and you will see many thousands of bright stars.
Looking its best between December and February, it's not as bright as the summer Milky Way, but the crisp and cold nights can give it a gorgeous, glittering look.

In the constellation Cassiopeia there is an open cluster, NGC 7789, whose stars and the dark lanes between them are said to resemble a rose. A great target for binoculars, the cluster is named for Caroline Herschel, who discovered it in 1783. Herschel was a noted comet-hunter and the younger sister of astronomer William Herschel, who discovered Uranus.
If you see Cassiopeia as a ‘W’ shape, NGC 7789 is close to the final point, marked by the star Caph.

It is one of the easiest and most spectacular sights of all to see through a pair of binoculars, but Earthshine doesn't get the attention it deserves. When the moon is a slim crescent, put your binoculars on the night side of the moon, and you will see detail on the lunar surface. This is Earthshine: sunlight reflected from Earth's icecaps, oceans and clouds, gently illuminating the night side of the moon.
You'll see it for two or three nights on either side of the new moon phase — initially during a waning crescent moon visible in the east just before dawn, and later during a waxing crescent moon in the west just after dusk. New moons occur on Dec. 19, 2025, and Jan. 18, 2026.
]]>The symptoms: The man went to the urology unit of a hospital after urine had been leaking out of small holes in his perineum — the skin between the penis and anus — for about two weeks. This condition is known as "watering can" perineum. The man had a history of various urinary problems, such as a poor stream, discharge from the urethra, urine dribbling, and a burning sensation while peeing.
What happened next: Because the man had a poor urinary stream — meaning his urine didn't flow as quickly or forcefully as usual — the doctors had to drain his full bladder before they could explore the cause of the leakage in his perineum.
The doctors attempted to insert a catheter through the urethra and into the bladder, creating a tunnel for urine to flow into a bag. However, as they tried to push the tube, they hit a wall. Something was blocking the catheter's path.
The doctors instead made an incision in his abdomen and inserted a catheter into the bladder that way, bypassing the urethra altogether. Once urine began to flow, they tested it for signs of infection and found Staphylococcus aureus bacteria. These bugs are an uncommon cause of urinary tract infections and usually appear only when there is a physical abnormality blocking urine flow, which allows this species to remain in the bladder and thrive.
The doctors referred the patient to the radiology department to get scans of his bladder and search for signs of such physical abnormalities. To visualize the bladder in X-ray scans, the radiologists administered an X-ray-sensitive dye through the abdominal catheter. This revealed that the bladder had inflated at its base, leaving a pointy tip.
The diagnosis: This condition is known as a "Christmas tree" or "pinecone" bladder, owing to its appearance.

The X-rays confirmed that urine had been blocked from leaving the organ. Yet a closer inspection of the bladder revealed no issues with the organ itself, such as an obstructing mass or bladder stones. This led the doctors to wonder if an obstruction was located elsewhere.
Further X-ray imaging revealed that the urethra had narrowed significantly about halfway up its length, cutting off urine flow. This condition is called a urethral stricture and has a multitude of causes.
Often, it arises following a pelvic injury or physical trauma, such as from falling onto a bicycle's crossbar. It can also stem from sexually transmitted bacterial infections or appear if a tumor presses against the tube. Sometimes, the condition has no identifiable cause. (The exact reason for this man's condition wasn't noted in the report of his case.)
The treatment: The doctors treated the man's staph infection with antibiotics and performed an operation to restore the urethra's channel. Surgery can offer some respite from the condition, but urethral strictures often recur, the doctors noted in the report.
What makes the case unique: A urethral stricture is an unusual cause of Christmas tree bladder. Normally, a bottleneck in the urethra slows urine flow, leading to some degree of distension in the bladder, but not so much that the organ would widen at the base into a tree-like shape.
The "Christmas tree" swelling is usually caused by a problem with the nerves that control bladder contractions, thus preventing it from emptying properly. Often, this occurs following nerve damage from a spinal cord injury, stroke or neurodegenerative disease, such as multiple sclerosis. Alternatively, it can arise if the neck of the bladder becomes constricted or choked, such as by an inflamed prostate in men.
This article is for informational purposes only and is not meant to offer medical advice.
]]>Researchers have compiled a detailed map of seasonal rhythms around the world, which shows that some physically close regions have dramatically different timing for seasonal variations such as the start and end of the growing season. These differences could contribute to high biodiversity in certain ecosystems, the development of new species and even the different types of coffee harvested in Colombia, the team said.
"Seasonality may often [be] thought of as a simple rhythm — winter, spring, summer, fall — but our work shows that nature's calendar is far more complex," study co-author Drew Terasaki Hart, an ecologist and data analyst at the Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia, said in a statement. "This is especially true in regions where the shape and timing of the typical local seasonal cycle differs dramatically across the landscape. This can have profound implications for ecology and evolution in these regions."
The idea of a simple, seasonal growing pattern works well for plants that grow at high latitudes, such as those in much of Europe and North America, researchers wrote in the study, published Aug. 27 in the journal Nature. But it doesn't work quite as well in arid or tropical ecosystems.
In the study, Terasaki Hart and his colleagues used 20 years' worth of satellite data that captured how plants reflected infrared light throughout the year to map vegetation's growth cycles around the world.
Areas on the slopes of mountains in tropical regions or that have a balmy Mediterranean climate frequently exhibited seasonal asynchrony, or differences in their seasonal cycles across short distances, the team found. In these areas, the availability of light and water was more important for the local plants' growth cycles than the temperature.
"Our map predicts stark geographic differences in flowering timing and genetic relatedness across a wide variety of plant and animal species," Terasaki Hart said in the statement. "It even explains the complex geography of coffee harvest seasons in Colombia — a nation where coffee farms separated by a day's drive over the mountains can have reproductive cycles as out of sync as if they were in opposite hemispheres."
These starkly different niches over short distances could explain why tropical regions have such high biodiversity, the team wrote in the study. Plant and animal species on different seasonal cycles would slowly diverge, reproducing at different times and possibly forming new species after many years.
The results could help explain how species evolve in other ecosystems, such as in river or ocean environments, as well as how environments are adapting to climate change, the researchers wrote in the study.
"We suggest exciting future directions for evolutionary biology, climate change ecology, and biodiversity research, but this way of looking at the world has interesting implications even further afield, such as in agricultural sciences or epidemiology," Terasaki Hart added.
]]>"The mammoth dwellings show how communities thrived in extreme environments, turning the remnants of giant animals into protective architecture," the archaeologists wrote in a statement.
The bones were originally found near the village of Mezhyrich, about 70 miles (110 kilometers) southeast of Kyiv. When the site was excavated between 1966 and 1974, archaeologists found the mammoth remains arranged in a way that suggested they had been used to make houses sometime during the ice age. While this interpretation has found much support among archaeologists, questions still remain about exactly when these bone dwellings were used and for how long. Earlier dates from the site gave a broad range from roughly 19,000 to 12,000 years ago, the team noted in their paper.
To investigate these questions, archaeologists re-examined the site to get a better idea of when it was built and how long it stayed in use. They dated the remains of about a dozen small animals found near the mammoth dwellings to establish a more precise chronology.
The largest structure at Mezhyrich dates to 18,323 to 17,839 years ago, the team reported in the study, published on Nov. 21 on the publishing platform Open Research Europe. These dates are just after the Last Glacial Maximum (26,500 to 19,000 years ago), the coldest part of the last ice age. The researchers noted that the dwelling may have been used for up to 429 years. This indicates that the "shelters were practical solutions for survival rather than permanent settlements," they wrote in the statement.

The foundation of the shelters may have had "mammoth skulls and large long bones, set vertically into the ground [which] formed a kind of plinth or 'foundation,'" study co-author Pavlo Shydlovskyi, an archaeology professor at the Taras Shevchenko National University of Kyiv, told Live Science in an email.
A wooden framework may have covered parts of the shelter, along with hides from smaller animals or possibly birch bark. In addition, "tusks and large flat bones were placed on the upper part of the structure [the roof] functioning as weights and wind protection," Shydlovskyi said.
Five to seven people likely lived within each shelter, Shydlovskyi said. A variety of activities such as flint knapping, animal skin processing and small animal butchering were likely done inside.
Francois Djindjian, an honorary professor at the University of Paris 1 Panthéon-Sorbonne, has done research on other possible mammoth bone shelters but was not involved in the new paper. He was cautious about the team's dates, saying that more radiocarbon dates from across the site are needed to give a better idea of when it was used.
The minimally invasive wireless device, which is placed under the scalp, receives inputs in the form of light patterns, which are then conveyed to genetically modified neurons in brain tissue.
In the new study, these neurons activated as if they were responding to sensory information from the mice's eyes. The mice learned to match these different patterns of brain activity to perform specific tasks — namely, to uncover the locations of tasty snacks in a series of lab experiments.
The device marks a step toward a new generation of BMIs that will be capable of receiving artificial inputs — in this case, LED light — independent of typical sensory channels the brain relies on, such as the eyes. This would help scientists build devices that interface with the brain, without requiring trailing wires or bulky external parts.
"The technology is a very powerful tool for doing fundamental research," and it could address human health challenges in the longer term, said John Rogers, a bioelectronics researcher at Northwestern University and senior author of the study, which was published Dec. 8 in the journal Nature Neuroscience.
The device, which is smaller than a human index finger, is soft and flexible, so it conforms to the curvature of the skull. It includes 64 tiny LEDs, an electronic circuit that powers the lights, and a receiver antenna. An external antenna controls the LEDs via near-field communication (NFC) — short-range communication through electromagnetic fields, the same technology used for contactless card payments.
The compact device is designed to be placed under the skin, rather than being implanted directly into the brain. "It projects light directly onto the brain [through the skull], and the response of the brain to that light is generated by a genetic modification in the neurons," Rogers told Live Science.
Brain cells don't normally respond to light that is shone on them, so gene editing is required to make that happen.
"The genetic modification creates light-sensitive ion channels," Rogers explained. When activated by light, these channels allow charged particles to flow into brain cells, tripping a signal that then gets sent to other cells. "Through that mechanism, we create light sensitivity directly in the brain tissue itself," he said. The genetic modification of the brain cells was done using a viral vector, a harmless virus made to deliver the desired genetic tweak into specific cells in different regions of the brain.
The use of light to control the activity of genetically modified cells is called optogenetics, and it's a relatively new science. In past work, the researchers used a similar approach to activate just one group of brain cells, but the new device enabled them to toggle the activity of many neurons across the brain.
"[The genetic modification] is not just stimulating the part of the brain that's naturally responsible for visual perception, but across the entire surface of the cortex," Rogers said. Thus, sending different patterns of illumination creates a corresponding distribution of neural activity. "It's like we can project a series of images — almost like play a movie — directly into the brain by controlling [the] sequence of patterns."
The researchers tested the implant in the mice by wirelessly instructing it to produce various patterned bursts of light. The mice were trained to respond to each pattern with a specific behavior, indicating that they could distinguish between the patterns transmitted. With each type of signal, they had to go to a specific cavity in a wall, and for choosing correctly, they'd get sugar water as a reward.
Bin He, a neuroengineering researcher at Carnegie Mellon University who wasn't involved in the study, called it a novel technique for using light to tune circuits across the brain. "It may have various applications in neuroscience research using animal models … and beyond," he said.
For instance, the researchers see potential for this device in future prosthetics. Applications could include adding sensations, like touch or pressure, to prosthetic limbs, or sending visual or auditory signals to vision or hearing prostheses.
"Optogenetic techniques are just beginning to be used with humans," Rogers said. "There are tremendous advantages [to using light] because you don't need to disrupt the brain tissues. You can use different wavelengths of light to control different regions of the brain."
Rogers said that from a technology standpoint, the platform could scale to cover much larger areas of the brain and contain more micro-LEDs. However, they would have to rethink the power-supply requirements to support a larger device. It should technically work in humans as it does in mice, but further research will be needed before any tests are attempted in humans.
"The biggest hurdle is around the regulatory approval for the genetic modification," he said.
The new research reveals "cats' ability to categorize bonded individuals and modulate their responses," said study co-author Kaan Kerman, principal investigator of the Animal Behavior and Human-animal Interactions Research Group at Bilkent University in Turkey. "This shows that cats are not automata and possess cognitive abilities that enable them to live alongside humans in an adaptive manner," he told Live Science in an email.
Despite their reputation for being aloof and unfriendly, cats are actually highly communicative and masters at fitting into different social groups.
"Both the public imagination and the scientific community for a time viewed cats as loners with little need for social bonds," Kerman said. However, "cats are more social than previously assumed. They do not interact with humans solely to obtain food. They actively seek social contact and form bonds with their caregivers."
Greeting is a key part of that sociability, as it helps reinforce bonds between domestic cats (Felis catus) and their humans, the researchers wrote in the study, which was published Nov. 14 in the journal Ethology.
To find out more about how cats greet humans, the researchers fitted 40 cat owners with cameras. They were asked to film the first 100 seconds of their interactions with their cat after returning home. The participants were told to act normally so they could capture typical interactions. The researchers then analyzed the footage to assess whether certain behaviors are related, and whether different demographic variables influenced the cats' behaviors.
Nine people were excluded from the study for various reasons, but videos from the remaining 31 participants revealed that the cats were far more vocal toward men than women when their humans first walked in. "No other demographic factor had a discernible effect on the frequency or duration of greetings," the researchers wrote.
The researchers then accounted for different factors, such as the animals' sex, pedigree status and number of cats in the household — but found that the sex of the human was the only significant influence on cat vocalizations.
The researchers suggest this could be because women are typically more verbally active with their cats and better at interpreting what their cats want. Men, on the other hand, may need a lot more prompting before they pay sufficient attention to their cats, the researchers hypothesized in the study.
The team also speculate that cultural factors may have influenced their findings. Previous research shows that people in different cultures interact with cats in different ways — and that this also impacts how cats interact with humans. In this case, the participants were in Turkey, and it may be that men in Turkey are less likely to be chatty with their cats, the team wrote. "However, this interpretation remains speculative and warrants further exploration in future research," they added.
The team also found that meowing and other vocalizations didn't fit into a specific pattern of behavior — meaning these vocalizations were not a sign of a specific emotional state or need.
The team acknowledged that the study has several limitations, including the small sample size and the participants being from the same region. The researchers also noted that the study did not control for other potentially important factors, such as how hungry the cats were when their humans returned, the number of other people in the household or the length of time the animals were alone. Previous research suggested that cats react differently to humans — such as by purring and stretching more — when they are separated for longer periods of time, so the results don't necessarily reveal that cats always meow more at men.
"One important next step is to replicate the findings in different cultural contexts. This would help us understand how generalizable the results are," Kerman said.
Dennis Turner, director of the Institute for Applied Ethology and Animal Psychology in Switzerland, who was not involved in the study, said he was impressed by the team's findings.
"I liked the authors' speculation about the reason for this finding and suspect that the men either were less attentive to the cats' vocalizations on other occasions or reacted differently (more or less strongly, different voice frequency) to the greeting vocalizations than women," he told Live Science in an email.
"Much of my team's research [h]as shown that men and women (and children) interact differently with cats in the household." For instance, women speak more to cats and are likelier to go down to the cats' level to interact with them, he noted.
However, cats likely have no preference towards men or women, Turner added. Instead, he agreed with the researchers' view that more meowing toward men is a sign of cats' social flexibility.
According to NASA, MRO has just taken its 100,000th photo of the Martian surface using its HiRISE camera. Put another way, that's an average of 5,000 photos a year, 417 photos a month, or about 14 a day every day since March 2006.
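Those averages roughly check out against the mission timeline. A quick sanity check, approximating the span from March 2006 to late 2025 as about 19.75 years:

```python
photos = 100_000
years = 19.75  # approximate span, March 2006 to late 2025
print(f"{photos / years:,.0f} photos per year")            # ~5,063
print(f"{photos / (years * 12):,.0f} photos per month")    # ~422
print(f"{photos / (years * 365.25):,.1f} photos per day")  # ~13.9
```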
The new milestone image, snapped on Oct. 7, shows a shadowy wasteland of mesas, craters and dunes known as Syrtis Major. This region is just southeast of Jezero Crater, the ancient lakebed where NASA's Perseverance rover is hunting for evidence of life, and appears as an enormous dark spot when seen from afar by space telescopes like Hubble.
MRO has observed the region many times before, and has previously turned up evidence that the sand dunes there are slowly migrating across the planet's surface.
"HiRISE hasn't just discovered how different the Martian surface is from Earth, it's also shown us how that surface changes over time," Leslie Tamppari MRO's deputy project scientist at NASA's Jet Propulsion Laboratory, said in a statement. "We've seen dune fields marching along with the wind and avalanches careening down steep slopes."

Studying how the Red Planet changes over time will help demystify the forces that govern it, and help reveal whether it was ever a lush waterworld like Earth. Launched from Florida on Aug. 12, 2005, and inserted into Mars orbit on March 10, 2006, the MRO will continue its mission to photograph the planet as long as it's able.
Occasionally, MRO does take a break from its primary mission to gaze off into space. In October, the satellite looked skyward to snap a shot of the interstellar comet 3I/ATLAS as it passed about 19 million miles (30 million kilometers) from the spacecraft — significantly closer than the comet got to Earth at its closest point on Dec. 19.
While MRO wasn't designed to observe small, fast-moving objects at such great distances, it nevertheless provided early confirmation that 3I/ATLAS showed the telltale characteristics of a natural comet, including a small nucleus enshrouded in a bright coma of gas and dust.
]]>This wasn't the only place where Zoroastrians mingled with people of other faiths; a 2,000-year-old sanctuary discovered in modern Georgia reveals a mixture of Zoroastrian beliefs and those of other religions, another study reports.
Taken together, the finds are more evidence that Zoroastrianism — the official religion of the royal dynasties that governed the Persian empires for more than 1,000 years — often coexisted peacefully with other religions.
In the Iraq finding, a team led by archaeologists Alexander Tamm, of the Friedrich-Alexander University of Erlangen-Nuremberg, and Dirk Wicke, of Goethe University Frankfurt, examined the ruins of a building complex at the Gird-î Kazhaw site in the Kurdistan region of the country, according to a statement from Goethe University Frankfurt.
They found buried stone pillars and other architectural evidence that the building complex had been a church at the center of a Christian monastery, which was originally discovered in 2015. The monastery was built in about A.D. 500 — "a huge surprise" because it was the first Christian structure ever found there, according to the statement.
The team also unearthed buried fragments of a large jug decorated with an early Christian cross. (Crosses were rarely used as Christian symbols until the Roman Empire legalized Christianity in the fourth century.)
And yet the newly investigated Christian monastery lies only a few yards from a Sasanian Persian fortification where Zoroastrianism was practiced. The two structures' proximity indicates that Christians and Zoroastrians were living peacefully side by side at this location, the statement said.

The archaeological team noted that in that era, Christianity was spreading beyond the borders of the Roman Empire, where it had been the official religion since the Edict of Thessalonica by Emperor Theodosius in 380.
The Romans — and, later, the Byzantines — were usually rivals of the Persians, and sometimes allies. The new religion of Christianity, however, was spreading even among the Persians. "The early dating for a church building into the fifth to sixth century AD is not unusual in the region," the statement said. "There are comparable structures in northern Syria and northern Mesopotamia."
The finds in northern Iraq come amid new details of a roughly 2,000-year-old sanctuary within a "grandiose" temple complex at Dedoplis Gora in Georgia, less than 400 miles (600 kilometers) north of Gird-î Kazhaw in Iraq.
Dedoplis Gora was under the independent Kartli kingdom at that time. However, the region was heavily influenced by the Achaemenid Persian Empire, and there is extensive evidence that Zoroastrianism was practiced there.
According to a study in the January 2026 issue of the American Journal of Archaeology by David Gagoshidze, an archaeologist at the University of Georgia in Tbilisi, "the kings of Kartli worshiped Iranian (Zoroastrian) gods merged with local Georgian astral deities." The study looks at three sanctuary rooms in the Dedoplis Gora palace that had different religious traditions.

In one sanctuary, the rites of Zoroastrianism were practiced at an altar where "permanent residents of the palace of Dedoplis Gora offered daily sacrifices and prayed." In another room, it appears the "noble owners of the palace" worshipped the Greek cult of Apollo, "based on the statuettes found there," according to the study.
Finally, in a third room, in what seems to have been a "syncretic" ceremony (that is, a ceremony that combines more than one religious tradition), rituals were likely carried out for a local cult related to "fertility, agriculture and harvest."
The studies indicate the official Persian religion of Zoroastrianism was generally tolerant of other beliefs, although there were times during the late Sasanian Persian Empire when followers of rival religions like Christianity or Manicheism (a now-extinct Persian religion centered on the prophet Mani) were persecuted.
Zoroastrianism is named after the Persian prophet Zarathustra (Zoroaster in Greek), who is thought to have lived about 3,500 years ago, and it is centered on the worship of the "Wise Lord" Ahura Mazda, whose primary symbol is fire. (The phrase "Thus spake Zarathustra" is the title of a book by the 19th-century German philosopher Friedrich Nietzsche, who wasn't Zoroastrian but used Zarathustra as a fictional mouthpiece for his ideas.)
Zoroastrianism sharply declined in Persia (now Iran) after the Islamic conquest of the Sasanian Empire in the seventh century, and now there are only about 120,000 practicing Zoroastrians worldwide.
]]>When incorporated into energy storage devices called supercapacitors, this new form of graphene could be the key to high-capacity, fast-charging energy storage that could deliver power more quickly than conventional batteries, the researchers said in a statement.
The new material, called multiscale reduced graphene oxide (M-rGO), is created from graphite, a globally abundant resource. Researchers incorporated it into pouch cells, a type of rechargeable battery packaged into a thin, flexible, laminated foil envelope instead of rigid metal. The scientists published their findings Sept. 15 in the journal Nature Communications.
Pouch cells are used in electric vehicles, drones, wearable electronics, laptops, smartphones and tablets. Building them from M-rGO could lead to improvements in total capacity, charge time and the ability to power more complex and power-hungry devices with smaller batteries, according to the research team.
Whereas traditional batteries store energy in chemical bonds, supercapacitors are electrochemical capacitors that store energy as separated electric charge on electrode surfaces. Their big advantage over traditional batteries is power density — how quickly energy can be delivered per unit volume — though they have historically trailed batteries in energy density, or how much energy can be stored in a given space.
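For a sense of scale, the energy held by any capacitor follows E = 1/2 * C * V^2. A minimal sketch with illustrative numbers; the 3,000-farad, 2.7-volt cell below is a typical commercial supercapacitor spec, not a figure from this study:

```python
def capacitor_energy_joules(capacitance_f: float, voltage_v: float) -> float:
    """Energy stored in a capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Illustrative cell: 3,000 F rated at 2.7 V (assumed, common spec).
energy = capacitor_energy_joules(3000, 2.7)
print(f"{energy:,.0f} J, or about {energy / 3600:.1f} Wh")  # ~10,935 J, ~3.0 Wh
```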
Until now, however, supercapacitors have been hamstrung by one significant limitation: only a portion of the potential energy storage of the materials from which they were created was available for use.
This limitation comes from graphene's physical makeup. While it has the advantage of allowing for denser electrodes — the solid conductors in a battery where charge is stored — it's very inefficient at using that space. Simply stacking graphene, for instance, is inefficient because the sheets adhere too closely together and don't leave enough space for the ions that need to move in and out to store energy.
To get around this problem, scientists built messy 3D structures similar to sponges, which provide both large amounts of storage area and pathways for ions to move. While lightweight, these structures were large and cumbersome.
This breakthrough overcomes that issue by heating the graphene in a two-step process. This results in a tangled, curved graphene network with multiple levels of structure that still allows for the rapid movement of ions while providing lots of surface area for energy storage.
"This discovery could allow us to build fast-charging supercapacitors that store enough energy to replace batteries in many applications, and deliver it far more quickly," said Mainak Majumder, a professor of mechanical and aerospace engineering at Australia's Monash University, in the statement.
]]>Where is it? Atacama Desert, Chile
What's in the photo? A rare dusting of snow covers parts of one of the driest places on Earth
Which satellite took the photo? Landsat 9
When was it taken? July 10, 2025
This striking satellite photo captured a rare spectacle earlier this year, when "one of the driest places on Earth" experienced a snowstorm. The freak event temporarily turned the barren, rocky landscape white — and briefly shut down one of the world's most powerful radio telescopes.
The Atacama Desert is a roughly 40,500-square-mile (105,000 square kilometer) non-polar desert, located within a 1,000-mile-long (1,600 km) strip of land nestled between the Pacific Ocean and the Andes mountains in northern Chile. It is the world's oldest non-polar desert, having remained semi-arid for at least 150 million years. And it is home to the sunniest spot on Earth, the Altiplano Plateau, which experiences sunlight levels equivalent to those on Venus.
The desert is also widely considered to be one of the driest places on Earth, alongside other hyperarid spots, such as Antarctica and the Sahara. Some areas currently experience as little as 0.002 inches (0.5 millimeters) of rain annually, according to Guinness World Records. Previous research has hinted that parts of the Atacama went nearly 400 years without any recorded rain, between 1570 and 1971.
On June 25, a rare snowstorm hit Atacama after a "cold-core cyclone" unexpectedly drifted down from the north, covering over half the desert with white powder, according to NASA's Earth Observatory.
The satellite photo above shows a section of the desert in the Chajnantor Plateau, which rises to around 16,000 feet (5,000 meters) above sea level. This area is home to the Atacama Large Millimeter/submillimeter Array (ALMA) observatory — an array of more than 50 radio dishes that scour the "Dark Universe." (ALMA itself is not visible in the aerial photo.)

This area is well-suited to astronomical research because it is remote, dry and well-elevated, which reduces interference and maximizes the amount of data telescopes like ALMA can collect. But when the snow settled over the observatory, it temporarily forced ALMA into "survival mode," meaning that the dishes were repositioned to stop them from accumulating snow, halting observations.
The icy dust may have also affected the Southern Astrophysical Research (SOAR) Telescope, located around 530 miles (850 km) southwest of ALMA, but to a lesser extent, according to Live Science's sister site Space.com. The newly constructed Vera C. Rubin Observatory is also located in Atacama, near the SOAR telescope, but was not affected by the storm.
The snow did not last long, and most of it had disappeared by July 16. In some places, the sunlight was so intense that the snow likely sublimated, or turned directly from solid to gas, before it melted, according to the Earth Observatory.
This is not the first time that snow has fallen in the Atacama. Similar events also occurred in 2011, 2013 and 2021.
The region has also experienced several intense bouts of rain in recent years. When this happens, it can trigger deadly mudflows. In March 2015, at least 31 people were killed after heavy rainfall triggered Atacama's largest ever flood, according to a 2016 study.

Rain can also cause desert flowers, which normally appear in spring, to unexpectedly bloom during the winter months, carpeting parts of the desert in vibrant petals. This most recently happened in 2024, after a surprise rain shower caught the plants off guard.
Precipitation is rare in the Atacama for two reasons. First, it sits within the "rain shadow" of the Andes, which block clouds moving in from the east. Second, cold ocean currents off the region's western Pacific coastline prevent water from evaporating into the air over the desert. This makes the Atacama inhospitable to most lifeforms, aside from hardy desert flowers and extreme microbes that live well below its dry surface.
However, the recent instances of extreme precipitation in the region could be a sign that human-caused climate change is making it more likely for snow and rain to fall there. If this continues, the Atacama may one day no longer be one of the driest places on Earth.
For more incredible satellite photos and astronaut images, check out our Earth from space archives.
]]>Milestone: "Taung Child" skull revealed
Date: Dec. 23, 1924
Where: Taung, South Africa
Who: Raymond Dart's anthropological team
At the end of 1924, an anthropologist began chipping away rock around an old primate skull — and rewrote the story of human evolution.
The diminutive skull — about the size of a coffee mug — clearly belonged to a creature very different from us and yet also quite distinct from other apes and monkeys.
But the man credited with its discovery, Raymond Dart, a professor of anatomy and anthropology at the University of the Witwatersrand in Johannesburg, hadn't actually excavated the skull.
Rather, it came to Dart because his student had brought another skull from a quarry to his class. Local workers at the Buxton Limeworks in Taung had previously blasted a baboon skull out of the rock and had brought it to the attention of the company. From there, the baboon skull landed with Dart's student, Josephine Salmons. She recognized it for what it was and brought it to his class.
Dart was excited about the possibility that other ancient primate fossils would be embedded in the same sediments, and he showed the baboon skull to his geologist colleague Robert Young. Young knew the quarry and made contact with the quarryman, a Mr. de Bruyn, and asked him to keep an eye out for more skulls. In late November, de Bruyn identified a brain cast in a piece of rock and set it aside for Young, who then hand-delivered the cranium to Dart.
In his 1959 memoir, "Adventures with the Missing Link," Dart makes no mention of Young hand-delivering the skull. Instead, he implies that he had pulled the skull out of rubble in crates that were delivered from Buxton Limeworks.
In Dart's telling, he immediately recognized what he had found.
"As soon as I removed the lid a thrill of excitement shot through me. On the very top of the rock heap was what was undoubtedly an endocranial cast or mold of the interior of the skull," Dart recounted in his memoir. "I stood in the shade holding the brain as greedily as any miser hugs his gold … Here, I was certain, was one of the most significant finds ever made in the history of anthropology."
On Dec. 23, "the rock parted. I could view the face from the front, although the right side was still embedded," Dart wrote in his 1959 memoir.
Over the next 40-odd days, he feverishly analyzed the skull. In a paper published in the journal Nature on Feb. 7, 1925, he described a newfound human ancestor and named it Australopithecus africanus, or "The Man-Ape of South Africa."
The "Taung Child" would rocket Dart to fame and confirm Charles Darwin's hypothesis that humans and nonhuman apes shared a common ancestor that evolved in Africa.

The discovery of the "Taung Child" was the first time scientists had ever found a near-complete fossil skull of an ancient human ancestor. It was longer than other primate skulls, and the molars in the skull suggested "it corresponds anatomically with a human child of six years of age," according to the study, though later estimates would suggest the child died at around age 3 or 4. We don't know for sure, but most researchers think the Taung Child was a girl.
Because the skull was taken out of its "context," it was difficult to peg its age. Over the years, some researchers have estimated it to be 3.7 million years old, but more recent research suggests it was around 2.58 million years old.
For nearly 50 years, A. africanus was thought to be our direct human ancestor. Then, in 1974, scientists digging in Afar, Ethiopia, unearthed another fossil skull from a related species. This one, dated to 3.2 million years ago, was the iconic "Lucy," and her species, Australopithecus afarensis, wound up dethroning the Taung Child as our direct common ancestor.
But there's a twist ending to this story, as scientists found a few fossil fragments that raise the possibility that Lucy's species isn't our direct ancestor after all, with some even suggesting A. africanus could regain its title.
]]>The two gold coins were minted almost 2,300 years ago, around the mid-third century B.C. "This makes them part of a very small group of just over 20 known examples of the oldest Celtic coins from Switzerland," Swiss archaeologists said in a translated statement released Dec. 18.
One coin is a stater that weighs 0.28 ounces (7.8 grams), and the other is a quarter stater weighing 0.06 ounces (1.86 grams). The term "stater" derives from ancient Greek coinage. As mercenaries, the Celts of mainland Europe were increasingly paid in Greek coins at the end of the fourth century B.C. These coins later inspired Celtic coinage, which began imitating them at the beginning of the third century B.C., the statement noted.
In this case, gold staters minted during the reign of Philip II of Macedon, the father of Alexander the Great, were imitated. Both coins showcase the profile of the Greek god Apollo on the "heads" side (obverse) and a two-horse chariot on the "tails" side (reverse).
However, the two newfound coins were modified slightly from their Greek originals. For example, on the smaller one's reverse, a triple spiral can be seen beneath the horses. This symbol, known as a triskele (also called a triskelion), appears frequently in Celtic art.
The rare coins were unearthed largely on a hunch. Between 2022 and 2023, volunteer archaeologists with Archaeology Baselland, the local archaeological department, discovered 34 Celtic silver coins in the same area — the Bärenfels bog near the municipality of Arisdorf. This prompted Wolfgang Niederberger and Daniel Mona, also volunteer archaeologists with Archaeology Baselland, to conduct follow-up investigations there in spring 2025, when they discovered the two gold coins, according to the statement.

It's possible these two coins were deposited as an offering to the gods, according to the statement.
"Experts assume that Celtic gold coins were not used for everyday transactions. They were too valuable for that," the statement noted. Including salary payments, they may also have been used as diplomatic gifts, gifts to followers, to achieve political goals, or as dowries.
Celtic coins are frequently found near moors and bodies of water. This pattern is also evident in Arisdorf, where water-filled sinkholes form the Bärenfels bog. The Celts considered such places to be sacred and dedicated to gods, so it seems reasonable to assume that the coins were deliberately placed there as offerings, the statement noted.
Both coins will go on display together, along with the silver coins from the same site, in a special showcase in Basel starting in March 2026.
This year, we covered a range of stunning space images, from an eye-catching alien comet and a planetary parade portrait to the first Vera C. Rubin photos and otherworldly animal lookalikes. Here are 10 of our absolute favorites.

The biggest space news story this year was undoubtedly the arrival of the third-ever interstellar object 3I/ATLAS, which has dominated headlines and astronomers' attention ever since it was first spotted speeding through the solar system in early July. As a result, there has been no shortage of stunning shots of the alien comet.
Our favorite is this timelapse image captured by the Gemini North telescope on the summit of Hawaii's Mauna Kea volcano. The image was created by combining 16 different photos using multiple colored filters to create a giant cosmic rainbow.
Read more: Interstellar comet 3I/ATLAS transforms into a giant 'cosmic rainbow' in trippy new telescope image

One of the most unbelievable photos of 2025 was this solar spectacle, dubbed The Fall of Icarus, which perfectly captured the moment a skydiver fell directly in front of the sun.
Astrophotographer Andrew McCarthy captured this shot in early November, at a distance of around 8,000 feet (2,440 meters) from the skydiver, YouTuber Gabriel C. Brown. It took six attempts to properly line up Brown with the solar surface before the thrill-seeker leapt from a small propeller-powered craft at an altitude of around 3,500 feet (1,070 m).
"It was a narrow field of view, so it took several attempts to line up the shot," McCarthy told Live Science. "Capturing the sun is something I'm quite familiar with, but this added new challenges."
Read more: Astrophotographer snaps 'absolutely preposterous' photo of skydiver 'falling' past the sun's surface

In June, the most powerful digital camera on Earth winked on, as the Vera C. Rubin Observatory in Chile's Atacama Desert revealed its first-ever images. These debut photos were chock-full of cosmic treasures, including the spiral galaxy M61 (shown here), which researchers noticed was being trailed by a massive stellar tail around the same size as the Milky Way.
We can look forward to many more spellbinding shots from Rubin in the coming years as it begins its decade-long survey of the night sky.

In late January and early February, up to six of the solar system's planets were simultaneously visible in the night sky in what astronomers refer to as a "planetary parade." This particular parade was one of the best in recent years, allowing astrophotographers to snap several stunning pics of the event.
Our favorite pick of the bunch is this planetary portrait from French astrophotographer Gwenaël Blanck, which he digitally edited to show each planet alongside the sun in the order of distance from Earth. Blanck snapped each of the individual worlds within 80 minutes of one another.
Read more: Parisian photographer produces phenomenal, perfectly-proportioned 'planetary parade' portrait

All that glitters is not gold, and in this scintillating starscape, released in November, it is infrared light that sparkles like a giant ring.
This object, dubbed a "diamond ring," is an expanding bubble of gas in a star-forming region of the Cygnus constellation. The glowing bubble is around 20 light-years across and roughly 400,000 years old. It was photographed by NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA), which previously scanned the night sky from a telescope onboard a Boeing 747SP aircraft at an altitude of more than 45,000 feet (13,700 m).
The cosmic ring is not to be confused with Einstein rings, which are rings of light created by gravitational lensing.
Read more: Giant 'diamond ring' sparkles 4,500 light-years away in the Cygnus constellation

JWST has, yet again, captured some stunning photos in 2025, including the fiery Cigar Galaxy, a tantruming stellar toddler and a "starlit mountaintop" nebula. However, our favorite is this striking portrait of the "Butterfly Star," IRAS 04302+2247.
The insect imposter's shining wings are made from a mini nebula of stellar material left over from the star's formation. This nebula is bisected by a protoplanetary disk that surrounds the baby star like a cosmic cocoon and just happens to be aligned so that the two halves of the nebula are seen side-on from Earth. It is located around 525 light-years away, in a star-forming region known as the Taurus Molecular Cloud.
Read more: James Webb telescope finds a warped 'Butterfly Star' shedding its chrysalis

Meanwhile, on Mars, NASA's Mars Odyssey orbiter captured this stunning shot of a giant dead volcano peeking above the clouds on the Red Planet, as eerie green lights dance above the Martian horizon.
The mountain in the image is Arsia Mons, which stands at more than 12 miles (19 kilometers) above the surface of the previously volcanic Tharsis plateau. The extinct volcano is more than twice as tall as Mount Everest, but around 4 miles (6 km) shorter than Mars' tallest peak, Olympus Mons.
The green lights look like auroras. But they are actually just an effect of the image being partially captured using infrared light, which emanates from the planet's wispy atmosphere.
Read more: NASA spots Martian volcano twice the height of Mount Everest bursting through the morning clouds

There is no escaping the dark lord of Mordor's malevolent gaze, even from halfway across the universe. That's the impression given by this photo, dubbed the "Eye of Sauron," which playfully references J. R. R. Tolkien's fantasy epic "The Lord of the Rings."
The "eye" is actually the magnetic field of a supercharged energy jet being shot into space by a quasar — a supermassive black hole at the center of a distant galaxy. This quasar, dubbed PKS 1424+240, is billions of light-years from Earth and has one of its jets pointed almost directly at our planet, allowing researchers to peer directly through its "jet cone" and map out the magnetic swirls within.
Read more: Giant, cosmic 'Eye of Sauron' snapped staring directly at us in stunning 15-year time-lapse photo

This ethereal image shows a set of stellar structures reminiscent of the famous "Pillars of Creation," first seen by the Hubble Space Telescope in 1995. The structure is named Ua 'Ōhi'a Lani, meaning "heavenly rains" in Hawaiian, and this image of it was taken by the Gemini North telescope.
What you are seeing is two distinct regions: the twinkling blue stars of a star cluster, named NGC 6823, overlapping the veil of red gas that comprises a more distant emission nebula, dubbed NGC 6820. The ethereal pillars are made from additional gas and dust that have been sculpted by the foreground stars' intense radiation.
The original Pillars of Creation were also recently given a glow-up by JWST, which captured the iconic cosmic structures using infrared light.
Read more: 'Heavenly rains': Ethereal structure in the sky rivals 'Pillars of Creation'

As incredible as it is to point our cameras out into the universe, space also provides a unique angle of our own planet. And that's exactly the case in our final photo, which shows off a giant, electrifying "jellyfish" hovering above Earth.
The luminous branching structure was snapped by NASA astronaut Nichole Ayers in July, while onboard the ISS. It shows a type of transient luminous event that researchers commonly call sprites. In this case, the red jellyfish-like sprite formed at the summit of a rare upward-shooting "gigantic jet" of lightning, up to 50 miles (80 km) above the U.S.-Mexico border.
If you liked this photo, then be sure to check out Live Science's weekly Earth from space series for more incredible images of our planet from above.
Want to see more amazing images of the cosmos? Be sure to check out Live Science's Space Photo of the Week series, or peep our favorite space shots from 2024 or this gallery of stunning James Webb Space Telescope (JWST) images.
The finding is unusual because, although many Bronze Age burial spots in Scotland were reused over the years, the newfound cremations "tell a different story," the researchers wrote in a new study, published recently in the journal Archaeology Reports Online. In this case, the urns were "tightly arranged, giving the impression of being buried collectively, and then remaining undisturbed except for modern plough damage," the team wrote.
The individuals were found in the remains of a barrow, a burial mound made of earth and rocks. The urns were in the center of the barrow, in a 3-foot-wide (1 meter) burial pit, and were surrounded by a ring of rocks, the archaeologists noted in the study. Organic materials in the burial, including charcoal, enabled the team to radiocarbon-date it to about 1439 to 1287 B.C.
Three of the urns each contain the remains of an adult and a juvenile, while the other two each contain only one adult. The burial was found at Twentyshilling Hill, which is near Twentyshilling Hill Wind Farm in southwest Scotland, during excavations conducted in 2020 and 2021 while an access road to the wind farm was being built. The excavations were conducted by a team from Guard Archaeology, a company that undertakes archaeological excavations during or before construction.
"The discovery of five urns tightly packed together at the same time in one mass burial event is very rare and distinguishes the Twentyshilling Barrow from other barrows in Scotland," the researchers wrote in the report.

The team suspects these eight individuals likely died around the same time, during a terrible event. It's unclear what that event was, but it could have been a famine, disease or war, Ronan Toolis, CEO of Guard Archaeology, told Live Science in an email.
They suspect the individuals died around the same time because the urns appear to have been made by the same craftsperson, Toolis said. Also, during that period, it was common for the dead in this region to be left out long enough for their flesh to decompose before cremation. In this instance, however, the team found that the individuals still had some of their flesh when they were cremated, which indicates the cremation was done in a hurry.

These people would have been farmers, Toolis said. He noted that they likely lived near the burial spot, although their settlement has not been found.
This is "an area of Scotland where few such archaeological remains have so far been discovered, so future research may reveal much more about the context of this barrow," Toolis said.
But as the list of ancient human relatives has grown and more fossils have been discovered, Lucy's position has increasingly been called into question. Now, a key paper published last month in the journal Nature could overturn that theory entirely, some scientists say.
They argue that, given the new evidence, an older species, Australopithecus anamensis, was our direct ancestor, not Lucy.
The proposal has revealed intense disagreements in the field. Some say A. anamensis is our direct ancestor, others argue that we don't know which Australopithecus species we descended from, and still others say the new analysis doesn't shake up the human family tree at all.
The new discovery is "not altering our picture of human evolution in any way, in my opinion," Zeray Alemseged, a paleoanthropologist and professor of organismal biology and anatomy at the University of Chicago who was not involved in the new study, told Live Science.
Either way, a resolution might not come until more fossils are found.
Understanding the roots of the debate requires going back a century. In 1925, Raymond Dart announced the discovery of the first known Australopithecus — a skull dubbed the Taung Child, unearthed in what is now South Africa and dated to around 2.6 million years ago. For the next 50 years, researchers thought that humans descended directly from the Taung Child's species, Australopithecus africanus.
But Lucy's discovery in 1974 at the Hadar site in Ethiopia rewrote that picture. The 3.2 million-year-old fossil became the oldest known australopithecine specimen at the time.
And researchers found her species, A. afarensis, walked upright on two legs similarly to how humans do today, yet it had a smaller brain — about the size of a modern-day chimp's. This suggested Lucy's kind could represent a "halfway" point in human evolution between the last common ancestor with chimps and us, making her species a good candidate for our direct ancestor among the many known hominins, the lineage that encompasses humans and our closest relatives.

Then, in 1979, her status as our direct ancestor was cemented: an assessment of the evolutionary relationships among hominin fossils uncovered until that point suggested Lucy's species gave rise to the genus Homo. In that family tree, A. africanus was demoted from our ancestor to a more distant cousin.
As more australopithecines have been unearthed, the Australopithecus family tree has become bushier and more tangled, complicating the picture of who we may have descended from. But for many anthropologists, Lucy's species still reigns, eventually giving rise to the lineage from which modern humans evolved.
Then the new Nature paper was published. Researchers had unearthed new fossil fragments and tied them to a previously discovered, enigmatic 3.4 million-year-old fossil known as the "Burtele foot."
The new tooth and jaw fragments allowed anthropologists to ascribe the foot, for the first time, to a little-described and controversial species — Australopithecus deyiremeda, a tree-climbing ancient human relative that walked upright on two legs and lived alongside Lucy's species 3.5 million to 3.3 million years ago at the Woranso-Mille site in Ethiopia.
For Fred Spoor, a professor of evolutionary anatomy at University College London who was not involved in the recent study of the Burtele foot, the new discovery was the nail in the coffin for the theory that Lucy's species was our direct ancestor.

That's because the paper suggested that the species tied to the Burtele foot and the South African A. africanus were more closely related to each other than either was to Lucy's species. By that logic, then, A. africanus may not have descended from Lucy's species, but was rather her cousin.
So, it's possible that both A. deyiremeda and A. africanus descended from the more ancient A. anamensis, who lived in East Africa from around 4.2 million to 3.8 million years ago.
This would also make A. anamensis the direct ancestor to humans, Spoor told Live Science in an email.
For Spoor, this finding would have huge implications. "If this is correct, A. afarensis will lose its iconic status as the ancestor of all later hominins," probably including us, Spoor wrote in an accompanying commentary about the recent research.
But other anthropologists are hotly divided on the implications of the new paper.
Some Live Science spoke to thought Spoor's conclusions were plausible, while other experts said they were "far-fetched" and "a stretch, to put it mildly."
Because the existing fossil record in East Africa goes much further back in time than the current South African record, many believe the Homo genus arose in East Africa.
Currently the oldest known Homo fossil is a 2.8 million-year-old jawbone from Ethiopia, but models estimate the genus would have actually emerged around 0.5 million to 1.5 million years earlier.
This is older than many of the earliest South African hominin fossils, which were found thousands of miles away. That "would make it unlikely that any of those are the direct ancestor," Carol Ward, the Curators' Distinguished Professor of pathology and anatomical sciences at the University of Missouri, told Live Science.
For many, the most likely candidate for an East African ancestor is still Lucy's species, A. afarensis, which lived in modern-day Ethiopia, Tanzania and Kenya from around 3.9 million to 3 million years ago. This wide geographic distribution and persistence for almost a million years means it had many opportunities to give rise to other species across Africa, Alemseged said.
Scientists in the "Lucy" camp argue that A. afarensis' fully upright mode of walking, broad diet, use of early stone tools and wide geographic range constitute strong evidence for Lucy's ancestral position in the human family tree.
This makes Spoor's claim that Lucy's species wasn't our direct ancestor a big one. But he isn't alone in this view.
Thomas Cody Prang, an assistant professor of biological anthropology at Washington University in St. Louis and a co-author of the Nature study, said it's possible A. afarensis evolved human-like features completely independently of modern humans, like how bats and birds independently evolved wings. Such convergent evolution has been proposed before in our family tree: For example, Prang's team previously found that A. afarensis and modern humans independently evolved certain body proportions.
If this is true, other species living at roughly the same time as Lucy's kind are likely ancestors to later hominins, Prang told Live Science in an email.
Prang, for his part, thinks A. deyiremeda's anatomy makes the species a better candidate for our direct ancestor than Lucy. That's because the species has a combination of ancient and new traits. What's more, a 2015 analysis flagged A. deyiremeda as being more closely related to Homo than Lucy's species.
Others think the Nature paper resurrects A. africanus as a plausible ancestor to Homo.
Lauren Schroeder, a paleoanthropologist at the University of Toronto Mississauga who was not involved in the new study, said that either way, many different hominin species were evolving and intermingling across Africa between 3.5 million and 2 million years ago. That means our evolutionary history is more like a braided stream, with species separating and then recombining, and less like a straight evolutionary line.
"Early Homo could have emerged from a broader, pan-African pool of australopith diversity. So yes, Lucy’s species is still a candidate, but no longer the candidate," for a direct human ancestor, Schroeder told Live Science in an email.
Even the authors of the new paper disagree on its implications. While Prang supports the dethroning of Lucy's species as our direct ancestor, the study's lead author Yohannes Haile-Selassie, a paleoanthropologist and director of Arizona State University's Institute of Human Origins, insists that Lucy's species is still the best candidate for the direct ancestor to Homo.
He told Live Science in an email that the more ancient traits found in A. deyiremeda and A. africanus, like having feet adapted for climbing trees, contradict the idea that they are our direct ancestors. On the other hand, Lucy's species had more human-like feet, which Haile-Selassie said makes A. afarensis the "more likely ancestor of those which came later."
Of course, it's possible that the smoking-gun evidence needed to settle the debate will never come.
"We will almost certainly never know who our direct ancestor is — and the more we learn about human evolution and how diverse our past was, the more elusive that ancestor becomes," said Ward.
But that doesn't mean we'll ultimately understand less of our evolutionary past, Ward said. "Even though we may never know which one was our ancestor, we can still piece together much of what that ancestor may have been like."
The motor — which was made by YASA, a subsidiary of Mercedes-Benz that also provides motors to Ferrari — weighs only 28 pounds (12.7 kilograms) but can deliver peaks of up to 1,000 horsepower, or a sustained 469 to 536 hp over longer durations. This new mark breaks YASA's own previous unofficial record, a 29-pound motor that yielded 738 horsepower, company representatives said in a statement.
For comparison, the 2025 Nissan Leaf has a single motor that generates 214 hp, and even a high-performance EV like the Tesla Model S utilizes three motors to generate around 1,020 hp.
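For readers who like to check the math, here is a minimal Python sketch of the power-to-weight arithmetic implied by those figures. The masses and horsepower numbers are the ones quoted in this article; the horsepower-to-kilowatt conversion factor is the standard one.

```python
# Peak power density implied by the figures above. Masses and horsepower
# are the article's numbers; 1 hp is approximately 0.7457 kW.
HP_TO_KW = 0.7457

motors = {
    "YASA new record motor": (12.7, 1000),  # 28 lb (12.7 kg), 1,000 hp peak
    "YASA previous record": (13.2, 738),    # 29 lb is roughly 13.2 kg
}

for name, (mass_kg, peak_hp) in motors.items():
    peak_kw = peak_hp * HP_TO_KW
    print(f"{name}: {peak_kw / mass_kg:.1f} kW/kg ({peak_hp / mass_kg:.0f} hp/kg)")
```

The new motor works out to roughly 59 kW per kilogram at peak, versus about 42 kW/kg for the previous record holder.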
The ability to pack so much power into such a compact, lightweight motor is due in part to YASA's axial flux technology. Traditional radial flux motors are longer, tube-like structures, with a stator — the stationary part of a motor that creates a magnetic field used to produce motion — surrounding a cylindrical rotor. A magnetic field is passed perpendicularly to the shaft through the cylinder to spin the rotor.
By contrast, an axial flux motor is more like a pancake, with a disc-like rotor and stator. Magnetic flux passes along the axis parallel to the shaft (hence the name). The axial flux tech allows for much smaller designs than traditional radial designs, according to YASA.
The company emphasized that the design is scalable and doesn't rely on any rare or exotic materials to function.
The design also opens up a pathway for massive weight reduction in EV design. YASA said that deploying the in-wheel motors in lieu of a traditional power and drivetrain could save around 440 pounds (200 kg). And for vehicles designed from the ground up to incorporate the new motor, the savings could be closer to 1,100 pounds (500 kg).
This is in part because the system also incorporates advanced regenerative braking, the process by which electric vehicles capture energy that would normally be lost as heat during braking and utilize it to recharge the battery.
Instead of power being shunted from the battery to spin the wheels, energy from the wheels is captured to spin the motor, which generates electricity rather than consuming it. The motor resists the rotation while generating energy, thereby slowing the car and powering up the battery. YASA says efficient regenerative braking could reduce the need for traditional friction brakes, saving both weight and space.
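To get a feel for the energy involved, here is a rough, illustrative Python sketch. The 2,000-kilogram vehicle mass and the 70% round-trip efficiency are our own assumptions for the example, not YASA figures.

```python
# Kinetic energy recoverable when an EV brakes, under an assumed
# round-trip efficiency. Mass and efficiency are illustrative only.
def regen_energy_kwh(mass_kg, v_start_kmh, v_end_kmh, efficiency=0.70):
    v1, v2 = v_start_kmh / 3.6, v_end_kmh / 3.6   # km/h -> m/s
    delta_ke_joules = 0.5 * mass_kg * (v1**2 - v2**2)
    return delta_ke_joules * efficiency / 3.6e6   # joules -> kWh

# A hypothetical 2,000 kg EV braking from 100 km/h to a stop:
print(f"{regen_energy_kwh(2000, 100, 0):.2f} kWh returned to the battery")
```

Each such stop returns only a fraction of a kilowatt-hour, but over thousands of stops in city driving, the recovered energy adds up to meaningful range.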
While the current iteration is clearly geared toward high-performance EVs and supercars, axial flux motor technology opens the door for longer-range electric vehicles capable of generating more power with fewer, lighter components. The reduction in space required for traditional powertrain components also provides manufacturers an opportunity to streamline aerodynamics or provide more interior space for cargo or passengers.
Name: Holy Crib
What it is: Five pieces of sycamore wood
Where it is from: Jerusalem
When it was made: Circa 6 to 4 B.C.
In the 640s, Sophronius, the patriarch of Jerusalem, sent several unassuming wooden slats to Rome for safekeeping after the Muslim conquest of Jerusalem. Sophronius asked Pope Theodore I to protect the pieces of wood, which he said were the remains of the Holy Crib — Jesus' manger.
Today, five pieces of wood from the manger are preserved in a gold, silver and glass reliquary in a crypt chapel in the Basilica of Santa Maria Maggiore in Rome. The reliquary was commissioned in 1802, when it replaced an older urn that was stolen by Napoleon's troops in the late 1700s. (The troops left the wood behind in the basilica.)
According to Monsignor Piero Marini, who is the guardian of the Holy Crib, four of the wooden slats once formed two X's, while the fifth slat ran down the middle to hold them together. Wooden mangers in the late first century B.C. would have been topped with straw for animals to eat. But this particular manger is much more significant: It is said to have once held the baby Jesus.
The Gospel of Luke mentions that Jesus was born in Bethlehem. Because there was nowhere for Mary and Joseph to stay in town, the newborn Jesus was laid in a manger. Many biblical scholars believe that Jesus was born between 6 and 4 B.C. And while his birth is celebrated on Dec. 25 every year, scholars are unsure of his exact birth date.
The first historical mention of the manger pieces comes from Origen, an early Christian scholar, who wrote in A.D. 220 that the crib was preserved in Bethlehem, according to Marini. Then, around A.D. 400, St. Jerome discussed the Holy Crib and many people's pilgrimages to it in the Grotto of the Nativity in Bethlehem. Ever since the wood pieces made their way to Rome in the seventh century, the manger has remained in Santa Maria Maggiore.
In 1894, abbot Giuseppe Cozza-Luzi was the first to study the remains of the Holy Crib. His examination revealed that there are two longer pieces of wood and three shorter pieces — ranging from about 25 to 33.5 inches (64 to 85 centimeters) in length — and that all of them had been damaged over time. Several of the wooden pieces also had holes and traces of metal, suggesting they had once been constructed into a manger.
Based on a microscopic analysis of a small piece of wood that had been removed in the 1600s, Cozza-Luzi concluded that the wood was a kind of hard maple, possibly sycamore. The type of wood, the form of the slats, and the evidence of construction — along with historical references — all suggested to him that the Holy Crib remains were part of an authentic ancient manger from the Jerusalem area.
In 2019, experts restored the wooden slats, and Pope Francis took the opportunity to give a small piece of the crib back to the Holy Land, with a plaque that read "Ex cunis Iesu Infantis," meaning "From the crib of the Infant Jesus."
Each year, many Christians visit the Holy Crib during the annual Christmas Eve mass at midnight at Santa Maria Maggiore, which is also called Santa Maria ad Praesepe (St. Mary of the Crib) and the "Bethlehem of the West" for its association with Mary and her baby's manger.
For more stunning archaeological discoveries, check out our Astonishing Artifacts archives.
Items like cameras, binoculars, and telescopes aren't impulse buys, and certainly not stocking fillers. They are also very personal purchases, and often expensive. Trusting someone else to make the right choice on your behalf is a gamble, and frankly, we wouldn't always recommend it.
If you're a skywatcher, you'll have a preference for what you like to look at; some telescopes will be more suitable for deep-space exploration, and others will be best for planets. Similarly, photographers will know the subject matter they want to capture, which will dictate what the camera needs to be good at. Does it handle high ISO well? Is the autofocus reliable? Mirrorless or DSLR? 12MP or 61MP?
Nature spotters and birders will know what magnification and objective lens size they need for their binoculars, whether they need a waterproof or nitrogen-purged model, and whether they prefer coated or multi-coated lenses or BK7 or BaK-4 glass. The list goes on.
There's a good chance someone buying a pair as a gift won't consider what you would if you were buying for yourself. But skywatching and camera equipment are often expensive, and deserve careful consideration. This is why self-gifting makes sense. Besides, we all like to splurge on a "me-to-me" every now and then!
That is why we created this self-gifting guide, which lists the binoculars, telescopes and cameras that we are more than happy to recommend this season. Each item below links to an in-depth review conducted by our staff writers or experienced freelance contributors.
We do, of course, have individual guides to the best cameras, best binoculars, and best telescopes, but this guide has been curated based on what our editors would buy for themselves — and have, in some cases — in each of the categories below.

This first section is for skywatchers who don't want a telescope, as well as wildlife watchers and birders. Other people might think they know what you want, but it really pays to do the research for yourself. Binoculars aren't just about how powerful they are.
Binoculars vary depending on their purpose. Stargazing, birding, wildlife spotting and sports viewing will all require different objective lens sizes, have more suitable fields of view, and require different quality glass and coatings. For example, a pair that is ideal for astronomy might be too heavy and impractical for long walks, while compact travel binoculars might fall short under dark skies.
Comfort is also a crucial consideration. Weight, balance, eyecup design, interpupillary distance (will you use them whilst wearing spectacles?) and grip all affect how long you can comfortably use a pair of binoculars for. These are things only you can judge.
Comfort aside, optical preferences are personal, too. Some people might prioritize brightness, whereas others prefer edge-to-edge sharpness or excellent color accuracy.
Binoculars are typically a long-term investment. A well-chosen pair can last decades. It’s worth making sure they suit your needs now and into the future. Here is a list of binoculars that we believe strike the perfect balance between performance, versatility, and ease of use. Each one is a standout choice in its category.

Save $140 on some of the best binoculars for stargazing. They have huge 100mm objectives and 25x magnification. Get wonderful views of the moon, star clusters and even faint deep-sky objects like nebulas. Read our full Celestron SkyMaster 25x100 review. ★★★★

These Nature DX ED 8x42 binoculars are ideal for bird-watching beginners who want to observe wildlife without breaking the bank. They have surprisingly good optics for the price, delivering sharp views from the center to the very edges of the image circle. We rated the Celestron Nature DX ED 10x42 variant 4 out of 5 stars in our review. ★★★★

Usually, image-stabilized binoculars are unfavorably heavy, but these buck the trend, weighing just 13.9 ounces (395 grams). Our friends at Space.com gave them five out of five stars in their hands-on review. They are about $50 cheaper at Newegg than Amazon's $696.95. ★★★★★

These are the best binoculars for bird-watchers with (very) deep pockets. They have unrivaled optics and excellent build quality. In our review, we noted they may be the only binoculars we've tested with no discernible chromatic aberration. Read our hands-on Leica Noctivid 10x42 review. ★★★★★

Next up, we have telescopes — and as previously touched on, which is the right telescope for you depends on exactly what you want to observe.
Do you want to see local planets and the moon, or deep-sky objects? Do you want to use it for astrophotography? Do you need to move it from A to B often? Do you have space to store it in your house, assembled or not? There are lots of things to consider, many of which others might not think of.
Telescopes also have different experience requirements. Some are usable right out of the box, whereas others are reserved for advanced users. A well-meaning gift can be too complex, causing the recipient frustration, or too basic, making it difficult for a budding astronomer to advance. Only you know your experience and/or patience level.
When so much money is being spent on telescopes, whether it is your own or not, there should be no compromises. You want something that will provide enjoyment on every clear night you can be outside. Here are some of our recommendations.

Use your smartphone to enjoy a tour of the night sky. Easily locate and see real-time stars and planets as well as the brighter nebulas, galaxies, star clusters and double stars.

A beginner's telescope offering superb views of the moon and planets. It is a great choice for newcomers to skywatching. It is often discounted during major sales events, so you might be able to pick one up for less during the Boxing Day or New Year sales. Read our hands-on Celestron Inspire 100AZ refractor telescope review. ★★★★

Observe objects near and far thanks to its large 8-inch aperture, and tour the cosmos easily using the automated GoTo mount. It's great for seasoned astronomers but also makes navigating the skies easy for newcomers. Read our Celestron NexStar 8SE review. ★★★★½

Explore the universe with Unistellar's eVscope 2, which enables you to photograph and observe thousands of celestial objects through your phone screen. Read our hands-on Unistellar eVscope 2 review. ★★★★½

A camera is another (and probably the most important) purchase in this guide that is best made by you, rather than guessed at by someone else. You want something that matches your skill level and ambition. Plus, photography is intensely personal, and it is difficult to buy a camera for someone else.
Each style of camera, and even each manufacturer, will have its own particular strengths. Ergonomics also matter. Button layouts and menu styles are a matter of personal preference. Specs like autofocus systems, number of megapixels and ISO handling will depend on the style of shooting. Even questions like how your computer handles image files are things only you will know the answer to (and the frustration of your computer not being able to cope!). If you're already a photographer with an ecosystem of lenses and accessories, you don't want to have to change that because someone thought they were doing the right thing by getting you an "upgrade".
Finally, it goes without saying that cameras are expensive. Buying your own camera gives you the exact tool to suit your vision as a photographer. Here are some of our recommendations for self-gifting.

Most users will be able to get to grips with this camera easily. It is a perfect blend of excellent functionality and a gorgeous retro design. We really like the old-school manual controls to adjust the shutter speed and ISO settings. Read Space.com's hands-on Nikon Z fc review. ★★★★½

This is one of the best cameras for astrophotography. It handles a high ISO exceptionally well and has reliable autofocus. It is also hugely customizable, so you can set it up exactly how you like. Read our full Sony A7 IV review. ★★★★½

This is a great camera for users who need a balance of speed, resolution and reliability. It is very expensive, so it's probably reserved for professional users, though if you're a committed beginner, this will see you right for many years to come. Read our full Canon EOS R5 II review. ★★★★½

A camera so good that our Managing Editor just bought himself one as a me-to-me gift! The Nikon Z8 is a market-leading mirrorless camera and sits at the very top of our best cameras guide. Read our full Nikon Z8 review. ★★★★½

What it is: The spiral galaxies NGC 2207 and IC 2163
Where it is: 120 million light-years away, in the constellation Canis Major
When it was shared: Dec. 1, 2025
A stark new portrait of two colliding spiral galaxies combines different kinds of light to evoke the colors, shapes and moods of autumn. The image, which shows the galaxies NGC 2207 (lower right) and IC 2163 (upper left), was created by combining infrared light captured by the James Webb Space Telescope (JWST) with X-ray light from the Chandra X-ray Observatory.
NGC 2207 and IC 2163 are locked in a slow gravitational merger that, by chance, is seen face-on from the solar system. The larger galaxy, NGC 2207, dominates the field, while the smaller IC 2163 overlaps its outer regions. The gravitational pull of each galaxy distorts the other's spiral arms, stretching out streams of stars and gas and compressing gas and dust in ways that can ignite new stars. The result is an intricate web of chaos.
One of JWST's core tasks, according to NASA, is to provide scientists with a clear view of the centers of merging galaxies and thereby inform a new generation of models that will describe how galaxies interact and merge. NGC 2207 and IC 2163 are the perfect targets.
In the image, JWST's mid-infrared data appear in white, gray and red, primarily showing the dust and cooler material within the galaxies' cores and spiral arms. Chandra's X-ray data are shown in blue, highlighting high-energy regions of the two galaxies — binary stars, the remnants of dead stars, and regions where supernovas have occurred.
The spectacular layered image of NGC 2207 and IC 2163 is one of four Chandra-based composites that were published at the same time. The other three include NGC 6334, a star-forming region known for its arcs of glowing gas and dust; supernova remnant G272.2-0.3, where hot X-ray-emitting gas fills an expanding shell; and a star system called R Aquarii, where a white dwarf star sucks material from a red giant star.
Each image merges Chandra's view of the high-energy universe with data from JWST (launched in 2021), the Hubble Space Telescope (launched in 1990) and the Spitzer Space Telescope (active between 2003 and 2020), as well as from ground-based telescopes.
For more sublime space images, check out our Space Photo of the Week archives.
]]>"The planet is about 100 times brighter than a first magnitude star," Anthony Mallama, a researcher at the IAU's Centre for Protection of the Dark and Quiet Sky, told Live Science in an email. First magnitude stars are the brightest stars visible in the night sky. For example, when looking at average brightness, the first magnitude star Sirius is at -1.47, and Venus is at -4.14 (on the scale astronomers use, dimmer objects have a more positive magnitude).
But what makes Venus super-bright? Astronomical research suggests several factors can change how luminous Venus appears from Earth.

Venus' shininess is largely due to the planet's high albedo, the fraction of incoming sunlight it reflects. Venus has an albedo of 0.76, meaning it scatters about 76% of the sunlight it receives back into space, according to Sanjay Limaye, a distinguished scientist in the Space Science and Engineering Center at the University of Wisconsin-Madison. In contrast, a perfect mirror would reflect 100%, Earth reflects about 30%, and the moon has a low albedo, reflecting just 7% of the light that hits it.
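Those albedo figures translate directly into reflected fractions. Here is a minimal sketch using only the numbers above:

```python
# Fraction of incident sunlight reflected, per the albedos quoted above.
albedos = {"perfect mirror": 1.00, "Venus": 0.76, "Earth": 0.30, "the moon": 0.07}

for body, albedo in albedos.items():
    print(f"{body}: reflects {albedo:.0%} of incoming sunlight")

print(f"Venus reflects ~{albedos['Venus'] / albedos['the moon']:.0f}x "
      f"the fraction of sunlight that the moon does")
```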
Venus' high albedo arises from a thick, all-swaddling cloak of clouds. Extending from 30 miles to 43.5 miles (48 to 70 kilometers) above the Venusian surface, these decks of clouds are cushioned between haze layers, and are mostly suspended droplets of sulfuric acid, according to a 2018 review of data from 1970s and 1980s space missions to Venus. Limaye noted that such droplets are tiny, mostly about the size of a bacterium. Together, the droplets and haze layers scatter sunlight extremely efficiently.

But Venus isn't the solar system's shiniest object. Saturn's ice-covered moon Enceladus has a higher albedo of around 0.8, a 2010 study noted. From Earth, though, this cosmic object appears much dimmer than Venus. That's because it's much farther from the sun. While Earth's "morning star" is 67 million miles (108 million km) from the sun, Enceladus is at least 13 times as distant. By the inverse square law, Venus consequently receives sunlight roughly 176 times as intense as Enceladus does, giving it a significant edge.
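That inverse-square comparison is easy to verify. Assuming round orbital distances of about 0.72 astronomical units for Venus and about 9.5 AU for Saturn and Enceladus, the intensity ratio comes out within rounding of the article's figure:

```python
# Inverse-square check: sunlight intensity falls off with distance squared.
# Distances are approximate semi-major axes in astronomical units (AU).
d_venus_au, d_enceladus_au = 0.72, 9.5

intensity_ratio = (d_enceladus_au / d_venus_au) ** 2
print(f"Sunlight at Venus is ~{intensity_ratio:.0f}x more intense than at Enceladus")
```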
Being close to Earth also influences Venus' brightness. The average Venus-Earth distance is 105.6 million miles (170 million km). Sometimes, Mercury is the closest planet to Earth, at an average distance of 96.6 million miles (155.5 million km), but Venus' larger diameter of 7,521 miles (12,104 km) makes it appear brighter than Mercury.
But Venus' distance from our planet — and consequently, its apparent luminosity — aren't fixed. At its closest, when Venus lies directly between Earth and the sun, it's a mere 24 million miles (about 38 million km) away, according to NASA. Yet at this point — called the inferior conjunction — it's actually extremely dim, according to the National Astronomical Observatory of Japan.

This arises because the inner planets show moon-like phases when viewed from Earth, Limaye said. At inferior conjunction, Venus' illuminated surface is completely invisible from Earth. In contrast, most of Venus' illuminated surface can be seen only when Earth and Venus are on opposite sides of the sun, a position called the superior conjunction. At this point, though, Venus is at its smallest and is very dim because it is extremely far from Earth.
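The phase geometry can be captured in one standard formula: the illuminated fraction of the disk is (1 + cos a)/2, where a is the phase angle, the sun-planet-Earth angle. A minimal sketch:

```python
import math

# Illuminated fraction of an inner planet's disk as seen from Earth,
# from the phase angle alpha (the sun-planet-Earth angle).
def illuminated_fraction(alpha_deg):
    return (1 + math.cos(math.radians(alpha_deg))) / 2

for alpha, label in [(0, "superior conjunction (fully lit)"),
                     (180, "inferior conjunction (unlit)"),
                     (120, "a crescent phase")]:
    print(f"alpha = {alpha:3d} deg: {illuminated_fraction(alpha):.2f} of disk lit ({label})")
```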
Venus is at its brightest when only a crescent-like sliver of its sunlit surface can be seen. Termed the point of greatest brilliancy, this typically occurs a month before and after the inferior conjunction. A 2006 study co-authored by Mallama suggested that, at this phase, Venus' suspended sulfuric acid droplets scatter sunlight toward Earth. "This phenomenon is called a glory and it is in the same family of optical effects that includes rainbows," Mallama explained.

Together, variations in its albedo, its distance from Earth and the sun, and its phase as seen from Earth can cause the brightness of Venus to swing from -4.92 to -2.98, according to a 2018 study. However, even at its dimmest, Venus is still luminous enough to be viewable most of the year, even from urban areas.
So why should you go for a 10x42? In short, it’s the jack-of-all-trades of binocular specs; ideal if you want to look at a variety of general subjects without necessarily specializing in a particular niche. The 10x magnification gets you close enough to see detail without resulting in too much shake, and the 42mm objective lenses gather plenty of light without weighing you down. It’s no surprise that this size is the best seller across almost every brand.
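If you'd rather reason about the numbers yourself, one handy rule of thumb: dividing the objective diameter by the magnification gives the exit pupil, a rough proxy for how bright the view will be. A quick sketch using spec sizes that appear in this guide:

```python
# Exit pupil = objective diameter / magnification. Around 4-5 mm suits
# general use; bigger favors low light, smaller keeps binoculars compact.
def exit_pupil_mm(magnification, objective_mm):
    return objective_mm / magnification

for mag, obj in [(8, 42), (10, 42), (10, 25), (25, 100)]:
    print(f"{mag}x{obj}: exit pupil {exit_pupil_mm(mag, obj):.1f} mm")
```

A 10x42's 4.2 mm exit pupil is exactly the kind of middle ground that makes it such a popular all-rounder.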
But here’s the catch: not all 10x42s are created equal. The glass, coatings and materials can be the difference between “hey look, there’s a bird” and “wow, I can see every feather.” Celestron offers six models across four different ranges in this size — Outland X, Nature DX (plus an ED version), TrailSeeker (plus an ED version) and Regal ED. These range from budget-friendly to “treat yourself,” with each step up adding a little more sophistication (and a little more strain on your wallet).
We took all six models to a nature reserve to compare them side by side, to determine which one you should invest your money in and which ones to avoid.
In brief, the lineup breaks down into the budget model, the people's favorite, the sweet spot, the middle child, the awkward overachiever and the one for serious birders. Read on to find out which is which.

We've all heard the saying "you get what you pay for." This is particularly true when it comes to optics. At first glance, all of these models wear the same 10x42 badge. But binoculars are like cakes: it's the same flour and sugar, but you get different results based on the quality of the ingredients.
Coatings
The Outland X makes do with basic multi-coated lenses, which are fine for daytime use but lacking after dark. The Nature DX upgrades this to fully multi-coated optics with phase-coated prisms, which sharpen the contrast and reduce glare. The TrailSeeker and Regal ED both combine phase and dielectric prism coatings with fully multi-coated lenses, delivering the clearest, brightest view of the lot.

Glass
ED (Extra-low Dispersion) glass keeps different wavelengths of light focused together, reducing the purple or green fringing that appears around bright edges. The Outland X series has no ED option, while the Nature DX and TrailSeeker offer both standard and ED models. The Regal is only sold as an ED version, and its flat-field technology also maintains edge-to-edge sharpness.
Build quality and materials
The Outland X and Nature DX models are built with polycarbonate bodies clad in rubber armor — durable enough, but can feel a little plasticky. The TrailSeeker and Regal series upgrade to a magnesium chassis and rubber armor, which feels tougher yet lighter in extended use. However, all six models are waterproof and nitrogen-purged to prevent fogging.

The Celestron Outland X series is designed for outdoor enthusiasts who want a rugged, durable pair of binoculars that won't cost the earth. If you aren't too bothered about having amazing detail, contrast or overall image quality, the Outland X will fit the bill if you absolutely cannot stretch your budget.
During our tests, they unsurprisingly performed the worst in most cases, but that's to be expected in a budget pair of binoculars. If you were to look through them on their own, they appear to do the job at first glance, but when you compare them directly to the other 10x42 models, it becomes obvious where they fall short.


There was noticeable chromatic aberration around high-contrast subjects, and the overall picture was softer and duller than the rest of the lineup. The build quality feels solid yet lightweight, although it lacks some of the premium features you'll find on the more expensive models. Still, it's waterproof and nitrogen-purged to prevent fogging, which is important for an outdoor binocular. The focus wheel is fairly smooth, although the diopter is noticeably stiffer than the other models.
Overall, we'd recommend the Outland X series if you just want to get a closer look at subjects without spending too much money, and you aren't bothered about having highly detailed views. We're not sure we'd pay $119 for the 10x42, so it might be worth waiting for Black Friday to take advantage of a deal, or downsizing to the 8x25 or 10x25 if you don't need to use them in low light.
Would we buy them? No.

In the TrailSeeker, we see the introduction of phase- and dielectric-coated prisms. The dielectric coating enables more light to be reflected off the prism, resulting in a brighter image than the Nature DX, which has phase coating only, and the Outland X, which has neither. The combination of phase and dielectric coatings maximizes light transmission, making this pair better suited for wildlife observation in low light and stargazing. The differences in brightness, sharper image quality and reduced glare compared to the Nature DX are small, but noticeable. The build has also been upgraded to lightweight and durable magnesium alloy.
When we tested them out at a nature reserve, we noticed a fair amount of chromatic aberration when observing ducks on a pond, which was completely eliminated once we switched to a pair with ED glass. There's a bit of fringing around the moon, but we still enjoyed using them for stargazing, as they are comfortable to use for long periods.


Putting the TrailSeeker in fifth place does feel a bit harsh, because they are undoubtedly a fantastic pair of binoculars. Before we introduced the ED models into our group test, the TrailSeeker was initially our favorite. However, the addition of the Nature DX ED, in particular, has presented a better option for a lower price, so it's hard to justify placing the TrailSeeker any higher.
Overall, the TrailSeeker performed very similarly to the Nature DX, and although they were slightly better in terms of sharpness, brightness and overall clarity, we didn't notice enough of a difference to make the price jump worth it. In the end, it came down to value for money.
Would we buy them? No, but only because there are better options — not because there's anything wrong with these ones.

It's easy to see why the Nature DX is a bestseller. They offer a great balance of decent performance and affordability, making them good for beginners and hobbyists who want to get their money's worth without having to spend too much on exceptional optics.
We took them to our local nature reserve and struggled to make out finer details when observing waterfowl. There was also noticeable color fringing on birds and trees, both near and in the distance. For this reason, we wouldn't recommend them for birdwatching, specifically, but they performed quite well for stargazing and general purpose viewing.
In many of our tests, the Nature DX actually performed very similarly to the more expensive TrailSeeker, with only a fraction of a difference in sharpness across the frame, chromatic aberration and overall brightness. For more casual users, these differences certainly won't warrant spending the extra $100-$140 to upgrade to the TrailSeeker.


The Nature DX is the lightest of the six models, and the weight difference is particularly apparent when compared to the Regal. They're perfectly suited to throwing in your bag on a hike or taking on camping trips where you'll want to view a whole range of subjects, and we think they're good value for money overall — but we would pay the extra to upgrade to the Nature DX ED.
Would we buy them? No, we'd pay the extra for the ED variant.

This is where the fun starts. The TrailSeeker ED are bright, sharp and excellent in low light, and we have no complaints about them at all. So, why have we placed them third, you may ask? Well, the pricing makes them a bit redundant when the Nature DX ED and Regal ED are on the table. Although the TrailSeeker ED are slightly better than the Nature DX ED, we don't think they're worth the extra cost, so we'd recommend the Nature DX ED for beginners or anyone on a budget.
On the other end, the Regal ED has everything the TrailSeeker ED has, plus flat-field technology to improve edge-to-edge sharpness, but the TrailSeeker ED are more expensive, so it's a no-brainer there as well.
The TrailSeeker ED is fantastic, but the problem is that the price keeps us from recommending them, as there's always a more attractive option.


During our group test at the nature reserve, we noticed the TrailSeeker ED was definitely brighter, sharper and clearer than the Nature DX ED, but again, not by the kind of margin that might warrant such a huge price jump. The trees in the distance had more definition, so the TrailSeeker ED would be better for long-distance viewing, specifically, in addition to low-light observation. The TrailSeeker ED was also much more detailed than the standard TrailSeeker when we were looking at leaves on a pond. They're fantastic for both birdwatching and stargazing, but we just wish they were priced better.
The TrailSeeker ED was one of two pairs where we felt truly immersed in the scene we were observing. With the other four pairs, it felt obvious that we were looking through binoculars.
Would we buy them? Currently, no. If they were cheaper, yes.

Financially, the Nature DX ED seems to make the most sense out of Celestron's lineup. While they don't quite have the optical power of the Regal ED, the Nature DX ED still excels in comparison to the Nature DX and TrailSeeker, and packs a lot of punch for the price.
We noticed a big reduction in chromatic aberration when comparing them against the normal Nature DX and TrailSeeker models, both for close-up viewing and objects in the distance. This was particularly apparent when observing birds in flight, and when peering at a bright moon, where we barely noticed any color fringing whatsoever.


For birdwatching, there wasn't a huge amount of difference between the Nature DX ED and the TrailSeeker ED, despite the latter having dielectric coatings. While the TrailSeeker ED had the edge overall, the Nature DX ED gives more bang for your buck and is fantastic overall.
Despite their polycarbonate build, we think they're an excellent choice that combines substance with affordability.
Would we buy them? Yes.

Regal by name, regal by nature. These powerful 10x42s seem to tick all the boxes, and after spending some time with them, we are impressed — particularly after we spent a couple of nights stargazing with them in the Bannau Brycheiniog National Park in Wales, enjoying sights of Andromeda, the Summer Triangle and the Coathanger asterism.
The feature that sets them apart from the other models, which could arguably make it a little unfair to compare them, is their flat-field technology. This eliminates the natural curvature that typically occurs with standard convex lenses, ensuring edge-to-edge sharpness across the entire field of view. In our side-by-side tests, we found these to be the sharpest binoculars of the lot by far.


They also come with ED objective lenses as standard, whereas the other models don't. They offer the same prism and lens coatings as the TrailSeeker models, along with the same field of view and body materials.
The views were tack-sharp throughout, bright and contrasty in any light. The moon looked perfect with no color fringing, and we could easily follow a kingfisher dancing above a pond with no issues at all. Like the TrailSeeker ED, we felt truly immersed in the scene as opposed to feeling like we were looking through binoculars.
If we're being picky, their weight could potentially deter users who are looking for a more lightweight and compact pair of binoculars. Not only is the Regal the heaviest of the lot, but the eyecups are the biggest, and we found them to be bigger than our eye sockets (yet another unrealistic beauty standard!). This meant we couldn't, for lack of a better term, get our face in them properly without being a little uncomfortable.
This shouldn't deter you, as this is just a personal preference, but it's small details like this that can make a big difference in finding the right pair of binoculars for you.
Would we buy them? If we were serious about birding, yes.