diff --git "a/raw_rss_feeds/https___www_livescience_com_feeds_all.xml" "b/raw_rss_feeds/https___www_livescience_com_feeds_all.xml" --- "a/raw_rss_feeds/https___www_livescience_com_feeds_all.xml" +++ "b/raw_rss_feeds/https___www_livescience_com_feeds_all.xml" @@ -10,8 +10,274 @@
The scorching cluster existed just 1.4 billion years after the Big Bang, blazing far earlier and hotter than current models of galaxy cluster formation predict should be possible. The discovery suggests that the predicted patterns of cluster growth might need a rethink, researchers reported Jan. 5 in the journal Nature.
Galaxy clusters are collections of dark matter and hundreds to thousands of galaxies, all bound together by gravity. Those galaxies are separated by a mix of gas known as the intracluster medium. As galaxy clusters form, the intracluster medium heats up due to gravitational interactions within the cluster and energy emissions from young stars and black holes. But this process takes a long time, and scientists have only rarely observed a hot intracluster medium in young galaxy clusters.
"Understanding galaxy clusters is the key to understanding the biggest galaxies in the universe," study co-author Scott Chapman, an astrophysicist at Dalhousie University who conducted the research while at the National Research Council of Canada (NRC), said in a statement. "These massive galaxies mostly reside in clusters, and their evolution is heavily shaped by the very strong environment of the clusters as they form, including the intracluster medium."
In the new study, researchers used the Atacama Large Millimeter/submillimeter Array (ALMA), a powerful radio telescope located in Chile, to observe a bright, young galaxy cluster known as SPT2349-56, whose light was emitted just 1.4 billion years after the Big Bang. This cluster is relatively small — about the size of the Milky Way's outer halo — but it contains more than 30 active galaxies and three supermassive black holes, and it forms stars more than 5,000 times as fast as the Milky Way.
Using a phenomenon called the thermal Sunyaev-Zeldovich effect, the team found that the gas in the intracluster medium is at least five times hotter than current theories of cluster formation predict it should be for its relatively young age.

"We didn't expect to see such a hot cluster atmosphere so early in cosmic history," study coauthor Dazhi Zhou, a PhD student in the department of physics and astronomy at the University of British Columbia, said in the statement. "In fact, at first I was skeptical about the signal as it was too strong to be real."
But it was real — and that could mean that galaxy clusters can form more quickly than expected.
"This tells us that something in the early universe, likely three recently discovered supermassive black holes in the cluster, were already pumping huge amounts of energy into the surroundings and shaping the young cluster, much earlier and more strongly than we thought," Chapman said.
In future studies, the team plans to investigate what this unusual cluster might mean for the formation and evolution of existing galaxy clusters.
"We want to figure out how the intense star formation, the active black holes and this overheated atmosphere interact, and what it tells us about how present galaxy clusters were built," Zhou said. "How can all of this be happening at once in such a young, compact system?"
]]>However, while this thought experiment raises interesting questions about alien intelligence, it does not provide any evidence that these signals actually exist.
So far, the quest to uncover alien intelligence has focused on finding evidence of distant human-like civilizations. For example, the Search for Extraterrestrial Intelligence (SETI) Institute — the world's leading organization dedicated to searching for alien life — spends most of its time searching for radio signals from distant exoplanets or heat given off by technological megastructures, such as the theoretical Dyson sphere.
However, some scientists believe that these searches suffer from an "anthropocentric bias" — meaning we're trying to understand nonhuman entities through a distinctly human lens — and do not account for potential civilizations that are wholly different from our own. Due to this bias, we may be overlooking promising signs of life.
In the new study, uploaded Nov. 8 to the preprint server arXiv, researchers proposed a new way that an alien civilization could communicate — by flashing to one another like fireflies. These flashing signals could carry specific and complex messages, but the researchers argue they are more likely to be broadcast widely to other civilizations, like a luminous repeating beacon. (This paper has not yet been peer reviewed, but is now under consideration for publication in the journal PNAS.)

On Earth, fireflies communicate via a series of regularly repeating flashes caused by internal chemical reactions. These flashes are mainly used to find mates. But while these signals are simple, they do allow distinct firefly species to tell each other apart.
The researchers argue that similar flashing could be used as "here we are" signals by an alien civilization. And space is full of repetitive bursts of light.
In the new paper, the researchers analyzed the flashes of more than 150 pulsars — rapidly spinning, highly magnetized neutron stars that shoot out regular beams of electromagnetic radiation — as a proxy for what these signals may look like. While they found no evidence of any artificial signals, they did note some similarities between pulsar and firefly signals, and proposed ways to detect future firefly-like flashes from other natural objects, like pulsars.
The study team argues that these signals could be more likely to evolve in long-lasting alien civilizations that progress past the need for widespread use of radio waves. A similar progression is already happening on Earth, where the use of communications satellites with more specific and concentrated radio signals is making our planet appear more "radio quiet" from afar, the researchers wrote.
And just because we may not naturally think to communicate in this way, it doesn't mean that other civilizations wouldn't, they added.
"Communication is a fundamental feature of life across lineages and manifests in a wonderful diversity of forms and strategies," study co-author Estelle Janin, a doctoral candidate at the School of Earth and Space Exploration at Arizona State University, recently told Universe Today. "Taking non-human communication into account is essential if we want to broaden our intuition and understanding about what alien communication could look like, and what a theory of life ought to explain."
This is just one example of what non-human signals may look like, and the researchers encourage others to think outside of the anthropocentric box to come up with other ways that a non-human-like civilization could communicate.
"Our study is meant as a provoking thought-experiment and an invitation for SETI and animal communication research to engage more directly and to draw more systematically on each other's insights," Janin said.
]]>For an exercise novice, however, this endless sea of workout gear and gadgets can be downright overwhelming. But if that is you, do not worry! You do not need a gym's worth of exercise equipment, a high-end Garmin watch or a professional athlete's wardrobe to begin. Starting simple is often the most effective strategy for a long-term lifestyle change — not to mention that it is much easier on the wallet.
With that in mind, we rounded up a list of essential, science-backed purchases for a beginner, prioritizing ease-of-use, safety and versatility. Plus, we sprinkled in some fitness deals to help your post-Christmas budget go that little bit further.
Here’s what to buy (and what to skip) to launch your fitness journey in strategic fashion — and save yourself some money in the process.

Before investing in a premium running watch or one of the best rowing machines, focus on the basics. Buying the right footwear and workout clothing is an essential first step on your fitness journey. This is not about fashion; it is about comfort and safety.
Footwear: Your most important investment
This is your non-negotiable purchase. Appropriate footwear helps you maintain good foot health, reduces the risk of injuries and boosts your overall well-being, according to a 2024 review published in the journal Applied Sciences. Choose shoes that are stable, comfortable to wear and suitable for your intended activities.
That said, do not get bogged down in premium brands and ultra-specialized shoes. Start with a quality pair of all-purpose trainers or cross-trainers. The best beginner-friendly options have a good balance of cushioning and stability for a mix of gym workouts, walking, jogging and low- to moderate-intensity aerobics. Good looks are just the cherry on top.

Save up to 31% on the Nike Men's Pegasus 41. Reliable, comfortable to wear and suitable for a wide range of activities, from gym workouts and dance classes to hiking and road running, these superb all-rounder trainers offer plenty of value for those new to exercise.
Price check: Nike $101.97 (29% off)
Also available: Nike Women's Pegasus 41, now up to 29% off

Looking for your first-ever running shoes? This deal may spark your interest: the Brooks Glycerin 22 is now up to 24% off. We tested the older version of this model, the Glycerin 21, and liked it so much that we named it the best option for beginners in our guide to the best running shoes. By the looks of it, this version offers an even better combo of cushioning and stability.
Price check: Brooks $124.95 (24% off)
Also available: Brooks Women's Glycerin 22, now up to 24% off

If you are looking for something more budget-friendly, consider the Puma Men's Velocity Nitro 3. This popular training shoe is known for its responsive foam cushioning, lightweight feel, great traction and reasonable price. With up to 37% off at Amazon, you can get it for less than $99.
Price check: Puma $93.99 (31% off)
Also available: Puma Women's Velocity Nitro 3, now up to 30% off

Save up to 41% on the New Balance Fresh Foam X 1080 V14, a durable and ultra-comfortable running shoe that we named the best option for everyday runs. While not specifically designed for beginners, it is an excellent option for those who may need that extra cushioning.
Price check: New Balance $129.99 (21% off)
Also available: New Balance Women's Fresh Foam X 1080 V14, now up to 38% off

Save up to 43% on the Reebok Men's Nano X4, one of the best budget-friendly cross-trainer shoes and an excellent pick for those who prefer gym workouts and weightlifting to steady state cardio.
Price check: Reebok $99.99 (33% off)
Also available: Reebok Women's Nano X4, now up to 33% off

Workout clothing: It is all about comfort
Workout clothing should help you withstand the demands of intense exercise, not actively impede your attempts to get fit. Look for synthetic, moisture-wicking fabrics like polyester or spandex — they help regulate body temperature and prevent sweat from lingering on your skin, while cotton, for example, holds sweat and can cause chafing.
Then, focus on fit. A well-designed piece of activewear will allow for freedom of movement and will not slip or irritate your skin during intense workouts. Start with a few core pieces to build a rotation: a few tops and bottoms, several pairs of sports socks and, for women, a couple of good-quality sports bras.
Again, there is no need to invest in premium brands and highly specialized activewear; affordable lines from major retailers work perfectly fine. Comfort here is key — if you feel good, you are more likely to get moving.
The best retailers for finding deals on beginner-friendly workout clothing:

While you may be tempted to splash out on a premium treadmill or super-smart exercise bike, hold off on the big purchases for now. You can build remarkable strength, endurance and mobility with basic, space-saving equipment too, and at a much lower cost. A yoga mat, adjustable dumbbells and resistance bands, for example, are very beginner-friendly, offering maximum versatility with a minimal footprint.
If you are not entirely sure how to use them, look up beginner-friendly home exercise video tutorials or join an online fitness class. January fitness sales are not just about physical gear — many fitness apps and services are discounted, too, or offer free taster sessions. There is also plenty of good-quality content that is entirely free of charge.

While not a match for the premium Liforme or Lululemon products, the Gaiam Yoga Mat impressed us with its combination of comfortable padding, beautiful designs and affordable pricing. It is the best budget option in our guide to the best yoga mats, and a great choice for warm-up routines, stretching and beginner-friendly yoga poses.
Price check: Target $24.99 (38% off), selected block color options only


Save up to 15% on the Whatafit Resistance Bands, our top pick for varied workouts in our guide to the best resistance bands. Available in a mind-boggling array of colors and designs, this workout set offers something for everyone and does not cost the earth.

Save 40% on the Urevo E4W Walking Pad, a compact, affordable and unusually stylish machine that impressed us so much that we named it the best walking treadmill in our round-up of the best treadmills on the market. An excellent way to ramp up your step count during the cold winter months.
Price check: Dick's Sporting Goods $199.99, Urevo $180.99

Save 34% on the Yosuda Indoor Stationary Cycling Bike, an unusually beginner-friendly cardio machine and the top budget option in our guide to the best exercise bikes. It is quiet, easy to use and smart-enabled, but more importantly, it offers great, joint-friendly workouts for people of all fitness levels.

A well-chosen fitness tracker can provide a lot of valuable feedback and positive reinforcement in the early stages of your fitness journey. Counting your daily steps, tracking an active workout and reviewing your past activities can help you make more sense of your workouts and general progress.
Training by "feel" is hard for beginners, and continuous heart rate measurements provide objective data on your efforts. It helps you understand zones: are you in a moderate, fat-burning zone or pushing into high intensity? This ensures your easy days are genuinely promoting recovery and your hard days are truly effective.
Moreover, the goal-setting and "closing your rings" features (a visual representation of your progress towards your daily exercise goals) leverage gamification, and this in itself can be a powerful motivator. Not to mention, many fitness trackers come in handy outside of the gym or running track, too.
However, do not be swayed by trends here. A basic Fitbit, Garmin or Apple Watch SE will track steps, heart rate, sleep and active minutes just as well as the more advanced and expensive models. They also tend to be more beginner-friendly in terms of their user interfaces and the language they use to describe your fitness stats.

Save 30% on the Fitbit Inspire 3. This unassuming fitness band takes the top spot in our guide to the best budget fitness trackers, and for a good reason. It is relatively accurate and easy to use, and it will not overload your wrist with heavy machinery or confuse you with complex fitness stats.
Price check: Walmart $70.64, Target $69.95

If you are not a fan of narrow fitness bands, consider the Amazfit Active 2, the second iteration of our favorite budget smartwatch. With its compact design, multiple sports modes and long battery life, it does a great job as a first-ever workout tracker.
Price check: Best Buy $84.99, Target $84.99, Amazfit $84.99

Save 17% on the Garmin Vivoactive 6. Sleek, highly accurate and packed with beginner-friendly features, it is our favorite smartwatch for those new to exercise and the best option for hikers in our guide to the best Garmin watches. If you have never used a Garmin watch before, this is an excellent starting point.
Price check: Amazon $270, Adorama $249.99, Best Buy $263.99

Save 45% on the Amazfit Helio, an excellent alternative to bulky smartwatches and our favorite budget-friendly smart ring. While not suitable for tracking high-intensity workouts, it offers heaps of useful data on your sleep, stress and recovery. It looks and feels good, too.
Price check: Amazfit $109.99

The allure of a shiny new machine is strong, but impulsive buys often become expensive clothes racks. Here are some tips on how to avoid costly mistakes as an exercise beginner.
Hold off on major equipment. Do not start by buying an expensive treadmill, elliptical or full home-gym system. Use your foundational gear or a gym trial for at least one month. If you have consistently stuck with your routine, then research which machine would best suit the activities you have genuinely enjoyed.
Avoid over-specialization. You do not need cycling shoes until you are sure indoor cycling is your go-to sport. Similarly, you do not need Olympic weightlifting shoes for general strength training. Let your sustained interest guide niche purchases, not the other way around.
Beware of fads and "quick fix" gadgets. If a product promises insane results with minimal effort, it is likely selling a fantasy. Sustainable fitness is built on consistent effort, not electrical muscle stimulators, ab belts or dodgy supplements. Stick to the good-old healthy diet and regular workouts, and you will be primed for success in 2026.
The goal of your initial purchases is not to equip a pro athlete from the get-go, but to minimize barriers to exercise and help you establish healthy habits. Every item should make it easier to say "yes" to your workout and harder to make an excuse.
This New Year, invest first in the basics that support consistency. Let your proven dedication over weeks and months, not your initial January enthusiasm, guide your future investments. Your journey starts not with the fanciest gear, but with the first step taken in the right shoes.
We hope that our list will help you do just that.
]]>The findings upend a 20-year-old theory about how certain charged particles, known as ions, ended up on the lunar surface, and could have big implications for upcoming moon missions, researchers say.
Ever since NASA's Apollo missions first returned lunar samples to Earth in the early 1970s, scientists have been puzzled by traces of volatiles — substances that vaporize at relatively low temperatures, including water, carbon dioxide, helium, argon, and nitrogen — that they found within the moon's soil, or regolith. It soon became clear that some of these substances, particularly nitrogen ions, had originated from Earth's upper atmosphere and were most likely blown onto the moon by gusts of solar wind. (Recent research has also shown that some volatiles on the moon, such as water, may be created directly by the solar wind and have no terrestrial ties.)
Since 2005, the leading theory has held that this material transfer could only have happened before Earth developed its magnetic field, or magnetosphere, because this invisible forcefield would likely have trapped any atmospheric ions being blown away from our planet.
However, in the new study, published Dec. 11 in the journal Communications Earth & Environment, scientists combined data from the Apollo samples with computer models simulating the evolution of Earth's magnetosphere, and found that the transfer of atmospheric ions was greatest whenever the moon passed through our planet's magnetic tail — the largest section of the magnetosphere, which always points away from the sun. (This alignment occurs when Earth gets between the moon and the sun, near the full moon phase each month.)

The models revealed that, rather than blocking atmospheric ions from being blown off our planet, the magnetic field lines within Earth's tail act as invisible highways for charged particles, guiding them toward the moon, where they then settle into the lunar regolith.
This means that the transfer of atmospheric ions likely began shortly after the magnetosphere took shape around 3.7 billion years ago — and is likely still occurring today.
Until now, scientists had assumed that the lunar regolith would only contain traces of Earth's earliest atmosphere. However, the new study suggests that these samples could actually act as a time capsule for our atmosphere and magnetosphere.
"By combining data from particles preserved in lunar soil with computational modeling of how solar wind interacts with Earth’s atmosphere, we can trace the history of Earth's atmosphere and its magnetic field," study co-author Eric Blackman, a theoretical astrophysicist and plasma physicist at the University of Rochester, said in a statement.

As a result, regolith collected during upcoming lunar missions — such as NASA's Artemis program, which aims to put boots on the moon by 2028, and China's moon missions, which have already returned lunar samples to Earth — could help researchers fill in gaps in our planet's geological history.
Earth is not the only solar system object to lose tiny bits of itself to the solar wind. Mercury is often seen with a long comet-like tail of dust that is blown off its surface, while the moon also has a tail of ablated sodium ions that Earth repeatedly passes through.
By further studying how Earth loses its atmosphere to the moon, the researchers hope to learn more about how this may have happened elsewhere in our cosmic neighborhood.
"Our study may also have broader implications for understanding early atmospheric escape on planets like Mars, which lacks a global magnetic field today but had one similar to Earth in the past," study lead author Shubhonkar Paramanick, a planetary scientist at the University of Rochester, said in the statement. Future research could help scientists "gain insight into how these processes shape planetary habitability," he added.
]]>The warriors' burials hold weapons, including a saber and a bow with a quiver of arrows, as well as dozens of coins. A DNA analysis indicates that one of the warriors might be the father or brother of a teenage warrior in one of the other burials and that all three warriors were related along their paternal lines.
Located near the village of Akasztó, about 57 miles (92 kilometers) southeast of Budapest, the burials were discovered by volunteers from the Katona József Museum's community archaeology program and were excavated by a team of volunteers and professionals led by Wilhelm Gábor, the head of the museum's archaeology department.
All three men were buried in the 920s or 930s, the archaeological team told Live Science in an email. In total, the three burials yielded 81 coins. Most are from northern Italy and date to the reign of Berengar (888 to 924), a king who ruled parts of Italy and was a great-grandson of Charlemagne. At that time, the Hungarians had established a principality in the Carpathian Basin, and its warriors were involved in military campaigns in northern Italy. It's possible that the warriors in the burials obtained the coins during those campaigns, the archaeologists said.
One of the warriors was 17 to 18 years old when he died and had a belt that was partly decorated with gilded silver. On his right side was a leather pouch, known as a sabretache, that was decorated with a silver plate.
"On his left hand he wore a gold ring with blue glass stones," and his "legs were adorned with ornate silver bracelets and anklets," the archaeologists wrote. Several small, gold plates were found on his body — possibly the remains of clothing or his death shroud, the team suggested. He was also buried with a horse harness that had straps decorated with gilded silver.

Another burial contained a warrior who died at the slightly younger age of 15 to 16. He was buried with a quiver that contained seven arrows and a bow. The "stiff arched ends and handle of his bow were covered with decorative antler plates," the archaeological team wrote.
The third burial held a warrior who died between the ages of 30 and 35. It contained a saber, archery equipment, a horse harness, a silver bracelet, and a belt decorated with coins, the archaeologists said. A DNA analysis revealed that this individual was likely the father or brother of the youngest warrior and that all three warriors were related.

The team also looked at the ratios of isotopes, or atoms of the same element with varying numbers of neutrons in their nuclei, in the warriors' remains. This analysis showed that the three warriors had diets rich in animal protein.
From the archaeological finds, "it can be stated that an elite warrior group, presumably members of a military leadership, were buried here," the archaeologists wrote. Research is underway to learn more about the warriors' identities. It's not clear how they died.
]]>Where is it? Carter's Cays and Strangers Cay, the Bahamas [27.105580266, -78.06669135]
What's in the photo? Underwater sandbanks and a coral reef surrounding a pair of small islands
Who took the photo? An unnamed astronaut on the International Space Station (ISS)
When was it taken? Oct. 20, 2016
This intriguing astronaut photo shows off a series of rippling sandbanks surrounding a pair of small islands in the Bahamas. The submerged swirls were partly carved out by a coral reef lurking on the edge of a hidden ocean "drop-off."
The Bahamas is made up of more than 3,000 islands and smaller cays interspersed with coral reefs, fast-flowing tidal channels and shallow underwater structures, known as sandbanks. These features collectively make the islands "one of the most recognizable places on Earth for astronauts," according to NASA's Earth Observatory.
This photo shows a series of intricate sandbanks and a shallow barrier-like coral reef in the waters surrounding two diminutive islands — Carter's Cays (lower left) and Strangers Cay (upper right). The islands are two of the northernmost landmasses in the Bahamas, located around 125 miles (200 kilometers) east of Florida. (For context, Strangers Cay is around 2.2 miles (3.6 km) across at its widest point.)
The sandbanks, which can be seen winding in and around the two cays like ribbons, have been sculpted by decades of unchanging ocean currents, causing sand to pile up in the same place over time.
But the coral reef — which cuts across the bottom right corner of the image and has waves breaking across its far edge — is much older, having likely built up over several millennia.

The largest and most prominent sandbank, which looks like a giant U-shape in the center of the image, lies directly opposite a large gap in the coral reef. This is no coincidence: The break in the reef has created a strong and sustained tidal flow that has pushed the sand much farther back, according to the Earth Observatory.
These sand swirls are fairly small compared with some of the larger sandbanks in the region. The biggest is the Great Bahama Bank, which covers an area of around 80,000 square miles (210,000 square kilometers) off the Exuma Islands in the central Bahamas and supports a massive seagrass ecosystem.
When viewed from above, these features frequently draw comparisons to abstract paintings or the Northern Lights because of their shape and captivating glow. However, their supposed luminosity is actually just an optical illusion caused by their proximity to the ocean's surface. In some areas, the sand is likely only around 6.5 feet (2 meters) below the waves, according to the Earth Observatory.
If you look closely at the ocean's surface in the image, you will also notice that the water to the upper left of the islands is very light and covered with shimmering streaks, while the bottom right corner of the image — beyond the reef — is darker and exhibits traditional wave patterns.
This is the result of a steep drop-off in the deep ocean just beyond the coral reef, similar to the one depicted in the film "Finding Nemo." Beyond this point, ocean currents create the swells that many people see from the window of an airplane. But behind the reef, the wind sculpts the ocean's surface into subtle streaks instead.
This drop-off is also why there are no sandbanks visible beyond the reef.
For more incredible satellite photos and astronaut images, check out our Earth from space archives.
]]>This policy change effectively downgrades the recommendations for several shots, such as those against rotavirus, the flu and hepatitis A. Rather than being recommended to all children by default, those vaccines will now be recommended to only certain "high-risk" groups or will be accessible through "shared clinical decision-making" between parents and providers.
The concept of shared clinical decision-making emphasizes that, if a child's caregivers wish to give them a routine vaccine, they should first consult with a medical provider. While that idea may sound benign, it could sow confusion around which vaccines are considered effective and medically necessary, Dr. Daniel Jernigan, former director of the Centers for Disease Control and Prevention's (CDC) National Center for Emerging and Zoonotic Infectious Diseases, told STAT. And it could introduce logistical hurdles to accessing vaccines.
"By making these vaccines a shared clinical decision making, it introduces one more barrier that prevents a child from getting a life-saving vaccine," Jernigan said.
The new recommendations group vaccines and immunizations into three categories:
Federal guidance still recommends that all children receive shots against 11 diseases: measles, mumps, rubella, polio, pertussis, tetanus, diphtheria, Haemophilus influenzae type B (Hib), pneumococcal disease, varicella (chickenpox), and human papillomavirus (HPV). However, HHS is recommending only one dose of HPV vaccine instead of the usual two, STAT reported.
Certain "high-risk" populations are recommended to be immunized for RSV, hepatitis A, hepatitis B, dengue, and two types of meningococcal disease. (Note that immunizations against respiratory syncytial virus, or RSV, include a prenatal vaccine given to mothers and antibody drugs given to kids. There is no RSV vaccine available for children.)
Vaccines against meningococcal disease and hepatitis A and B are also listed under the "shared decision-making" category, as are shots against rotavirus, COVID-19 and the flu.
"Abandoning recommendations for vaccines that prevent influenza, hepatitis and rotavirus, and changing the recommendation for HPV without a public process to weigh the risks and benefits, will lead to more hospitalizations and preventable deaths among American children," Michael Osterholm of the University of Minnesota's Center for Infectious Disease Research and Policy, told The Associated Press.
Stakeholders had been bracing for this policy change for several weeks by the time it was announced Monday (Jan. 5).
In early December, President Donald Trump called on federal officials to compare the U.S. childhood vaccine schedule to that of "peer nations," implying that other countries have superior policies. In mid-December, Politico reported that Robert F. Kennedy Jr., the head of the Department of Health and Human Services (HHS), had intended to make the U.S. vaccine schedule more like that of Denmark — which recommends shots against only 11 diseases in its schedule.
Comparable countries often recommend vaccines and immunizations against about 12 to 15 pathogens, while Austria and the U.S. have historically sat on the high end at around 17.
Experts have emphasized that the United States' vaccine schedule has been rigorously tested and that the decision to change it was not made using new data on its safety or effectiveness. They also noted that the policies of Denmark — a small country of roughly 6 million people with universal health care and a fairly homogenous population — may not serve the U.S. population, given that it's much larger and contends with a splintered health care system and greater health inequities. (The U.S. population is roughly 340 million.)
"The truth is that while vaccine guidance is largely similar across developed countries, it may differ by country due to different disease threats, population demographics, health systems, costs, government structures, vaccine availability, and programs for vaccine delivery," the American Academy of Pediatrics (AAP) noted.
These location-specific factors weigh upon which vaccines health officials recommend to a given country's children. But despite the differences between America and Denmark, federal officials are now claiming that Denmark's approach is the superior one regardless of context.
Officials had already been shifting away from giving full-throated recommendations of routine vaccines. For example, HHS previously recommended "shared clinical decision-making" for giving COVID-19 vaccines to kids and providing hepatitis B vaccines to infants of mothers who test negative for the virus.
Various stakeholders are expected to break with the CDC's new recommendations. For instance, medical societies, city and state health departments, and regional health alliances have rejected the CDC's other vaccine policy changes, and the AAP has sued HHS for allegedly violating established rules around vaccine regulatory changes when the agency tweaked its COVID-19 vaccine guidance.
"Today's announcement by federal health officials to arbitrarily stop recommending numerous routine childhood immunizations is dangerous and unnecessary," AAP president Dr. Andrew Racine, said in a statement, according to the clinical news source Contemporary Pediatrics.
"The longstanding, evidence-based approach that has guided the U.S. immunization review and recommendation process remains the best way to keep children healthy," Racine said, "and protect against health complications and hospitalizations."
This article is for informational purposes only and is not meant to offer medical advice.
]]>Since researchers first established the link between diet, cholesterol and heart disease in the 1950s, risk for heart disease has been partly assessed based on a patient’s cholesterol levels, which can be routinely measured via blood work at the doctor’s office.
However, accumulating evidence over the past two decades demonstrates that a biomarker called C-reactive protein – which signals the presence of low-grade inflammation – is a better predictor of risk for heart disease than cholesterol.
As a result, in September 2025, the American College of Cardiology published new recommendations for universal screening of C-reactive protein levels in all patients, alongside measuring cholesterol levels.
C-reactive protein is created by the liver in response to infections, tissue damage, chronic inflammatory states from conditions like autoimmune diseases, and metabolic disturbances like obesity and diabetes. Essentially, it is a marker of inflammation – meaning immune system activation – in the body.
C-reactive protein can be easily measured with blood work at the doctor’s office. A low C-reactive protein level – under 1 milligram per liter – signifies minimal inflammation in the body, which is protective against heart disease. An elevated C-reactive protein level of greater than 3 milligrams per liter signifies increased levels of inflammation and thus increased risk for heart disease. About 52% of Americans have an elevated level of C-reactive protein in their blood.
Research shows that C-reactive protein is a better predictive marker for heart attacks and strokes than “bad” cholesterol – LDL, short for low-density lipoprotein – as well as another commonly measured, genetically inherited biomarker called lipoprotein(a). One study found that C-reactive protein can predict heart disease just as well as blood pressure can.
Inflammation plays a crucial role at every stage in the development and buildup of fatty plaque in the arteries, which causes a condition called atherosclerosis that can lead to heart attacks and strokes.
From the moment a blood vessel is damaged, be it from high blood sugar or cigarette smoke, immune cells immediately infiltrate the area. Those immune cells subsequently engulf cholesterol particles that are typically floating around in the bloodstream to form a fatty plaque that resides in the wall of the vessel.
This process continues for decades until eventually, one day, immune mediators rupture the cap that encloses the plaque. This triggers the formation of a blood clot that obstructs blood flow, starves the surrounding tissues of oxygen and ultimately causes a heart attack or stroke.
Hence, cholesterol is only part of the story; it is, in fact, the immune system that facilitates each step in the processes that drive heart disease.

Lifestyle can significantly influence the amount of C-reactive protein produced by the liver.
Numerous foods and nutrients have been shown to lower C-reactive protein levels, including dietary fiber from foods like beans, vegetables, nuts and seeds, as well as berries, olive oil, green tea, chia seeds and flaxseeds.
Weight loss and exercise can also reduce C-reactive protein levels.

Though cholesterol may not be the most important predictor of risk for heart disease, it does remain highly relevant.
However, it’s not just the amount of cholesterol – or more specifically the amount of bad, or LDL, cholesterol – that matters. Two people with the same cholesterol level don’t necessarily have the same risk for heart disease. This is because risk is determined more by the number of particles the bad cholesterol is packaged into than by the total mass of bad cholesterol floating around. More particles means higher risk.
That is why a blood test known as apolipoprotein B, which measures the number of cholesterol particles, is a better predictor of risk for heart disease than measurements of total amounts of bad cholesterol.
Like cholesterol and C-reactive protein, apolipoprotein B is also influenced by lifestyle factors like exercise, weight loss and diet. Nutrients like fiber, nuts and omega-3 fatty acids are associated with a decreased number of cholesterol particles, while increased sugar intake is associated with a larger number of cholesterol particles.
Furthermore, lipoprotein(a), a protein that lives in the wall surrounding cholesterol particles, is another marker that can predict heart disease more accurately than cholesterol levels. This is because the presence of lipoprotein(a) makes cholesterol particles sticky, so to speak, and thus more likely to get trapped in an atherosclerotic plaque.
However, unlike other risk factors, lipoprotein(a) levels are purely genetic, thus not influenced by lifestyle, and need only be measured once in a lifetime.
Ultimately, heart disease is the product of many risk factors and their interactions over a lifetime.
Therefore, preventing heart disease is far more complicated than simply eating a cholesterol-free diet, as once thought.
Knowing your LDL cholesterol level alongside your C-reactive protein, apolipoprotein B and lipoprotein(a) levels paints a comprehensive picture of risk that can hopefully help motivate long-term commitment to the fundamentals of heart disease prevention. These include eating well, exercising consistently, getting adequate sleep, managing stress productively, maintaining healthy weight and, if applicable, quitting smoking.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.
]]>We love that this device is integrated into something you would ordinarily use in the pool. There is no need to remember an additional device, such as a watch or training band. It is also inherently swim-specific. There aren't heaps of menus, workout types, or other distractions you would find on a multi-sport device. If swimming is the only exercise you're interested in tracking, these could be a good fit.

Use the FORM Smart Swim 2 goggles to help you stay on track with your New Year's resolution. Enjoy your swims and let these goggles keep track of the stats for you.
Price check: Amazon: $199, Walmart: $223.73, Best Buy: $199
The Smart Swim 2 comes with five interchangeable nose bridges and adjustable straps to help you achieve a secure fit. Once you have fitted the goggles to your face, the optical heart rate sensor sits snugly against it. In our hands-on FORM Smart Swim 2 review, we found the heart rate readings to be spot on, with no missed or erratic readings, unlike some swim trackers we have reviewed.
While these aren't the most comfortable swimming goggles in the world, as they require a tight fit to ensure data accuracy, they are about as comfortable as any other swimming goggles we've worn for extended periods in the water. They all need to be snug to the face to ensure a leak-free swim after all.
While testing, we found that the reduced peripheral vision takes a little while to get used to, as does switching between looking 'at' the data and looking 'through' the data, as this is likely something you haven't done before. We quickly adapted to it, and over time, it just became part and parcel of our swimming experience.



The real strength of the FORM Smart Swim 2 is the hands-off real-time feedback in the pool. If, like us, you're prone to losing count of your laps/lengths in the pool, you can relax into your swim and not have to think about the numbers. Turn detection, automatic pause detection and lap counting work with impressive accuracy, taking the guesswork out of tracking swims. You can connect and sync your data with third-party platforms, such as Strava, if you wish.
The accompanying app provides structure through workouts and training plans, but access to the full range requires a subscription, priced at around $15 (£13) per month or $99 (£84) per year. Though the extra cost may seem a bit annoying, in the grand scheme of things it is still great value.
Key features: Customizable transparent 'in-eye' display for metrics such as heart rate tracking, number of lengths, lap split times and overall distance.
Product launched: April 2024.
Price history: The price was consistently $279 (aside from a few flash-sale drops) but has been $199 since the start of November, possibly because consumers are now favoring the updated FORM Smart Swim 2 Pro.
Price comparison: Amazon: $199 | Walmart: $223.73 | Best Buy: $199
Reviews consensus: The Smart Swim 2 goggles have a rating of 4.1 out of 5 on Amazon, with many users reporting significant improvements in their swimming technique and overall experience. A frequently mentioned benefit is the 'swim straight' feature, which is particularly useful for helping open-water swimmers stay on track.
Live Science: ★★★★½ | T3: ★★★★ | Tom's Guide: ★★★½
✅ Buy it if: Your primary focus is data logging or improving your swims. These are purpose-built for swimmers.
❌ Don't buy it if: You want to track multiple sports. For that, you'll be better off taking a look at one of the best fitness trackers. Our favorite overall, the Amazfit, is on sale for just $119.99 with a 40% discount right now.
Check out our other guides to the best budget fitness trackers, sleep trackers, smart rings and much more.
]]>The Wolf Moon's name has both Native American and Anglo-Saxon origins, and likely comes from the belief that hungry wolves are more likely to be heard howling close to the middle of the winter.
This year's January full moon was also a supermoon, meaning it appeared brighter and larger than usual. In fact, there won't be another chance to see a moon as big and bright as this one until November. Photographers across the Northern Hemisphere whipped out their cameras to capture our neighbor in its full glory, and you can see some of the best snaps below.

Name: The Alfred Jewel
What it is: Gold-encased cloisonné gemstone with inscription
Where it is from: Somerset county, England
When it was made: A.D. 871 to 899
In 1693, a farmer ploughing his field in North Petherton in southwest England found an intriguing medieval jewel made from gold, enamel and rock crystal. But it is the remarkable inscription around the edge that sets the piece apart from others. The jewel reads "AELFRED MEC HEHT GEWYRCAN," an Old English sentence that means "Alfred ordered me to be made."
The Alfred Jewel, which is in the collection of the Ashmolean Museum at the University of Oxford, is assumed to have been made between 871 and 899, during the reign of King Alfred the Great. Alfred initially ruled as king of the West Saxons (Wessex), one of seven Anglo-Saxon kingdoms at the time. In 886, he expanded his power to the entirety of England and has thus been considered the first English king.
The jewel measures 2.4 by 1.2 inches (6.2 by 3.1 centimeters). Its design consists of dozens of small cells filled with colorful enamel paste and accented by thin strips of gold. It depicts a person from the mid-thighs up. The Old English inscription in capital letters around the edge of the jewel's bezel connects it to Alfred the Great.
King Alfred had a reputation as a savvy military leader since he helped fight off Viking invasions in the ninth century. He was also a highly educated man who had numerous religious texts translated from Latin into Old English. According to the Ashmolean Museum, Alfred distributed these religious manuscripts to bishops in the Anglo-Saxon kingdom along with an aestel, which was a kind of bookmark or pointer to help keep one's place while reading. The Alfred Jewel is likely the end of an aestel.
At the base of the jewel, in what looks like the mouth of a dragon or snake, experts have noticed a cylindrical socket. This was likely where the pointer itself was once connected.
The Alfred Jewel was found near Athelney Abbey, originally a tiny fortification. Alfred reportedly hid from Danish Vikings for several months at Athelney before launching a successful counter-attack in 878 that helped him expand his influence across southern England. Alfred then returned to establish a monastery at Athelney and to appoint its first abbot.
Because of its ties to England's first king, the Ashmolean Museum has called the Alfred Jewel "among the most significant of royal relics."
For more stunning archaeological discoveries, check out our Astonishing Artifacts archives.
]]>1. Until the 1960s, researchers thought people largely dreamed in black and white.
2. Pumpkins are a type of berry (and a very big one).
3. Evacuating your bowels stimulates the vagus nerve, which can lower your blood pressure and heart rate — no wonder it feels so good to poop.
4. Iceland used to be the only country in the world without mosquitoes, but that changed in October 2025.
5. If a human could fly with wings, they would need to have a wingspan of about 20 feet (6 m) to have any chance of gliding through the air.
6. Leaving the pit in doesn't technically delay the browning process of an entire avocado; it just prevents oxygen from browning the bit underneath it.
7. At about 1,300 feet (400 m) below sea level, the banks of the Dead Sea are Earth's lowest point on dry land.
8. During the adolescent growth spurt, some teenagers can grow as much as 4 to 5 inches (10 to 13 centimeters) in a single year.
9. Black holes are dark because they trap light that crosses the event horizon, which means if you were to enter one, it would actually be extremely bright.
10. Your brain can take 15 to 30 minutes to reach full cognitive capacity after you wake up, a period known as "sleep inertia."
11. When sea levels were lower during the last ice age, North America and Asia were joined by an enormous land bridge. A similar bridge enabled the ancestors of Tyrannosaurus rex to trek from Asia to North America around 68 million years ago.

Sign up for our weekly Life's Little Mysteries newsletter to get the latest mysteries before they appear online.
12. The slowest-moving land animal is likely the banana slug, which moves at the extremely leisurely pace of 0.006 mph (0.0096 km/h), or a tenth of an inch per second (2.7 millimeters per second). By comparison, the common garden snail glides along at a relatively speedy 0.03 mph (0.048 km/h), or half an inch per second (1.3 centimeters per second).
13. Although rare, in certain circumstances, women were allowed to compete as gladiators in ancient Rome, but there are no records of any of them dying in battle.
14. The sound of the supermassive black holes in the Perseus cluster burping out gas would hit a low B flat, some 57 octaves below middle C.
15. All newts are salamanders, but not all salamanders are newts.
16. Even though the Colorado River toad releases the chemical 5-MeO-DMT — one of the most potent psychedelics around — from poison glands in its head, you can't get high by licking it.
17. The 1883 eruption of Krakatau is often considered the loudest sound in history, with people 1,900 miles (3,000 kilometers) away hearing the blast.
18. The oldest known human in the genus Homo lived in Africa around 2.8 million years ago, but we're not sure which species it is.
19. The largest known prime number contains 41,024,320 digits.
20. Frogs breathe and drink through their skin.
21. A bullet fired from a .223 Remington leaves the weapon at up to 2,727 mph (4,390 km/h) — fast enough to cover 11 football fields in a single second.
22. A turtle's shell is made of 50 bones.
23. Despite what you may have seen in the movies, ancient Egyptians did not booby-trap the pyramids.
24. The world's longest undersea section of a tunnel belongs to the Channel Tunnel, which has a 23.5-mile (37.9 kilometers) underwater section connecting England and France.
25. Despite evidence to the contrary, Christopher Columbus continued to claim the lands he "discovered" were parts of Asia, likely so he'd get paid.
26. The primary mirror on the James Webb Space Telescope is 21.3 feet (6.5 meters) in diameter, giving it a total collecting area of more than 270 square feet (25 square m).
27. As of March 2025, there were 953 known natural satellites in the solar system (depending on your definition of a moon).
28. There are roughly 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 atoms in the observable universe.
29. It takes five to 10 years for a body in a coffin to completely decompose down to a skeleton.
30. The Atlantic Meridional Overturning Circulation is a web of ocean currents that loop through the Atlantic Ocean, moving 600 million cubic feet (17 million cubic meters) of water per second and 1.2 petawatts of heat — roughly the same amount of heat put out by a million power plants running at the same time.
31. The deepest place on Earth is the bottom of the Mariana Trench, which lies about 35,876 feet (10,935 meters) below the surface. That makes it about 7,000 feet (2,100 m) deeper than Mount Everest is tall.

32. Researchers have shown that octopuses can be fooled by a version of the "rubber hand illusion," by stroking a real octopus arm hidden from view and a visible fake octopus arm at the same time. When the fake arm was pinched, the octopus reacted as if its own arm had been attacked — by changing color or pulling back.
33. The asteroid that wiped out the dinosaurs hit Earth at 27,000 mph (43,000 km/h).
34. Roughly half of all eukaryotic species on Earth are insects.
35. Mount Everest is only the tallest mountain by altitude, at 29,031.69 feet (8,848.86 m) above sea level. If you measure Mauna Kea, an inactive volcano in Hawaii, from base to peak, it's actually taller, at 33,497 feet (10,211 m) in altitude.
36. You're more likely to cry when chopping an onion with a dull knife than with a sharp one.
37. Antarctica became a continent around 34 million years ago, after losing its land connections with Australia and South America.
38. Jellyfish, sea anemones and hydras don't have brains, yet they're capable of surprisingly advanced behavior.
39. Kangaroos have three vaginas.
40. Many shark species will become temporarily paralyzed if you turn them upside down.
41. The human heart has incredible stamina, beating around 100,000 times and pumping roughly 2,500 gallons (9,500 liters) of blood daily, on average.
42. Dragonflies are one of nature's most effective hunters, catching prey up to 97% of the time. By comparison, tigers have a success rate of only 10%.
43. Yes, some figs really do have wasps in them.
44. Training OpenAI's GPT-4 used an estimated 50 gigawatt-hours of energy — enough to power San Francisco for three days.
45. The oldest DNA sequenced from animals and plants is from 2.4 million years ago.
46. On average, a person produces about 30 to 91 cubic inches (500 to 1,500 cubic centimeters) of gas every day, regardless of their diet. Thankfully, over 99% of those gases are odorless.
47. A female puff adder holds the record for the most offspring born in one live-birth pregnancy — a whopping 156 fully developed snakelets.
48. The record for the most times a piece of paper has been folded in half is 12. If you were to fold it 42 times, it would be more than 273,280 miles (439,800 kilometers) high — more than the average distance between Earth and the moon.
49. It is possible to turn a different element into gold, just not a lot of it.
50. The most-cited number of organs in the human body is 78, and the heaviest organ is the skin.
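Fact 48's fold arithmetic holds up: a sheet's thickness doubles with every fold, so 42 folds multiply it by 2^42. Here is a quick illustrative check in Python (the 0.1-millimeter starting thickness is an assumption, roughly that of ordinary printer paper):

```python
# Quick check of fact 48: paper thickness doubles with each fold.
# Assumes a starting thickness of 0.1 mm (typical printer paper).
THICKNESS_MM = 0.1
EARTH_MOON_KM = 384_400  # average Earth-moon distance in km

folded_km = THICKNESS_MM * 2**42 / 1_000_000  # convert mm to km
print(round(folded_km))           # prints 439805
print(folded_km > EARTH_MOON_KM)  # prints True
```

That works out to just under 440,000 kilometers, comfortably beyond the moon's average distance.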
Here are some more incredible stories from Life's Little Mysteries:
—Do humans and chimps really share nearly 99% of their DNA?
—Why does boiling water have bubbles, except in a microwave?
]]>Yet, our picture of how this dinosaur looked and lived has changed markedly thanks to more than a century of scientific discoveries. Think you know everything there is to know about the king of the dinosaurs? Well, start the quiz below to find out if your knowledge is dino-mite or a faded fossil.
Remember to log in to put your name on the leaderboard; hints are available if you click the yellow button!
Until now, only a very few such glacial earthquakes had been found in the Antarctic. In a new study published in Geophysical Research Letters, I present evidence for hundreds of these quakes in Antarctica between 2010 and 2023, mostly at the ocean end of the Thwaites Glacier — the so-called Doomsday Glacier that could send sea levels rising rapidly if it were to collapse.
A glacial earthquake is created when tall, thin icebergs fall off the end of a glacier into the ocean.
When these icebergs capsize, they clash violently with the “mother” glacier. The clash generates strong mechanical ground vibrations, or seismic waves, that propagate thousands of kilometers from the origin.
What makes glacial earthquakes unique is that they do not generate any high-frequency seismic waves. These waves play a vital role in the detection and location of typical seismic sources, such as earthquakes, volcanoes and nuclear explosions.
Due to this difference, glacial earthquakes were only discovered relatively recently, despite other seismic sources having been documented routinely for several decades.
Most glacial earthquakes detected so far have been located near the ends of glaciers in Greenland, the largest ice cap in the Northern Hemisphere.
The Greenland glacial earthquakes are relatively large in magnitude. The largest ones are similar in size to those caused by nuclear tests conducted by North Korea in the past two decades. As such, they have been detected by a high-quality, continuously operating seismic monitoring network worldwide.
The Greenland events vary with the seasons, occurring more often in late summer. They have also become more common in recent decades. These trends may be associated with the faster pace of global warming in the polar regions.
Although Antarctica is the largest ice sheet on Earth, direct evidence of glacial earthquakes caused by capsizing icebergs there has been elusive. Most previous attempts to detect Antarctic glacial earthquakes used the worldwide network of seismic detectors.
However, if Antarctic glacial earthquakes are of much lower magnitude than those in Greenland, the global network may not detect them.
In my new study, I used seismic stations in Antarctica itself to look for signs of these quakes. My search turned up more than 360 glacier seismic events, most of which are not yet included in any earthquake catalogue.
The events I detected were in two clusters, near Thwaites and Pine Island glaciers. These glaciers have been the largest sources of sea-level rise from Antarctica.
Thwaites Glacier is sometimes known as the Doomsday Glacier. If it were to collapse completely it would raise global sea levels by 3 meters (10 feet), and it also has the potential to fall apart rapidly.
About two-thirds of the events I detected — 245 out of 362 — were located near the marine end of Thwaites. Most of these events are likely glacial earthquakes due to capsizing icebergs.
The strongest driver of such events does not appear to be the annual oscillation of warm air temperatures that drives the seasonal behavior of Greenland glacier earthquakes.
Instead, the most prolific period of glacial earthquakes at Thwaites, between 2018 and 2020, coincides with a period of accelerated flow of the glacier's ice tongue towards the sea. The ice-tongue speed-up period was independently confirmed by satellite observations.
This speed-up could have been caused by ocean conditions, the effect of which is not yet well understood.
The findings point to a short-term impact of ocean conditions on the stability of marine-terminating glaciers. This is worth further exploration to assess the potential contribution of the glacier to future sea-level rise.
The second largest cluster of detections occurred near the Pine Island Glacier. However, these were consistently located 60 to 80 kilometers (37 to 50 miles) from the waterfront, so they are not likely to have been caused by capsizing icebergs.
These events remain puzzling and require follow-up research.
The detection of glacial earthquakes associated with iceberg calving at Thwaites Glacier could help answer several important research questions. These include a fundamental question about the potential instability of the Thwaites Glacier due to the interaction of the ocean, ice and solid ground near where it meets the sea.
Better understanding may hold the key to resolving the current large uncertainty in the projected sea-level rise over the next couple of centuries.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.
]]>The answer is more complicated than it might seem. According to Leslie A. Lyons, a cat geneticist at the University of Missouri College of Veterinary Medicine, most cat breeds developed in the past 140 years as a result of human selection for specific physical traits.
However, a select few are known as "natural" cat breeds because they are derived from a population of cats that evolved due to factors in their natural environments over thousands of years. These natural cat breeds include some of the most beloved breeds today, such as Maine coons, Siberians, Russian blues, Norwegian forest cats, Turkish Vans and Egyptian maus.

According to Sarah Hartwell, a cat genetics hobbyist and founder of the cat resource MessyBeast, the progenitors of natural breeds form under the same conditions as wild species do.
"Natural breeds could be considered a step along the road to speciation," she told Live Science. In most cases, they form as a result of environmental adaptation. In Western Russia, cold and snowy conditions favored thick-furred, big-boned cats that became the foundation of the Siberian forest cat breed. In Southeast Asia and the coastal areas of the Indian Ocean, warm and humid conditions favored short-haired, slender-bodied, big-eared cats that set the stage for the Abyssinian.

In some cases, natural breeds begin as a result of geographic isolation. This phenomenon, known as the founder effect, occurs when a gene that is not advantageous to the animals' survival spreads because the population has a small, isolated gene pool. On the Isle of Man (a self-governing British Crown Dependency in the Irish Sea), a mutation that caused a short tail spread as a result of inbreeding, resulting in the ancestors of the Manx cat. Unfortunately, Manx cats can suffer from spinal defects due to this tail mutation.
Although the ancestors of natural breeds evolved under natural conditions, modern-day cats of these breeds are not so "natural" anymore.
"All breeds, no matter what species, have human influence," Lyons told Live Science. According to a study in the journal Animal Genetics that she co-authored, the selective breeding of cats has increased exponentially in the past century, which, in turn, has removed the environmental pressures that shaped natural breeds.

The Manx breed, for example, likely would have died out naturally due to a lack of genetic diversity and the detrimental effects of the short-tail mutation. In the modern day, the breed is still widespread as a result of intentional breeding by humans. However, some breeders are now trying to phase it out.
"The thought is to retire this breed, or find a way to make them healthier," Lyons told Live Science. "Maybe we [work toward] a tailed Manx."
Some of the once-natural breeds are more closely connected to their roots than others are. Modern Siberian cats, for example, are genetically and physically similar to their ancestors because breeders regularly bring in new cats, found as strays or as pets in the breed's home region, to add to breeding programs.

Other breeds have been altered both genetically and physically from their original appearance. Russian blues, for example, were crossbred with Siamese cats to prevent the breed's extinction after World War II, and breeders have since divided them into specific "types" that look different from the original cats.
So yes, "natural" cat breeds do exist, but they are not entirely natural. The traits that are quintessential of a Maine coon purchased from a breeder — such as large size, square jaw, and often feet with six or more toes — may resemble the Maine coons discovered back in the 1800s, but they have been preserved — and, in some cases, exaggerated — through artificial selection.
"It all depends on popularity and what people prefer," Lyons told Live Science. "One lineage of cats might become very popular and change what the breed looks like, and then it might swing back another direction depending on the next new craze."
The initial trials will focus on assessing the safety of the vaccine, which was originally developed with funding from the U.S. Department of Defense. The shot was previously tested in rats and showed promising results. Now, it's been licensed by startup ARMR Sciences, which will begin enrolling patients for Phase I clinical trials in the Netherlands in 2026, starting in either January or February.
"Our goal as a company is to eliminate the lethality of the drug supply," said Colin Gage, co-founder and CEO of ARMR. "We want to go about doing that by attacking the root cause of not only addiction, but also, obviously, overdose."
The vaccine works by keeping fentanyl out of the brain, which it does by making the molecule a target of the immune system.
Fentanyl is a synthetic opioid with effects 50 times stronger than heroin. Opioids, also called narcotics, broadly work by binding to opioid receptors in the brain and spinal cord, triggering changes in nerve cell signaling that prevent pain and can create a euphoric high.
But these opioid receptors are also found in the part of the brain that controls breathing, so fentanyl can also reduce respiration to a deadly degree if used in excess. A 2-milligram dose of fentanyl — similar in volume to about a dozen grains of salt — can be fatal, according to the Drug Enforcement Administration (DEA).
If a person overdosing on fentanyl is treated quickly enough with naloxone (better known by the brand name Narcan), these effects can be reversed. This antidote also binds to opioid receptors, thus blocking the effects of fentanyl.
ARMR's vaccine takes a different approach: It works in the circulatory system, before the drug can reach the brain.
"This would be the first-ever treatment that does not work on the [opioid] receptor," Gage told Live Science.
To keep fentanyl from reaching the brain, the immune system must first recognize the drug. But fentanyl is a tiny molecule, not a pathogen like a virus, and immune cells don't naturally react to its presence.
To spur an immune response to fentanyl, the University of Houston's Colin Haile, an ARMR co-founder and scientific adviser, and his colleagues had to tie the opioid to something else.
They chose a deactivated diphtheria toxin called CRM197, a compound already used in vaccines on the market; once deactivated, the toxin is no longer toxic and instead helps rouse an immune response. To boost this immune response even further, they also added dmLT, a compound distilled from toxins produced by the Escherichia coli bacterium. This modified compound is not toxic itself, and it has also been tested in humans in trials of other, not-yet-approved, vaccines.
These two components are attached to a synthetic piece of the fentanyl molecule, which in and of itself cannot cause a high or pain relief.
When the immune system meets this combo of fentanyl fragments, CRM197 and dmLT, it builds antibodies that react to real fentanyl. These antibodies bind to the opioid, keeping it from crossing the brain's protective membrane — the blood-brain barrier — and then clearing it from the body.
In rat studies, the vaccine blocked fentanyl from entering the rodents' brains and also kept the drug from depressing respiration and causing overdose.
So far, the studies on the vaccine itself have been conducted only in rodents, though dmLT has been tested to some extent in humans and CRM197 is already used in other human vaccines. The protocol in rats is to give an initial dose of the fentanyl vaccine and then boosters three and six weeks out from the first dose, Haile told Live Science.
"The longest we've followed the animals in our studies is about six months and we saw complete blockade of fentanyl effects at six months post the initial vaccination," Haile said. It remains to be seen how that will translate to "human years," he noted, but lab rats live a couple of years in total, so the researchers think the vaccine will work for a long time in humans.
The initial human trials that will begin in early 2026 will enroll 40 people and will focus on detecting any safety issues with the vaccine, such as unwanted or dangerous side effects. Researchers will also draw blood samples from participants to make sure that the vaccine is spurring the creation of anti-fentanyl antibodies.
If these Phase I trials are successful, the next step will be Phase II trials to test the vaccine's efficacy — how well the vaccine blocks fentanyl's effects. In these trials, not only will antibody levels be tracked over time, but some participants will also be dosed with safe levels of fentanyl used for pain relief in medical procedures. This will be done under close supervision, to check that the vaccine works in the presence of the drug.

Fentanyl has legitimate medical uses as a painkiller, especially in emergency situations. One concern about the vaccine is that people who take it will lose this option for pain relief.
However, the antibodies created by vaccination do not bind to other opioids — such as morphine, oxycodone or methadone — or to other pain-relief options, Haile said. That means there are alternatives if people who get the vaccine need pain relief down the line.
The vaccine also does not interfere with buprenorphine, a drug used to treat opioid use disorder by reducing withdrawal symptoms and cravings. Haile said he and his team are currently testing the vaccine in combination with naltrexone, a non-opioid medication also used to block the effects of opioids in treatment of substance use.
In theory, it might be possible to take enough fentanyl to override the body's supply of anti-fentanyl antibodies, Haile said. However, given that the vaccine blocks fentanyl's euphoric effects, he expects people who want to quit will not be motivated to try to work around it.
"We want people who want to quit, want to not use the drug," he said. "That will give them a chance to realize that they won’t get high from this drug and there is no use in taking it any longer."
Gage suggested that one market for the vaccine could be first responders concerned about accidental fentanyl exposure. (That concern has risen in recent years with the spread of misinformation about fentanyl.)
For clarity: if fentanyl gets on your skin via casual exposure — for example, if you touch an object that's been exposed to the drug — it will not absorb through the skin. Meaningful absorption through the skin requires direct contact with the drug over hours or days. That said, if an EMT or police officer gets the drug on their hands and then touches their mouth or eyes, they could feel some of the drug's analgesic, or pain-relieving, effects, Haile said.
The vaccine could also be "an extra tool in the toolset" for people with opioid use disorder, Gage said. Combining the vaccine with "robust" cognitive behavioral therapy, a type of talk therapy, and communal support could be "incredibly beneficial to people who are just looking for another lifeline to help themselves get better," he said.
Finally, the vaccine could be beneficial for people who use less-deadly drugs — such as cocaine, stimulants or painkillers — that they buy on the black market. That's because these drugs are increasingly cut with fentanyl, meaning people may overdose without even knowing they are taking the opioid.
"I had two close childhood friends who passed away from fentanyl overdose," Gage said. "Neither of them were seeking it out."
Over 48,000 people are estimated to have died of opioid overdoses in 2024 in the U.S., according to provisional data. Perhaps due to this high death toll, early research suggests that people with personal experience with opioid use disorder and the general public alike view a possible anti-fentanyl vaccine positively. Time will tell how the new vaccine will perform in human trials, but if eventually approved, it could be a first-of-its-kind tool against overdose deaths.
This article is for informational purposes only and is not meant to offer medical advice.
The idea, dubbed "Project Suncatcher" and outlined in a study uploaded Nov. 22 to the preprint arXiv database, explores whether future AI workloads could be run on constellations of satellites equipped with specialized accelerators and powered primarily by solar energy.
In certain low Earth or sun-synchronous orbits, the argument goes, solar panels can operate for much of the time, avoiding many of the night-day cycles, atmospheric losses and grid constraints that limit terrestrial data centers. Heat, meanwhile, would be rejected into space via radiative cooling rather than relying on water-intensive cooling systems on Earth.
The push to look beyond Earth for AI infrastructure isn’t coming out of nowhere. Data centers already consume a non-trivial slice of the world’s power supply: recent estimates put global data-center electricity use at roughly 415 terawatt-hours in 2024, or about 1.5% of total global electricity consumption, with projections suggesting this could more than double by 2030 as AI workloads surge.
Utilities in the U.S. are already planning for data centers, driven largely by AI workloads, to account for between 6.7% and 12% of total electricity demand in some regions by 2028, prompting some executives to warn that there simply “isn’t enough energy on the grid” to support unchecked AI growth without significant new generation capacity.
In that context, proposals like space-based data centers start to read less like sci-fi indulgence and more like a symptom of an industry confronting the physical limits of Earth-bound energy and cooling. On paper, space-based data centers sound like an elegant solution. In practice, some experts are unconvinced.
Joe Morgan, COO of data center infrastructure firm Patmos, is blunt about the near-term prospects. "What won’t happen in 2026 is the whole ‘data centers in space’ thing," he told Live Science. "One of the tech billionaires might actually get close to doing it, but aside from bragging rights, why?"
Morgan points out that the industry has repeatedly flirted with extreme cooling concepts, from mineral-oil immersion to subsea facilities, only to abandon them once operational realities bite. "There is still hype about building data centers under the ocean, but any thermal benefits are far outweighed by the problem of replacing components," he said, noting that hardware churn is fundamental to modern computing.
That churn is central to the skepticism around orbital AI. GPUs and specialized accelerators depreciate quickly as new architectures deliver step-change improvements every few years. On Earth, racks can be swapped, boards replaced and systems upgraded continuously. In orbit, every repair requires launches, docking or robotic servicing — none of which scale easily or cheaply.
"Who wants to take a spaceship to update the orbital infrastructure every year or two?" Morgan asked. "What if a vital component breaks? Actually, forget that, what about the latency?"
Latency is not a footnote. Most AI workloads depend on tightly coupled systems with extremely fast interconnects, both within data centers and between them. Google’s proposal leans heavily on laser-based inter-satellite links to mimic those connections, but the physics remains unforgiving. Even at low Earth orbit, round-trip latency to ground stations is unavoidable.
"Putting the servers in orbit is a stupid idea, unless your customers are also in orbit," Morgan said. But not everyone agrees it should be dismissed so quickly. Paul Kostek, a senior member of IEEE and systems engineer at Air Direct Solutions, said the interest reflects genuine physical pressures on terrestrial infrastructure.
"The interest in placing data centers in space has grown as the cost of building centers on earth keeps increasing," Kostek said. "There are several advantages to space-based or Moon-based centers. First, access to 24 hours a day of solar power… and second, the ability to cool the centers by radiating excess heat into space versus using water."
From a purely thermodynamic standpoint, those arguments are sound. Heat rejection is one of the hardest limits on computation, and Earth-based data centers are increasingly constrained by water availability, grid capacity and local environmental opposition.
The backlash against terrestrial AI infrastructure isn’t limited to energy and water issues; health fears are increasingly part of the narrative. In Memphis, residents near xAI’s massive Colossus data center have voiced concern about air quality and long-term respiratory impacts, with community members reporting worsened symptoms and fear of pollution-linked illnesses since the facility began operating. In other states, opponents of proposed hyperscale data center projects have framed their resistance around potential health and environmental harms, arguing that large facilities could degrade local air and water quality and exacerbate existing public health burdens.
Putting data centers into orbit would remove some constraints, but replace them with others.
"The technology questions that need to be answered include: Can the current processors used in data centers on Earth survive in space?" Kostek said. "Will the processors be able to survive solar storms or exposure to higher radiation on the Moon?"
Google researchers have already begun probing some of those questions through early work on Project Suncatcher. The team describes radiation testing of its Tensor Processing Units (TPUs) and modeling of how tightly clustered satellite formations could support the high-bandwidth inter-satellite links needed for distributed computing. Even so, Kostek stresses that the work remains exploratory.
"Initial testing is being done to determine the viability of space-based data centers," he said. "While significant technical hurdles remain and implementation is still several years away, this approach could eventually offer an effective way to achieve expansion."
That word — expansion — may be the real clue. For some researchers, the most compelling rationale for off-world computing has little to do with serving Earth-based users at all. Christophe Bosquillon, co-chair of the Moon Village Association’s working group for Disruptive Technology & Lunar Governance, argues that space-based data centers make more sense as infrastructure for space itself.
"With humanity on track to soon establish a permanent lunar presence, an infrastructure backbone for a future data-driven lunar industry and the cis-lunar economy is warranted," he told Live Science.
From this perspective, space-based data centers aren’t substitutes for Earth’s infrastructure so much as tools for enabling space activity, handling everything from lunar sensor data to autonomous systems and navigation.
"Affordable energy is a key issue for all activities and will include a nuclear component next to solar power and arrays of fuel cells and batteries," Bosquillon said, adding that the challenges extend well beyond engineering to governance, law and international coordination.
Crucially, space-based computing could offload non-latency-sensitive workloads from Earth altogether. "Solving the energy problem in space and taking that burden off the Earth to process Earth-related non-latency-sensitive data… has merit," Bosquillon said, even extending to the idea of space and the Moon as a secure vault for "civilisational" data.
Seen this way, Google’s proposal looks less like a solution to today’s data center shortages and more like a probe into the long-term physics of computation. As AI approaches planetary-scale energy consumption, the question may not be whether Earth has enough capacity, but whether researchers can afford to ignore environments where energy is abundant but everything else is hard.
For now, space-based AI remains strictly experimental. Whether it ever escapes Earth’s gravity may depend less on solar panels and lasers than on how desperate the energy race becomes.
Now, experts plan to test these newfound bugs inside a "planetary simulation chamber" that could reveal whether these microbes, or ones with similar adaptations, could survive a trip through space to Mars, possibly contaminating the planet on arrival.
Earlier this year, scientists identified more than two dozen previously unknown bacterial species lurking in the Kennedy Space Center cleanrooms in Florida, where NASA assembled its Phoenix Mars Lander in 2007. The discovery showed that despite constant scrubbing, harsh cleaning chemicals and extreme nutrient scarcity, some microbes evolved a suite of genetic tricks that allowed them to persist in these punishing environments.
"It was a genuine 'stop and re-check everything' moment," study co-author Alexandre Rosado, a professor of Bioscience at King Abdullah University of Science and Technology in Saudi Arabia, told Live Science about the findings, which were described in a paper published in May in the journal Microbiome. While there were relatively few of these microbes, they persisted for a long time and in multiple cleanroom environments, he added.
Identifying these unusually hardy organisms and studying their survival strategies matters, the researchers say, because any microbe capable of slipping through standard cleanroom controls could also evade the planetary-protection safeguards meant to prevent Earth life from contaminating other worlds.
When asked whether any of these microbes might, in theory, tolerate conditions during a journey to Mars' northern polar cap, where Phoenix landed in 2008, Rosado said several species do carry genes that may help them adapt to the stresses of spaceflight, such as DNA repair and dormancy-related resilience. But he cautioned that their survival would depend on how they handle harsh conditions a microbe would face both during space travel and on Mars — factors the team didn't test — including exposure to vacuum, intense radiation, deep cold and high levels of UV at the Martian surface.
To explore that question, the researchers are now building a planetary simulation chamber at the King Abdullah University of Science and Technology in Saudi Arabia to expose the bacteria to Mars-like and space-like conditions, Rosado said. The chamber, now in its final assembly phase, with pilot experiments expected to begin in early 2026, is engineered to mimic stresses such as the low, carbon-dioxide-rich air pressure of Mars, high radiation, and the extreme temperature swings the microbes would face during spaceflight. These controlled environments will allow scientists to investigate how hardy microbes adapt and survive under combinations of stresses comparable to those encountered during spaceflight or on the Martian surface, said Rosado.

NASA's spacecraft-assembly cleanrooms are engineered to be hostile to microbes — a cornerstone of the agency's efforts to prevent Earth organisms from hitchhiking to worlds beyond Earth — through continuously filtered air, strict humidity control and repeated treatments using chemical detergents and UV light, among other measures.
Even so, "cleanrooms don't contain 'no life,'" said Rosado. "Our results show these new species are usually rare but can be found, which fits with long-term, low-level persistence in cleanrooms."
During the Phoenix lander's assembly at the Kennedy Space Center's Payload Hazardous Servicing Facility, a team led by study co-author Kasthuri Venkateswaran, who is a senior research scientist at NASA's Jet Propulsion Laboratory, collected and preserved 215 bacterial strains from the cleanroom floors. Some samples were gathered before the spacecraft arrived in April 2007, again during assembly and testing in June, and once more after the spacecraft moved to the launch pad in August, according to the study.
At the time, researchers lacked the technology to classify new species precisely or in large numbers. But DNA technology has advanced dramatically in the 17 years since that mission, and today scientists can sequence almost every gene these microbes carry and compare their DNA to broad genetic surveys of microbes collected from cleanrooms in later years. This allows scientists "to study how often and for how long these microbes appear in different places and times, which wasn't possible in 2007," said Rosado.
Further analysis revealed a suite of survival strategies. Many of the newly identified species carry genes that help them resist cleaning chemicals, form sticky biofilms that anchor them to surfaces, repair radiation-damaged DNA or produce tough, dormant spores — adaptations that help them survive in tucked-away corners or microscopic cracks, the study reports. This makes the microbes "excellent test organisms" for validating the decontamination protocols and detection systems that space agencies rely on to keep spacecraft sterile, Rosado said.
From a broader research standpoint, Rosado said, the next step is coordinated, long-term sampling across multiple cleanrooms using standardized methods, paired with controlled experiments that measure microbes' survival limits and stress responses.
"This would give us a much clearer picture of which traits truly matter for planetary protection and which might have translational value in biotechnology or astrobiology," he said.
Name: Lchashen wagon
What it is: An oak wagon
Where it is from: Lchashen village, Armenia
When it was made: Circa 1500 B.C.
Covered wagons are often associated with the Old West. But the best-preserved example of an ancient covered wagon was actually found in a Bronze Age grave in Armenia, where it had been buried for 3,500 years.
The remains of six oak wagons were excavated from an elite cemetery in Lchashen, Armenia, and were dated to the 15th to 14th centuries B.C., or the Late Bronze Age. Each wagon had four wheels arranged on two axles. But while two of the wagons were open, the other four had a complex frame structure on top. One of these wagons is considered the best-preserved example of an early covered wagon.
On display at the History Museum of Armenia in Yerevan, the Lchashen wagon was made of at least 70 parts joined together by a mortise-and-tenon system involving slotted pieces of wood and bronze fittings. The frame of the canopy required at least 600 mortise holes, archaeologist Stuart Piggott wrote in a 1968 study, indicating the precise workmanship that went into creating the wagon.
The wagon measures approximately 6.5 feet (2 meters) in length. Each wooden wheel was made of two slabs of wood joined together and measured a whopping 63 inches (160 centimeters) tall, historian Christoph Baumer wrote in "History of the Caucasus" (Bloomsbury, 2021).
The Lchashen wagon was discovered in the 1950s, when construction workers from the Soviet Union drained part of Lake Sevan in Armenia to help irrigate a nearby plain. They found a Late Bronze Age cemetery that contained more than 500 burials, along with hundreds of grave goods. One distinctive feature of the Lchashen necropolis is the presence of two- and four-wheeled wagons, as well as bronze models of war chariots, archaeologist L.A. Petrosyan wrote in a 2016 study.
Although some claim the Lchashen wagon is the "oldest in the world," there is abundant evidence of both wagon technology and covered wagons that predate this example. The exact invention dates are still being debated, but humans likely first invented the wheel and wheeled vehicles in Mesopotamia in the Copper Age, between about 4500 and 3300 B.C.
But the Lchashen wagon is a very early — as well as the best-preserved — example of a covered wagon, demonstrating innovation in early wheeled vehicles. Whether this technology was invented in Armenia or came from Mesopotamia to the south or the Russian steppe to the north is still being investigated.
According to the History Museum of Armenia, burials with wheeled vehicles arose in the Middle Bronze Age (2400 to 1500 B.C.) in Armenia but became most popular in the Late Bronze Age, when they were used as vehicles for physically and metaphorically transporting the remains of a deceased leader into the next life.
For more stunning archaeological discoveries, check out our Astonishing Artifacts archives.
Milestone: Vision of nanotechnology laid out
Date: Dec. 29, 1959
Where: Pasadena, California
Who: Richard Feynman
On a December day, Richard Feynman gave a fun little lecture at Caltech — and dreamed up an entirely new field of physics.
During the talk, entitled "There's Plenty of Room at the Bottom," he described the enormous potential that could be realized if scientists could manipulate and control things at a "small scale."
How small? Feynman went on to discount advances of the time, such as writing the Lord's Prayer on the head of a pin, as trivial.
"But that's nothing; that's the most primitive, halting step in the direction I intend to discuss. It is a staggeringly small world that is below," Feynman said in his lecture. Rather, he suggested, people could write the entire 24-volume encyclopedia on the head of a pin, and elegantly showed that there's enough space there to write it legibly and read it out.
He then explored the possibility of a number of then-futuristic ideas: electron microscopes capable of manipulating individual atoms, ultracompact data storage, miniaturized computers, and powerful, ingestible biological machines that travel into organs like the heart, find defects, and repair them with tiny knives. He proposed a number of ways to create these small-scale innovations, including manipulating light and ions.
He ended the lecture by offering a reward of $1,000 to anyone who could miniaturize the text in a book 25,000-fold, such that it could be read using an electron microscope. He offered another $1,000 to anyone who could make a motor no bigger than 1/64th of an inch cubed.

The latter of these prizes was scooped up the following year by engineer William McLellan, who created a 250-microgram motor composed of 13 parts. In his award letter, Feynman congratulated McLellan on the feat but joked that he shouldn't "start writing small," lest he solve the first challenge, too, and expect to receive the other $1,000 prize.
"I don't intend to make good on the other one. Since writing the article I've gotten married and bought a house!" Feynman wrote. The former challenge was eventually solved in 1985, when Stanford graduate student Thomas Newman miniaturized the first page of the Dickens classic "A Tale of Two Cities." Feynman did, ultimately, pay up for that prize as well.
Feynman's Caltech talk is now mythologized as having ushered in the field of nanotechnology. And yet, the term "nanotechnology" itself was not coined until 15 years after his talk, when scientist Norio Taniguchi penned a paper about manipulating material at the atomic scale.
In that 1974 paper, Taniguchi described nanotechnology as "the processing of separation, consolidation, and deformation of materials by one atom or one molecule." Many science historians now argue that the field was following its own trajectory, and that Feynman's talk, while prescient, wasn't the actual driver of future innovations. Prior to 1980, his talk was cited fewer than 10 times.
Whether it drove innovation or not, since Feynman's famous lecture, many of his predictions have proven true. The scanning tunneling microscope manipulated individual xenon atoms in 1990. Computers more powerful than he described now sit in our pockets, rather than taking up whole rooms. And indeed, tiny nanobots have been designed that can repair damaged blood vessels.
Primates don't just live in lots of places; there are also hundreds of species and subspecies. In fact, the order Primates is the fourth most biodiverse mammal order in the animal kingdom — yet the majority (62.6%) of primates are threatened with extinction.
Scientists researching primates, called "primatologists," have learned a lot over the years about our closest evolutionary relatives. For example, did you know that chimps have opposable big toes, or that not all monkeys can swing through the trees? Or even that there are some primates that are neither monkeys nor apes?
Fancy yourself a primatologist? Put your knowledge to the test below!
Remember to log in to put your name on the leaderboard; hints are available if you click the yellow button. Good luck!

Maria Branyas Morera, once the world's oldest woman, died in 2024 at age 117. Live Science took a deep look at a study that examined Branyas' biology and uncovered key traits that may have protected her from disease in old age. Could lessons from the study help others lead longer, healthier lives?
Many consider the brain to be a central feature of what makes us human — but how did the remarkable organ come to be? In an interview, science communicator Jim Al-Khalili discussed what he learned from shooting the new BBC show "Horizon: Secrets of the Brain," which tells the story of how the human brain evolved. And in a book excerpt and interview with Live Science, neuroscientist Nikolay Kukushkin described the evolutionary forces he believes were key to the formation of the human brain and consciousness as we know it.
Miniature models of the human brain can be grown from stem cells in the lab, and they're getting more and more advanced. Some scientists have raised concerns that these "minibrains" could become conscious and feel pain. We investigated experts' concerns and hopes for future regulation of the research.
mRNA may be best known for forming the basis of the first COVID-19 vaccines, but it could also be used in revolutionary cancer therapeutics, immune-reprogramming treatments and gene therapies. The promise of these emerging mRNA medicines is staggering, but due to the politicization of COVID-19 shots in the U.S., mRNA research and development — even unrelated to vaccines — now hangs in precarious uncertainty. A Science Spotlight feature described emerging mRNA technologies and their wobbly status under the second Trump administration.

You may have heard that more young people are being diagnosed with cancer. But which types of cancer are driving this trend? And why are the rates going up in the first place? We looked at what may be driving this pattern, from underlying cancer triggers to better techniques for early detection.
Is there really a difference between male and female brains? And do we even have the data required to answer that question? A Science Spotlight explored the existing research on sex differences in the brain, finding the results murkier than one might expect. Headlines often proclaim that male and female brains are "wired differently," and that may be true in some subtle ways. But the biological consequences of those differences remain unclear, even to experts in the field.
Artificial intelligence can now be used to design brand-new viruses. Scientists hope to use these viruses for good — for example, to treat drug-resistant bacterial infections. But could the technology usher in the next generation of bioweapons? An analysis probed this dual-use problem and what can be done to safeguard our biosecurity.
In a book excerpt, epidemiologist Dr. Seth Berkley explained how he and other health leaders orchestrated a massive vaccine rollout to poor countries during the COVID-19 pandemic, so that the shots wouldn't exclusively be hoarded by wealthy nations. Live Science also spoke with Berkley about the lessons learned from the pandemic and the ongoing fight for vaccine equity.

The United States Agency for International Development (USAID), once the world's largest foreign aid agency, was hit by massive funding cuts under the second Trump administration. A few of its functions will reportedly continue, under the control of the Department of State. We looked at the predicted and devastating effects that the loss of USAID will likely have on HIV care worldwide. And in an interview with author John Green, who published a book on tuberculosis (TB) this year, we explored what the cuts could mean for TB patients.
A study went viral after suggesting that healthy human brains may contain a similar amount of plastic as the average plastic spoon. But should we really be concerned? Our analysis broke down what we know and what we don't about microplastics in the brain.
A man genetically guaranteed to develop early Alzheimer's disease is still disease-free in his 70s. We explored the details of the man's case, digging into his genetic profile and the broader lessons it could teach scientists about dementia.
Weight-loss surgeries often come with improvements in mental health — but research revealed that this effect is less tied to the weight loss itself and more connected to the relief from stigma that people often experience post-procedure. We examined this finding and what it can tell us about the profound impact of weight stigma on people's health and well-being.

In 2000, the United States hit a public health milestone by eliminating measles. But now, there's been a sustained resurgence of the highly infectious disease, putting the country on the brink of losing that precious elimination status. This story explained how we got here and what's at stake. And in an opinion piece, several experts called out the anti-vaccine movement that drove down measles vaccination rates — a movement that health secretary Robert F. Kennedy Jr. has been spearheading for years.
In a book excerpt, Nafis Hasan argued that the United States has been employing the wrong strategies to fight cancer for decades. While hyperfocusing on finding treatments for individuals with cancer, America has largely ignored population-level strategies that could help drive down cancer rates and cancer deaths across the board, he argued.
The U.S. federal government is threatening to restrict research conducted with human fetal tissue. In an opinion piece, cell biologist, geneticist and neuroscientist Lawrence Goldstein dispelled widespread myths and misinformation about this type of research.
Epidemiologist Michael Osterholm predicts that the next pandemic could be even worse than COVID-19. In a book excerpt and interview with Live Science, Osterholm described the lessons we should have taken away from the coronavirus pandemic, and how recent changes in U.S. policy may have destroyed our capacity to handle serious outbreaks.
As the planet warms, a dangerous condition called hyponatremia may be on the rise. In this condition, sodium levels in the body drop dramatically, which can lead to seizures, coma and death. A Live Science exclusive looked at the emerging trend.
A viral story suggested that researchers in China were working on a "pregnancy robot" that could gestate a human baby from conception to birth. It turns out that the story was complete fiction — but, in theory, could such a technology be realized? Experts weighed in on the sci-fi-sounding idea and discussed whether, eventually, it could be feasible to build a bona fide pregnancy robot.
]]>If your first few sessions have been more frustrating than awe-inspiring, you're not alone. Here are five of the most common mistakes, plus how to avoid them so you can spend less time fiddling and more time actually enjoying the view.

Many beginners grab their telescope on a whim, head outside and hope for magic. The problem is that astronomy doesn't work on impulse — it works on timing. Moon phases affect how bright the sky is, and local light pollution can wash out fainter objects. Even the time of year dictates what's actually visible.
Before heading out, take a moment to observe what's above the horizon, when the moon rises and whether your sky conditions are cooperating. Free apps make this easy — Stellarium is a favorite of ours — and a quick look at a cloud forecast can save you a wasted session.
Planning isn't a chore; it's the difference between hunting blindly and having a solid target list. When you know when and where to look, observing the sky with a telescope becomes far more rewarding.

It's completely normal to hope for swirling nebulas and razor-sharp galaxies like the images you see online. Unfortunately, those are long-exposure photographs taken by spacecraft or huge professional observatories. A backyard telescope shows the real sky, and it's much more subtle.
But that doesn't mean it's disappointing. The moon looks incredible through even a small telescope, Jupiter and Saturn show details and star clusters sparkle beautifully. What tends to trip people up is expecting colors and drama rather than appreciating the delicate, natural brightness of what can be seen with the eye.
Think of visual observation as seeing the universe with your own eyes, and once you adjust your expectations, you start to notice far more. If you do want to experiment with imaging space, you can mount one of the best astrophotography cameras directly onto your telescope, or invest in one of the best smart telescopes.
Another thing that often catches beginners out is that not all telescopes excel at the same targets. Not only are there different types of telescopes, but some designs are better suited to deep-space objects like galaxies and nebulas, while others are better suited for crisp planetary and lunar viewing.
Wide aperture, low focal-ratio scopes (like Dobsonians) gather lots of light, making faint objects easier to spot. On the other hand, longer focal length telescopes naturally deliver higher magnification, which is perfect for observing the details on Jupiter, Saturn or the moon's craters.

One of the least glamorous but most important steps is simply letting your telescope cool down (or warm up) to match the outdoor temperature. If you take a scope from a warm living room out into the cold night, turbulent air currents swirl inside the tube, softening the view. The result looks like your optics suddenly went blurry.
Give your telescope 20-40 minutes outside before you start observing — maybe even a bit longer for bigger scopes. During this time, you can align your finderscope, set up a star chart or choose your targets.
Once the air settles inside the tube, things improve dramatically. Planets snap into focus, double stars separate cleanly and lunar details show the crisp edges they're meant to. Acclimation isn't sexy or exciting, but it's one of the easiest ways to upgrade your observing without spending a dime.

A common assumption is that more magnification automatically means better views. In reality, pushing the zoom too high will result in a dim, wobbly image.
Every telescope has a highest useful magnification. This is essentially the upper limit at which the view will still look sharp, and it's determined by the scope's aperture and the viewing conditions. The general rule of thumb is that the highest useful magnification is roughly 50 times the telescope's aperture in inches, although this does depend on the overall quality of your telescope. For example, a 6-inch telescope will have a highest useful magnification of around 300x.
Start with a low-power eyepiece, like the 20mm that typically comes with beginner telescopes. This will give you a wider field of view, making objects a lot easier to find and track. Only once you've centered your target should you switch to a higher-power eyepiece — and even then, it's best to increase magnification in small steps. On nights with poor viewing conditions, high magnification will just make objects look blurrier.
To determine the magnification of an eyepiece, divide the telescope's focal length by the eyepiece's focal length. For example, on a 1,000mm scope, a 20mm eyepiece will provide 50x magnification. Over time, you'll instinctively know which eyepiece works best for the moon, planets and deep-sky objects.
When magnification is chosen well, everything suddenly becomes sharp, steady and a lot more impressive.
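For readers who like to check the numbers, the two rules of thumb above amount to a couple of one-line calculations. Here's a quick sketch in Python (the function names are ours, and the 50x-per-inch figure is only the rough guideline described above):

```python
# Quick telescope arithmetic, based on the two rules of thumb above:
# 1) highest useful magnification ~= 50x per inch of aperture
# 2) eyepiece magnification = telescope focal length / eyepiece focal length

def highest_useful_magnification(aperture_inches: float) -> float:
    """Rough ceiling before the view turns dim and wobbly."""
    return 50 * aperture_inches

def eyepiece_magnification(scope_focal_length_mm: float,
                           eyepiece_focal_length_mm: float) -> float:
    """Magnification a given eyepiece delivers on a given scope."""
    return scope_focal_length_mm / eyepiece_focal_length_mm

print(highest_useful_magnification(6))    # 6-inch scope -> 300x
print(eyepiece_magnification(1000, 20))   # 20mm eyepiece on a 1,000mm scope -> 50x
```

So a 10mm eyepiece on that same 1,000mm scope would give 100x — still comfortably under the 6-inch scope's rough 300x ceiling.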

Modern telescopes can be surprisingly smart — some align themselves, some slew automatically to targets and others use your phone to guide you around the night sky. These features are awesome, especially for beginners, but they can create a false expectation that the telescope will do all the work.
In reality, even the most automated systems will still need some input and understanding from the user. Motorized GoTo mounts, for example, won't magically know where they are. They need accurate setup, which requires a level tripod, the correct date and time and a proper alignment on a couple of bright stars. If any of that is off, the telescope will miss every target.
Smart telescopes and app-driven models make navigation easier, but they're not a substitute for knowing what's actually visible or why certain objects won't appear on a bright, hazy night. Plus, smart telescopes often produce the best view by stacking images over a longer period, so they're better suited to photographing the cosmos as opposed to observing it.
]]>
What it is: Reflection nebula NGC 1333 and binary star system SVS 13
Where it is: 1,000 light-years away in the constellation Perseus
When it was shared: Dec. 16, 2025
Go outside after dark this winter and look to the southeast, and you'll see some of the brightest stars in the night sky — Orion's Belt, Betelgeuse, Sirius, Aldebaran and Capella. Just above this melee is the quieter constellation Perseus, which lacks bright stars but hosts something extraordinary that the naked eye can't see — the explosive birth of new stars.
Lurking within the Perseus Molecular Cloud is NGC 1333, nicknamed the Embryo Nebula because it contains many young, hot stars that are teaching astronomers just what goes on when a star is born. NGC 1333 is a reflection nebula, meaning a cloud of gas and dust illuminated by the intense light coming from newly forged stars, some of which appear to be regularly spewing jets of matter. It's one of the closest star-forming regions to our solar system. On Dec. 16, astronomers published the most detailed images ever of a jet launched by a newborn star in the system SVS 13, which revealed a sequence of nested, ring-like structures. The finding is evidence that the star has been undergoing an outburst — releasing an immense amount of energy — for decades.
The discovery, which the researchers described in the journal Nature Astronomy, marks the first direct observational confirmation of a long-standing theoretical model of how young stars feed on, and then explosively expel, surrounding material.
The researchers captured the high-resolution, 3D view of a fast-moving jet emitted from one of SVS 13's young stars using the Atacama Large Millimeter/submillimeter Array (ALMA) radio telescope array in Chile. Within the image, they identified more than 400 ultra-thin, bow-shaped molecular rings. Like tree rings that mark the passage of time, each ring marks the aftermath of an energetic outburst from the young star's early history. Remarkably, the youngest ring matches a bright outburst seen in the SVS 13 system in the early 1990s, allowing researchers to directly connect a specific burst of activity in a forming star with a change in the speed of its jet. It's thought that sudden bursts in jet activity are caused by large amounts of gas falling onto a young star.
"These images give us a completely new way of reading a young star's history," said study co-author Gary Fuller, a professor at the University of Manchester. "Each group of rings is effectively a time-stamp of a past eruption. It gives us an important new insight into how young stars grow and how their developing planetary systems are shaped."
For more sublime space images, check out our Space Photo of the Week archives.
]]>
What it is: Sagittarius B2 molecular cloud
Where it is: Roughly 26,000 light-years from Earth, in the constellation Sagittarius
When it was shared: Sept. 24, 2025
Stars form in molecular clouds — regions that are cold, dense, rich in molecules and filled with dust. One enormous cloud responsible for forming half of the stars in the Milky Way's central region is the Sagittarius B2 (Sgr B2) molecular cloud, located a few hundred light-years from our central supermassive black hole.
Boasting a total mass between 3 million and 10 million times that of the sun and stretching 150 light-years across, it is one of the largest molecular clouds in the galaxy. It lies roughly 26,000 light-years from Earth, in the constellation Sagittarius. It is also chemically rich, with several complex molecules discovered so far.
But this giant star-forming region is shrouded in a mystery: how it has managed to produce 50% of the stars in the region, despite containing just 10% of the galactic center's gas.
Astronomers observed this super-efficient stellar factory using the James Webb Space Telescope (JWST), in the hope of finding some clues about its unusual productivity. This spectacular image is the telescope's mid-infrared view, captured by JWST's Mid-Infrared Instrument (MIRI).
In the image, the clumps of dust and gas in the molecular complex glow in shades of pink, purple and red. These clumps are seen surrounded by dark areas. Dark does not mean that these regions are empty or emit nothing; instead, light from these areas is blocked by dust so dense that even this instrument cannot see through it.

In star-forming regions like this one, only the warm dust and gas and the brightest stars emit in the mid-infrared. This contrasts with the near-infrared image captured simultaneously by JWST's Near-Infrared Camera (NIRCam), which reveals an abundance of stars, because stars emit more strongly in near-infrared light.
In this MIRI image, the clumps on the right that appear redder than the rest of the cloud complex correspond to one of the most chemically complex areas known, as revealed by previous observations using other telescopes. Astronomers think this unique region may hold clues to why Sgr B2 is more efficient at star formation than the rest of the galactic center.
Additionally, an in-depth analysis of the masses and ages of the stars in this stellar factory could reveal further insight into the star-forming mechanisms in the Milky Way's center.
For more sublime space images, check out our Space Photo of the Week archives.
]]>It sounds like a simple enough question to answer — list the openings and add them up. But it's not quite that easy once you start considering questions like: "What exactly is a hole?" "Does any opening count?" And "why don't mathematicians know the difference between a straw and a doughnut?"
Before we start counting, we need to agree on what constitutes a "hole." Katie Steckles, a lecturer in mathematics at Manchester Metropolitan University in the U.K. and a freelance mathematics communicator, told Live Science that mathematicians "use the term 'hole' to mean one like the hole in a donut: one that goes all the way through a shape and out the other side."
But if you dig a "hole" at the beach, your aim is probably not to dig right through to the other side of the world. Many people would think of a hole as a depression in a solid object. But "this isn't a true hole, as it has an end," Steckles said.
Similarly, mathematical communicator James Arthur, who is based in the U.K., told Live Science that "in topology, a 'hole' is a through hole, that is you can put your finger through the object."
When digging a tunnel under the sea, like the Channel Tunnel that connects the U.K. and France, engineers started off by digging two openings. But as soon as those two digging projects joined up, the Channel Tunnel became a fundamentally different object (what Arthur and engineers would call a "through hole") — like a straw, or a tube with an opening at either end.
And if you ask people how many holes a straw has you will get a range of different answers: one, two and even zero. This is a result of our colloquial understanding of what constitutes a hole.
To find a consistent answer, we can turn to mathematics. And the problem of classifying how many holes there are in an object falls squarely within the realm of topology.

Sign up for our weekly Life's Little Mysteries newsletter to get the latest mysteries before they appear online.
To a topologist, the actual shapes of objects are not important. Instead, "topology is more concerned with the fundamental properties of shapes and how things connect together in space," Steckles said.
In topology, objects can be grouped together by the number of holes they possess. For example, a topologist sees no difference between a golf ball, a baseball or even a Frisbee. If they were all made of plasticine, or putty, they could theoretically be squashed, stretched or otherwise manipulated to look like each other without making or closing any holes in the plasticine or sticking different parts together, Steckles argued.
However, to a topologist, these objects are fundamentally different to a bagel, a doughnut or a basketball hoop, which each have a hole through the middle of them. A figure of eight with two holes and a pretzel with three are different topological objects again.

A useful way to get into the mathematicians' way of thinking about the straw problem is to "imagine our straw is made of play dough," Arthur said. "Let's take this straw and slowly squish the top down and down and down towards the bottom, making sure the hole in the middle stays open. We will squish it until we are in a shape that looks like a doughnut." Mathematicians, Arthur said, would say that "the straw is homeomorphic to a doughnut."
The long, thin aspect ratio of the straw, and the fact that the two openings are relatively far apart, are perhaps what gives rise to the suggestion of two holes. But to a topologist, bagels, basketball hoops and doughnuts are all topologically equivalent to a straw with a single hole. "The hole in a straw goes all the way through it, and the opening at the other end is just the back of that same hole," Steckles said.
Armed with the topologists' definition of a hole, we can tackle the original question: How many holes does the human body have? Let's first try to list all the openings we have. The obvious ones are probably our mouths, our urethras (the ones we pee out of) and our anuses, as well as the openings in our nostrils and our ears. For some of us, there are also milk ducts in nipples and vaginas.
There are also four less-obvious openings that we all have in the corners of eyelids closest to our nose — the four lacrimal puncta, which drain tears from our eyes into our nasal cavities. At an even smaller scale there are the pores that enable sweat to escape our bodies and sebum to lubricate our skin. In total there are potentially millions of these openings in our bodies, but do they all count as holes?

To make the question interesting, think about whether we could pass a very thin string into one hole and out of another. If we set the size of this string to be about 60 microns (60 millionths of a meter), then it's possible that the string could enter an opening as small as a pore. However — and this is key — it wouldn't be able to come out the other end. It would be blocked by the cells at the bottom of the pore, being too thick to pass through into the vasculature that supplies the pore.
"They're not actually holes in the topological sense, as they don't go all the way through," Steckles said. "They're just blind pits."
By this definition we can rule out all the pores, milk ducts and urethras. We couldn't thread a string in one of these openings and out of another. Even the ear canals have to go, as they are separated from the rest of the sinuses by the eardrums.
"We have our mouth, our anus, and then our nostrils. They are four of the … openings that form a hole," Arthur said. "But we actually have eight. The remaining four come from the tear ducts, we each have two in each eye, an upper and a lower."
But this doesn't mean eight holes. Steckles pointed out: "When the holes that pass through a shape connect together inside the shape, it makes it harder to count how many there are."
A pair of underwear, for example, has three openings (one for the waist and one for each of the two legs), but it's not immediately clear how many holes a topologist would say it has. "A useful trick is to think about flattening it out," Steckles said. "If we were to stretch the waistband of the pants out onto a big hula hoop, we'd see the two trouser legs sticking down, each being one hole."

So despite having three openings, the pair of underwear has only two holes. "So when the holes connect together in the middle, there's one fewer hole than there are openings," Steckles argued. Correspondingly, topology tells us that, despite eight interconnected openings, the human body has seven different holes.
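Steckles' trick can be written as a small formula. As a sketch in our own notation (and assuming the internal passages join up tree-like, without forming any loops inside the shape), the topological hole count is the first Betti number, $b_1$:

```latex
% Hole count for a shape whose n openings connect through loop-free
% internal passages (our restatement of the "one fewer hole than
% openings" rule; b_1 is the first Betti number):
\[
  b_1 = n_{\text{openings}} - 1
\]
% Underwear:   3 openings -> 3 - 1 = 2 holes.
% Human body:  8 openings -> 8 - 1 = 7 holes.
% Each loop inside the passages would add one further hole to the count.
```

The loop-free assumption matters: if two internal passages joined back up with each other, that circuit would contribute an extra hole beyond the openings-minus-one count.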
But there might be one more. Although often counted as a blind hole, the vagina leads to the uterus, which then leads to one of two fallopian tubes. These tubes are open at the far end and lead to the peritoneal cavity near the ovary. It is the job of the finger-like projections of the funnel-shaped infundibulum at the end of the fallopian tube to catch the egg when it is released from the nearest ovary. However, it has been demonstrated that eggs released from one ovary can be captured by the fallopian tube on the other side, so that passage between the two open ends of the fallopian tubes is possible. Our tiny string could therefore be threaded all the way through the female reproductive tract and back out, counting as one more hole.
So the mathematician's answer is that humans have either seven or eight holes.
In the end, the question is not just about counting openings but about understanding connections. Topologically speaking, our bodies are less like Swiss cheese and more like a carefully constructed onesie for an octopus.
In a new study, researchers tested people's ability to distinguish images of real faces from AI-generated ones and found that most participants missed most of the AI-generated faces. Even "super-recognizers" — an elite group with exceptionally strong facial-processing abilities — were able to correctly identify fake faces as fake only 41% of the time. Typical recognizers correctly identified only about 30% of the AI-generated faces.
However, the study also showed that people's detection of fake faces improved when they were given just five minutes of training beforehand. The training taught the participants how to spot common computer-rendering errors, such as unnatural-looking skin textures or oddities in how hair lies across the face. After training, detection accuracy increased substantially, with super-recognizers spotting 64% of fake faces and typical recognizers identifying 51%.
Given how difficult the task proved to be even for highly skilled participants, how confident are you in your own ability to spot AI faces? Answer our poll below, and let us know why in the comments.
People with typical recognition capabilities are worse than chance: more often than not, they think AI-generated faces are real.
That's according to research published Nov. 12 in the journal Royal Society Open Science. However, the study also found that receiving just five minutes of training on common AI rendering errors greatly improves individuals' ability to spot the fakes.
"I think it was encouraging that our kind of quite short training procedure increased performance in both groups quite a lot," lead study author Katie Gray, an associate professor in psychology at the University of Reading in the U.K., told Live Science.
Surprisingly, the training increased accuracy by similar amounts in super recognizers and typical recognizers, Gray said. Because super recognizers are better at spotting fake faces at baseline, this suggests that they are relying on another set of clues, not simply rendering errors, to identify fake faces.
Gray hopes that scientists will be able to harness super recognizers' enhanced detection skills to better spot AI-generated images in the future.
"To best detect synthetic faces, it may be possible to use AI detection algorithms with a human-in-the-loop approach — where that human is a trained SR [super recognizer]," the authors wrote in the study.
In recent years, there has been an onslaught of AI-generated images online. Deepfake faces are created using a two-stage AI algorithm called generative adversarial networks. First, a fake image is generated based on real-world images, and the resulting image is then scrutinized by a discriminator that determines whether it is real or fake. With iteration, the fake images become realistic enough to get past the discriminator.
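That generate-and-scrutinize loop can be caricatured in a few lines of code. The sketch below is emphatically not a real GAN — there is no neural network, and the "images" are just numbers clustered near 10.0 — but it illustrates the adversarial idea: the generator improves its fakes using only the discriminator's feedback, iterating until a fake passes. All names here are ours:

```python
import random

# Toy caricature of the adversarial loop behind GANs. "Real" samples are
# assumed to cluster near 10.0; the discriminator scores how realistic a
# sample looks, and the generator keeps tweaking its output, guided only
# by that score, until the fake is realistic enough to pass.

def discriminator_score(x: float) -> float:
    """Score in (0, 1]: higher means the sample looks more 'real'."""
    return 1.0 / (1.0 + abs(x - 10.0))

def generate_passing_fake(seed: int = 0, steps: int = 10_000) -> float:
    rng = random.Random(seed)
    sample = 0.0  # the generator's initial, obviously fake output
    for _ in range(steps):
        candidate = sample + rng.uniform(-1.0, 1.0)  # propose a small tweak
        # Keep a tweak only if the discriminator rates it as more realistic.
        if discriminator_score(candidate) > discriminator_score(sample):
            sample = candidate
        if discriminator_score(sample) > 0.9:  # realistic enough to "pass"
            return sample
    return sample

fake = generate_passing_fake()
print(discriminator_score(fake) > 0.9)  # True: the fake now fools the scorer
```

In a real GAN, both sides are neural networks trained jointly, and the discriminator itself improves as the generator does — which is why the fakes eventually become hard even for humans to spot.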
These algorithms have now improved to such an extent that individuals are often duped into thinking fake faces are more "real" than real faces — a phenomenon known as "hyperrealism."
As a result, researchers are now trying to design training regimens that can improve individuals' abilities to detect AI faces. These trainings point out common rendering errors in AI-generated faces, such as the face having a middle tooth, an odd-looking hairline or unnatural-looking skin texture. They also highlight that fake faces tend to be more proportional than real ones.
In theory, so-called super recognizers should be better at spotting fakes than the average person. These super recognizers are individuals who excel in facial perception and recognition tasks, in which they might be shown two photographs of unfamiliar individuals and asked to identify if they are the same person or not. But to date, few studies have examined super recognizers' abilities to detect fake faces, and whether training can improve their performance.
To fill this gap, Gray and her team ran a series of online experiments comparing the performance of a group of super recognizers to typical recognizers. The super recognizers were recruited from the Greenwich Face and Voice Recognition Laboratory volunteer database; they had performed in the top 2% of individuals in tasks where they were shown unfamiliar faces and had to remember them.
In the first experiment, an image of a face appeared onscreen and was either real or computer-generated. Participants had 10 seconds to decide if the face was real or not. Super recognizers performed no better than if they had randomly guessed, spotting only 41% of AI faces. Typical recognizers correctly identified only about 30% of fakes.
Each cohort also differed in how often they thought real faces were fake. This occurred in 39% of cases for super recognizers and in around 46% for typical recognizers.
The next experiment was identical, but included a new set of participants who received a five-minute training session in which they were shown examples of errors in AI-generated faces. They were then tested on 10 faces and provided with real-time feedback on their accuracy at detecting fakes. The final stage of the training involved a recap of rendering errors to look out for. The participants then repeated the original task from the first experiment.
Training greatly improved detection accuracy, with super recognizers spotting 64% of fake faces and typical recognizers noticing 51%. The rate that each group inaccurately called real faces fake was about the same as the first experiment, with super recognizers and typical recognizers rating real faces as "not real" in 37% and 49% of cases, respectively.
Trained participants tended to take longer to scrutinize the images than the untrained participants had — typical recognizers slowed by about 1.9 seconds and super recognizers by about 1.2 seconds. Gray said this is a key message for anyone trying to determine whether a face they see is real or fake: slow down and really inspect the features.
It is worth noting, however, that the test was conducted immediately after participants completed the training, so it is unclear how long the effect lasts.
"The training cannot be considered a lasting, effective intervention, since it was not re-tested," Meike Ramon, a professor of applied data science and expert in face processing at the Bern University of Applied Sciences in Switzerland, wrote in a review of the study conducted before it went to print.
And since separate participants were used in the two experiments, we cannot be sure how much training improves an individual's detection skills, Ramon added. That would require testing the same set of people twice, before and after training.
We know these cities exist because ancient texts describe them, but their locations may be lost to time.
In a few cases, looters have found these cities and plundered large numbers of artifacts from them. But the robbers have not come forward to reveal the cities' locations. In this countdown, Live Science takes a look at six ancient cities whose whereabouts are unknown.

Not long after the 2003 U.S. invasion of Iraq, thousands of ancient tablets from a city called "Irisagrig" began appearing on the antiquities market. From the tablets, scholars could tell that Irisagrig was in Iraq and flourished around 4,000 years ago.
Those tablets reveal that the rulers of the ancient city lived in palaces that housed many dogs. They also kept lions, which were fed cattle. Those who took care of the lions, referred to as "lion shepherds," got rations of beer and bread. The inscriptions also mention a temple dedicated to Enki, a god of mischief and wisdom, and say that festivals were sometimes held within the temple.
Scholars think that looters found and plundered Irisagrig around the time of the 2003 U.S. invasion. Archaeologists have not yet located the city, and the looters who found it have not come forward to reveal where it is.

Egyptian pharaoh Amenemhat I (reign circa 1981 to 1952 B.C.) ordered a new capital city built. This capital was known as "Itjtawy," a name that can be translated as "the seizer of the Two Lands" or "Amenemhat is the seizer of the Two Lands." As the name suggests, Amenemhat faced a considerable amount of turmoil. His reign ended with his assassination.
Despite Amenemhat's assassination, Itjtawy would remain the capital of Egypt until around 1640 B.C., when the northern part of Egypt was taken over by a group known as the "Hyksos," and the kingdom fell apart.
While Itjtawy has not been found, archaeologists think it is located somewhere near the site of Lisht, in central Egypt. This is partly because many elite burials, including a pyramid belonging to Amenemhat I, are located at Lisht.

The city of Akkad (also called Agade) was the capital of the Akkadian Empire, which flourished between 2350 and 2150 B.C. At its peak the empire stretched from the Persian Gulf to Anatolia. Many of its conquests occurred during the reign of "Sargon of Akkad," who lived sometime around 2300 B.C. One of the most important structures in Akkad itself was the "Eulmash," a temple dedicated to Ishtar, a goddess associated with war, beauty and fertility.
Akkad has never been found, but it is thought to have been built somewhere in Iraq. Ancient records indicate that the city was destroyed or abandoned when the Akkadian Empire ended around 2150 B.C.

Al-Yahudu, a name which means "town" or "city" of Judah, was a place in the Babylonian empire where Jews lived after the kingdom of Judah was conquered by the Babylonian king Nebuchadnezzar II in 587 B.C. He sent part of the population into exile, a practice the Babylonians often engaged in after conquering a region.
About 200 tablets from the settlement are known to exist, and they indicate that the exiled people who lived there kept their faith and used Yahweh, the name of God, in their names. Al-Yahudu's location has not been identified by archaeologists, but like many of these lost cities, it was likely located in what is now Iraq. Given that the tablets showed up on the antiquities market, with no record of their being found in an archaeological excavation, it appears that at some point looters succeeded in finding the site.

Waššukanni was the capital city of the Mitanni empire, which existed between roughly 1550 B.C. and 1300 B.C. and included parts of northeastern Syria, southern Anatolia and northern Iraq. It faced intense competition from the Hittite empire in the north and the Assyrian empire in the south, and its territory was gradually lost to them.
Waššukanni has never been found and some scholars think that it may be located in northeastern Syria. The people who lived in the capital, and indeed throughout much of its empire, were known as the "Hurrians" and they had their own language which is known today from ancient texts.

Thinis (also known as Tjenu) was an ancient city in southern Egypt that flourished early in the ancient civilization's history. According to the ancient writer Manetho, it was where some of the early kings of Egypt ruled from around 5,000 years ago, when Egypt was being unified. Egypt's capital was moved to Memphis a bit after unification and Thinis became the capital of a nome (a province of Egypt) during the Old Kingdom (circa 2649 to 2150 B.C.) period, Ali Seddik Othman, an inspector with the Egyptian Ministry of Tourism and Antiquities, noted in an article published in the Journal of Abydos.
Thinis has never been identified, although it is believed to be near Abydos, in southern Egypt. This is partly because many elite members of society, including royalty, were buried near Abydos around 5,000 years ago.
]]>