Writing rules to govern the use of AI-generated content is relatively easy. Enforcing them depends on something much harder: reliably detecting whether a piece of text was generated by artificial intelligence.
Some studies have investigated whether humans can detect AI-generated text. For example, people who themselves use AI writing tools heavily have been shown to accurately detect AI-written text. A panel of human evaluators can even outperform automated tools in a controlled setting. However, such expertise is not widespread, and individual judgment can be inconsistent. Institutions that need consistency at a large scale therefore turn to automated AI text detectors.
The basic workflow behind AI text detection is easy to describe. Start with a piece of text whose origin you want to determine. Then apply a detection tool, often an AI system itself, that analyzes the text and produces a score, usually expressed as a probability, indicating how likely the text is to have been AI-generated. Use the score to inform downstream decisions, such as whether to impose a penalty for violating a rule.
This simple description, however, hides a great deal of complexity. It glosses over a number of background assumptions that need to be made explicit. Do you know which AI tools might have plausibly been used to generate the text? What kind of access do you have to these tools? Can you run them yourself, or inspect their inner workings? How much text do you have? Do you have a single text or a collection of writings gathered over time? What AI detection tools can and cannot tell you depends critically on the answers to questions like these.
There is one additional detail that is especially important: Did the AI system that generated the text deliberately embed markers to make later detection easier?
These indicators are known as watermarks. Watermarked text looks like ordinary text, but the markers are embedded in subtle ways that do not reveal themselves to casual inspection. Someone with the right key can later check for the presence of these markers and verify that the text came from a watermarked AI-generated source. This approach, however, relies on cooperation from AI vendors and is not always available.
One obvious approach is to use AI itself to detect AI-written text. The idea is straightforward. Start by collecting a large corpus (a body of writing) of examples labeled as human-written or AI-generated, then train a model to distinguish between the two. In effect, AI text detection is treated as a standard classification problem, similar in spirit to spam filtering. Once trained, the detector examines new text and predicts whether it more closely resembles the AI-generated examples or the human-written ones it has seen before.
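The classification idea can be sketched in a few lines of code. Everything below is invented for illustration: the tiny corpus, the labels and the word-count scoring are stand-ins for the large datasets and learned models real detectors use.

```python
from collections import Counter

def train(labeled_texts):
    """Count word frequencies separately for each label ('ai' or 'human')."""
    counts = {"ai": Counter(), "human": Counter()}
    for text, label in labeled_texts:
        counts[label].update(text.lower().split())
    return counts

def classify(text, counts):
    """Label new text by which corpus its words appear in more often."""
    scores = {label: 0 for label in counts}
    for word in text.lower().split():
        for label, counter in counts.items():
            scores[label] += counter[word]  # missing words score zero
    return max(scores, key=scores.get)

# Made-up training examples for illustration only.
corpus = [
    ("delve into the rich tapestry of ideas", "ai"),
    ("furthermore it is important to note", "ai"),
    ("i dunno, seemed fine to me honestly", "human"),
    ("we grabbed coffee and argued about it", "human"),
]
model = train(corpus)
print(classify("it is important to delve into this", model))  # → "ai"
```

A real detector would replace the raw word counts with a trained statistical model, but the shape of the problem, labeled examples in, a predicted label out, is the same.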
The learned-detector approach can work even if you know little about which AI tools might have generated the text. The main requirement is that the training corpus be diverse enough to include outputs from a wide range of AI systems.
But if you do have access to the AI tools you are concerned about, a different approach becomes possible. This second strategy does not rely on collecting large labeled datasets or training a separate detector. Instead, it looks for statistical signals in the text, often in relation to how specific AI models generate language, to assess whether the text is likely to be AI-generated. For example, some methods examine the probability that an AI model assigns to a piece of text. If the model assigns an unusually high probability to the exact sequence of words, this can be a signal that the text was, in fact, generated by that model.
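A minimal sketch of that probability check, using a made-up table of per-word probabilities in place of a real language model's token distribution:

```python
import math

# Stand-in per-word probabilities; a real test would query the suspected
# model for the probabilities it actually assigns to each token.
model_probs = {"the": 0.05, "cat": 0.01, "sat": 0.008, "on": 0.03, "mat": 0.004}
UNSEEN = 1e-6  # probability assigned to words the model rarely produces

def avg_log_prob(text):
    """Average log-probability the (toy) model assigns to the text."""
    words = text.lower().split()
    return sum(math.log(model_probs.get(w, UNSEEN)) for w in words) / len(words)

# Text built from the model's favorite words scores far higher than text
# the model would be unlikely to generate itself.
likely = avg_log_prob("the cat sat on the mat")
unlikely = avg_log_prob("zephyr quokka obelisk")
print(likely > unlikely)  # True: a possible signal of machine generation
```

An unusually high average log-probability suggests the text sits exactly where that model's output tends to sit, which is the signal these statistical methods look for.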
Finally, in the case of text that is generated by an AI system that embeds a watermark, the problem shifts from detection to verification. Using a secret key provided by the AI vendor, a verification tool can assess whether the text is consistent with having been generated by a watermarked system. This approach relies on information that is not available from the text alone, rather than on inferences drawn from the text itself.
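One well-known proposal for text watermarking partitions the vocabulary into "green" and "red" halves using a keyed hash, biases the generator toward green words, and lets the key holder later test whether a text contains suspiciously many green words. A toy sketch of the verification side, with a hypothetical key and a simple even/odd split:

```python
import hashlib

SECRET_KEY = b"vendor-secret"  # hypothetical key held by the AI vendor

def is_green(word):
    """Keyed hash assigns each word to the 'green' or 'red' half of the vocabulary."""
    digest = hashlib.sha256(SECRET_KEY + word.lower().encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    """Fraction of words landing in the green half."""
    words = text.split()
    return sum(is_green(w) for w in words) / len(words)

# Unwatermarked text should land near 50% green; a watermarked generator
# that favors green words pushes this fraction well above 0.5, which the
# key holder can verify statistically.
print(round(green_fraction("ordinary human prose with no hidden markers"), 2))
```

Without the key, the partition looks random, which is why the markers do not reveal themselves to casual inspection.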
Each family of tools comes with its own limitations, making it difficult to declare a clear winner. Learning-based detectors, for example, are sensitive to how closely new text resembles the data they were trained on. Their accuracy drops when the text differs substantially from the training corpus, which can quickly become outdated as new AI models are released. Continually curating fresh data and retraining detectors is costly, and detectors inevitably lag behind the systems they are meant to identify.
Statistical tests face a different set of constraints. Many rely on assumptions about how specific AI models generate text, or on access to those models’ probability distributions. When models are proprietary, frequently updated or simply unknown, these assumptions break down. As a result, methods that work well in controlled settings can become unreliable or inapplicable in the real world.
Watermarking shifts the problem from detection to verification, but it introduces its own dependencies. It relies on cooperation from AI vendors and applies only to text generated with watermarking enabled.
More broadly, AI text detection is part of an escalating arms race. Detection tools must be publicly available to be useful, but that same transparency enables evasion. As AI text generators grow more capable and evasion techniques more sophisticated, detectors are unlikely to gain a lasting upper hand.
The problem of AI text detection is simple to state but hard to solve reliably. Institutions with rules governing the use of AI-written text cannot rely on detection tools alone for enforcement.
As society adapts to generative AI, we are likely to refine norms around acceptable use of AI-generated text and improve detection techniques. But ultimately, we’ll have to learn to live with the fact that such tools will never be perfect.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.
Sensor: 25.5MP APS-C CMOS
Monitor: 3-inch vari-angle, 1.04 million dots
Image stabilization: Digital only in-camera, or optical via attached IS-equipped lens
Autofocus detection range: Down to -5EV, if using an f/1.2 lens
ISO Range: ISO 100-32,000 (expandable to ISO 51,200)
Minimum shutter speed: 30 seconds
Burst rate: Up to 15FPS with electronic shutter or 12FPS with mechanical shutter
Video: UHD 4K video up to 60p (with a crop) and uncropped 4K 30p
Battery life: Approx. 480 shots
Storage: Single SD card slot
Weight: 13.1 oz (370 g)
Close competitors include Sony’s blogger-oriented ‘ZV’ series of compact digital cameras with their flip-out, selfie-enabling rear screens. We get an angle-adjustable LCD on Canon’s EOS R50 V, too, which is also a touchscreen. Unfortunately, Canon hasn’t also included our personal favorite feature, an eye-level viewfinder, as an alternative means of composition and review.
An angle-adjustable screen has uses beyond the purely narcissistic, of course. It’s great for helping us achieve those otherwise awkward high or low angle shots, while including features accessible via a tap of the screen negates the need for a body crammed with buttons, wheels and dials — even if we do enjoy the tactile nature of such features.
So, how does the R50 V acquit itself in practice, especially when it comes to satisfying the needs of beginner photographers, not just videographers?



Resembling a child’s drawing of a camera — essentially a box with a sensor and a lens mount — the EOS R50 V is far from the flashiest mirrorless model we’ve seen, especially in the age of retro-styled gems like the OM System OM-3, Fujifilm X100V/VI and the Nikon Zf and Zfc. In fact, as mentioned above, Canon’s styling is closer to the functional appearance of its pro-targeted Cinema EOS range than to the rest of its R series lineup.
That said, it’s not quite so self-consciously wacky as Canon’s PowerShot V10, a video-first attempt with a vertically styled wide-angle lens. While that model had its cigarette packet-sized point-and-shoot charm, the fact that we can use a broad range of EOS R system lenses and accessories with the R50 V extends the creative possibilities much further. Although priced to entice consumers, it feels like there is serious intent here, too.

In terms of ergonomic design, the paucity of buttons means everything falls readily to hand. The EOS R50 V’s raised shutter release is encircled by a zoom lever, something we’d more commonly expect to find on a point-and-shoot digicam rather than an interchangeable lens model. This works in tandem with Canon’s motorized RF-S 14-30mm f/4-6.3 IS STM PZ, or ‘Power Zoom’, lens that arrived with our review sample. Alternatively, the zoom can be controlled more conventionally via a manual twist of the lens barrel. The foreshortened lens design allows for a slimmer, more compact profile when optic and camera are combined.
Also encircled, this time by a command/control dial, is the camera’s on/off power lever. A familiar top plate shooting mode dial features just the one dedicated stills setting rather than individual P, A, S, M settings — the selection of which brings up shooting mode options on-screen — alongside a fistful of further video options. Joined by no fewer than three custom settings is a ‘Slow and Fast’ video option. As it suggests, this gifts videographers control over the speed of capture and playback, either speeding up or slowing down footage.
Weighing just 370 g with battery and card, if we’ve one grumble it’s that the magnesium and aluminum alloy body felt a bit insubstantial in the palm, although adding the compact 14-30mm lens lends it marginally more heft. At least it’s comfortably portable — helpful as we’ll be taking a tripod out with us for astrophotography and potentially also for wildlife photography. That said, at dimensions of 119.3 x 73.7 x 45.2 mm, the camera body alone is still too wide to fit in most pockets.

Anyone viewing investment in the Canon EOS R50 V as a step up from their smartphone for creating videos or shooting stills may not immediately register the fact that an eye-level viewfinder, as found on most dedicated digital cameras in this price bracket, is omitted here. For those of us who love distraction-free image composition via an optical finder or EVF, it’s a real shame. That said, those composing video in the main can probably live with that, as the 3-inch LCD’s aspect ratio is closer to that of a desktop PC monitor, or TV set.
Still, it’s just as well that the monitor here is endlessly flexible, capable of being flipped out from the body to the full 180 degrees and turned to face the subject in front of the lens. We can hold the camera above our heads and tilt the screen down so we can see what the camera’s lens is seeing. Or hold it low to the ground and tilt the screen upwards to avoid us having to get down on our knees. At times like those, we found ourselves missing an eye-level viewfinder just that little bit less. The fact that we can tap the screen and take the picture once we’ve got the subject framed, just as we want it, is a further boon that adds to the camera’s overall intuitiveness.
Outwardly resembling a baby version of its maker’s Cinema EOS cameras, we found its angle-adjustable LCD screen essential for low-angle wildlife shots and night sky images alike, especially as there is no eye-level viewfinder to add interest to its boxy design. Because said panel is touch sensitive, helpfully, the manual functions required for astrophotography are placed at our fingertips and are easier to locate in the dark than the camera’s tiny black buttons.

The APS-C sensor offers 24-megapixel stills for photographers and up to 4K 60FPS video clips for filmmakers. What this camera does miss for still image capture, although obviously irrelevant for video, is a built-in flash. There is a hot shoe atop this model for attaching accessories, however.
As we might assume, given the camera’s compositional LCD screen is presented in 3:2 aspect ratio, the default setting when it comes to image capture is 3:2, which will deliver maximum resolution shots. Lower resolution alternatives include the more standard 4:3, a widescreen 16:9, plus 1:1.
The tone of images produced can be affected in camera via its digital filter effects, selected from Canon’s familiar Picture Style options. On top of settings optimized for portraiture and landscapes, various color filter effects can provide subtle blue, green and magenta washes to images, should we feel that enhances them in any way. As we felt the warm yet naturalistic images produced by the camera as a default were punchy enough, we tended to steer clear of these and opt for the ‘Standard’ Picture Style setting as our chosen default. Clear, sharp and moreover consistent results are what this camera delivers, as we’d fully expect from Canon. Anyone getting started with this model is going to appreciate such consistency, taking both their photography and videography a step further.

The Canon EOS R50 V features its manufacturer’s Dual Pixel CMOS AF II system, offering the ability to automatically determine focus down to -5EV. The camera’s AF system also offers eye and face detection along with tracking AF for people, animals and vehicles. The animals option proves useful for wildlife and results in a higher proportion of sharp images than we’d achieve without it. We also found that if a duck were to turn its head so that we only saw the tuft at the back, the camera would still stay locked on. We wouldn't necessarily consider it one of the best cameras for wildlife photography, but it does a pretty good job.
While aimed at keen amateurs rather than pros, this mirrorless compact is as swift in its responses as we’d expect a consumer-level DSLR to be. Pressing down halfway on its shutter release button prompts the camera to acquire focus as quickly as we can blink. As the back screen is also a touchscreen, we can tap on an area of the panel to redirect the auto focus if it hasn’t alighted on the portion of the frame or the subject we originally intended. For night sky photography, we found it helpful to press left on the camera’s rear plate control dial to switch from auto to manual focus, allowing us to fine-tune the camera’s response and make sure the stars were as sharply rendered as possible.


We got our best results when attempting astrophotography by switching to manual focus rather than purely relying on the camera’s AF to try and focus on the constellations. If there is a tree line or illuminated road in the frame, the camera will obviously try and focus on that rather than more ethereal heavenly objects. It certainly helps that we can use the camera’s touchscreen to make changes and select settings in real time rather than having to otherwise rely on its tiny backplate buttons. We found the latter — black buttons against a black background — almost impossible to locate in the pitch black.

The R50 V’s rechargeable lithium battery is claimed to be good for up to 480 shots from a full charge. That sort of performance is respectable for this beginner class of camera, and indeed better than average for its class. Image bursts can be captured up to a maximum of 95 shots. For sports, action or wildlife photography, buffer capacity gives us the option of 12FPS bursts with the mechanical shutter, a speed maintained for approximately 42 JPEGs or 7 RAW images. If that’s not quite enough, we can alternatively switch up to 15FPS with the electronic shutter, for 28 JPEGs or, again, 7 RAW files.
❌ You want a comprehensive, fully featured retro-styled camera with plenty of physical controls to supplement more intuitive touchscreen operation.
✅ You want an all-rounder that just happens to have a few specialist tricks up its sleeve for occasional astrophotography.
❌ You are looking for a simple point-and-shoot camera, or at the other end of the scale, would prefer a full-frame sensor.
❌ You want a compact camera with a comfortably rounded and robust handgrip for handheld captures.
The EOS R50 V may look boxy and unassuming from the outside, but the focus here is on functionality rather than form. While the build felt more insubstantial than we’d like, at least this aids portability. It does, however, lack a well-rounded grip that would further aid handheld shooting, especially for those vloggers who like to make ‘walk and talk’ recordings to camera. That’s less of an issue for astrophotography, where we’ll have the camera tripod-mounted and use the self-timer to fire the shutter release, so that pressing it ourselves doesn’t jog the camera.
As operation is built around the touch panel LCD, there is inevitably more to this camera than its pared-back physical control layout may suggest. This leaves room for those just getting started to grow their skill set; with familiarity, arriving quickly at the settings we want becomes second nature. However, if we’re more into shooting stills than video, it would make sense to look at the multitude of alternatives, not least the OM System OM-3, which not only looks much more attractive in terms of design but also features a dedicated Starry Sky AF setting for astrophotographers.
If you’re seeking a hybrid camera that’s just as focused on video as stills, if not more so, then also check out the Panasonic S1R II, which has a more conventionally classic camera build, as well as Panasonic’s updated range-toppers, the S1 II and the S1 IIE. These share some of the earlier S1R II’s technological DNA, plus near-identical outer construction. Closer alternatives to the Canon EOS R50 V come via Sony’s ZV range of content creator-targeted compacts, a growing creative arsenal in its own right.



We recorded video clips at both 4K and Full HD settings, with little difference between them visible when watched back on our desktop PC. When recording outdoors and relying on the built-in microphones, we experienced issues with wind noise that could be much reduced, or avoided altogether, with an accessory microphone and windshield. Naturally, we tested the EOS R50 V during the daytime as well as at night, with resultant images well saturated, color-rich and detailed, yet naturalistic despite being slightly on the warm side, which is what we’d expect of a Canon.
In terms of night sky photography, we were shooting wide open at the lens’ maximum f/4 setting and trying out manually selected exposure durations of 15 and 10 seconds, down to five or four seconds, coupling this with selected ISO settings of ISO 1600 and ISO 3200. At all times, we had the camera mounted on a tripod, focusing manually and then using the model’s self-timer to remotely trigger the shutter.
The dinosaur, Ahshislesaurus wimani, likely had a flat head and a bony crest low on its snout, researchers revealed in a study. The findings, which are due to be published in the Bulletin of the New Mexico Museum of Natural History and Science, suggest that duck-billed dinosaurs, or hadrosaurids, were more diverse and overlapping during the last 20 million years of the Cretaceous period (145 million to 66 million years ago) than previously thought.
Hadrosaurids were large, plant-eating dinosaurs that lived during the last 24 million years of the Cretaceous. They have "sometimes been colorfully called 'the cows of the Cretaceous,'" study co-author Steven Jasinski, a paleontologist at the Harrisburg University of Science and Technology in Pennsylvania, said in a statement. "While this may not be a perfect metaphor, they likely were living in herds and would have been conspicuously present in the environments of northern New Mexico near the end of the Cretaceous."
According to the statement, A. wimani could potentially have grown up to 40 feet (12 meters) long.
One set of A. wimani fossils discovered in 1916 was previously identified as belonging to the hadrosaurid genus Kritosaurus. But existing fossil specimens are frequently reevaluated as more data and fossils become available.
In the new study, the researchers revisited that set of fossils — including an incomplete skull, lower jawbone, and several vertebrae — from the Kirtland Formation in New Mexico. The fossils were housed at the Smithsonian National Museum of Natural History.
"As a general rule … skulls are really the basis for identifying differences in animals," study co-author Anthony Fiorillo, the executive director of the New Mexico Museum of Natural History and Science, said in a separate statement. "When you have a skull and you're noticing differences, that carries more weight than, say, you found a toe bone that looks different from that toe bone."
By comparing the skull to those of other hadrosaurids, the team found that its shape and features were distinct enough from other hadrosaurid skulls to suggest it was likely a different species. A. wimani is closely related to Kritosaurus, suggesting that their evolutionary lines had split not long before.
"Kritosaurus is still a valid genus with species of its own," study co-author Edward Malinzak, a paleontologist at Penn State University Lehigh Valley, said in the second statement. "We took a specimen that was lumped in as an individual of Kritosaurus and determined it had significantly distinct anatomical features to warrant being its own genus and species."
It's not clear yet how the related species co-existed in the same environment, the researchers wrote in the study. But tracing the history and extent of different species could help scientists understand the environment they lived in, as well as the evolutionary history of duck-billed dinosaurs.
"The lineages appear to have co-existed in the region for a time," Malinzak said. "It showed that this group not only exploded with diversity across the continent at one point, but also contributed to the world-wide spread of this group in the Late Cretaceous."
The new cipher — called "Naibbe," from the name of a 14th-century Italian card game — does not decode the medieval Voynich manuscript, but it offers an idea for how the manuscript was made.
The Voynich manuscript, which has been radiocarbon-dated to the 15th century, contains roughly 38,000 words written in glyphs that have never been translated. Despite more than a century of intense scrutiny, the manuscript has not been explained conclusively. However, it continues to intrigue people, with its bizarre and inexplicable illustrations of plants, astrology and alchemy, including supposedly "biological" depictions of bathing naked women.
In the new study, published Nov. 26 in the journal Cryptologia, science journalist Michael Greshko investigated one way the manuscript may have come together. He told Live Science that he got the idea for the Naibbe cipher while researching stories about the Voynich manuscript. "It is this fascinatingly mysterious medieval artifact," he said.
Naibbe first uses the number from the throw of a die to break a block of Italian or Latin into single and double letters — so "gatto" (Italian for "cat") could become "g", "at" and "to". The cipher then uses the draw of a playing card to determine which of six different tables is used to encrypt the letters into "Voynichese" — the strange and undeciphered glyphs that are apparently grouped into words in the manuscript. The tables are "weighted" by the corresponding number of cards so that the statistical occurrence of the mock-Voynichese glyphs is the same as seen in the manuscript itself.
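A much-simplified sketch of those mechanics, with hypothetical glyph tables (the real cipher uses six tables weighted to match Voynichese statistics) and a seeded random generator standing in for the die and cards:

```python
import random

rng = random.Random(42)  # the die and the deck, stood in by a seeded RNG

# Hypothetical substitution tables for illustration; Greshko's cipher uses
# six tables weighted to reproduce the manuscript's glyph frequencies.
TABLES = [
    {"g": "qo", "at": "ched", "to": "aiin"},
    {"g": "ot", "at": "shey", "to": "dar"},
    {"g": "che", "at": "dy", "to": "ol"},
]

def tokenize(plaintext):
    """A 'die roll' decides whether each step takes one letter or two."""
    tokens, i = [], 0
    while i < len(plaintext):
        step = 1 if rng.randint(1, 6) <= 3 else 2
        tokens.append(plaintext[i:i + step])
        i += step
    return tokens

def encrypt(plaintext):
    """A 'card draw' picks which table encrypts each token."""
    out = []
    for token in tokenize(plaintext):
        table = TABLES[rng.randrange(len(TABLES))]  # the card draw
        out.append(table.get(token, token))  # tokens not in the table pass through
    return ".".join(out)

print(encrypt("gatto"))
```

Because the die and cards inject randomness, the same plaintext encrypts differently each time, one reason the output only behaves statistically like Voynichese rather than matching it glyph for glyph.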
Greshko's effort is among the leading attempts to explain how the manuscript was made. But it still only approximated Voynichese text, rather than fully replicating it, he said.
The Voynich manuscript is named after the Polish, British and American book collector Wilfrid Voynich, who acquired it in 1912 from a collection compiled by a Jesuit college near Rome. It is now housed at Yale University.
The manuscript now lies at a nexus of attempts to understand lost languages, yet experts are not entirely sure if Voynichese is even real.
One theory, taken seriously, is that the manuscript is a medieval hoax, illustrated with suitably mysterious and salacious drawings, and that the text of Voynichese glyphs is completely meaningless.
The hoax theory has grown stronger in recent years as more attempts to decipher Voynichese — some of which have used machine learning and other computerized artificial intelligence methods — have failed to crack the code, if there is a code.
But theories that Voynichese is based on a real language and can be deciphered are still prominent, and Greshko's Naibbe cipher is one of the closest attempts yet.
The mock-Voynichese output of the Naibbe cipher has several important similarities to true Voynichese, including the statistical frequencies of glyphs, the length of Voynichese "words," and certain rules of the manuscript's mysterious grammar.
Those commonalities suggested that a similar method was used to create the original Voynich manuscript, Greshko said. "The Naibbe cipher is almost certainly not the way that the manuscript was constructed," he said. "But what it does provide is a fully documented way to reliably go between Latin and something that behaves kind of like the Voynich manuscript."
Dice and playing cards were chosen as sources of randomness because it was essential for the cipher to be "hand-doable" with the technology of the time, Greshko said. At one point, he thought of taking tokens from a bag — a bit like a bingo caller — but he realized that playing cards were known in Europe at that time.
And while the Naibbe cipher does not faithfully replicate all features of Voynichese — such as the exact incidence of Voynichese words and where they appear in a line or paragraph — the discrepancies could be analyzed for potential relevance, he said.
"My hope is that this becomes adopted as a computational benchmark," Greshko said. "The points of difference between the cipher and the manuscript may point the way to how the text was actually created."
Former satellite engineer René Zandbergen, a renowned expert on the Voynich manuscript who was not directly involved in Greshko's study, said he appreciated Greshko's efforts to create an encoding method to approximate Voynichese.
But Greshko "also makes it clear that he is not suggesting that this is how the manuscript text was generated," Zandbergen said in an email. "He just demonstrates that such a method can be found, and we may assume that there may be others."
Zandbergen added that he is "essentially undecided" about whether the Voynichese text is meaningful or a hoax.
"Some people argue that 'nobody would do that,' but I think that argument is too simplistic," he said. "A more problematic point is that I find it very hard to imagine how it could have been done."
But is it possible for adults to have extra bones?
Some adults do indeed have more than 206 bones. These extras, known as accessory bones or supernumerary bones, may occur when bones do not fuse together the standard way during development, per a 2024 study in the journal Scientific Reports.
However, there often aren't obvious signs that someone has more than the typical number of bones.
"It is very easy to not know that someone has an accessory bone," Dr. Vandan Patel, an orthopedic surgeon at the Institute for Foot and Ankle Reconstruction at Mercy Medical Center in Baltimore, told Live Science. Most of the time, accessory bones do not cause any symptoms. "Oftentimes, we learn that someone has an accessory bone when they have an X-ray done for something unrelated and they are found incidentally," he explained.
Even when accessory bones are seen on X-rays, they are frequently overlooked or misinterpreted as fracture fragments or age-related changes, said Eren Ogut, an associate professor of anatomy at Istanbul Medeniyet University. All in all, "studies suggest that they occur in roughly 10 to 30% of the general population," but "their true prevalence is likely higher than commonly appreciated," Ogut told Live Science.

Accessory bones are common in the foot and ankle, Patel said. The most common accessory bone is known as the os trigonum, he said. "This is seen in up to 10 to 25% of people," Patel noted. "It is located in the back of the ankle joint. It can cause pain, especially when pointing the toes and the ankle down, such as in a ballet dancer in en pointe position."
Another common accessory bone is called the os tibiale externum, also known as accessory navicular. "It is seen in up to 12% of the population," Patel said. "It is located on the inside of the foot, next to the normal navicular bone. Sometimes the navicular bone appears enlarged. It can cause pain in the arch and is often seen with flat-foot deformity."
Doctors also know about a number of uncommon accessory bones, often through studies of cadavers or medical imaging, Ogut said. One example is the os acetabuli, an accessory bone of the hip that may be associated with hip pain, he noted. This accessory bone is seen in less than 5% of the general population, Ogut noted in a 2025 review article in the Bratislava Medical Journal.
Some people also have accessory ribs. Up to 1% of people have one or even two extra bones in their neck at birth known as cervical ribs, according to the Cleveland Clinic. This rare bone doesn't resemble a typical rib; it can be more vertical or diagonal instead of horizontal like the ribs in the chest. Most of the time, cervical ribs cause no problems, but they can lead to pain or weakness in the arm. In such cases, physical therapy or medicine can help. A surgeon can also remove them, as they don't serve a purpose, the clinic noted.
T cells help train other immune cells to fight off disease. But as the body ages, the activity of these T cells declines, and they become less responsive to threats. Additionally, the thymus gland — where T cells mature — begins to shrink with age. These impacts of aging may explain why vaccines and immune-boosting cancer therapies don't work as well in older adults as they do in younger adults, Nature News reported.
In the new study, published Dec. 17 in the journal Nature, scientists tried to counteract these age-driven changes using messenger RNA (mRNA).
Among other roles, mRNA relays instructions from DNA to cells' protein-building organelles, serving as a template from which new proteins are made. The team behind the new study examined T cells in aging mice, pinpointing three proteins that seemed to decline with age, contributing to the aging process. They then generated mRNA for those three proteins, encased them in tiny bubbles of fat, and injected them into middle-aged mice, which were around 16 months old.
These mRNA-filled bubbles traveled through the bloodstream to the liver, where they accumulated. Most T cells are in the bloodstream, and because the liver filters blood, T cells were likely cycled through the liver, where they were exposed to this waiting supply of mRNA.
Mice treated with the mRNA made more T cells than mice that were left untreated. The treated mice's T cells also responded better to vaccination and to cancer immunotherapy, the experiments suggested.
The benefits of the treatment, which was given to the mice twice a week, disappeared quickly when the scientists paused the injections. That's not necessarily surprising, given that mRNA molecules degrade very quickly in the body, whether they were originally made by cells or produced in a lab.
"The transient nature of mRNA delivery necessitates repeated administrations to sustain therapeutic effects," the study authors wrote in the paper. That said, "the long-term consequences of continuous exposure to these factors, especially in aged individuals should be analysed through extensive long-term safety studies."
In short, more research is needed to see if the same approach could work in humans. You can read more about the study in Nature News.
]]>Three decades later, another new technology has unleashed another wave of exuberance. Investors are pouring billions into any company with "AI" in its name. But there is a crucial difference between these two bubbles, which isn't always recognised. The World Wide Web existed. It was real. Artificial general intelligence does not exist, and no one knows if or when it ever will.
In February, the CEO of OpenAI, Sam Altman, wrote on his blog that the very latest systems have only just started to "point towards" AI in its "general" sense. OpenAI may market its products as "AIs," but they are merely statistical data-crunchers, rather than "intelligences" in the sense that human beings are intelligent.
So why are investors so keen to give money to the people selling AI systems? One reason might be that AI is a mythical technology. I don't mean it is a lie. I mean it evokes a powerful, foundational story of Western culture about human powers of creation.
Perhaps investors are willing to believe AI is just around the corner because it taps into myths that are deeply ingrained in their imaginations.
The most relevant myth for AI is the Ancient Greek myth of Prometheus.
There are many versions of this myth, but the most famous are found in Hesiod's poems Theogony and Works and Days, and in the play Prometheus Bound, traditionally attributed to Aeschylus.
Prometheus was a Titan, a god in the Ancient Greek pantheon. He was also a criminal who stole fire from Hephaestus, the blacksmith god. Hiding the fire in a stalk of fennel, Prometheus came to earth and gave it to humankind. As punishment, he was chained to a mountain, where an eagle visited every day to eat his liver.
Prometheus' gift was not simply the gift of fire; it was the gift of intelligence. In Prometheus Bound, he declares that before his gift humans saw without seeing and heard without hearing. After his gift, humans could write, build houses, read the stars, perform mathematics, domesticate animals, construct ships, invent medicines, interpret dreams and give proper offerings to the gods.
The myth of Prometheus is a creation story with a difference. In the Hebrew Bible, God does not give Adam the power to create life. But Prometheus gives (some of) the gods' creative power to humankind.
Hesiod indicates this aspect of the myth in Theogony. In that poem, Zeus not only punishes Prometheus for the theft of fire; he punishes humankind as well. He orders Hephaestus to fire up his forge and construct the first woman, Pandora, who unleashes evil on the world.
The fire that Hephaestus uses to make Pandora is the same fire that Prometheus has given humankind.

The Greeks proposed the idea that humans are a form of artificial intelligence. Prometheus and Hephaestus use technology to manufacture men and women. As historian Adrienne Mayor reveals in her book Gods and Robots, the ancients often depicted Prometheus as a craftsman, using ordinary tools to create human beings in an ordinary workshop.
If Prometheus gave us the fire of the gods, it would seem to follow that we can use this fire to make our own intelligent beings. Such stories abound in Ancient Greek literature, from the inventor Daedalus, who created statues that came to life, to the witch Medea, who could restore youth and potency with her cunning drugs. Greek inventors also constructed mechanical computers for astronomy and remarkable moving figures powered by gravity, water and air.
Some 2,700 years have passed since Hesiod first wrote down the story of Prometheus. In the ensuing centuries, the myth has been endlessly retold, especially since the publication of Mary Shelley's Frankenstein; or, The Modern Prometheus in 1818.
But the myth is not always told as fiction. Here are two historical examples where the myth of Prometheus seemed to come true.
Gerbert of Aurillac was the Prometheus of the 10th century. He was born in the early 940s CE, went to school at Aurillac Abbey, and became a monk himself. He proceeded to master every known branch of learning. In the year 999, he was elected Pope. He died in 1003 under his pontifical name, Sylvester II.
Rumours about Gerbert spread wildly across Europe. Within a century of his death, his life had already become legend. One of the most famous legends, and the most pertinent in our age of AI hype, is that of Gerbert's "brazen head." The legend was told in the 1120s by the English historian William of Malmesbury, in his well-researched and highly regarded book, Deeds of the English Kings.
Gerbert was deeply learned in astronomy, a science of prediction. Astronomers could use the astrolabe to predict the position of the stars and foresee cosmological events such as eclipses. According to William, Gerbert used his knowledge of astronomy to construct a talking head. After inspecting the movements of the stars and planets, he cast a head in bronze that could answer yes-or-no questions.
First Gerbert asked the head: "Will I become Pope?"
"Yes," answered the head.
Then Gerbert asked: "Will I die before I sing mass in Jerusalem?"
"No," the head replied.
In both cases, the head was correct, though not as Gerbert anticipated. He did become Pope, and he sensibly avoided going on pilgrimage to Jerusalem. One day, however, he sang mass at Santa Croce in Gerusalemme in Rome. Unfortunately for Gerbert, Santa Croce in Gerusalemme was known in those days simply as "Jerusalem."
Gerbert sickened and died. On his deathbed, he asked his attendants to cut up his body and cast away the pieces, so he could go to his true master, Satan. In this way, he was, like Prometheus, punished for his theft of fire.

It is a thrilling story. It is not clear whether William of Malmesbury actually believed it. But he does try to persuade his readers that it is plausible. Why did this great historian with a devotion to the truth insert some fanciful legends about a French pope into his history of England? Good question!
Is it so fanciful to believe that an advanced astronomer might build a general-purpose prediction machine? In those days, astronomy was the most powerful science of prediction. The sober and scholarly William was at least willing to entertain the idea that brilliant advances in astronomy might make it possible for a Pope to build an intelligent chatbot.
Today, that same possibility is credited to machine-learning algorithms, which can predict which ad you will click, which movie you will watch, which word you will type next. We can be forgiven for falling under the same spell.
The Prometheus of the 18th century was Jacques de Vaucanson, at least according to Voltaire:
Bold Vaucanson, rival of Prometheus,
Seems, imitating the springs of nature,
To steal the fire of heaven to animate the body.

Vaucanson was a great machinist, famous for his automata. These were clockwork devices that realistically simulated human or animal anatomy. Philosophers of the time believed that the body was a machine — so why couldn't a machinist build one?
Sometimes Vaucanson's automata were scientifically significant. He constructed a piper, for example, that had lips and lungs and fingers, and blew the pipe in much the same way a human would. Historian Jessica Riskin explains in her book The Restless Clock that Vaucanson had to make significant discoveries in acoustics in order to make his piper play in tune.
Sometimes his automata were less scientific. His digesting duck was hugely famous, but turned out to be fraudulent. It appeared to eat and digest food, but its poos were in fact prefabricated pellets hidden inside the mechanism.
Vaucanson spent decades working on what he called a "moving anatomy." In 1741, he presented a plan to the Lyons Academy to build an "imitation of all animal operations." Twenty years later, he was at it again. He secured support from King Louis XV to build a simulation of the circulatory system. He claimed he could build a complete, living artificial body.

There is no evidence that Vaucanson ever completed a whole body. In the end, he couldn't live up to the hype. But many of his contemporaries believed he could do it. They wanted to believe in his magical mechanisms. They wished he would seize the fire of life.
If Vaucanson could manufacture a new human body, couldn't he also repair an existing one? This is the promise of some AI companies today. According to Dario Amodei, CEO of Anthropic, AI will soon allow people "to live as long as they want." Immortality seems like an attractive investment.
Sylvester II and Vaucanson were great technologists, but neither was a Prometheus. They stole no fire from the gods. Will the aspiring Prometheans of Silicon Valley succeed where their predecessors have failed? If only we had Sylvester II's brazen head, we could ask it.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.
]]>We’ve taken it to a nature reserve, photographed birds from our window and zoomed in on the moon to assess its performance in all-light conditions for static and moving subjects, emulating real-world shooting conditions to test its mettle.

There’s no beating around the bush here — this lens is big, and it’s heavy. Weighing about 4.5 lbs (just over 2 kilograms), this thing makes itself known both in your camera bag and out in the field. Needless to say, it got quite heavy after a while, even when resting in a hide, but it feels solid and well-built and is dust- and weather-resistant, although we never got caught out in the rain to fully test this.
We found it frustrating that it didn’t have a zoom lock, as it had an annoying amount of lens creep when we held the lens vertically, which meant we couldn’t carry the camera around our neck (as if its weight didn’t already see to that). We found the zoom ring a little on the stiff side, and, to be picky, the lens actually looked quite ugly when it was zoomed all the way in on a subject.


Focal length: 200-800 mm
Maximum aperture: f/6.3-9
Weight: 4.5 pounds (2.05 kg)
Image stabilization: 5.5 stops
Filter thread: 95 mm
Dimensions (in): ⌀4.03 x 12.37
Dimensions (mm): ⌀102.3 x 314.1
In addition, it has a control ring, AF/MF switch, image stabilizer switch and two custom buttons, although we found these buttons hard to press, as they aren’t within easy reach while you’re supporting the lens’s hefty weight. When we took a hand away to try to press either button, it threw the entire weight distribution off.
It has a nice big lens hood, although we’d have liked this to have a door in order to utilize a polarizer, particularly when we were photographing waterfowl.





For wildlife photography in generally favorable conditions, this lens performed very well overall. Its obvious downfall is the limited maximum aperture — f/6.3 performs just fine during the daytime, but as the light levels fell at dusk, or even when we went into a heavily wooded area, we had to push the ISO up higher than we’d have wanted.
Luckily, we were shooting with the Canon EOS R6 II, which has excellent noise handling, so we were able to save a lot of our images. But if you often shoot at dawn or dusk, we’d recommend investing in a telephoto lens with a wider maximum aperture so you won’t need to rely on denoise software.
The autofocus was also good, but at higher focal lengths, it’s at the mercy of how steady your hand is. It generally performed very well, but it suffered when we were shooting in harsh conditions, or if there were distractions or foliage in front of our subject.
Overall, though, its performance is very good for the price. Images are sharp and it captures color very nicely — certainly more than well enough for wildlife or moon photography.





As much as it suffers from a fairly wide maximum aperture, the 200-800mm focal length offers versatility that many other lenses don’t. There’s a Sony super-telephoto with a 400-800mm range, but you’d be stuck if a subject came too close to you — with the Canon, you’d be able to zoom out easily. We never found ourselves wishing we had multiple lenses, as the 200-800mm can cover a lot of subjects, near or far.
Plus, although it doesn’t have the close focusing capabilities of a true macro lens, it can focus as close as 2.6 feet (0.8 meters) at 200mm, which is great for photographing butterflies and insects at a fairly close range.
The 5.5 stops of image stabilization were a lifesaver, and pretty crucial for such a long focal length. Even just for compositional purposes, we still struggled to follow subjects on occasion at the full 800mm, and if there had been no image stabilization, we’d have had no chance.
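For context on what those 5.5 stops buy: each stop of stabilization doubles the usable handheld shutter time, and the common reciprocal rule of thumb puts the unstabilized handheld limit near 1/focal-length seconds on full frame. A rough sketch of that arithmetic (rule-of-thumb numbers, not Canon's CIPA test methodology):

```python
# How much slower you can shoot at 800mm with 5.5 stops of stabilization,
# using the reciprocal rule of thumb (1/focal-length seconds handheld).
focal_length_mm = 800
stops = 5.5

unstabilized = 1 / focal_length_mm       # rule-of-thumb handheld limit, seconds
stabilized = unstabilized * 2 ** stops   # each stop doubles the usable shutter time

print(f"handheld limit: 1/{focal_length_mm} s -> ~1/{round(1 / stabilized)} s with IS")
```

In other words, a shutter speed roughly 45 times longer becomes hand-holdable, which is why losing stabilization at the long end would be so punishing.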
Overall, this lens provides excellent value for money. You get a lot of lens for the price, and although it’s not a low-light champion, it still produces beautifully sharp, contrast-y images, while the versatility of the focal length is hard to beat.
Considering the very best wildlife lenses are telephoto primes costing upwards of $10,000, it’s one of the best you can buy for most wildlife photographers — that is, for anyone who’s not a serious pro.
If you don't need 800mm
Another great wildlife lens with a little less reach, but a little more aperture. This lens would be better in low light if you don't need a huge zoom.
If you'd prefer a prime lens
This 800mm prime lens is perfect for bird photography or capturing distant animals on a budget — but the f/11 aperture means good lighting is essential.
If you're a professional
If you're a pro photographer and have serious cash to spend, this 400mm prime lens with an f/2.8 aperture will see you through any light conditions.
Scientists are developing a real-life tractor beam, dubbed an electrostatic tractor. This tractor beam wouldn't suck in helpless starship pilots, however. Instead, it would use electrostatic attraction to nudge hazardous space junk safely out of Earth orbit.
The stakes are high: With the commercial space industry booming, the number of satellites in Earth's orbit is forecast to rise sharply. This bonanza of new satellites will eventually wear out and turn the space around Earth into a giant junkyard of debris that could smash into working spacecraft, plummet to Earth, pollute our atmosphere with metals and obscure our view of the cosmos. And, if left unchecked, the growing space junk problem could hobble the booming space exploration industry, experts warn.
The electrostatic tractor beam could potentially alleviate that problem by safely moving dead satellites far out of Earth orbit, where they would drift harmlessly for eternity.
While the tractor beam wouldn't completely solve the space junk problem, the concept has several advantages over other proposed space debris removal methods, which could make it a valuable tool for tackling the issue, experts told Live Science.
Related: 11 sci-fi concepts that are possible (in theory)
A prototype could cost millions, and an operational, full-scale version even more. But if the financial hurdles can be overcome, the tractor beam could be operational within a decade, its builders say.
"The science is pretty much there, but the funding is not," project researcher Kaylee Champion, a doctoral student in the Department of Aerospace Engineering Sciences at the University of Colorado Boulder (CU Boulder), told Live Science.

The tractor beams depicted in "Star Wars" and "Star Trek" suck up spacecraft via artificial gravity or an ambiguous "energy field." Such technology is likely beyond anything humans will ever achieve. But the concept inspired Hanspeter Schaub, an aerospace engineering professor at CU Boulder, to conceptualize a more realistic version.
Schaub first got the idea after the first major satellite collision in 2009, when an active communications satellite, Iridium 33, smashed into a defunct Russian military spacecraft, Kosmos 2251, scattering more than 1,800 pieces of debris into Earth's orbit.
Related: How many satellites orbit Earth?

In the wake of this disaster, Schaub wanted to be able to prevent this from happening again. To do this, he realized you could pull spacecraft out of harm's way by using the attraction between positively and negatively charged objects to make them "stick" together.
Over the next decade, Schaub and colleagues refined the concept. Now, they hope it can someday be used to move dead satellites out of geostationary orbit (GEO) — an orbit around Earth's equator where an object's speed matches the planet's rotation, making it seem like the object is fixed in place above a certain point on Earth. This would then free up space for other objects in GEO, which is considered "prime real estate" for satellites, Schaub said.
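As a quick check on that definition of GEO: the geostationary radius follows from Kepler's third law, r³ = GM·T²/4π², where T is one sidereal day. A short sketch using standard constants (these figures are textbook values, not numbers from the article):

```python
import math

# Kepler's third law gives the geostationary orbital radius: r^3 = GM * T^2 / (4 pi^2)
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.0905        # sidereal day in seconds (one Earth rotation)

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)   # radius from Earth's center
altitude_km = (r - 6_378_137) / 1000             # subtract equatorial radius

print(f"GEO altitude: {altitude_km:,.0f} km")    # ~35,786 km above the equator
```

Any satellite at that altitude over the equator circles Earth exactly once per rotation, which is what makes it appear fixed in the sky.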

The electrostatic tractor would use a servicer spacecraft equipped with an electron gun that would fire negatively charged electrons at a dead target satellite, Champion told Live Science. The electrons would give the target a negative charge while leaving the servicer with a positive charge. The electrostatic attraction between the two would keep them locked together despite being separated by 65 to 100 feet (20 to 30 meters) of empty space, she said.
Once the servicer and target are "stuck together," the servicer would be able to pull the target out of orbit without touching it. Ideally, the defunct satellite would be pulled into a "graveyard orbit" more distant from Earth, where it could safely drift forever, Champion said.
Related: 15 of the weirdest things we have launched into space
The electrostatic attraction between the two spacecraft would be extremely weak, due to limitations in electron gun technology and the distance by which the two would need to be separated to prevent collisions, project researcher Julian Hammerl, a doctoral student at CU Boulder, told Live Science. So the servicer would have to move very slowly, and it could take more than a month to fully move a single satellite out of GEO, he added.
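To get a feel for just how weak that attraction is, Coulomb's law gives a back-of-envelope estimate. The spacecraft size, potential and separation below are illustrative assumptions, not the CU Boulder team's published figures:

```python
import math

# Back-of-envelope Coulomb attraction between servicer and target.
# All numbers here are illustrative assumptions, not the team's figures.
K = 8.9875e9        # Coulomb constant, N*m^2/C^2
EPS0 = 8.854e-12    # vacuum permittivity, F/m

def sphere_charge(radius_m, potential_v):
    """Charge on an isolated conducting sphere held at a given potential."""
    return 4 * math.pi * EPS0 * radius_m * potential_v

q = sphere_charge(2.0, 30_000)   # ~2 m radius craft charged to 30 kV (assumed)
r = 25.0                         # separation, m (middle of the 20-30 m range)
force = K * q * q / r ** 2       # treat both craft as point charges

print(f"charge ~{q * 1e6:.1f} microcoulombs, force ~{force * 1e3:.2f} mN")
```

A force well under a millinewton tugging on a satellite weighing tons is why the maneuver unfolds over weeks rather than seconds.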
That's a far cry from movie tractor beams, which are inescapable and rapidly reel in their prey. This is the "main difference between sci-fi and reality," Hammerl said.

The electrostatic tractor would have one big advantage over other proposed space junk removal methods, such as harpoons, giant nets and physical docking systems: It would be completely touchless.
"You have these large, dead spacecraft about the size of a school bus rotating really fast," Hammerl said. "If you shoot a harpoon, use a big net or try to dock with them, then the physical contact can damage the spacecraft and then you are only making the [space junk] problem worse."
Scientists have proposed other touchless methods, such as using powerful magnets, but enormous magnets are both expensive to produce and would likely interfere with a servicer's controls, Champion said.
Related: How do tiny pieces of space junk cause incredible damage?
The main limitation of the electrostatic tractor is how slowly it would work. More than 550 satellites currently orbit Earth in GEO, but that number is expected to rise sharply in the coming decades.
If satellites were moved one at a time, a single electrostatic tractor couldn't keep pace with the number of satellites winking out of operation. The tractor would also be too slow to be practical for clearing smaller pieces of space junk, so it couldn't keep GEO completely free of debris.
Cost is the other big obstacle. The team has not yet done a full cost analysis for the electrostatic tractor, Schaub said, but it would likely cost tens of millions of dollars. However, once the servicer was in space, it would be relatively cost-effective to operate, he added.

The researchers are currently working on a series of experiments in their Electrostatic Charging Laboratory for Interactions between Plasma and Spacecraft (ECLIPS) machine at CU Boulder. The bathtub-sized, metallic vacuum chamber, which is equipped with an electron gun, allows the team to "do unique experiments that almost no one else can currently do" in order to simulate the effects of an electrostatic tractor on a smaller scale, Hammerl said.
Once the team is ready, the final and most challenging hurdle will be to secure funding for the first mission, a process they have not yet started.
Most of the mission cost would come from building and launching the servicer. However, the researchers would ideally like to launch two satellites for the first tests, a servicer and a target that they can maneuver, which would give them more control over their experiments but also double the cost.
Related: 10 stunning shots of Earth from space in 2022
If they can somehow wrangle that funding, a prototype tractor beam could be operational in around 10 years, the team previously estimated.

While tractor beams may sound like a pipe dream, experts are optimistic about the technology.
"Their technology is still in the infancy stage," John Crassidis, an aerospace scientist at the University at Buffalo in New York, who is not involved in the research, told Live Science in an email. "But I am fairly confident it will work."
Removing space junk without touching it would also be much safer than any current alternative method, Crassidis added.
The electrostatic tractor "should be able to produce the forces necessary to move a defunct satellite" and "certainly has a high potential to work in practice," Carolin Frueh, an associate professor of aeronautics and astronautics at Purdue University in Indiana, told Live Science in an email. "But there are still several engineering challenges to be solved along the way to make it real-world-ready."
Scientists should continue to research other possible solutions, Crassidis said. Even if the CU Boulder team doesn't create a "final product" to remove nonfunctional satellites, their research will provide a stepping stone for other scientists, he added.
If they are successful, it wouldn't be the first time scientists turned fiction into fact.
"What is today's science fiction could be tomorrow's reality," Crassidis said.
]]>
This is the vision for the Autonomous Closed-Loop Intervention System (ACIS), a device being developed by scientists at NTT Research, an arm of global technology company NTT. The device has been tested in animal experiments but not in human patients yet.
The researchers' eventual goal is to allow the heart to rest and minimize its oxygen use in that critical recovery window after a patient experiences a cardiac emergency. The jobs that would be handled by ACIS are usually done by medical providers — but the idea is that the device could standardize and optimize the process to deliver better outcomes while relieving strain on doctors' already-limited resources.
"We think that this system will outperform the standard of care," said Dr. Joe Alexander, director of NTT Research's Medical and Health Informatics (MEI) lab.
ACIS stemmed from a larger effort spearheaded by the MEI Lab known as the Bio Digital Twin program. Its aim is to construct advanced virtual models of organ systems that can be personalized with an individual patient's data, providing a detailed and dynamic representation of their medical status and a testable model for developing treatment plans.
Live Science spoke with Alexander about Digital Twins, ACIS and his vision for how they might transform health care.
Nicoletta Lanese: When we're talking about a Bio Digital Twin, is it fair to say it's a virtual copy of the patient?
Dr. Joe Alexander: Probably the layperson would think of a Bio Digital Twin as a copy of the person. But actually, it's just a system of equations, modeling and simulation to represent a person to the extent that is relevant for the disease. It's a very specific application, so there's no single Bio Digital Twin representing the [whole] person.
In our case, although we set out to build a family of Bio Digital Twins to represent different organ systems for different types of important diseases, we're starting with the cardiovascular system. So when I talk about a Cardiovascular Bio Digital Twin, I'm not talking about even a copy of the heart; I'm talking about a mathematical representation of all of the systems necessary for looking at the cardiovascular system in a particular patient.
In the case of ACIS, we're looking at acute heart failure and acute myocardial infarction [colloquially known as a heart attack].

NL: Could you talk about what kind of data goes into the model?
JA: This Cardiovascular Bio Digital Twin is representing pressures and flows throughout the cardiovascular system, including pressures and flows generated by all four chambers of the heart. … We are able to represent the cardiovascular system dynamics in pressures, flows and volumes.
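The "pressures and flows" Alexander describes are typically captured with lumped-parameter models. As a flavor of what such a system of equations looks like, here is the textbook two-element Windkessel model of arterial pressure, with assumed parameter values; it is a minimal stand-in, not NTT Research's actual Bio Digital Twin:

```python
import math

# A minimal "system of equations" for pressures and flows: the classic
# two-element Windkessel model, dP/dt = (Q_in - P/R) / C.
# Illustrative only -- not NTT Research's actual Bio Digital Twin.
R = 1.0    # peripheral resistance, mmHg*s/mL (assumed)
C = 1.5    # arterial compliance, mL/mmHg (assumed)
HR = 75    # heart rate, beats per minute

def inflow(t):
    """Pulsatile flow from the heart: a half-sine ejection wave each beat."""
    period = 60 / HR
    phase = t % period
    systole = 0.35 * period                              # ejection lasts ~35% of the beat
    if phase < systole:
        return 400 * math.sin(math.pi * phase / systole)  # peak flow 400 mL/s
    return 0.0

# Forward-Euler integration of arterial pressure P.
P, dt = 80.0, 0.001
pressures = []
for step in range(int(10 / dt)):   # simulate 10 seconds
    t = step * dt
    P += dt * (inflow(t) - P / R) / C
    if t >= 5:                     # skip the start-up transient
        pressures.append(P)

print(f"systolic ~{max(pressures):.0f} mmHg, diastolic ~{min(pressures):.0f} mmHg")
```

Fitting parameters like R and C to an individual's measured pressures and flows is, in miniature, the kind of personalization the Digital Twin road map describes.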
NL: And how do you make that actionable for an individual patient?
JA: We're in the early stages of it, but we have a road map for how to do it. Basically, we first go after representing the "normal" cardiovascular system for patients. So, if we can get data around "normal," then that's very good. [Editor's note: The MEI Lab is working with partners such as the National Cerebral and Cardiovascular Center in Japan to get access to this kind of data.]
But probably what's most important is finding populations that are relevant to the particular patient — so, in this case, patients with cardiovascular disease or patients with heart failure. So we go after that population-level data; let's say for heart failure. Then, from that data, we can estimate parameters for our cardiovascular model that represent the general population of patients with heart failure.
Within that population, as you know, there's a lot of variability. So are there other characteristics specific to our patient that we can use? Maybe results from an echocardiogram; maybe age; maybe comorbidities [other medical conditions]; sex, male or female; or environment. And if there is genetic information available, then we can find a subpopulation that's even more relevant to the patient.
Now, with ACIS, we [would] actually hook up a patient to the "first guess" of our Cardiovascular Bio Digital Twin for what would match that patient based on population-level data. Since it's a feedback control system, the feedback will automatically adjust the parameter values to deliver the necessary drugs or device therapies that that particular patient needs for some prespecified cardiac output. In that way, we can further fine-tune the Digital Twin for that patient.
NL: Can you describe how ACIS and its feedback loop work?
JA: The idea is that it's a "self-driving" therapeutic, just like a self-driving car. But in this case, "self-driving" is delivering the appropriate drugs or, in severe cases, medical-device therapies that a patient may need.
We have a system where we specify — just type in the keyboard — the desired cardiac output, heart rate, left atrial pressure, arterial pressure that we want the patient to achieve. Then, syringes that are filled with the appropriate drugs to create those changes are driven by our model, or "best guess" for that particular patient. This is all after a patient has had the primary lesion [like a blood vessel blockage] treated in the cath lab.
Let's say they had a vessel that was occluded; it's already been opened up or a stent has been placed, and they go to the ICU [intensive care unit] or CCU [coronary care unit] in order to recover. Recovery means that the heart needs an opportunity to rest. That means letting the heart work as little as possible to maintain the desired cardiac output.
We have a certain regimen of drugs that are given. Catecholamines improve the ability of the heart to contract. Nitrates reduce afterload of the heart so it doesn't have to work against such a high load when it tries to eject into the arterial system. Diuretics decrease the circulating blood volume and remove blood from the lungs, which has built up due to the acute failure.
These drugs are typically given by a physician; they'll give one drug and look at the response, give another drug, the response, and manage that patient over several days. When our system achieves proper function — and we're almost there, I think — all those drugs can be given at once if we know how the system will respond. That saves us a lot of time in treating the patient.
The drugs are delivered by these autonomously controlled syringes; then the patient responds to them, and that response is fed back in this system. Those values are compared to the ones that we typed in the keyboard, and if there's a difference, then feedback systems work to reduce that difference. It also gives information to our Digital Twin for that patient, so that in the future, we have better representations of those resistors and capacitors in the model.
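The feedback principle described here (compare measured values to the typed-in targets, then adjust delivery to shrink the difference) can be sketched as a toy closed-loop controller. Everything below, including the function names, the gain, and the simple first-order "patient" response, is a hypothetical illustration, not the actual ACIS algorithm.

```python
# Toy closed-loop controller in the spirit of the ACIS description.
# All names, gains, and the simple "patient" model are hypothetical.

def simulate(target_output, steps=200, gain=0.5):
    """Drive a toy patient model toward a target cardiac output (L/min)."""
    cardiac_output = 3.0  # hypothetical starting value
    drug_rate = 0.0       # hypothetical infusion rate (arbitrary units)
    for _ in range(steps):
        error = target_output - cardiac_output  # measured vs. typed-in target
        drug_rate += gain * error               # feedback shrinks the difference
        # Toy response: output relaxes toward a level set by the drug rate.
        cardiac_output += 0.1 * (3.0 + 0.5 * drug_rate - cardiac_output)
    return cardiac_output

print(simulate(5.0))  # settles close to the 5.0 L/min target
```

The accumulating `drug_rate` term is what makes the loop self-correcting: as long as a difference remains, the delivered dose keeps adjusting, mirroring the "reduce that difference" behavior described in the interview.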

NL: What stage of development has ACIS reached at this point?
JA: So, in animal experiments in dogs last year, for the first time, we experimentally induced acute heart failure, and we were able to let this autonomous system correct the cardiac output and arterial pressure autonomously, while minimizing myocardial [heart muscle] oxygen consumption.
Since that first successful experiment about a year ago, we've had several other successful [animal] experiments, all the while improving our feedback system to be more complex, making it so that it can operate based on intermittent data, so you don't have to be continuously sampling. You can do it episodically.
We have several more years of work in optimizing this system, we think, in animal experimentation — probably about three years more. And then we'll be ready for first-in-human studies where ACIS will be used but with a clinician in the loop [for the initial human tests]. What ACIS would do is tell the physician what doses of these various drugs to deliver, and the physician would then make a decision whether to do it or not, as a safety measure.
Now, what I've been describing so far has mostly been about drugs, but the same algorithms work for medical devices, such as left ventricular assist devices [LVAD, a type of mechanical pump] or extracorporeal membrane oxygenation devices [ECMO, which circulates the blood to let the heart and lungs rest]. This is all within the scope of what we expect to achieve in experimental animals within the next three years before going to first-in-human studies.
NL: What are the next steps toward getting ACIS approved? What might the trials look like?
JA: It would be kind of like [testing] an autonomous or self-driving vehicle — level 1 through 4 degrees, or stages, of autonomy.
In other words, allowing the system to have increasing responsibility and watching the performance until settling into acceptance of an autonomous system where then, still, probably a specialist would monitor it — like someone sitting in the seat of a self-driving car, ready to take over if things go wrong. I see that kind of progression, similar to the self-driving vehicle.
NL: And in the long run, would ACIS always have some kind of clinician supervision?
JA: I still hold to the concept of "autonomous," but I suspect that there will be a cardiologist somewhere roaming around, monitoring, perhaps, a number of patients at once.
I'm very committed to the idea that the device that we conceive of can actually outperform the cardiologist. And I know that we'll rub some cardiologists the wrong way. But we expect to demonstrate that point, or strongly suggest that that's true, by doing experiments in animals where we compare the ACIS system to clinically trained cardiologists. We expect reduced infarct size [degree of heart tissue death] from ACIS compared to the standard of care from cardiologists.
NL: Assuming this device gets approved in the future, where do you see it having the most benefit?
JA: There's the so-called Quintuple Aim of Health Care, which says to improve the patient experience, improve the physician experience, improve population health, reduce the cost of care, and improve health equity. These aims, I think, are all addressed by ACIS.
The patient would have more attention and minute-to-minute care — you wouldn't have a resident trying to juggle many patients at once. You could have a less-specialized clinical caretaker who is watching the behavior of the device, and so that would improve not only the patient experience and quality of the patient's care but also the health care provider's experience. They wouldn't have to be overworked to such an extent.
We think that this system will outperform the standard of care because [on paper] you more rapidly converge on the minimization of myocardial oxygen consumption and have better recovery during the hospital stay. So the patients have fewer readmissions and complications after being released. There's always some injury to the heart [with these cardiac events], and there may be some infarction of the heart. So we think that this level of care could reduce infarct size during treatment, so you preserve more of the heart.
NL: And when you eventually hand off ACIS for clinical testing, what would the next project be?
JA: For us, the natural progression within the next 10 years, probably within the next five years, would be chronic heart failure. In chronic heart failure, you have to deal with more complexity, such as [tissue] remodeling, where the ventricles get thicker or get dilated. That kind of remodeling changes the mechanics.
You also have to deal with data from patients who are not in the hospital. We plan on building registries of patients [with Digital Twins] who would have been acutely ill to have access to that data for treating them outside. But then we have to also rely on things like wearable technologies, and we've been working on that as well. We have collaborations with folks at the Technical University of Munich who are developing special biosensors and biomaterials and implantable sensors and so forth that could help provide the data that would be important to doing predictive health maintenance in patients with chronic heart failure.
And in chronic heart failure, we have to deal with comorbidities and complications like kidney failure … and anemia. The combination of fluid overload and anemia all due to renal failure really makes the heart suffer from a lack of oxygen and causes slow deterioration.
I'm sure that complexity alone will keep me busy for the rest of my life. We have a lot of work to do with chronic heart failure; that would be next for sure.
Editor's note: This interview has been lightly edited for length and clarity.
It sounds like a simple enough question to answer — list the openings and add them up. But it's not quite that easy once you start considering questions like: "What exactly is a hole?" "Does any opening count?" And "why don't mathematicians know the difference between a straw and a doughnut?"
Before we start counting, we need to agree on what constitutes a "hole." Katie Steckles, a lecturer in mathematics at Manchester Metropolitan University in the U.K. and a freelance mathematics communicator, told Live Science that mathematicians "use the term 'hole' to mean one like the hole in a donut: one that goes all the way through a shape and out the other side."
But if you dig a "hole" at the beach, your aim is probably not to dig right through to the other side of the world. Many people would think of a hole as a depression in a solid object. But "this isn't a true hole, as it has an end," Steckles said.
Similarly, mathematical communicator James Arthur, who is based in the U.K., told Live Science that "in topology, a 'hole' is a through hole, that is you can put your finger through the object."
When digging a tunnel under the sea, like the Channel Tunnel that connects the U.K. and France, engineers started off by digging two openings. But as soon as those two digging projects joined up, the Channel Tunnel became a fundamentally different object (what Arthur and engineers would call a "through hole") — like a straw, or a tube with an opening at either end.
And if you ask people how many holes a straw has, you will get a range of different answers: one, two and even zero. This is a result of our colloquial understanding of what constitutes a hole.
To find a consistent answer, we can turn to mathematics. And the problem of classifying how many holes there are in an object falls squarely within the realm of topology.

To a topologist, the actual shapes of objects are not important. Instead, "topology is more concerned with the fundamental properties of shapes and how things connect together in space," Steckles said.
In topology, objects can be grouped together by the number of holes they possess. For example, a topologist sees no difference between a golf ball, a baseball or even a Frisbee. If they were all made of plasticine, or putty, they could theoretically be squashed, stretched or otherwise manipulated to look like each other without making or closing any holes in the plasticine or sticking different parts together, Steckles argued.
However, to a topologist, these objects are fundamentally different to a bagel, a doughnut or a basketball hoop, which each have a hole through the middle of them. A figure of eight with two holes and a pretzel with three are different topological objects again.

A useful way to get into the mathematicians' way of thinking about the straw problem is to "imagine our straw is made of play dough," Arthur said. "Let's take this straw and slowly squish the top down and down and down towards the bottom, making sure the hole in the middle stays open. We will squish it until we are in a shape that looks like a doughnut." Mathematicians, Arthur said, would say that "the straw is homeomorphic to a doughnut."
The long, thin aspect ratio of the straw, and the fact that the two openings are relatively far apart, are perhaps what gives rise to the suggestion of two holes. But to a topologist, bagels, basketball hoops and doughnuts are all topologically equivalent to a straw with a single hole. "The hole in a straw goes all the way through it, and the opening at the other end is just the back of that same hole," Steckles said.
Armed with the topologists' definition of a hole, we can tackle the original question: How many holes does the human body have? Let's first try to list all the openings we have. The obvious ones are probably our mouths, our urethras (the ones we pee out of) and our anuses, as well as the openings in our nostrils and our ears. For some of us, there are also milk ducts in nipples and vaginas.
There are also four less-obvious openings that we all have in the corners of eyelids closest to our nose — the four lacrimal puncta, which drain tears from our eyes into our nasal cavities. At an even smaller scale there are the pores that enable sweat to escape our bodies and sebum to lubricate our skin. In total there are potentially millions of these openings in our bodies, but do they all count as holes?

To make the question interesting, think about whether we could pass a very thin string into one hole and out of another. If we set the size of this string to be about 60 microns (60 millionths of a meter) then it's possible that the string could enter an opening as small as a pore. However — and this is key — it wouldn't be able to leave. It wouldn't be able to come out the other end. It would be blocked by the cells at the bottom of the pore — too thick to pass through into the vasculature that supplies the pore.
"They're not actually holes in the topological sense, as they don't go all the way through," Steckles said. "They're just blind pits."
By this definition we can rule out all the pores, milk ducts and urethras. We couldn't thread a string in one of these openings and out of another. Even the ear canals have to go, as they are separated from the rest of the sinuses by the eardrums.
"We have our mouth, our anus, and then our nostrils. They are four of the … openings that form a hole," Arthur said. "But we actually have eight. The remaining four come from the tear ducts, we each have two in each eye, an upper and a lower."
But this doesn't mean eight holes. "When the holes that pass through a shape connect together inside the shape, it makes it harder to count how many there are," Steckles pointed out.
A pair of underwear, for example, has three openings (one for the waist and one for each of the two legs), but it's not immediately clear how many holes a topologist would say it has. "A useful trick is to think about flattening it out," Steckles said. "If we were to stretch the waistband of the pants out onto a big hula hoop, we'd see the two trouser legs sticking down, each being one hole."

So despite having three openings, the pair of underwear has only two holes. "So when the holes connect together in the middle, there's one fewer hole than there are openings," Steckles argued. Correspondingly, topology tells us that, despite eight interconnected openings, the human body has seven different holes.
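Steckles' counting rule can be written as a one-line computation: when a shape's openings all connect together inside it, the number of holes is one fewer than the number of openings. A minimal sketch (the function name is ours, not from the article):

```python
# Steckles' rule: n interconnected openings yield n - 1 topological holes.

def holes_from_openings(openings: int) -> int:
    """Holes through a shape whose openings all connect internally."""
    return openings - 1

print(holes_from_openings(3))  # underwear: waist plus two legs -> 2 holes
print(holes_from_openings(8))  # human body: 8 openings -> 7 holes
```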
But there might be one more. Although often counted as a blind hole, the vagina leads to the uterus, which then leads to one of two fallopian tubes. These tubes are open at the far end and lead to the peritoneal cavity near the ovary. It is the job of the finger-like projections of the funnel-shaped infundibulum at the end of the fallopian tube to catch the egg when it is released from the nearest ovary. However, it has been demonstrated that eggs released from one ovary can be captured by the fallopian tube on the other side, so that passage between the two open ends of the fallopian tubes is possible. Our tiny string could therefore be threaded all the way through the female reproductive tract and back out, counting as one more hole.
So the mathematician's answer is that humans have either seven or eight holes.
In the end, the question is not just about counting openings but about understanding connections. Topologically speaking, our bodies are less like Swiss cheese and more like a carefully constructed onesie for an octopus.
Technically, the sun is a G-type main-sequence star — specifically, a G2V star. The "V" indicates that it is a dwarf, Tony Wong, a professor of astronomy at the University of Illinois Urbana-Champaign, told Live Science.
Dwarf stars got their name when Danish astronomer Ejnar Hertzsprung noticed that the reddest stars he observed were either much brighter or much fainter than the sun. He called the brighter ones "giants" and the dimmer ones "dwarfs," according to Michael Richmond, a professor of physics and astronomy at the Rochester Institute of Technology in New York. The sun is currently more similar in size and brightness to smaller, dimmer stars called red dwarfs than to giant stars, so the sun and its brethren also became classified as dwarf stars.
"G" is astronomer code for yellow — that is, stars in a temperature range of around 9,260 to 10,340 degrees Fahrenheit (5,125 to 5,725 degrees Celsius), Lucas Guliano, an astronomer at the Harvard-Smithsonian Center for Astrophysics, told Live Science.

Wong noted that G2 means it's somewhat hotter than a typical G-type star. "They range from G0 to G9 in order of decreasing temperature," he said. At its surface, the sun is about 9,980 F (5,525 C), Guliano added.
Calling the sun yellow is a bit of a misnomer, however, as the sun's visible output is greatest in the green wavelengths, Guliano explained. But the sun emits all visible colors, so "the actual color of sunlight is white," Wong said.
(On Earth, the sun appears yellow because of the way molecules in the atmosphere can scatter the different colors that make up the sun's white light, according to Stanford University's Solar Center. This is the same reason the sky appears blue.)
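The claim that the sun's output peaks in the green can be checked with Wien's displacement law, which gives a blackbody's peak emission wavelength as b / T, with b ≈ 2.898 × 10⁻³ m·K. A quick sketch, using the standard figure of roughly 5,800 K for the sun's surface temperature (an assumption, not a number from this article):

```python
# Wien's displacement law: peak emission wavelength of a blackbody.

WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-kelvins

def peak_wavelength_nm(temperature_k: float) -> float:
    """Wavelength (nanometers) at which blackbody emission peaks."""
    return WIEN_B / temperature_k * 1e9

print(round(peak_wavelength_nm(5800)))  # ~500 nm, in the green
```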
G-type stars also range from G0 to G9 in order of decreasing size, Guliano said. Wong explained that class G stars "range in size from somewhere around 90% the mass of the sun up to around 110% the mass of the sun."
The sun is what astronomers call a main-sequence star, a class that includes most stars. Nuclear reactions within these stars fuse hydrogen to become helium, unleashing extraordinary amounts of energy. Among the main-sequence stars, the color is determined by the star's mass.
"The sun is yellow, but less-massive main sequence stars are orange or red, and more massive main sequence stars are blue," Carles Badenes, a professor of physics and astronomy at the University of Pittsburgh, told Live Science.
The sun is slowly changing as it ages. "It has gotten about 10% larger since it started on the main sequence, and it will get much larger," Wong said. Even as it grows, however, the sun will still be considered a dwarf until its last stage of life.
In about 5 billion years, the sun will run out of hydrogen fuel and begin to swell to become a red giant, leaving its dwarf days behind. "It will engulf the orbit of Venus, and maybe Earth as well," Badenes said, "and its surface temperature will get colder, making it red in color."
Milestone: Dian Fossey found murdered
Date: Dec. 27, 1985
Where: Karisoke Research Center in Rwanda
Who: The murderer remains unknown
In late December 1985, a worker opened the door to a remote cabin in the Virunga Mountains of Rwanda and encountered a horrific scene: Gorilla researcher Dian Fossey, whose aggressive approach to conservation had pitted her against the local community, had been hacked to death with a machete, and her cabin had been ransacked.
Fossey had been working with an endangered gorilla population in Rwanda's Volcanoes National Park since the late 1960s. Along with Jane Goodall and Biruté Galdikas, she was one of the three "trimates" chosen by Louis Leakey to study primates in their natural habitat.
Fossey had no formal training in ethology, the science of animal behavior, when she set out for Africa. She began her field work in Kabara, Congo, living in a tiny tent and venturing out to study mountain gorillas (Gorilla beringei beringei) there. After civil war broke out in 1967, she escaped to the Rwandan portion of the mountains and set up a new research project near Mount Karisimbi in Rwanda.
Fossey was inspired by the work of George Schaller, a biologist who, in 1959, had also studied the gorillas of the Virunga Mountains.
"I knew that animals try to stay out of your way. If you go quietly near them, they come to accept your presence. That's what I did with gorillas. I just went near them day after day, which was fairly easy because they form cohesive social groups. Soon, I knew them as individuals, both their faces and their behavior, and I just sat and watched them," Schaller said in a 2006 interview.
Fossey operated on this same principle of patient, unobtrusive observation. Still, the gorillas initially fled from her, and she spent hours tracking and trailing them across the misty forest.

After a year, they stopped fleeing at her presence and started beating their chests and vocalizing. It was a bluff meant to scare her off, but it was still far from their ordinary, natural behavior, she said in a 1973 lecture. After two years, she received two young gorillas, Coco and Pucker; rehabilitated them; and learned about gorilla young by observing them.
"I came to know the gorillas' need for love and affection, and the young gorillas' need for constant play," she said.
It would take three years before the gorillas came to accept her presence and reveal more naturalistic behavior, she said in the lecture.
During her decades in Virunga, Fossey described and learned to mimic the vocalizations of gorillas, including the "belch vocalization" that signifies contentment. She also elucidated their tight-knit family structures and their courtship and mating rituals, and documented the occasional murder of infant gorillas by rival males.
Although she would eventually earn her doctorate in zoology from the University of Cambridge, Fossey spent her first years studying the gorillas with no formal training. Perhaps because of her initial lack of training, she formed close bonds with individual animals and tended to ascribe more humanlike motivations and descriptions to their actions than is typically accepted in formal zoology. She often described gorillas as more altruistic than humans.
"You take these fine, regal animals," she told an interviewer, as reported by The New York Times. "How many fathers have the same sense of paternity? How many human mothers are more caring? The family structure is unbelievably strong."
She formed a particularly close bond with a gorilla she nicknamed Digit — so named for his damaged finger — who did not have playmates his age. Digit was killed by poachers in 1977.
The last years of Fossey's life were increasingly focused on conserving the gorillas' dwindling habitat and combating poachers. She used confrontational methods, such as burning snares, wearing masks to scare poachers, and spray-painting cattle to prevent herders from bringing them into the national park, according to the Dian Fossey Gorilla Fund.
She also shot over the heads of tourists to scare them away and told her graduate students to carry guns, according to The Washington Post.
Given that many of the people living on the fringes of the park lived in poverty and resorted to expansion and herding to survive, this did not earn her good will with many of the locals.
Fossey's murder was never solved. Many think poachers were responsible for the killing, but other theories have been floated as well.
Around 125 million years ago, a dinosaur longer than a pickup truck stalked rivers to gobble up fish in what is now Thailand.
The remains of the roughly 25-foot-long (7 to 8 meters) dinosaur, which include parts of its spine, pelvis and tail, represent one of the most complete spinosaurid specimens ever found in Asia, according to researchers.
Spinosaurids were a family of bipedal predators with elongated snouts, crocodile-like teeth and, in many species, sails on their backs. Researchers believe that the Thai specimen, first discovered in 2004, belonged to the Spinosaurinae subfamily, which included the longest-known carnivorous dinosaur genus, Spinosaurus — a potential swimming predator from North Africa that grew up to around 50 feet (15 m) long.
"This discovery from Thailand helps us better understand what spinosaurines looked like and how they evolved in Asia," Adun Samathi, an assistant professor at the Walai Rukhavej Botanical Research Institute and Mahasarakham University in Thailand, told Live Science in an email. "[The fossils] also show that dinosaur diversity in Southeast Asia was richer than previously known and expand our understanding of how these unusual fish-eating predators were spread around the world."
Samathi presented the spinosaurid findings Nov. 12 at the Society of Vertebrate Paleontology 2025 annual meeting in Birmingham, England. The findings haven't been peer-reviewed, as Samathi and his colleagues still have to submit them to a journal.
The researchers don't have an official name for the dinosaur. However, they've nicknamed it the Sam Ran spinosaurid, as it was found in the Sam Ran locality (area) of the Khok Kruat rock formation in northeastern Thailand, according to Samathi, who studied the spinosaurid as part of his doctoral thesis. (Samathi is one of several students and researchers to study the specimen since its discovery.)
The team quickly identified the dinosaur as a spinosaurid because it has several of the group's characteristic features, including long neck vertebrae and tall spines on its back vertebrae. However, the species also had features that distinguished it from known spinosaurid species, including shorter spines than Spinosaurus and more paddle-like spines than Ichthyovenator from Laos, which borders Thailand.
The team suspects that the Sam Ran spinosaurid was more closely related to Spinosaurus from North Africa than Ichthyovenator from Laos. However, there's a lot of uncertainty surrounding the evolution of Asian spinosaurids, as well as spinosaurids in general, and the researchers' findings are only preliminary at this stage.
The Sam Ran spinosaurid died beside a shallow river before some of its remains were fossilized. Samathi doesn't think that this spinosaurid could swim, but it seemed to be using the river ecosystem, which was teeming with life when the dinosaur perished relatively early in the Cretaceous period (145 million to 66 million years ago).
"The new spinosaur lived (or at least [was] found) in a river system with gently flowing water and occasional floods, within a dry to semi-arid landscape," Samathi said. "The site has yielded a variety of animals, including freshwater sharks, bony fish, turtles, crocodiles, and dinosaurs such as a sauropod and an iguanodontian."
]]>The adapted method lowers energy costs by up to 40% and may offer a "promising pathway for efficient and scalable hydrogen production," the researchers said in a new study published Dec. 1 in the Chemical Engineering Journal.
"Hydrogen is one of the most in demand chemicals," study co-author Hamed Heidarpour, a doctoral student at McGill University in Montreal, Canada, told Live Science. Hydrogen is used for ammonia production to produce fertilizers, in fuel cells to generate electrical energy, or burned to directly produce energy, Heidarpour said.
The main way of producing hydrogen is through steam reforming, which involves reacting water with natural gas at high temperatures and pressures to separate water's oxygen and hydrogen atoms. But these conditions mean the process is energy intensive and requires burning large amounts of fossil fuels.
Using electricity to split water into hydrogen and oxygen molecules — a method known as electrolysis — could potentially offer a way to create hydrogen with no direct carbon dioxide emissions.
This works by connecting two metal plates, known as electrodes, to a direct-current supply and submerging the ends of the plates in water. Applying electricity to the circuit generates hydrogen at the negative electrode (the cathode) and oxygen at the positive one (the anode).
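For readers who want the underlying chemistry, the two half-reactions of alkaline water electrolysis (standard textbook chemistry, matching the potassium hydroxide setup described below, not equations from the study itself) can be written as:

```latex
% Alkaline water electrolysis half-reactions
\begin{align*}
\text{Cathode (reduction):} \quad & 2\,\mathrm{H_2O} + 2e^- \;\rightarrow\; \mathrm{H_2} + 2\,\mathrm{OH^-} \\
\text{Anode (oxidation):} \quad & 4\,\mathrm{OH^-} \;\rightarrow\; \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^-
\end{align*}
```

It is the sluggish anode reaction, which makes oxygen, that the new study replaces with a more favorable oxidation.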
However, electrolysis of water is currently inefficient, expensive and uses a lot of electricity, which often comes from non-renewable sources. The main inefficiency is from producing oxygen at the anode, Heidarpour explained.
To overcome this issue, the team behind the new study adapted the standard electrolysis setup to replace the oxygen-forming reaction with one that produces hydrogen by oxidizing an organic molecule.
First, the researchers set up two chambers containing potassium hydroxide (KOH) solutions, which were separated by a thin membrane, and then connected an electrode to either chamber to form a circuit. The team added a chemical called hydroxymethylfurfural (HMF) to the anode chamber, as well as a modified copper catalyst. Heidarpour said that chromium atoms, within the surface of their specifically designed catalyst, help favor hydrogen production by stabilizing the copper atoms in their reactive state.
When the team applied electricity, electrons from the anode oxidized the aldehyde groups in the HMF molecules. This generated hydrogen and a byproduct called HMFCA, which may find use as a chemical feedstock to make bioplastics, Heidarpour said. (Aldehydes have a carbon atom doubly bonded to an oxygen atom and a single bond to a hydrogen atom.)
This adapted method effectively doubles the amount of hydrogen made in one go, when also accounting for the hydrogen created by splitting water molecules at the cathode as usual.
The reactions also ran at around 0.4 volts, which is around 1 volt lower than in conventional water electrolysis. The researchers said this helps reduce overall energy usage by up to 40%.
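As a rough, back-of-envelope illustration (not a calculation from the study): the ideal electrical work needed per unit of hydrogen scales directly with the cell voltage, so dropping from roughly 1.4 volts to 0.4 volts shrinks the theoretical minimum energy proportionally. The reported real-world saving of up to 40% is smaller than this ideal figure because of resistive losses and other overheads not captured here.

```python
# Illustrative sketch only: ideal electrical energy per kg of hydrogen
# at a given cell voltage, from E = n * F * V (n electrons per H2).
F = 96485.0          # Faraday constant, coulombs per mole of electrons
N_ELECTRONS = 2      # electrons transferred per H2 molecule
M_H2 = 2.016e-3      # molar mass of H2, kg per mole

def ideal_energy_kwh_per_kg(cell_voltage):
    """Ideal electrical energy (kWh) to produce 1 kg of H2 at this voltage."""
    joules_per_mol = N_ELECTRONS * F * cell_voltage
    joules_per_kg = joules_per_mol / M_H2
    return joules_per_kg / 3.6e6  # convert joules to kilowatt-hours

conventional = ideal_energy_kwh_per_kg(1.4)  # roughly 37 kWh per kg
adapted = ideal_energy_kwh_per_kg(0.4)       # roughly 11 kWh per kg
```

The ratio between the two voltages sets the ceiling on the possible saving; real cells sit well below that ceiling.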
Heidarpour said the team is not the first to report this type of strategy but explained that they increased the overall hydrogen production rate by using a more efficient catalyst.
HMF is often made by breaking down non-food plant materials such as paper residues, making it an attractive reagent to use in these systems. However, HMF is currently an expensive material.
Other aldehyde-containing molecules such as formaldehyde could be used instead. "Where there is a surplus of low-value organic substrates, oxidizing these into more valuable chemicals with simultaneous hydrogen generation could be an attractive and environmentally-friendly way to make two feedstocks at once," Mark Symes, a professor of electrochemistry and electrochemical technology at the University of Glasgow, who was not involved in the study, told Live Science in an email.
The researchers noted that there are still ways to improve the process to make it more efficient.
For example, further work needs to be done to improve the catalyst's stability so that it "can work for thousands of hours in an industrial setting," Heidarpour said.
]]>The new study, published Dec. 10 in the journal Astronomy & Astrophysics, may also help to explain the planets' puzzling magnetic fields.
Uranus and Neptune are relatively large planets at the edge of the solar system; Neptune is the most distant planet, orbiting at 2.8 billion miles (4.5 billion kilometers) from the sun, on average. The extremely cold temperatures at these distances cause gases such as hydrogen, helium and water to condense into compressed ice slurries that form the planets' cores. As such, these planets have become known as ice giants.
"The ice giant classification is oversimplified as Uranus and Neptune are still poorly understood," lead study author Luca Morf, a doctoral student at the University of Zurich, said in a statement.
Morf and his supervisor, Ravit Helled, developed a new hybrid model in an attempt to better understand the interior of these cold planets. Models based on physics alone rely heavily on assumptions made by the modeler, while observational models can be too simplistic, Morf explained. "We combined both approaches to get interior models that are both unbiased and physically consistent," he said.
The pair started by considering how the density of each planet's core could vary with distance from the center of the planet and then adjusted the model to account for the planet's gravity. From this, they inferred the temperature and composition of the core and generated a new density profile. The team fed the new density parameters back into the model and iterated this process until the model core fully matched current observational data.
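The feedback loop described above can be sketched in miniature. The toy code below is a hypothetical simplification, not the authors' actual model: it repeatedly rescales a one-dimensional density profile until the planetary mass it implies matches an "observed" value, mimicking the cycle of proposing a profile, comparing it against observations, and feeding the result back in.

```python
import math

def integrated_mass(rho, radii):
    """Trapezoidal integral of 4*pi*r^2 * rho(r): the mass a density profile implies."""
    total = 0.0
    for i in range(len(radii) - 1):
        f1 = 4 * math.pi * radii[i] ** 2 * rho[i]
        f2 = 4 * math.pi * radii[i + 1] ** 2 * rho[i + 1]
        total += 0.5 * (f1 + f2) * (radii[i + 1] - radii[i])
    return total

def fit_profile(rho, radii, observed_mass, tol=1e-6, max_iter=100):
    """Iterate: compare the profile's implied mass with the observation,
    then rescale the profile and repeat until the two agree."""
    for _ in range(max_iter):
        m = integrated_mass(rho, radii)
        if abs(m - observed_mass) / observed_mass < tol:
            break
        rho = [r * observed_mass / m for r in rho]  # feedback step
    return rho
```

The real model iterates over far more than mass, including gravity-field data, temperature and composition, but the structure of the loop is the same: propose, compare, correct, repeat.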

This method generated eight total possible cores for both Uranus and Neptune, three of which had high rock-to-water ratios. This shows that the interiors of Uranus and Neptune are not limited to ice, as previously thought, the researchers said.
All of the modeled cores had convective regions where pure water exists in its ionic phase. This is where extreme temperatures and pressures cause water molecules to break apart into charged protons (H+) and hydroxide ions (OH-). The team thinks such layers may be the source of the planets' unusual, multipolar magnetic fields, which give Uranus and Neptune more than two magnetic poles. The model also suggests that Uranus' magnetic field is generated closer to its center than Neptune's is.
"One of the main issues is that physicists still barely understand how materials behave under the exotic conditions of [high] pressure and temperature found at the heart of a planet [and] this could impact our results," Morf said. The team aims to improve their model by including other molecules, like methane and ammonia, which also may be found in the cores.
"Both Uranus and Neptune could be rock giants or ice giants depending on the model assumptions," Helled said. She noted that much of our understanding of these planets may be incomplete, as it's based largely on data collected by the Voyager 2 space probe in the 1980s.
"Current data is insufficient to distinguish the two, and we therefore need dedicated missions to Uranus and Neptune that can reveal their true nature," Helled added
The team hopes the model may act as an unbiased tool for any new data from future space missions to these planets.
]]>