[SOURCE: https://arstechnica.com/gadgets/2026/02/5-changes-to-know-about-in-apples-latest-ios-macos-and-ipados-betas/]
FYI: 5 changes to know about in Apple’s latest iOS, macOS, and iPadOS betas
The 26.3 updates were mostly invisible; these changes are more significant.
Andrew Cunningham – Feb 18, 2026 2:28 pm

A collection of iPhones running iOS 26. Credit: Apple

This week, Apple released the first developer betas for iOS 26.4, iPadOS 26.4, macOS 26.4, and its other operating systems. On Tuesday, it followed those up with public beta versions of the same updates. Usually released around the midpoint between one major iOS release and the next, Apple’s *.4 updates typically include a significant batch of new features and other refinements, and if the first beta is any indication, this year’s releases uphold that tradition.

A new “Playlist Playground” feature will let Apple Music subscribers generate playlists with text prompts, and native support for video podcasts is coming to the Podcasts app. The Creator Studio version of the Freeform drawing and collaboration app is also available in the 26.4 updates, allowing subscribers to access stock images from Apple’s Content Hub and to insert AI-generated images. But we’ve spent time digging through the betas to identify some of the more below-the-surface improvements and changes that Apple is testing. Some of these changes won’t come to the public versions of the software until a later release; others may be removed or changed between now and when the 26.4 update is made available to the general public. But generally, Apple’s betas give us a good idea of what the final release will look like.

One feature that hasn’t appeared in these betas? The new “more intelligent Siri” that Apple has been promising since the iOS 18 launch in 2024.
Apple delayed the feature until sometime in 2026, saying it wasn’t meeting the company’s standards for quality and reliability. Reports indicated that the company had been planning to make the new Siri part of the 26.4 update, but as of earlier this month, Apple has reportedly decided to push it to the 26.5 release or later; even releasing it as part of iOS 27 in the fall would technically not run afoul of the “2026” promise.

Before we begin, the standard warning applies about installing beta software on hardware you rely on day to day. Although these point updates are generally more stable than the major releases Apple tests in the summer and fall, they can still contain major bugs and may cause your device to behave strangely. The first beta, in particular, tends to be the roughest—more stable versions will be released in the coming weeks, and we should see the final version of the update within the next couple of months.

Charging limits for MacBooks

The macOS 26.4 update includes a slider for manually limiting your Mac’s battery charge percentage. Credit: Andrew Cunningham

In macOS 11 Big Sur, Apple added an on-by-default “Optimized Battery Charging” toggle that allows macOS to limit your battery’s charge to 80 percent based on your usage and charging behavior. The idea is to limit the time your battery spends charging while full, something that can gradually reduce its capacity. The macOS 26.4 update adds a new slider, similar to the one in iOS, that lets users manually specify a maximum charge limit that is always observed, no matter what. It’s adjustable in 5 percent increments from 80 to 100 percent.
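The slider's constraint is simple to state: only multiples of 5 between 80 and 100 are selectable. As a purely illustrative sketch (the slider is a GUI control in System Settings; the function name and logic below are hypothetical, not an Apple API), snapping an arbitrary request to a valid value might look like:

```python
# Illustrative only: models the constraint described above (5 percent steps
# from 80 to 100 percent). This is not Apple's implementation.
ALLOWED_LIMITS = list(range(80, 101, 5))  # [80, 85, 90, 95, 100]

def snap_charge_limit(requested: int) -> int:
    """Clamp a requested charge limit to the nearest selectable slider value."""
    clamped = min(max(requested, 80), 100)  # out-of-range requests are clamped first
    return min(ALLOWED_LIMITS, key=lambda v: abs(v - clamped))
```

Requests below 80 or above 100 snap to the ends of the range; in-range values snap to the nearest 5 percent step.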
Anecdotal evidence suggests that limiting your charge percentage can lengthen the useful life of your battery and reduce wear, but nothing will fully prevent a battery from wearing out and losing capacity over time. It’s up to users to decide whether an immediately noticeable everyday hit to battery life is worth a slightly longer service life. In the current macOS betas, enabling a charge limit manually doesn’t disable the Optimized Battery Charging feature the way it does in iOS. It’s unclear whether this is an early bug or an intentional difference in how the feature is implemented in macOS.

End-to-end encryption (and other improvements) for non-Apple texting

Apple has been infamously slow to adopt the Rich Communication Services (RCS) messaging protocol used by most modern Android phones. Apple-to-Apple messaging is handled by iMessage, which supports end-to-end encryption among many other features. But for many years, Apple stuck with the aging SMS standard for “green bubble” texting between its platforms and others, to the enduring frustration of anyone with a single Android-using friend in a group chat. Apple finally began supporting RCS messaging for major cellular carriers in iOS 18 and has slowly expanded support to other networks in subsequent releases.

But Apple’s implementation still doesn’t support end-to-end encryption, which was added to the RCS standard about a year ago. The 26.4 update is the first to begin testing encryption for RCS messages. But as with the initial RCS rollout, Apple is moving slowly and deliberately: for now, encrypted RCS messaging only works when texting between Apple devices, not between Apple devices and Android phones.
The feature also won’t be included in the final 26.4 release—it’s only included in the betas for testing purposes, and it “will be available to customers in a future software update for iOS, iPadOS, macOS, and watchOS.” Encrypted iMessage and RCS chats will be labeled with a lock icon, much like how most web browsers label HTTPS sites. To support encrypted messaging, Apple will jump from version 2.4 of the RCS Universal Profile to version 3.0. This should also enable support for several improvements in versions 2.5, 2.6, and 2.7 of the RCS standard, including previously iMessage-exclusive features like editing and recalling messages and replying to specific messages inline.

The return of the “Compact” Safari tab bar

The Compact tab view returns to Safari 26.4 and iPadOS 26.4. Credit: Andrew Cunningham

As part of the macOS 12 Monterey/iPadOS 15 beta cycle in 2021, Apple attempted a fairly radical redesign of the Safari browser that combined the tabs and the address bar into one element, with the goal of increasing the amount of viewable space on the pages you were viewing. By the time both operating systems were released to the public, Safari’s default design had more or less reverted to its previous state, but the “Compact” tab view lived on as an option in the settings for those who liked it. macOS Tahoe, the Safari 26 update, and iPadOS 26 all removed that Compact view entirely, though a version of it became the default for the iPhone version of Safari. The macOS 26.4, Safari 26.4, and iPadOS 26.4 updates restore the Compact tab option to the other versions of Safari.

On-by-default Stolen Device Protection

Originally introduced in the iOS 17.3 update, Apple’s “Stolen Device Protection” toggle for iPhones added an extra layer of security for users whose phones were stolen by people who had learned their passcodes.
With Stolen Device Protection enabled, an iPhone that has been removed from “familiar locations, such as home or work” requires biometric Face ID or Touch ID authentication before accessing stored passwords and credit cards, erasing the phone, or changing Apple Account passwords. Normally, users can enter their passcodes as a fallback; Stolen Device Protection removes that fallback. The iOS 26.4 update will make Stolen Device Protection on by default. Generally, you won’t notice a difference in how your phone behaves, but if you’re traveling or away from places where you regularly use your phone and you can’t use your passcode to access certain information, this is why. It’s possible to switch off Stolen Device Protection, but doing so requires biometric authentication, an hour-long wait, and then a second biometric authentication. (This extended wait is also required for disabling Find My, changing your phone’s passcode, or changing Touch ID and Face ID settings.)

Rosetta’s end approaches

The macOS 26.4 update will add the first user-facing notifications about the end of Rosetta support, currently slated for macOS 28 in 2027. Credit: Andrew Cunningham

Apple’s Rosetta 2 was a crucial support beam in the bridge from the Intel Mac era to the Apple Silicon era, enabling unmodified Intel-native apps to run on the M1 and later processors with noticeable but manageable hits to performance and responsiveness. As with the original Rosetta, it allowed Apple to execute a major CPU architecture switch while keeping it mostly invisible to Mac users, and it bought developers time to release Arm-native versions of their apps so they could take full advantage of the new chips.
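Whether a given app is Intel-only, Arm-native, or universal is recorded in its executable's Mach-O header; on a Mac, `lipo -archs /path/to/binary` or `file` will report it directly. As a rough, unofficial sketch of the header fields those tools read (the magic numbers and CPU type constants come from Apple's public Mach-O definitions; the function itself is hypothetical and deliberately minimal):

```python
import struct

FAT_MAGIC = 0xCAFEBABE    # universal ("fat") binary; its header is big-endian
MH_MAGIC_64 = 0xFEEDFACF  # thin 64-bit Mach-O; stored little-endian on disk
CPU_X86_64 = 0x01000007   # Intel
CPU_ARM64 = 0x0100000C    # Apple Silicon

def macho_archs(data: bytes) -> set[str]:
    """Return the CPU architectures present in a Mach-O blob.

    Simplified sketch: ignores 64-bit fat headers (magic 0xCAFEBABF)
    and 32-bit thin binaries, which modern Mac apps no longer use.
    """
    names = {CPU_X86_64: "x86_64", CPU_ARM64: "arm64"}
    (magic_be,) = struct.unpack(">I", data[:4])
    if magic_be == FAT_MAGIC:
        # Fat header: uint32 count, then 20-byte fat_arch entries,
        # each starting with a big-endian cputype.
        (nfat,) = struct.unpack(">I", data[4:8])
        archs = set()
        for i in range(nfat):
            (cputype,) = struct.unpack(">I", data[8 + 20 * i : 12 + 20 * i])
            archs.add(names.get(cputype, hex(cputype)))
        return archs
    (magic_le,) = struct.unpack("<I", data[:4])
    if magic_le == MH_MAGIC_64:
        (cputype,) = struct.unpack("<I", data[4:8])
        return {names.get(cputype, hex(cputype))}
    raise ValueError("not a recognized Mach-O file")
```

A result of `{"x86_64"}` corresponds to the Intel-only apps that will trigger the new warnings; `{"x86_64", "arm64"}` is a universal binary that needs no translation.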
But now that the transition is complete and the last Intel Macs are fading into the rearview, Apple plans to remove the translation layer from future versions of macOS, with some exceptions for games that rely on the technology. Rosetta 2 won’t be completely removed until macOS 28, but macOS 26.4 will be the first to begin warning users about the end of Rosetta when they launch Intel-native apps. Those notifications link to an Apple support page about identifying and updating Intel-only apps to Apple Silicon-native versions (or universal binaries that support both architectures). Apple has deployed this “adding notifications without removing functionality” approach to deprecating older apps before. Versions 10.13 and 10.14 of macOS would show users pop-ups about the end of support for 32-bit apps for a couple of years before that support was removed in macOS 10.15, for example.

Andrew Cunningham, Senior Technology Reporter: Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.
========================================
[SOURCE: https://arstechnica.com/tech-policy/2026/02/lawsuit-epa-revoking-greenhouse-gas-finding-risks-thousands-of-avoidable-deaths/#comments]
“Deadly serious”: Lawsuit: EPA revoking greenhouse gas finding risks “thousands of avoidable deaths”
EPA sued for abandoning its mission to protect public health.
Ashley Belanger – Feb 18, 2026 2:48 pm

Credit: Tramino | iStock / Getty Images Plus

In a lawsuit filed Wednesday, the Environmental Protection Agency was accused of abandoning its mission to protect public health after repealing an “endangerment finding” that has served as the basis for federal climate change regulations for 17 years. The lawsuit came from more than a dozen environmental and health groups, including the American Public Health Association, the American Lung Association, the Center for Biological Diversity (CBD), the Clean Air Council, the Environmental Defense Fund (EDF), the Natural Resources Defense Council (NRDC), the Sierra Club, and the Union of Concerned Scientists. The groups have asked the US Court of Appeals for the District of Columbia Circuit to review the EPA decision, which also eliminated requirements controlling greenhouse gas emissions in new cars and trucks.

Urging a return to the status quo, the groups argued that the Trump administration is anti-science and illegally moving to benefit the fossil fuel industry, despite a mountain of evidence demonstrating the deadly consequences of unchecked pollution and climate change-induced floods, droughts, wildfires, and hurricanes. “Undercutting the ability of the federal government to tackle the largest source of climate pollution is deadly serious,” Meredith Hankins, legal director for federal climate at NRDC, said in an EDF roundup of statements from plaintiffs. The science is overwhelmingly clear, the groups argued, despite the Trump EPA attempting to muddy the waters by forming a since-disbanded working group of climate contrarians.
Trump is a longtime climate denier, as evidenced by a Euro News tracker monitoring his most controversial comments. Most recently, during a cold snap affecting much of the US, he predictably trolled environmentalists, writing on Truth Social, “could the Environmental Insurrectionists please explain—WHATEVER HAPPENED TO GLOBAL WARMING?”

The EPA’s final rule summary bragged that “this is the single largest deregulatory action in US history and will save Americans over $1.3 trillion” by 2055. Supposedly, carmakers will pass on any savings from no longer having to meet emissions requirements, giving Americans more access to affordable cars by shutting down expensive emissions and EV mandates “strangling” the auto industry. Sounding nothing like an agency created to monitor pollutants, a fact sheet on the final rule emphasized that Trump’s EPA “chooses consumer choice over climate change zealotry every time.”

Critics quickly slammed Trump’s claims that removing the endangerment finding would help the economy. Any savings from cheaper vehicles or reduced costs of charging infrastructure (as Americans ostensibly buy fewer EVs) would be offset by $1.4 trillion “in additional costs from increased fuel purchases, vehicle repair and maintenance, insurance, traffic congestion, and noise,” The Guardian reported. The EPA’s economic analysis also ignores public health costs, the groups suing alleged.

David Pettit, an attorney at the CBD’s Climate Law Institute, slammed the EPA’s messaging as an attempt to sway consumers without explaining the true costs. “Nobody but Big Oil profits from Trump trashing climate science and making cars and trucks guzzle and pollute more,” Pettit said. “Consumers will pay more to fill up, and our skies and oceans will fill up with more pollution.” If the court sides with the EPA, “people everywhere will face more pollution, higher costs, and thousands of avoidable deaths,” Peter Zalzal, EDF’s associate vice president of clean air strategies, said.
EPA argued climate change evidence is “out of scope”

For environmentalists, the decision to sue the EPA was risky but necessary. By putting up a fight, they risk a court potentially reversing the 2007 Supreme Court ruling requiring the EPA to conduct the initial endangerment analysis and then regulate any pollution found from greenhouse gases. Seemingly, that reversal is what the Trump administration has been angling for, hoping the case will reach the Supreme Court, which is more conservative today and perhaps less likely to read the Clean Air Act as broadly as the 2007 court.

It’s worth the risk, according to William Piermattei, the managing director of the Environmental Law Program at the University of Maryland Francis King Carey School of Law. He told The New York Times that environmentalists had no choice but to file the lawsuit and act on the public’s behalf. Environmentalists “must challenge this,” Piermattei said. If they didn’t, they’d be “agreeing that we should not regulate greenhouse gasses under the Clean Air Act, full stop.” He suggested that “a majority of the public, does not agree with that statement at all.”

Since 2010, the EPA has found that the scientific basis for concluding that “elevated concentrations of greenhouse gases in the atmosphere may reasonably be anticipated to endanger the public health and welfare of current and future US generations is robust, voluminous, and compelling.” And since then, the evidence base has only grown, the groups suing said.

Trump used to seem intimidated by the “overwhelming” evidence, environmentalists have noted. During Trump’s prior term, he notably left the endangerment finding in place, perhaps expecting that the evidence was irrefutable. He’s now renewed that fight, arguing that the evidence should be set aside, so that courts can focus on whether Congress “must weigh in on ‘major questions’ that have significant political and economic implications” and serve as a check on the EPA.
In the EPA’s comments addressing public concerns about the agency ignoring evidence, the agency has already argued that evidence of climate change is “out of scope” since the EPA did not repeal the basis of the finding. Instead, the EPA claims it is merely challenging its own authority to continue to regulate the auto industry for harmful emissions, suggesting that only Congress has that authority. The Clean Air Act “does not provide EPA statutory authority to prescribe motor vehicle emission standards for the purpose of addressing global climate change concerns,” the EPA said. “In the absence of such authority, the Endangerment Finding is not valid, and EPA cannot retain the regulations that resulted from it.”

Whether courts will agree that evidence supporting climate change is “out of scope” could determine whether the Supreme Court’s prior decision that compelled the endangerment finding is ultimately overturned. If that happens, subsequent administrations may struggle to issue a new endangerment finding to undo any potential damage. All eyes would then turn to Congress to pass a law to uphold protections.

EPA accused of abandoning its mission

By ignoring science, the EPA risks eroding public trust, according to Hana Vizcarra, a senior lawyer at the nonprofit Earthjustice, which is representing several groups in the litigation. “With this action, EPA flips its mission on its head,” Vizcarra said. “It abandons its core mandate to protect human health and the environment to boost polluting industries and attempts to rewrite the law in order to do so.”

Groups appear confident that the courts will consider the science. Joanne Spalding, director of the Sierra Club’s Environmental Law Program, noted that the early 2000s litigation from the Sierra Club brought about the original EPA protections. She vowed that the Sierra Club would continue fighting to keep them.
“People should not be forced to suffer for this administration’s blind allegiance to the fossil fuel industry and corporate polluters,” Spalding said. “This shortsighted rollback is blatantly unlawful and their efforts to force this upon the American people will fail.” Ankush Bansal, board president of Physicians for Social Responsibility, warned that courts cannot afford to ignore the evidence. The EPA’s “devastating decision” goes “against the science and testimony of countless scientists, health care professionals, and public health practitioners,” Bansal said. If upheld, the long-term consequences could seemingly bury courts in future legal battles. “It will result in direct harm to the health of Americans throughout the country, particularly children, older adults, those with chronic illnesses, and other vulnerable populations, rural to urban, red and blue, of all races and incomes,” Bansal said. “The increased exposure to harmful pollutants and other greenhouse gas emissions from fossil fuel production and consumption will make America sicker, not healthier, less prosperous, not more, for generations to come.” Ashley Belanger Senior Policy Reporter Ashley Belanger Senior Policy Reporter Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience. 63 Comments Lawsuit: EPA revoking greenhouse gas finding risks “thousands of avoidable deaths” EPA sued for abandoning its mission to protect public health. In a lawsuit filed Wednesday, the Environmental Protection Agency was accused of abandoning its mission to protect public health after repealing an “endangerment finding” that has served as the basis for federal climate change regulations for 17 years. 
The lawsuit came from more than a dozen environmental and health groups, including the American Public Health Association, the American Lung Association, the Center for Biological Diversity (CBD), the Clean Air Council, the Environmental Defense Fund (EDF), the Natural Resources Defense Council (NRDC), the Sierra Club, and the Union of Concerned Scientists. The groups have asked the US Court of Appeals for the District of Columbia Circuit to review the EPA decision, which also eliminated requirements controlling greenhouse gas emissions in new cars and trucks. Urging a return to the status quo, the groups argued that the Trump administration is anti-science and illegally moving to benefit the fossil fuel industry, despite a mountain of evidence demonstrating the deadly consequences of unchecked pollution and climate change-induced floods, droughts, wildfires, and hurricanes. “Undercutting the ability of the federal government to tackle the largest source of climate pollution is deadly serious,” Meredith Hankins, legal director for federal climate at NRDC, said in an EDF roundup of statements from plaintiffs. The science is overwhelmingly clear, the groups argued, despite the Trump EPA attempting to muddy the waters by forming a since-disbanded working group of climate contrarians. Trump is a longtime climate denier, as evidenced by a Euro News tracker monitoring his most controversial comments. Most recently, during a cold snap affecting much of the US, he predictably trolled environmentalists, writing on Truth Social, “could the Environmental Insurrectionists please explain—WHATEVER HAPPENED TO GLOBAL WARMING?” The EPA’s final rule summary bragged that “this is the single largest deregulatory action in US history and will save Americans over $1.3 trillion” by 2055. 
Supposedly, carmakers will pass on any savings from no longer having to meet emissions requirements, giving Americans more access to affordable cars by shutting down expensive emissions and EV mandates “strangling” the auto industry. In language that sounds nothing like that of an agency created to monitor pollutants, a fact sheet on the final rule emphasized that Trump’s EPA “chooses consumer choice over climate change zealotry every time.” Critics quickly slammed Trump’s claims that removing the endangerment finding would help the economy. Any savings from cheaper vehicles or reduced costs of charging infrastructure (as Americans ostensibly buy fewer EVs) would be offset by $1.4 trillion “in additional costs from increased fuel purchases, vehicle repair and maintenance, insurance, traffic congestion, and noise,” The Guardian reported. The EPA’s economic analysis also ignores public health costs, the groups suing alleged. David Pettit, an attorney at the CBD’s Climate Law Institute, criticized the EPA’s messaging as an attempt to sway consumers without explaining the true costs. “Nobody but Big Oil profits from Trump trashing climate science and making cars and trucks guzzle and pollute more,” Pettit said. “Consumers will pay more to fill up, and our skies and oceans will fill up with more pollution.” If the court sides with the EPA, “people everywhere will face more pollution, higher costs, and thousands of avoidable deaths,” Peter Zalzal, EDF’s associate vice president of clean air strategies, said. EPA argued climate change evidence is “out of scope” For environmentalists, the decision to sue the EPA was risky but necessary. By putting up a fight, they risk a court potentially reversing the 2007 Supreme Court ruling that required the EPA to conduct the initial endangerment analysis and then regulate greenhouse gas pollution if endangerment was found. 
Seemingly, that reversal is what the Trump administration has been angling for, hoping the case will reach the Supreme Court, which is more conservative today and perhaps less likely to read the Clean Air Act as broadly as the 2007 court. It’s worth the risk, according to William Piermattei, the managing director of the Environmental Law Program at the University of Maryland Francis King Carey School of Law. He told The New York Times that environmentalists had no choice but to file the lawsuit and act on the public’s behalf. Environmentalists “must challenge this,” Piermattei said. If they didn’t, they’d be “agreeing that we should not regulate greenhouse gasses under the Clean Air Act, full stop.” He suggested that “a majority of the public does not agree with that statement at all.” Since 2010, the EPA has found that the scientific basis for concluding that “elevated concentrations of greenhouse gases in the atmosphere may reasonably be anticipated to endanger the public health and welfare of current and future US generations is robust, voluminous, and compelling.” And since then, the evidence base has only grown, the groups suing said. Trump used to seem intimidated by the “overwhelming” evidence, environmentalists have noted. He notably left the endangerment finding in place during his prior term, perhaps recognizing that the evidence was irrefutable. He’s now renewed that fight, arguing that the evidence should be set aside so that courts can focus on whether Congress “must weigh in on ‘major questions’ that have significant political and economic implications” and serve as a check on the EPA. In the EPA’s comments addressing public concerns about the agency ignoring evidence, the agency has already argued that evidence of climate change is “out of scope” since the EPA did not repeal the basis of the finding. 
Instead, the EPA claims it is merely challenging its own authority to continue to regulate the auto industry for harmful emissions, suggesting that only Congress has that authority. The Clean Air Act “does not provide EPA statutory authority to prescribe motor vehicle emission standards for the purpose of addressing global climate change concerns,” the EPA said. “In the absence of such authority, the Endangerment Finding is not valid, and EPA cannot retain the regulations that resulted from it.” Whether courts will agree that evidence supporting climate change is “out of scope” could determine whether the Supreme Court’s prior decision that compelled the endangerment finding is ultimately overturned. If that happens, subsequent administrations may struggle to issue a new endangerment finding to undo any potential damage. All eyes would then turn to Congress to pass a law to uphold protections. EPA accused of abandoning its mission By ignoring science, the EPA risks eroding public trust, according to Hana Vizcarra, a senior lawyer at the nonprofit Earthjustice, which is representing several groups in the litigation. “With this action, EPA flips its mission on its head,” Vizcarra said. “It abandons its core mandate to protect human health and the environment to boost polluting industries and attempts to rewrite the law in order to do so.” Groups appear confident that the courts will consider the science. Joanne Spalding, director of the Sierra Club’s Environmental Law Program, noted that the early 2000s litigation from the Sierra Club brought about the original EPA protections. She vowed that the Sierra Club would continue fighting to keep them. “People should not be forced to suffer for this administration’s blind allegiance to the fossil fuel industry and corporate polluters,” Spalding said. 
“This shortsighted rollback is blatantly unlawful and their efforts to force this upon the American people will fail.” Ankush Bansal, board president of Physicians for Social Responsibility, warned that courts cannot afford to ignore the evidence. The EPA’s “devastating decision” goes “against the science and testimony of countless scientists, health care professionals, and public health practitioners,” Bansal said. If upheld, the long-term consequences could seemingly bury courts in future legal battles. “It will result in direct harm to the health of Americans throughout the country, particularly children, older adults, those with chronic illnesses, and other vulnerable populations, rural to urban, red and blue, of all races and incomes,” Bansal said. “The increased exposure to harmful pollutants and other greenhouse gas emissions from fossil fuel production and consumption will make America sicker, not healthier, less prosperous, not more, for generations to come.” Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important. |
======================================== |
[SOURCE: https://arstechnica.com/google/2026/02/gemini-can-now-generate-ai-music-for-you-no-lyrics-required/] | [TOKENS: 2503] |
Rage Against the Machine Learning Record scratch—Google’s Lyria 3 AI music model is coming to Gemini today With a simple prompt, you can generate 30 seconds of something like music. Ryan Whitwam – Feb 18, 2026 11:00 am Credit: Google The American poet Henry Wadsworth Longfellow called music “the universal language of mankind.” Is that still true when the so-called music is being generated by a probabilistic robot instead of a human? We’re about to find out. Google has announced its latest Lyria 3 AI model is being deployed in the Gemini app, vastly expanding access to AI music generation. Google DeepMind has been tinkering with Lyria for a while now, offering limited access in developer-oriented products like Vertex AI. Lyria 3 is more capable than previous versions, and it’s also quicker to use. Just select the new “Create music” option in the Gemini app or web UI to get started. You can describe what you want and even upload an image to help the robot get the right vibe. And in a few seconds, you get music (or something like it). In case there was any uncertainty about whether Lyria tracks still counted as a human artistic endeavor, worry not! Unlike past versions of the model, you don’t even have to provide lyrics in your prompt. You can be vague with your request, and the model will create suitable lyrics for the 30-second song. Although with that limit, “jingle” might be more accurate. In addition to the track, each music creation job will come with an album cover-style image created by the Nano Banana model. Gemini will also have a pre-loaded set of AI tracks that you can choose to remix to your heart’s content. The Lyria 3 tools are also coming to Google’s Dream Track toolkit for YouTube Shorts, which will pair nicely with the Veo AI video options. 
So what kind of tracks can you expect Gemini to spit out? Google has provided some examples:

“Sweet Like Plantain”
Prompt: I’m feeling nostalgic. Create a track for my mother about the great times we had as kids and the memories of her home-cooked plantains. Make it a fun afrobeat track with a true African vibe.

“Motown Parody”
Prompt: Quintessential 1970s Motown soul. Lush, orchestral R&B production. Warm bassline with melodic fills, locked into a steady drum groove with crisp snare and tambourine. Vintage organ harmonic bed. Three-piece brass section. Gritty, gospel-tinged male tenor lead.

“Pop Flutter”
Prompt: Wistful and airy. Soft, breathy female vocals with intimacy. Rapid-fire drum and bass rhythm, low-passed and softened. Deep, warm bass swells. Dreamy electric piano chords and subtle chime textures. Rainy city vibes.

“Sea Shanty”
Prompt: An authentic A capella Sea Shanty featuring a robust male choir singing in a traditional call-and-response format. The piece is entirely vocal, relying on synchronized foot-stomps on a wooden deck and sharp handclaps to provide the rhythmic pulse. The lead is a weathered male baritone with a gravelly timbre who sings the narrative ‘chant’ lines. He is immediately answered by a powerful male choir singing in rich, rugged harmony on the ‘response’ lines. The voices are recorded with a natural room reverb that simulates the acoustic environment of a wooden ship’s deck, giving the vocals a resonant, atmospheric quality. The performance is energetic and driving, with the choir leaning into the rhythm of the stomps to create a sense of focused, communal effort. There are no instruments, only the layered textures of collective male voices spanning tenor, baritone, and bass ranges, all contributing to a confident, monolithic sound.

Sour notes

AI-generated music is not a new phenomenon. 
Several companies offer models that ingest and homogenize human-created music, and the resulting tracks can sound remarkably “real,” if a bit overproduced. Streaming services have already been inundated with phony AI artists, some of which have gathered thousands of listeners who may not even realize they’re grooving to the musical equivalent of a blender set to purée. Still, you have to seek out tools like that, and Google is bringing similar capabilities to the Gemini app. Since Gemini is one of the most popular AI platforms, we’re probably about to see a lot more AI music on the Internet. Google says tracks generated with Lyria 3 will have an audio version of Google’s SynthID embedded within. That means you’ll always be able to check if a piece of audio was created with Google’s AI by uploading it to Gemini, similar to the way you can check images and videos for SynthID tags. Google also says it has sought to create a music AI that respects copyright and partner agreements. If you name a specific artist in your prompt, Gemini won’t attempt to copy that artist’s sound. Instead, it’s trained to take that as “broad creative inspiration.” Google notes, however, that this process is not foolproof, and some generated output might still imitate an artist too closely. In those cases, Google invites users to report such shared content. Lyria 3 is going live in the Gemini web interface today and should be available in the mobile app within a few days. It works in English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, but Google plans to add more languages soon. All users will have some access to music generation, and those with AI Pro and AI Ultra subscriptions will have higher usage limits, but the specifics are unclear. Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. 
Over his 20-year career, he's written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards. |
======================================== |
[SOURCE: https://arstechnica.com/health/2026/02/fda-does-u-turn-will-review-modernas-mrna-flu-shot-after-shocking-rejection/] | [TOKENS: 1965] |
About-face FDA reverses surprise rejection of Moderna’s mRNA flu vaccine Trump admin’s vaccine chief overruled FDA scientists to initially reject the shot. Beth Mole – Feb 18, 2026 12:08 pm Credit: Getty | Congressional Quarterly The Food and Drug Administration has reversed its shocking refusal to consider Moderna’s mRNA flu vaccine for approval. The refusal was revealed last week in a sharply worded press release from Moderna. Subsequent reporting found that the decision was made by political appointee Vinay Prasad, the Trump administration’s top vaccine regulator, who overruled a team of agency scientists and a top career official in rejecting Moderna’s application. In an announcement Wednesday morning, Moderna said the FDA has now agreed to review its vaccine after the company held a formal (Type A) meeting with the FDA and proposed a change to the regulatory pathways used in the application. “We appreciate the FDA’s engagement in a constructive Type A meeting and its agreement to advance our application for review,” Stéphane Bancel, Moderna’s CEO, said in the announcement. “Pending FDA approval, we look forward to making our flu vaccine available later this year so that America’s seniors have access to a new option to protect themselves against flu.” The agency is expected to provide a decision on the vaccine by August 5, 2026. Prasad’s ostensible reason for initially refusing to review the application was based not on Moderna’s vaccine, mRNA-1010, but on the established flu vaccine Moderna used for comparison in its Phase 3 trial. Moderna used licensed standard-dose influenza vaccines, including Fluarix, made by GlaxoSmithKline, in the trial, which involved nearly 41,000 adults aged 50 and older. 
In a letter to Moderna dated February 3, Prasad said this choice “does not reflect the best-available standard of care,” and therefore the trial was not “adequate and well-controlled.” Moderna acknowledged that FDA scientists had previously suggested that the company use a recommended high-dose flu vaccine in trial participants 65 and older. But the agency ultimately signed off on the trial design with the uniform standard dose, calling it “acceptable.” Moderna, meanwhile, agreed to add a comparison of a high-dose vaccine to some older participants and provide the FDA with additional analysis. Anti-vaccine agenda Agency insiders told reporters that a team of career scientists was ready to review the vaccine and held an hourlong meeting with Prasad to present the reasons for moving forward with the review. David Kaslow, a top career official responsible for reviewing vaccines, also wrote a memo detailing why the review should proceed. Prasad rejected the vaccine application anyway. According to today’s announcement, the FDA reversed that rejection when Moderna proposed splitting the application, seeking full approval for the vaccine’s use in people aged 50 to 64 and an accelerated approval for use in people 65 and up. That latter regulatory pathway means Moderna will have to conduct an additional trial in that age group to confirm its effectiveness after it’s on the market. Andrew Nixon, spokesperson for the US Department of Health and Human Services, confirmed the reversal to Ars Technica. “Discussions with the company led to a revised regulatory approach and an amended application, which FDA accepted,” Nixon said in a statement. “FDA will maintain its high standards during review and potential licensure stages as it does with all products.” The FDA typically takes a levelheaded approach to working with companies, rarely making surprising decisions or rejecting applications outright. 
While Prasad claimed the rejection was due to the control vaccine, the move aligns with Health Secretary Robert F. Kennedy Jr.’s broader anti-vaccine agenda. Kennedy and the allies he has installed in federal positions are particularly hostile to mRNA technology. Moderna has already lost more than $700 million in federal contracts to develop pandemic vaccines. Next month, Kennedy’s MAHA Institute is hosting an anti-vaccine event that alleges there’s a “massive epidemic of vaccine injury.” The event description claims without evidence that use of mRNA vaccines is linked to “rising rates of acute and chronic illness.” Vaccine makers and industry investors, meanwhile, are reporting that Kennedy’s relentless anti-vaccine efforts are chilling the entire industry, with companies abandoning research and cutting jobs. In comments to The New York Times, Moderna’s president, Stephen Hoge, said, “There will be less invention, investment, and innovation in vaccines generally, across all the companies.” Beth Mole is Ars Technica’s senior health reporter. She has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes. |
======================================== |
[SOURCE: https://arstechnica.com/gadgets/2026/02/5-changes-to-know-about-in-apples-latest-ios-macos-and-ipados-betas/#comments] | [TOKENS: 4193] |
FYI 5 changes to know about in Apple’s latest iOS, macOS, and iPadOS betas The 26.3 updates were mostly invisible; these changes are more significant. Andrew Cunningham – Feb 18, 2026 2:28 pm A collection of iPhones running iOS 26. Credit: Apple This week, Apple released the first developer betas for iOS 26.4, iPadOS 26.4, macOS 26.4, and its other operating systems. On Tuesday, it followed those up with public beta versions of the same updates. Released around the midpoint between one major iOS release and the next, the *.4 updates to Apple’s operating systems usually include a significant batch of new features and other refinements, and if the first beta is any indication, this year’s releases uphold that tradition. A new “Playlist Playground” feature will let Apple Music subscribers generate playlists with text prompts, and native support for video podcasts is coming to the Podcasts app. The Creator Studio version of the Freeform drawing and collaboration app is also available in the 26.4 updates, allowing subscribers to access stock images from Apple’s Content Hub and to insert AI-generated images. But we’ve spent time digging through the betas to identify some of the more below-the-surface improvements and changes that Apple is testing. Some of these changes won’t come to the public versions of the software until a later release; others may be removed or changed between now and when the 26.4 update is made available to the general public. But generally, Apple’s betas give us a good idea of what the final release will look like. One feature that hasn’t appeared in these betas? The new “more intelligent Siri” that Apple has been promising since the iOS 18 launch in 2024. 
Apple delayed the feature until sometime in 2026, saying it wasn’t meeting the company’s standards for quality and reliability. Reports indicated that the company had been planning to make the new Siri part of the 26.4 update, but as of earlier this month, Apple has reportedly decided to push it to the 26.5 release or later; even releasing it as part of iOS 27 in the fall would technically not run afoul of the “2026” promise. Before we begin, here’s the standard warning about installing beta software on hardware you rely on day to day: although these point updates are generally more stable than the major releases Apple tests in the summer and fall, they can still contain major bugs and may cause your device to behave strangely. The first beta, in particular, tends to be the roughest—more stable versions will be released in the coming weeks, and we should see the final version of the update within the next couple of months. Charging limits for MacBooks The macOS 26.4 update includes a slider for manually limiting your Mac’s battery charge percentage. Credit: Andrew Cunningham In macOS 11 Big Sur, Apple added an on-by-default “Optimized Battery Charging” toggle to the operating system that would allow macOS to limit your battery’s charge percentage to 80 percent based on your usage and charging behavior. The idea is to limit the time your battery spends charging while full, something that can gradually reduce its capacity. The macOS 26.4 update adds a new slider similar to the one in iOS, further allowing users to manually specify a maximum charge limit that is always observed, no matter what. It’s adjustable in 5 percent increments from 80 to 100 percent. 
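The slider's behavior is simple to state in code. As a purely illustrative sketch (this is not Apple's implementation, and macOS exposes no public scripting interface for this setting), a function that snaps a requested limit to the slider's valid stops might look like this:

```python
def snap_charge_limit(requested: int) -> int:
    """Snap a requested battery charge limit to the macOS 26.4 slider's
    valid stops: 5 percent increments between 80 and 100 percent."""
    clamped = max(80, min(100, requested))  # slider bottoms out at 80
    return 5 * round(clamped / 5)           # nearest 5 percent stop
```

For example, a request for 83 percent lands on 85, and anything below 80 is raised to the slider's floor.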
Anecdotal evidence suggests that limiting your charge percentage can lengthen the useful life of your battery and reduce wear, but nothing will fully prevent a battery from wearing out and losing capacity over time. It’s up to users to decide whether an immediately noticeable everyday hit to battery life is worth a slightly longer service life. In the current macOS betas, enabling a charge limit manually doesn’t disable the Optimized Battery Charging feature the way it does in iOS. It’s unclear whether this is an early bug or an intentional difference in how the feature is implemented in macOS.

End-to-end encryption (and other improvements) for non-Apple texting

Apple has been infamously slow to adopt the Rich Communication Services (RCS) messaging protocol used by most modern Android phones. Apple-to-Apple messaging has long been handled by iMessage, which supports end-to-end encryption among many other features. But for years, Apple stuck with the aging SMS standard for “green bubble” texting between its platforms and others, to the enduring frustration of anyone with a single Android-using friend in a group chat.

Apple finally began supporting RCS messaging for major cellular carriers in iOS 18 and has slowly expanded support to other networks in subsequent releases. But Apple’s implementation still doesn’t support end-to-end encryption, which was added to the RCS standard about a year ago. The 26.4 update is the first to begin testing encryption for RCS messages. But as with the initial RCS rollout, Apple is moving slowly and deliberately: for now, encrypted RCS messaging works only between Apple devices, not between Apple devices and Android phones.
The feature also won’t be included in the final 26.4 release—it’s included in the betas only for testing purposes, and it “will be available to customers in a future software update for iOS, iPadOS, macOS, and watchOS.” Encrypted iMessage and RCS chats will be labeled with a lock icon, much as most web browsers label HTTPS sites. To support encrypted messaging, Apple will jump from version 2.4 of the RCS Universal Profile to version 3.0. This should also enable support for several improvements in versions 2.5, 2.6, and 2.7 of the RCS standard, including previously iMessage-exclusive features like editing and recalling messages and replying to specific messages inline.

The return of the “Compact” Safari tab bar

The Compact tab view returns to Safari 26.4 and iPadOS 26.4. Credit: Andrew Cunningham

As part of the macOS 12 Monterey/iPadOS 15 beta cycle in 2021, Apple attempted a fairly radical redesign of the Safari browser that combined the tabs and the address bar into one, with the goal of increasing the amount of viewable space on the pages you were viewing. By the time both operating systems were released to the public, Safari’s default design had more or less reverted to its previous state, but the “Compact” tab view lived on as an optional view in the settings for those who liked it. macOS Tahoe, the Safari 26 update, and iPadOS 26 all removed that Compact view entirely, though a version of it became the default for the iPhone version of Safari. The macOS 26.4, Safari 26.4, and iPadOS 26.4 updates restore the Compact tab option to the other versions of Safari.

On-by-default Stolen Device Protection

Originally introduced in the iOS 17.3 update, Apple’s “Stolen Device Protection” toggle for iPhones added an extra layer of security for users whose phones were stolen by people who had learned their passcodes.
With Stolen Device Protection enabled, an iPhone that has been removed from “familiar locations, such as home or work” requires biometric Face ID or Touch ID authentication before accessing stored passwords and credit cards, erasing the phone, or changing Apple Account passwords. Normally, users can enter their passcodes as a fallback; Stolen Device Protection removes that fallback.

The iOS 26.4 update will make Stolen Device Protection on by default. Generally, you won’t notice a difference in how your phone behaves, but if you’re traveling or away from places where you regularly use your phone and you can’t use your passcode to access certain information, this is why. It’s possible to switch off Stolen Device Protection, but doing so requires biometric authentication, an hour-long wait, and then a second biometric authentication. (This extended wait is also required for disabling Find My, changing your phone’s passcode, or changing Touch ID and Face ID settings.)

Rosetta’s end approaches

The macOS 26.4 update will add the first user-facing notifications about the end of Rosetta support, currently slated for macOS 28 in 2027. Credit: Andrew Cunningham

Apple’s Rosetta 2 was a crucial support beam in the bridge from the Intel Mac era to the Apple Silicon era, enabling unmodified Intel-native apps to run on the M1 and later processors with noticeable but manageable performance and responsiveness hits. As with the original Rosetta, it allowed Apple to execute a major CPU architecture switch while keeping it mostly invisible to Mac users, and it bought developers time to release Arm-native versions of their apps so they could take full advantage of the new chips.
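The “Intel-native” versus “Arm-native” distinction is recorded in each app binary’s Mach-O header. The sketch below is purely illustrative (it uses the well-known magic numbers and CPU-type constants from Apple’s mach-o/loader.h, but it is not the code macOS actually runs to decide when to show a Rosetta warning):

```python
import struct

# Illustrative sketch, not Apple's detection code: inspect the first bytes
# of a Mach-O binary to distinguish the app categories at issue here.
MH_MAGIC_64 = 0xFEEDFACF      # thin 64-bit Mach-O (stored little-endian on disk)
FAT_MAGIC = 0xCAFEBABE        # universal ("fat") binary; header is big-endian
CPU_TYPE_X86_64 = 0x01000007  # Intel
CPU_TYPE_ARM64 = 0x0100000C   # Apple Silicon

def classify(header: bytes) -> str:
    (magic_be,) = struct.unpack(">I", header[:4])
    if magic_be == FAT_MAGIC:
        return "universal"            # contains slices for multiple architectures
    magic_le, cputype = struct.unpack("<II", header[:8])
    if magic_le == MH_MAGIC_64:
        if cputype == CPU_TYPE_X86_64:
            return "intel-only"       # the kind of app that runs via Rosetta 2
        if cputype == CPU_TYPE_ARM64:
            return "arm-native"
    return "unknown"
```

On a real Mac, `lipo -archs /path/to/binary` or `file` reports the same information without any code.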
But now that the transition is complete and the last Intel Macs are fading into the rearview, Apple plans to remove the translation layer from future versions of macOS, with some exceptions for games that rely on the technology. Rosetta 2 won’t be completely removed until macOS 28, but macOS 26.4 will be the first release to begin warning users about the end of Rosetta when they launch Intel-native apps. Those notifications link to an Apple support page about identifying Intel-only apps and updating them to Apple Silicon-native versions (or universal binaries that support both architectures).

Apple has deployed this “add notifications without removing functionality” approach to deprecating older apps before. Versions 10.13 and 10.14 of macOS showed users pop-ups about the end of support for 32-bit apps for a couple of years before that support was removed in macOS 10.15, for example.

Andrew Cunningham, Senior Technology Reporter: Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech, including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue. 132 Comments
========================================
[SOURCE: https://arstechnica.com/tech-policy/2026/02/lawsuit-epa-revoking-greenhouse-gas-finding-risks-thousands-of-avoidable-deaths/] | [TOKENS: 3540]
“Deadly serious” Lawsuit: EPA revoking greenhouse gas finding risks “thousands of avoidable deaths” EPA sued for abandoning its mission to protect public health. Ashley Belanger – Feb 18, 2026 2:48 pm | 63

Credit: Tramino | iStock / Getty Images Plus

In a lawsuit filed Wednesday, the Environmental Protection Agency was accused of abandoning its mission to protect public health after repealing an “endangerment finding” that has served as the basis for federal climate change regulations for 17 years. The lawsuit came from more than a dozen environmental and health groups, including the American Public Health Association, the American Lung Association, the Center for Biological Diversity (CBD), the Clean Air Council, the Environmental Defense Fund (EDF), the Natural Resources Defense Council (NRDC), the Sierra Club, and the Union of Concerned Scientists. The groups have asked the US Court of Appeals for the District of Columbia Circuit to review the EPA decision, which also eliminated requirements controlling greenhouse gas emissions from new cars and trucks.

Urging a return to the status quo, the groups argued that the Trump administration is anti-science and illegally moving to benefit the fossil fuel industry, despite a mountain of evidence demonstrating the deadly consequences of unchecked pollution and climate change-induced floods, droughts, wildfires, and hurricanes. “Undercutting the ability of the federal government to tackle the largest source of climate pollution is deadly serious,” Meredith Hankins, legal director for federal climate at NRDC, said in an EDF roundup of statements from plaintiffs. The science is overwhelmingly clear, the groups argued, despite the Trump EPA attempting to muddy the waters by forming a since-disbanded working group of climate contrarians.
Trump is a longtime climate denier, as evidenced by a Euro News tracker monitoring his most controversial comments. Most recently, during a cold snap affecting much of the US, he predictably trolled environmentalists, writing on Truth Social, “could the Environmental Insurrectionists please explain—WHATEVER HAPPENED TO GLOBAL WARMING?”

The EPA’s final rule summary bragged that “this is the single largest deregulatory action in US history and will save Americans over $1.3 trillion” by 2055. Supposedly, carmakers will pass on any savings from no longer having to meet emissions requirements, giving Americans more access to affordable cars by shutting down expensive emissions and EV mandates “strangling” the auto industry. Sounding nothing like an agency created to monitor pollutants, a fact sheet on the final rule emphasized that Trump’s EPA “chooses consumer choice over climate change zealotry every time.”

Critics quickly slammed Trump’s claims that removing the endangerment finding would help the economy. Any savings from cheaper vehicles or reduced costs of charging infrastructure (as Americans ostensibly buy fewer EVs) would be offset by $1.4 trillion “in additional costs from increased fuel purchases, vehicle repair and maintenance, insurance, traffic congestion, and noise,” The Guardian reported. The EPA’s economic analysis also ignores public health costs, the groups suing alleged.

David Pettit, an attorney at the CBD’s Climate Law Institute, slammed the EPA’s messaging as an attempt to sway consumers without explaining the true costs. “Nobody but Big Oil profits from Trump trashing climate science and making cars and trucks guzzle and pollute more,” Pettit said. “Consumers will pay more to fill up, and our skies and oceans will fill up with more pollution.” If the court sides with the EPA, “people everywhere will face more pollution, higher costs, and thousands of avoidable deaths,” Peter Zalzal, EDF’s associate vice president of clean air strategies, said.
EPA argued climate change evidence is “out of scope”

For environmentalists, the decision to sue the EPA was risky but necessary. By putting up a fight, they risk a court potentially reversing the 2009 Supreme Court ruling requiring the EPA to conduct the initial endangerment analysis and then regulate any pollution found from greenhouse gases. Seemingly, that reversal is what the Trump administration has been angling for, hoping the case will reach the Supreme Court, which is more conservative today and perhaps less likely to read the Clean Air Act as broadly as the 2009 court.

It’s worth the risk, according to William Piermattei, the managing director of the Environmental Law Program at the University of Maryland Francis King Carey School of Law. He told The New York Times that environmentalists had no choice but to file the lawsuit and act on the public’s behalf. Environmentalists “must challenge this,” Piermattei said. If they didn’t, they’d be “agreeing that we should not regulate greenhouse gasses under the Clean Air Act, full stop.” He suggested that “a majority of the public, does not agree with that statement at all.”

Since 2010, the EPA has found that the scientific basis for concluding that “elevated concentrations of greenhouse gases in the atmosphere may reasonably be anticipated to endanger the public health and welfare of current and future US generations is robust, voluminous, and compelling.” And since then, the evidence base has only grown, the groups suing said.

Trump used to seem intimidated by the “overwhelming” evidence, environmentalists have noted. During Trump’s prior term, he notably left the endangerment finding in place, perhaps expecting that the evidence was irrefutable. He’s now renewed that fight, arguing that the evidence should be set aside, so that courts can focus on whether Congress “must weigh in on ‘major questions’ that have significant political and economic implications” and serve as a check on the EPA.
In the EPA’s comments addressing public concerns about the agency ignoring evidence, the agency has already argued that evidence of climate change is “out of scope” since the EPA did not repeal the basis of the finding. Instead, the EPA claims it is merely challenging its own authority to continue to regulate the auto industry for harmful emissions, suggesting that only Congress has that authority. The Clean Air Act “does not provide EPA statutory authority to prescribe motor vehicle emission standards for the purpose of addressing global climate change concerns,” the EPA said. “In the absence of such authority, the Endangerment Finding is not valid, and EPA cannot retain the regulations that resulted from it.”

Whether courts will agree that evidence supporting climate change is “out of scope” could determine whether the Supreme Court’s prior decision that compelled the endangerment finding is ultimately overturned. If that happens, subsequent administrations may struggle to issue a new endangerment finding to undo any potential damage. All eyes would then turn to Congress to pass a law to uphold protections.

EPA accused of abandoning its mission

By ignoring science, the EPA risks eroding public trust, according to Hana Vizcarra, a senior lawyer at the nonprofit Earthjustice, which is representing several groups in the litigation. “With this action, EPA flips its mission on its head,” Vizcarra said. “It abandons its core mandate to protect human health and the environment to boost polluting industries and attempts to rewrite the law in order to do so.”

Groups appear confident that the courts will consider the science. Joanne Spalding, director of the Sierra Club’s Environmental Law Program, noted that the early 2000s litigation from the Sierra Club brought about the original EPA protections. She vowed that the Sierra Club would continue fighting to keep them.
“People should not be forced to suffer for this administration’s blind allegiance to the fossil fuel industry and corporate polluters,” Spalding said. “This shortsighted rollback is blatantly unlawful and their efforts to force this upon the American people will fail.”

Ankush Bansal, board president of Physicians for Social Responsibility, warned that courts cannot afford to ignore the evidence. The EPA’s “devastating decision” goes “against the science and testimony of countless scientists, health care professionals, and public health practitioners,” Bansal said. If upheld, the long-term consequences could seemingly bury courts in future legal battles. “It will result in direct harm to the health of Americans throughout the country, particularly children, older adults, those with chronic illnesses, and other vulnerable populations, rural to urban, red and blue, of all races and incomes,” Bansal said. “The increased exposure to harmful pollutants and other greenhouse gas emissions from fossil fuel production and consumption will make America sicker, not healthier, less prosperous, not more, for generations to come.”

Ashley Belanger, Senior Policy Reporter: Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience. 63 Comments
======================================== |
[SOURCE: https://arstechnica.com/science/2026/02/microsofts-new-10000-year-data-storage-medium-glass/] | [TOKENS: 3335] |
Clear as glass Microsoft’s new 10,000-year data storage medium: glass Femtosecond lasers etch data into a very stable medium. John Timmer – Feb 18, 2026 2:01 pm | 201 Right now, Silica hardware isn't quite ready for commercialization. Credit: Microsoft Research Archival storage poses lots of challenges. We want media that is extremely dense and stable for centuries or more, and, ideally, doesn’t consume any energy when not being accessed. Lots of ideas have floated around—even DNA has been considered—but one of the simplest is to cut the data into glass. Many forms of glass are very physically and chemically stable, and it’s relatively easy to create features in it. There’s been a lot of preliminary work demonstrating different aspects of a glass-based storage system. But in Wednesday’s issue of Nature, Microsoft Research announced Project Silica, a working demonstration of a system that can read and write data into small slabs of glass with a density of over a Gigabit per cubic millimeter. Writing on glass We tend to think of glass as fragile, prone to shattering, and capable of flowing downward over centuries, although the last claim is a myth. Glass is a category of material, and a variety of chemicals can form glasses. With the right starting chemical, it’s possible to make a glass that is, as the researchers put it, “thermally and chemically stable and is resistant to moisture ingress, temperature fluctuations and electromagnetic interference.” While it would still need to be handled in a way to minimize damage, glass provides the sort of stability we’d want for long-term storage. 
Putting data into glass is as simple as etching it (to be clear, this is technically not etching, which is a chemical modification of glass’ surface—here, lasers burn features into the interior of the glass). But that’s been one of the challenges, as the writing is typically a slow process. However, the development of femtosecond lasers—lasers that emit pulses that last only 10⁻¹⁵ seconds and can emit millions of them per second—can significantly cut down write times and allow etching to be focused on a very small area, increasing potential data density. To read the data back, there are several options. We’ve already had great success using lasers to read data from optical disks, albeit slowly. But anything that can pick up the small features etched into the glass could conceivably work. With the above considerations in mind, everything was in place on a theoretical level for Project Silica. The big question is how to put them together into a functional system. Microsoft decided that, just to be cautious, it would answer that question twice. A real-world system The difference between these two answers comes down to how an individual unit of data (called a voxel) is written to the glass. One type of voxel they tried was based on birefringence, where refraction of photons depends on their polarization. It’s possible to etch voxels into glass to create birefringence using polarized laser light, producing features smaller than the diffraction limit. In practice, this involved using one laser pulse to create an oval-shaped void, followed by a second, polarized pulse to induce birefringence. The identity of a voxel is based on the orientation of the oval; since we can resolve multiple orientations, it’s possible to save more than one bit in each voxel. The alternative approach involves changing the magnitude of refractive effects by varying the amount of energy in the laser pulse. 
Again, it’s possible to discern more than two states in these voxels, allowing multiple data bits to be stored in each voxel. The map data from Microsoft Flight Simulator etched onto the Silica storage medium. Credit: Microsoft Research Reading these in Silica involves using a microscope that can pick up differences in refractive index. (For microscopy geeks, this is a way of saying “they used phase contrast microscopy.”) The microscopy sets the limits on how many layers of voxels can be placed in a single piece of glass. During etching, the layers were separated by enough distance so only a single layer would be in the microscope’s plane of focus at a time. The etching process also incorporates symbols that allow the automated microscope system to position the lens above specific points on the glass. From there, the system slowly changes its focal plane, moving through the stack and capturing images that include different layers of voxels. To interpret these microscope images, Microsoft used a convolutional neural network that combines data from images that are both in and near the plane of focus for a given layer of voxels. This is effective because the influence of nearby voxels changes how a given voxel appears in a subtle way that the AI system can pick up on if given enough training data. The final piece of the puzzle is data encoding. The Silica system takes the raw bitstream of the data it’s storing and adds error correction using a low-density parity-check code (the same error correction used in 5G networks). Neighboring bits are then combined to create symbols that take advantage of the voxels’ ability to store more than one bit. Once a stream of symbols is made, it’s ready to be written to glass. 
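As an illustration of that last step, here is a minimal sketch of how grouped bits become multi-level voxel symbols. It assumes four resolvable oval orientations (two bits per voxel) purely for the example, and it omits the LDPC stage entirely; this is not Microsoft's actual code.

```python
# Illustrative only: pack a bitstream into multi-level voxel symbols.
# Assumes 4 resolvable voxel states (e.g., oval orientations), so each
# voxel carries 2 bits. The real Silica pipeline applies LDPC error
# correction before this step, which is omitted here.

def bits_to_symbols(bits, bits_per_voxel=2):
    """Group consecutive bits into one integer symbol per voxel."""
    assert len(bits) % bits_per_voxel == 0
    symbols = []
    for i in range(0, len(bits), bits_per_voxel):
        value = 0
        for bit in bits[i:i + bits_per_voxel]:
            value = (value << 1) | bit
        symbols.append(value)
    return symbols

def symbols_to_bits(symbols, bits_per_voxel=2):
    """Inverse mapping: unpack each voxel symbol back into bits."""
    bits = []
    for symbol in symbols:
        for shift in range(bits_per_voxel - 1, -1, -1):
            bits.append((symbol >> shift) & 1)
    return bits

data = [1, 0, 1, 1, 0, 0, 1, 0]
voxels = bits_to_symbols(data)
print(voxels)                            # [2, 3, 0, 2]
assert symbols_to_bits(voxels) == data   # lossless round trip
```

Distinguishing eight states instead of four would raise `bits_per_voxel` to three, which is why resolving more orientations directly increases density.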
Performance Writing remains a bottleneck in the system, so Microsoft developed hardware that can write a single glass slab with four lasers simultaneously without generating too much heat. That is enough to enable writing at 66 megabits per second, and the team behind the work thinks that it would be possible to add up to a dozen additional lasers. That may be needed, given that it’s possible to store up to 4.84 TB in a single slab of glass (the slabs are 12 cm x 12 cm and 0.2 cm thick). That works out to be over 150 hours to fully write a slab. The “up to” aspect of the storage system has to do with the density of data that’s possible with the two different ways of writing data. The method that relies on birefringence requires more optical hardware and only works in high-quality glasses, but can squeeze more voxels into the same volume, and so has a considerably higher data density. The alternative approach can only put a bit over two terabytes into the same slab of glass, but can be done with simpler hardware and can work on any sort of transparent material. Borosilicate glass offers extreme stability; Microsoft’s accelerated aging experiments suggest the data would be stable for over 10,000 years at room temperature. That led Microsoft to declare, “Our results demonstrate that Silica could become the archival storage solution for the digital age.” That may be overselling it just a bit. The Square Kilometer Array telescope, for example, is expected to need to archive 700 petabytes of data each year. That would mean over 140,000 glass slabs would be needed to store the data from this one telescope. Even assuming that the write speed could be boosted by adding significantly more lasers, you’d need over 600 Silica machines operating in parallel to keep up. And the Square Kilometer Array is far from the only project generating enormous amounts of data. 
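Those figures are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses only numbers from the article; the one assumption is that write throughput scales linearly with laser count (so 16 lasers would give four times the four-laser rate), which the article implies but does not state.

```python
# Sanity-check the article's Silica figures (decimal units throughout).
slab_bytes = 4.84e12      # up to 4.84 TB per slab
write_bps = 66e6          # 66 megabits/s with four lasers

# Time to fill one slab at the four-laser rate.
hours_per_slab = slab_bytes * 8 / write_bps / 3600
print(round(hours_per_slab))   # ~163, matching "over 150 hours"

# Volumetric density: slabs are 12 cm x 12 cm x 0.2 cm = 28,800 mm^3.
volume_mm3 = 120 * 120 * 2
gbit_per_mm3 = slab_bytes * 8 / volume_mm3 / 1e9
print(round(gbit_per_mm3, 2))  # ~1.34, "over a Gigabit per cubic millimeter"

# Square Kilometer Array: 700 petabytes per year.
ska_bytes_per_year = 700e15
slabs_per_year = ska_bytes_per_year / slab_bytes
print(round(slabs_per_year))   # ~144,628, "over 140,000 glass slabs"

# Machines needed, assuming 16 lasers quadruple the write rate.
seconds_per_year = 365.25 * 24 * 3600
machines = ska_bytes_per_year * 8 / seconds_per_year / (write_bps * 4)
print(round(machines))         # ~672, "over 600 Silica machines"
```

All four results line up with the claims in the text, which suggests the article's numbers are internally consistent.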
That said, there are some features that make Silica a great match for this sort of thing, most notably the complete absence of energy needed to preserve the data, and the fact that it can be retrieved rapidly if needed (a sharp contrast to the days needed to retrieve information from DNA, for example). Plus, I’m admittedly drawn to a system with a storage medium that looks like something right out of science fiction. Nature, 2026. DOI: 10.1038/s41586-025-10042-w (About DOIs). Correction: defined how etching is used here. John Timmer, Senior Science Editor: John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots. |
======================================== |
[SOURCE: https://arstechnica.com/health/2026/02/fda-does-u-turn-will-review-modernas-mrna-flu-shot-after-shocking-rejection/] | [TOKENS: 1965] |
About-face FDA reverses surprise rejection of Moderna’s mRNA flu vaccine Trump admin’s vaccine chief overruled FDA scientists to initially reject the shot. Beth Mole – Feb 18, 2026 12:08 pm | 127 Credit: Getty | Congressional Quarterly The Food and Drug Administration has reversed its shocking refusal to consider Moderna’s mRNA flu vaccine for approval. The refusal was revealed last week in a sharply worded press release from Moderna. Subsequent reporting found that the decision was made by political appointee Vinay Prasad, the Trump administration’s top vaccine regulator, who overruled a team of agency scientists and a top career official in rejecting Moderna’s application. In an announcement Wednesday morning, Moderna said the FDA has now agreed to review its vaccine after the company held a formal (Type A) meeting with the FDA and proposed a change to the regulatory pathways used in the application. “We appreciate the FDA’s engagement in a constructive Type A meeting and its agreement to advance our application for review,” Stéphane Bancel, Moderna’s CEO, said in the announcement. “Pending FDA approval, we look forward to making our flu vaccine available later this year so that America’s seniors have access to a new option to protect themselves against flu.” The agency is expected to provide a decision on the vaccine by August 5, 2026. Prasad’s ostensible reason for initially refusing to review the application was based not on Moderna’s vaccine, mRNA-1010, but on the established flu vaccine Moderna used for comparison in its Phase 3 trial. Moderna used licensed standard-dose influenza vaccines, including Fluarix, made by GlaxoSmithKline, in the trial, which involved nearly 41,000 adults aged 50 and older. 
In a letter to Moderna dated February 3, Prasad said this choice “does not reflect the best-available standard of care,” and therefore the trial was not “adequate and well-controlled.” Moderna acknowledged that FDA scientists had previously suggested that the company use a recommended high-dose flu vaccine in trial participants 65 and older. But the agency ultimately signed off on the trial design with the uniform standard dose, calling it “acceptable.” Moderna, meanwhile, agreed to add a comparison of a high-dose vaccine to some older participants and provide the FDA with additional analysis. Anti-vaccine agenda Agency insiders told reporters that a team of career scientists was ready to review the vaccine and held an hourlong meeting with Prasad to present the reasons for moving forward with the review. David Kaslow, a top career official responsible for reviewing vaccines, also wrote a memo detailing why the review should proceed. Prasad rejected the vaccine application anyway. According to today’s announcement, the FDA reversed that rejection when Moderna proposed splitting the application, seeking full approval for the vaccine’s use in people aged 50 to 64 and an accelerated approval for use in people 65 and up. That latter regulatory pathway means Moderna will have to conduct an additional trial in that age group to confirm its effectiveness after it’s on the market. Andrew Nixon, spokesperson for the US Department of Health and Human Services, confirmed the reversal to Ars Technica. “Discussions with the company led to a revised regulatory approach and an amended application, which FDA accepted,” Nixon said in a statement. “FDA will maintain its high standards during review and potential licensure stages as it does with all products.” The FDA typically takes a levelheaded approach to working with companies, rarely making surprising decisions or rejecting applications outright. 
While Prasad claimed the rejection was due to the control vaccine, the move aligns with Health Secretary Robert F. Kennedy Jr.’s broader anti-vaccine agenda. Kennedy and the allies he has installed in federal positions are particularly hostile to mRNA technology. Moderna has already lost more than $700 million in federal contracts to develop pandemic vaccines. Next month, Kennedy’s MAHA Institute is hosting an anti-vaccine event that alleges there’s a “massive epidemic of vaccine injury.” The event description claims without evidence that use of mRNA vaccines is linked to “rising rates of acute and chronic illness.” Vaccine makers and industry investors, meanwhile, are reporting that Kennedy’s relentless anti-vaccine efforts are chilling the entire industry, with companies abandoning research and cutting jobs. In comments to The New York Times, Moderna’s president, Stephen Hoge, said, “There will be less invention, investment, and innovation in vaccines generally, across all the companies.” Beth Mole, Senior Health Reporter: Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes. |
======================================== |
[SOURCE: https://arstechnica.com/tech-policy/2026/02/lawsuit-epa-revoking-greenhouse-gas-finding-risks-thousands-of-avoidable-deaths/] | [TOKENS: 3540] |
“Deadly serious” Lawsuit: EPA revoking greenhouse gas finding risks “thousands of avoidable deaths” EPA sued for abandoning its mission to protect public health. Ashley Belanger – Feb 18, 2026 2:48 pm | 63 Credit: Tramino | iStock / Getty Images Plus In a lawsuit filed Wednesday, the Environmental Protection Agency was accused of abandoning its mission to protect public health after repealing an “endangerment finding” that has served as the basis for federal climate change regulations for 17 years. The lawsuit came from more than a dozen environmental and health groups, including the American Public Health Association, the American Lung Association, the Center for Biological Diversity (CBD), the Clean Air Council, the Environmental Defense Fund (EDF), the Natural Resources Defense Council (NRDC), the Sierra Club, and the Union of Concerned Scientists. The groups have asked the US Court of Appeals for the District of Columbia Circuit to review the EPA decision, which also eliminated requirements controlling greenhouse gas emissions in new cars and trucks. Urging a return to the status quo, the groups argued that the Trump administration is anti-science and illegally moving to benefit the fossil fuel industry, despite a mountain of evidence demonstrating the deadly consequences of unchecked pollution and climate change-induced floods, droughts, wildfires, and hurricanes. “Undercutting the ability of the federal government to tackle the largest source of climate pollution is deadly serious,” Meredith Hankins, legal director for federal climate at NRDC, said in an EDF roundup of statements from plaintiffs. The science is overwhelmingly clear, the groups argued, despite the Trump EPA attempting to muddy the waters by forming a since-disbanded working group of climate contrarians. 
Trump is a longtime climate denier, as evidenced by a Euro News tracker monitoring his most controversial comments. Most recently, during a cold snap affecting much of the US, he predictably trolled environmentalists, writing on Truth Social, “could the Environmental Insurrectionists please explain—WHATEVER HAPPENED TO GLOBAL WARMING?” The EPA’s final rule summary bragged that “this is the single largest deregulatory action in US history and will save Americans over $1.3 trillion” by 2055. Supposedly, carmakers will pass on any savings from no longer having to meet emissions requirements, giving Americans more access to affordable cars by shutting down expensive emissions and EV mandates “strangling” the auto industry. Sounding nothing like an agency created to monitor pollutants, a fact sheet on the final rule emphasized that Trump’s EPA “chooses consumer choice over climate change zealotry every time.” Critics quickly slammed Trump’s claims that removing the endangerment finding would help the economy. Any savings from cheaper vehicles or reduced costs of charging infrastructure (as Americans ostensibly buy fewer EVs) would be offset by $1.4 trillion “in additional costs from increased fuel purchases, vehicle repair and maintenance, insurance, traffic congestion, and noise,” The Guardian reported. The EPA’s economic analysis also ignores public health costs, the groups suing alleged. David Pettit, an attorney at the CBD’s Climate Law Institute, slammed the EPA’s messaging as an attempt to sway consumers without explaining the true costs. “Nobody but Big Oil profits from Trump trashing climate science and making cars and trucks guzzle and pollute more,” Pettit said. “Consumers will pay more to fill up, and our skies and oceans will fill up with more pollution.” If the court sides with the EPA, “people everywhere will face more pollution, higher costs, and thousands of avoidable deaths,” Peter Zalzal, EDF’s associate vice president of clean air strategies, said. 
EPA argued climate change evidence is “out of scope”

For environmentalists, the decision to sue the EPA was risky but necessary. By putting up a fight, they risk a court potentially reversing the 2007 Supreme Court ruling (Massachusetts v. EPA) requiring the EPA to conduct the initial endangerment analysis and then regulate any pollution found from greenhouse gases. Seemingly, that reversal is what the Trump administration has been angling for, hoping the case will reach the Supreme Court, which is more conservative today and perhaps less likely to read the Clean Air Act as broadly as the 2007 court. It’s worth the risk, according to William Piermattei, the managing director of the Environmental Law Program at the University of Maryland Francis King Carey School of Law. He told The New York Times that environmentalists had no choice but to file the lawsuit and act on the public’s behalf. Environmentalists “must challenge this,” Piermattei said. If they didn’t, they’d be “agreeing that we should not regulate greenhouse gasses under the Clean Air Act, full stop.” He suggested that “a majority of the public does not agree with that statement at all.” Since 2010, the EPA has found that the scientific basis for concluding that “elevated concentrations of greenhouse gases in the atmosphere may reasonably be anticipated to endanger the public health and welfare of current and future US generations is robust, voluminous, and compelling.” And since then, the evidence base has only grown, the groups suing said. Trump used to seem intimidated by the “overwhelming” evidence, environmentalists have noted. During Trump’s prior term, he notably left the endangerment finding in place, perhaps conceding that the evidence was irrefutable. He’s now renewed that fight, arguing that the evidence should be set aside, so that courts can focus on whether Congress “must weigh in on ‘major questions’ that have significant political and economic implications” and serve as a check on the EPA.
In the EPA’s comments addressing public concerns about the agency ignoring evidence, the agency has already argued that evidence of climate change is “out of scope” since the EPA did not repeal the basis of the finding. Instead, the EPA claims it is merely challenging its own authority to continue to regulate the auto industry for harmful emissions, suggesting that only Congress has that authority. The Clean Air Act “does not provide EPA statutory authority to prescribe motor vehicle emission standards for the purpose of addressing global climate change concerns,” the EPA said. “In the absence of such authority, the Endangerment Finding is not valid, and EPA cannot retain the regulations that resulted from it.” Whether courts will agree that evidence supporting climate change is “out of scope” could determine whether the Supreme Court’s prior decision that compelled the endangerment finding is ultimately overturned. If that happens, subsequent administrations may struggle to issue a new endangerment finding to undo any potential damage. All eyes would then turn to Congress to pass a law to uphold protections.

EPA accused of abandoning its mission

By ignoring science, the EPA risks eroding public trust, according to Hana Vizcarra, a senior lawyer at the nonprofit Earthjustice, which is representing several groups in the litigation. “With this action, EPA flips its mission on its head,” Vizcarra said. “It abandons its core mandate to protect human health and the environment to boost polluting industries and attempts to rewrite the law in order to do so.” Groups appear confident that the courts will consider the science. Joanne Spalding, director of the Sierra Club’s Environmental Law Program, noted that the early 2000s litigation from the Sierra Club brought about the original EPA protections. She vowed that the Sierra Club would continue fighting to keep them.
“People should not be forced to suffer for this administration’s blind allegiance to the fossil fuel industry and corporate polluters,” Spalding said. “This shortsighted rollback is blatantly unlawful and their efforts to force this upon the American people will fail.” Ankush Bansal, board president of Physicians for Social Responsibility, warned that courts cannot afford to ignore the evidence. The EPA’s “devastating decision” goes “against the science and testimony of countless scientists, health care professionals, and public health practitioners,” Bansal said. If upheld, the long-term consequences could seemingly bury courts in future legal battles. “It will result in direct harm to the health of Americans throughout the country, particularly children, older adults, those with chronic illnesses, and other vulnerable populations, rural to urban, red and blue, of all races and incomes,” Bansal said. “The increased exposure to harmful pollutants and other greenhouse gas emissions from fossil fuel production and consumption will make America sicker, not healthier, less prosperous, not more, for generations to come.” Ashley Belanger is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important.
======================================== |
[SOURCE: https://arstechnica.com/google/2026/02/gemini-can-now-generate-ai-music-for-you-no-lyrics-required/] | [TOKENS: 2503] |
Rage Against the Machine Learning Record scratch—Google’s Lyria 3 AI music model is coming to Gemini today With a simple prompt, you can generate 30 seconds of something like music. Ryan Whitwam – Feb 18, 2026 11:00 am | 214 Credit: Google The American poet Henry Wadsworth Longfellow called music “the universal language of mankind.” Is that still true when the so-called music is being generated by a probabilistic robot instead of a human? We’re about to find out. Google has announced its latest Lyria 3 AI model is being deployed in the Gemini app, vastly expanding access to AI music generation. Google DeepMind has been tinkering with Lyria for a while now, offering limited access in developer-oriented products like Vertex AI. Lyria 3 is more capable than previous versions, and it’s also quicker to use. Just select the new “Create music” option in the Gemini app or web UI to get started. You can describe what you want and even upload an image to help the robot get the right vibe. And in a few seconds, you get music (or something like it). In case there was any uncertainty about whether Lyria tracks still counted as a human artistic endeavor, worry not! Unlike past versions of the model, you don’t even have to provide lyrics in your prompt. You can be vague with your request, and the model will create suitable lyrics for the 30-second song. Although with that limit, “jingle” might be more accurate. In addition to the track, each music creation job will come with an album cover-style image created by the Nano Banana model. Gemini will also have a pre-loaded set of AI tracks that you can choose to remix to your heart’s content. The Lyria 3 tools are also coming to Google’s Dream Track toolkit for YouTube Shorts, which will pair nicely with the Veo AI video options.
So what kind of tracks can you expect Gemini to spit out? Google has provided some examples:

“Sweet Like Plantain” Prompt: I’m feeling nostalgic. Create a track for my mother about the great times we had as kids and the memories of her home-cooked plantains. Make it a fun afrobeat track with a true African vibe.

“Motown Parody” Prompt: Quintessential 1970s Motown soul. Lush, orchestral R&B production. Warm bassline with melodic fills, locked into a steady drum groove with crisp snare and tambourine. Vintage organ harmonic bed. Three-piece brass section. Gritty, gospel-tinged male tenor lead.

“Pop Flutter” Prompt: Wistful and airy. Soft, breathy female vocals with intimacy. Rapid-fire drum and bass rhythm, low-passed and softened. Deep, warm bass swells. Dreamy electric piano chords and subtle chime textures. Rainy city vibes.

“Sea Shanty” Prompt: An authentic A capella Sea Shanty featuring a robust male choir singing in a traditional call-and-response format. The piece is entirely vocal, relying on synchronized foot-stomps on a wooden deck and sharp handclaps to provide the rhythmic pulse. The lead is a weathered male baritone with a gravelly timbre who sings the narrative ‘chant’ lines. He is immediately answered by a powerful male choir singing in rich, rugged harmony on the ‘response’ lines. The voices are recorded with a natural room reverb that simulates the acoustic environment of a wooden ship’s deck, giving the vocals a resonant, atmospheric quality. The performance is energetic and driving, with the choir leaning into the rhythm of the stomps to create a sense of focused, communal effort. There are no instruments, only the layered textures of collective male voices spanning tenor, baritone, and bass ranges, all contributing to a confident, monolithic sound.

Sour notes

AI-generated music is not a new phenomenon.
Several companies offer models that ingest and homogenize human-created music, and the resulting tracks can sound remarkably “real,” if a bit overproduced. Streaming services have already been inundated with phony AI artists, some of which have gathered thousands of listeners who may not even realize they’re grooving to the musical equivalent of a blender set to purée. Still, you have to seek out tools like that, and Google is bringing similar capabilities to the Gemini app. Since Gemini is one of the most popular AI platforms, we’re probably about to see a lot more AI music on the Internet. Google says tracks generated with Lyria 3 will have an audio version of Google’s SynthID embedded within. That means you’ll always be able to check if a piece of audio was created with Google’s AI by uploading it to Gemini, similar to the way you can check images and videos for SynthID tags. Google also says it has sought to create a music AI that respects copyright and partner agreements. If you name a specific artist in your prompt, Gemini won’t attempt to copy that artist’s sound. Instead, it’s trained to take that as “broad creative inspiration.” Google acknowledges that this process is not foolproof, though, and that some output might still imitate an artist too closely. In those cases, Google invites users to report such shared content. Lyria 3 is going live in the Gemini web interface today and should be available in the mobile app within a few days. It works in English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, but Google plans to add more languages soon. While all users will have some access to music generation, those with AI Pro and AI Ultra subscriptions will have higher usage limits, but the specifics are unclear. Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world.
Over his 20-year career, he's written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.
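The SynthID check described above works because an imperceptible signal is carried in the audio itself and can only be recovered by someone who knows the key. As a toy illustration of that general idea (this is not Google's actual SynthID scheme, which is proprietary and far more robust), a simple spread-spectrum watermark adds a low-amplitude pseudorandom sequence derived from a secret seed, and detection correlates the audio against that same sequence:

```python
import random

def _key_sequence(seed, n):
    """Derive a deterministic ±1 pseudorandom sequence from a secret seed."""
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed_watermark(samples, seed, strength=0.02):
    """Mix a low-amplitude keyed sequence into the audio samples."""
    key = _key_sequence(seed, len(samples))
    return [s + strength * k for s, k in zip(samples, key)]

def detect_watermark(samples, seed, threshold=0.01):
    """Correlate the samples against the keyed sequence; a mean
    correlation well above the noise floor means the mark is present."""
    key = _key_sequence(seed, len(samples))
    score = sum(s * k for s, k in zip(samples, key)) / len(samples)
    return score > threshold

# White noise standing in for a generated track (hypothetical data).
noise = random.Random(0)
audio = [noise.uniform(-1.0, 1.0) for _ in range(100_000)]

marked = embed_watermark(audio, "secret-key")
print(detect_watermark(marked, "secret-key"))  # expected: True
print(detect_watermark(audio, "secret-key"))   # expected: False
```

Without the seed, the added signal is statistically indistinguishable from noise, which is why a third party can't easily find or strip it; a production watermark like SynthID additionally has to survive compression, trimming, and re-recording, which this sketch makes no attempt to handle.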
======================================== |
[SOURCE: https://arstechnica.com/health/2026/02/fda-does-u-turn-will-review-modernas-mrna-flu-shot-after-shocking-rejection/] | [TOKENS: 1965] |
About-face FDA reverses surprise rejection of Moderna’s mRNA flu vaccine Trump admin’s vaccine chief overruled FDA scientists to initially reject the shot. Beth Mole – Feb 18, 2026 12:08 pm | 127 Credit: Getty | Congressional Quarterly The Food and Drug Administration has reversed its shocking refusal to consider Moderna’s mRNA flu vaccine for approval. The refusal was revealed last week in a sharply worded press release from Moderna. Subsequent reporting found that the decision was made by political appointee Vinay Prasad, the Trump administration’s top vaccine regulator, who overruled a team of agency scientists and a top career official in rejecting Moderna’s application. In an announcement Wednesday morning, Moderna said the FDA has now agreed to review its vaccine after the company held a formal (Type A) meeting with the FDA and proposed a change to the regulatory pathways used in the application. “We appreciate the FDA’s engagement in a constructive Type A meeting and its agreement to advance our application for review,” Stéphane Bancel, Moderna’s CEO, said in the announcement. “Pending FDA approval, we look forward to making our flu vaccine available later this year so that America’s seniors have access to a new option to protect themselves against flu.” The agency is expected to provide a decision on the vaccine by August 5, 2026. Prasad’s ostensible reason for initially refusing to review the application was based not on Moderna’s vaccine, mRNA-1010, but on the established flu vaccine Moderna used for comparison in its Phase 3 trial. Moderna used licensed standard-dose influenza vaccines, including Fluarix, made by GlaxoSmithKline, in the trial, which involved nearly 41,000 adults aged 50 and older.
In a letter to Moderna dated February 3, Prasad said this choice “does not reflect the best-available standard of care,” and therefore the trial was not “adequate and well-controlled.” Moderna acknowledged that FDA scientists had previously suggested that the company use a recommended high-dose flu vaccine in trial participants 65 and older. But the agency ultimately signed off on the trial design with the uniform standard dose, calling it “acceptable.” Moderna, meanwhile, agreed to add a comparison of a high-dose vaccine to some older participants and provide the FDA with additional analysis.

Anti-vaccine agenda

Agency insiders told reporters that a team of career scientists was ready to review the vaccine and held an hourlong meeting with Prasad to present the reasons for moving forward with the review. David Kaslow, a top career official responsible for reviewing vaccines, also wrote a memo detailing why the review should proceed. Prasad rejected the vaccine application anyway. According to today’s announcement, the FDA reversed that rejection when Moderna proposed splitting the application, seeking full approval for the vaccine’s use in people aged 50 to 64 and an accelerated approval for use in people 65 and up. That latter regulatory pathway means Moderna will have to conduct an additional trial in that age group to confirm its effectiveness after it’s on the market. Andrew Nixon, spokesperson for the US Department of Health and Human Services, confirmed the reversal to Ars Technica. “Discussions with the company led to a revised regulatory approach and an amended application, which FDA accepted,” Nixon said in a statement. “FDA will maintain its high standards during review and potential licensure stages as it does with all products.” The FDA typically takes a levelheaded approach to working with companies, rarely making surprising decisions or rejecting applications outright.
While Prasad claimed the rejection was due to the control vaccine, the move aligns with Health Secretary Robert F. Kennedy Jr.’s broader anti-vaccine agenda. Kennedy and the allies he has installed in federal positions are particularly hostile to mRNA technology. Moderna has already lost more than $700 million in federal contracts to develop pandemic vaccines. Next month, Kennedy’s MAHA Institute is hosting an anti-vaccine event that alleges there’s a “massive epidemic of vaccine injury.” The event description claims without evidence that use of mRNA vaccines is linked to “rising rates of acute and chronic illness.” Vaccine makers and industry investors, meanwhile, are reporting that Kennedy’s relentless anti-vaccine efforts are chilling the entire industry, with companies abandoning research and cutting jobs. In comments to The New York Times, Moderna’s president, Stephen Hoge, said, “There will be less invention, investment, and innovation in vaccines generally, across all the companies.” Beth Mole is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.
In an announcement Wednesday morning, Moderna said the FDA has now agreed to review its vaccine after the company held a formal (Type A) meeting with the FDA and proposed a change to the regulatory pathways used in the application. “We appreciate the FDA’s engagement in a constructive Type A meeting and its agreement to advance our application for review,” Stéphane Bancel, Moderna’s CEO, said in the announcement. “Pending FDA approval, we look forward to making our flu vaccine available later this year so that America’s seniors have access to a new option to protect themselves against flu.” The agency is expected to provide a decision on the vaccine by August 5, 2026. Prasad’s ostensible reason for initially refusing to review the application was based not on Moderna’s vaccine, mRNA-1010, but on the established flu vaccine Moderna used for comparison in its Phase 3 trial. Moderna used licensed standard-dose influenza vaccines, including Fluarix, made by GlaxoSmithKline, in the trial, which involved nearly 41,000 adults aged 50 and older. In a letter to Moderna dated February 3, Prasad said this choice “does not reflect the best-available standard of care,” and therefore the trial was not “adequate and well-controlled.” Moderna acknowledged that FDA scientists had previously suggested that the company use a recommended high-dose flu vaccine in trial participants 65 and older. But the agency ultimately signed off on the trial design with the uniform standard dose, calling it “acceptable.” Moderna, meanwhile, agreed to add a comparison of a high-dose vaccine to some older participants and provide the FDA with additional analysis. Anti-vaccine agenda Agency insiders told reporters that a team of career scientists was ready to review the vaccine and held an hourlong meeting with Prasad to present the reasons for moving forward with the review. 
David Kaslow, a top career official responsible for reviewing vaccines, also wrote a memo detailing why the review should proceed. Prasad rejected the vaccine application anyway. According to today’s announcement, the FDA reversed that rejection when Moderna proposed splitting the application, seeking full approval for the vaccine’s use in people aged 50 to 64 and an accelerated approval for use in people 65 and up. That latter regulatory pathway means Moderna will have to conduct an additional trial in that age group to confirm its effectiveness after it’s on the market. Andrew Nixon, spokesperson for the US Department of Health and Human Services, confirmed the reversal to Ars Technica. “Discussions with the company led to a revised regulatory approach and an amended application, which FDA accepted,” Nixon said in a statement. “FDA will maintain its high standards during review and potential licensure stages as it does with all products.” The FDA typically takes a levelheaded approach to working with companies, rarely making surprising decisions or rejecting applications outright. While Prasad claimed the rejection was due to the control vaccine, the move aligns with Health Secretary Robert F. Kennedy Jr.’s broader anti-vaccine agenda. Kennedy and the allies he has installed in federal positions are particularly hostile to mRNA technology. Moderna has already lost more than $700 million in federal contracts to develop pandemic vaccines. Next month, Kennedy’s MAHA Institute is hosting an anti-vaccine event that alleges there’s a “massive epidemic of vaccine injury.” The event description claims without evidence that use of mRNA vaccines is linked to “rising rates of acute and chronic illness.” Vaccine makers and industry investors, meanwhile, are reporting that Kennedy’s relentless anti-vaccine efforts are chilling the entire industry, with companies abandoning research and cutting jobs. 
In comments to The New York Times, Moderna’s president, Stephen Hoge, said, “There will be less invention, investment, and innovation in vaccines generally, across all the companies.” Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important. |
======================================== |
[SOURCE: https://arstechnica.com/health/2026/02/fda-does-u-turn-will-review-modernas-mrna-flu-shot-after-shocking-rejection/#comments] | [TOKENS: 1965] |
About-face FDA reverses surprise rejection of Moderna’s mRNA flu vaccine Trump admin’s vaccine chief overruled FDA scientists to initially reject the shot. Beth Mole – Feb 18, 2026 12:08 pm Credit: Getty | Congressional Quarterly The Food and Drug Administration has reversed its shocking refusal to consider Moderna’s mRNA flu vaccine for approval. The refusal was revealed last week in a sharply worded press release from Moderna. Subsequent reporting found that the decision was made by political appointee Vinay Prasad, the Trump administration’s top vaccine regulator, who overruled a team of agency scientists and a top career official in rejecting Moderna’s application. In an announcement Wednesday morning, Moderna said the FDA has now agreed to review its vaccine after the company held a formal (Type A) meeting with the FDA and proposed a change to the regulatory pathways used in the application. “We appreciate the FDA’s engagement in a constructive Type A meeting and its agreement to advance our application for review,” Stéphane Bancel, Moderna’s CEO, said in the announcement. “Pending FDA approval, we look forward to making our flu vaccine available later this year so that America’s seniors have access to a new option to protect themselves against flu.” The agency is expected to provide a decision on the vaccine by August 5, 2026. Prasad’s ostensible reason for initially refusing to review the application was based not on Moderna’s vaccine, mRNA-1010, but on the established flu vaccine Moderna used for comparison in its Phase 3 trial. Moderna used licensed standard-dose influenza vaccines, including Fluarix, made by GlaxoSmithKline, in the trial, which involved nearly 41,000 adults aged 50 and older.
In a letter to Moderna dated February 3, Prasad said this choice “does not reflect the best-available standard of care,” and therefore the trial was not “adequate and well-controlled.” Moderna acknowledged that FDA scientists had previously suggested that the company use a recommended high-dose flu vaccine in trial participants 65 and older. But the agency ultimately signed off on the trial design with the uniform standard dose, calling it “acceptable.” Moderna, meanwhile, agreed to add a comparison of a high-dose vaccine to some older participants and provide the FDA with additional analysis. Anti-vaccine agenda Agency insiders told reporters that a team of career scientists was ready to review the vaccine and held an hourlong meeting with Prasad to present the reasons for moving forward with the review. David Kaslow, a top career official responsible for reviewing vaccines, also wrote a memo detailing why the review should proceed. Prasad rejected the vaccine application anyway. According to today’s announcement, the FDA reversed that rejection when Moderna proposed splitting the application, seeking full approval for the vaccine’s use in people aged 50 to 64 and an accelerated approval for use in people 65 and up. That latter regulatory pathway means Moderna will have to conduct an additional trial in that age group to confirm its effectiveness after it’s on the market. Andrew Nixon, spokesperson for the US Department of Health and Human Services, confirmed the reversal to Ars Technica. “Discussions with the company led to a revised regulatory approach and an amended application, which FDA accepted,” Nixon said in a statement. “FDA will maintain its high standards during review and potential licensure stages as it does with all products.” The FDA typically takes a levelheaded approach to working with companies, rarely making surprising decisions or rejecting applications outright. 
While Prasad claimed the rejection was due to the control vaccine, the move aligns with Health Secretary Robert F. Kennedy Jr.’s broader anti-vaccine agenda. Kennedy and the allies he has installed in federal positions are particularly hostile to mRNA technology. Moderna has already lost more than $700 million in federal contracts to develop pandemic vaccines. Next month, Kennedy’s MAHA Institute is hosting an anti-vaccine event that alleges there’s a “massive epidemic of vaccine injury.” The event description claims without evidence that use of mRNA vaccines is linked to “rising rates of acute and chronic illness.” Vaccine makers and industry investors, meanwhile, are reporting that Kennedy’s relentless anti-vaccine efforts are chilling the entire industry, with companies abandoning research and cutting jobs. In comments to The New York Times, Moderna’s president, Stephen Hoge, said, “There will be less invention, investment, and innovation in vaccines generally, across all the companies.” Beth Mole, Senior Health Reporter. Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.
======================================== |
[SOURCE: https://www.bbc.com/audio/categories] | [TOKENS: 49] |
BBC Podcasts, by category: Business, Comedy, History, News, Science and health, Society and culture, Sport, True crime.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Feedforward_neural_network] | [TOKENS: 1525] |
Feedforward neural network

A feedforward neural network is an artificial neural network in which information flows in a single direction: inputs are multiplied by weights to obtain outputs. It contrasts with a recurrent neural network, in which loops allow information from later processing stages to feed back to earlier stages. Feedforward computation is essential for backpropagation, because feedback, in which the outputs feed back into the very same inputs and modify them, forms a loop that backpropagation cannot differentiate through. This nomenclature appears to be a point of confusion between some computer scientists and scientists in other fields studying brain networks.

Mathematical foundations

The two historically common activation functions are both sigmoids, described by

$$y(v_i) = \tanh(v_i) \qquad \text{and} \qquad y(v_i) = (1 + e^{-v_i})^{-1}.$$

The first is the hyperbolic tangent, which ranges from −1 to 1; the second is the logistic function, which is similar in shape but ranges from 0 to 1. Here $y_i$ is the output of the $i$-th node (neuron) and $v_i$ is the weighted sum of its input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models). In recent developments of deep learning, the rectified linear unit (ReLU) is more frequently used as one way to overcome the numerical problems associated with the sigmoids. Learning occurs by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result.
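The activation functions just described can be written down in a few lines. A minimal pure-Python sketch for illustration (the function names are ours, not from the article):

```python
import math

def tanh_act(v):
    """Hyperbolic tangent activation: output ranges over (-1, 1)."""
    return math.tanh(v)

def logistic(v):
    """Logistic sigmoid: same S-shape, but output ranges over (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def relu(v):
    """Rectified linear unit, often preferred in deep networks to
    avoid the saturation problems of the two sigmoids."""
    return max(0.0, v)
```

The two sigmoids are related by rescaling and shifting: tanh(v) = 2·logistic(2v) − 1, which is why they are "similar in shape" but cover different output ranges.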
This is an example of supervised learning, and is carried out through backpropagation. The degree of error at output node $j$ for the $n$-th data point (training example) is

$$e_j(n) = d_j(n) - y_j(n),$$

where $d_j(n)$ is the desired target value at node $j$ for the $n$-th data point, and $y_j(n)$ is the value produced at node $j$ when the $n$-th data point is given as input. The node weights can then be adjusted based on corrections that minimize the error in the entire output for the $n$-th data point, given by

$$\mathcal{E}(n) = \frac{1}{2} \sum_{\text{output node } j} e_j^2(n).$$

Using gradient descent, the change in each weight $w_{ji}$ is

$$\Delta w_{ji}(n) = -\eta \, \frac{\partial \mathcal{E}(n)}{\partial v_j(n)} \, y_i(n),$$

where $y_i(n)$ is the output of the previous neuron $i$, and $\eta$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the expression above, $\partial \mathcal{E}(n) / \partial v_j(n)$ denotes the partial derivative of the error $\mathcal{E}(n)$ with respect to the weighted sum $v_j(n)$ of the input connections of neuron $j$. The derivative to be calculated depends on the induced local field $v_j$, which itself varies.
It is easy to prove that for an output node this derivative simplifies to

$$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = e_j(n)\, \phi'(v_j(n)),$$

where $\phi'$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

$$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = \phi'(v_j(n)) \sum_k \left( -\frac{\partial \mathcal{E}(n)}{\partial v_k(n)} \right) w_{kj}(n).$$

This depends on the deltas of the $k$-th nodes, which represent the output layer. So to change the hidden layer weights, the output layer deltas are propagated backwards through the derivative of the activation function, which is why this algorithm is called backpropagation.

History

If using a threshold, i.e. a step activation function applied to a weighted sum of inputs, the resulting linear threshold unit is called a perceptron. (Often the term is used to denote just one of these units.) Multiple parallel non-linear units are able to approximate any continuous function from a compact interval of the real numbers into the interval [−1, 1], despite the limited computational power of a single unit with a linear threshold function. Perceptrons can be trained by a simple learning algorithm usually called the delta rule. It calculates the errors between calculated output and sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent.
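The forward pass and the two delta rules derived above can be sketched in plain Python. This is a minimal illustration with one hidden layer and logistic activations; the network shape, the helper names, and the learning rate are our own assumptions for the example, not part of the article:

```python
import math

def logistic(v):
    return 1.0 / (1.0 + math.exp(-v))

def logistic_prime(v):
    y = logistic(v)
    return y * (1.0 - y)  # phi'(v) for the logistic activation

def train_step(x, d, W1, W2, eta):
    """One backpropagation step on a two-layer network (one hidden layer).

    W1[j][i] connects input i to hidden node j; W2[k][j] connects hidden
    node j to output node k. Weights are updated in place; returns the
    squared error E(n) = 1/2 * sum_j e_j^2 measured before the update.
    """
    # Forward pass: weighted sums v and activations y at each layer.
    v_hidden = [sum(w * xi for w, xi in zip(row, x)) for row in W1]
    y_hidden = [logistic(v) for v in v_hidden]
    v_out = [sum(w * yj for w, yj in zip(row, y_hidden)) for row in W2]
    y_out = [logistic(v) for v in v_out]

    # Output deltas: -dE/dv_k = e_k * phi'(v_k).
    e = [dk - yk for dk, yk in zip(d, y_out)]
    delta_out = [ek * logistic_prime(vk) for ek, vk in zip(e, v_out)]

    # Hidden deltas: -dE/dv_j = phi'(v_j) * sum_k delta_k * w_kj.
    delta_hidden = [
        logistic_prime(v_hidden[j])
        * sum(delta_out[k] * W2[k][j] for k in range(len(W2)))
        for j in range(len(W1))
    ]

    # Gradient-descent updates: delta_w_ji = eta * delta_j * y_i.
    for k in range(len(W2)):
        for j in range(len(y_hidden)):
            W2[k][j] += eta * delta_out[k] * y_hidden[j]
    for j in range(len(W1)):
        for i in range(len(x)):
            W1[j][i] += eta * delta_hidden[j] * x[i]

    return 0.5 * sum(ek * ek for ek in e)
```

Repeating this step over many examples is exactly the supervised loop the article describes; real implementations vectorize the same arithmetic as matrix operations.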
A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network, consisting of fully connected neurons (hence the synonym sometimes used of fully connected network (FCN)), often with a nonlinear activation function, organized in at least three layers, and notable for being able to distinguish data that is not linearly separable.

Other feedforward networks

Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Markus_Persson#cite_note-GSint-9] | [TOKENS: 3525] |
Markus Persson Markus Alexej Persson (/ˈpɪərsən/ PEER-sən, Swedish: [ˈmǎrːkɵs ˈpæ̌ːʂɔn]; born 1 June 1979), known by the pseudonym Notch, is a Swedish video game programmer and designer. He is the creator of Minecraft, the best-selling video game in history. He founded the video game development company Mojang Studios in 2009. Persson began developing video games at an early age. His commercial success began after he published an early version of Minecraft in 2009. Prior to the game's official retail release in 2011, it had sold over four million copies. After this point Persson stood down as the lead designer and transferred his creative authority to Jens Bergensten. In September 2014 Persson announced his intention to leave Mojang, and in November of that year the company was sold to Microsoft reportedly for US$2.5 billion, which made him a billionaire. Since 2016 several of Persson's posts on Twitter regarding feminism, race, and transgender rights have caused public controversies. He has been described as "an increasingly polarizing figure, tweeting offensive statements regarding race, the LGBTQ community, gender, and other topics." In an effort to distance itself from Persson, Microsoft removed mentions of his name from Minecraft (excluding one instance in the game's end credits) and did not invite him to the game's tenth anniversary celebration. In 2015 he co-founded a separate game studio called Rubberbrain, which was relaunched in 2024 as Bitshift Entertainment. Early life Markus Alexej Persson was born in Stockholm, Sweden, to a Finnish mother, Ritva, and a Swedish father, Birger, on 1 June 1979. He has one sister. He grew up in Edsbyn until he was seven years old, when his family moved back to Stockholm. In Edsbyn, Persson's father worked for the railroad, and his mother was a nurse. He spent much time outdoors in Edsbyn, exploring the woods with his friends.
When Persson was about seven years old, his parents divorced, and he and his sister lived with their mother. His father moved to a cabin in the countryside. Persson said in an interview that they experienced food insecurity around once a month. Persson lost contact with his father for several years after the divorce. According to Persson, his father suffered from depression, bipolar disorder, alcoholism, and medication abuse, and went to jail for robberies. While his father had somewhat recovered during Persson's early life, he relapsed, contributing to the divorce. His sister also experimented with drugs and ran away from home. He gained an interest in video games at an early age. His father was "a really big nerd", who built his own modem and taught Persson to use the family's Commodore 128. On it, Persson played bootleg games and loaded in various type-in programs from computer magazines with the help of his sister. The first game he purchased with his own money was The Bard's Tale. He began programming on his father's Commodore 128 home computer at the age of seven. He produced his first game at the age of eight, a text-based adventure game. By 1994 Persson knew he wanted to become a video game developer, but his teachers advised him to study graphic design, which he did from ages 15 to 18. Persson, although introverted, was well-liked by his peers, but after entering secondary school was a "loner" and reportedly had only one friend. He spent most of his spare time with games and programming at home. He managed to reverse-engineer the Doom engine, which he continued to take great pride in as of 2014. He never finished high school, but was reportedly a good student.
Persson left the project in late 2007. As Persson wanted to reuse the name "Mojang", Jansson agreed to rename the company to Onetoofree AB. Between 2004 and 2009 Persson worked as a game developer for Midasplayer (later known as King). There, he worked as a programmer, mostly building browser games made in Flash. He later worked as a programmer for jAlbum. Prior to creating Minecraft, Persson developed multiple small games. He also entered a number of game design competitions and participated in discussions on the TIGSource forums, a web forum for independent game developers. One of Persson's more notable personal projects was called RubyDung, an isometric three-dimensional base-building game like RollerCoaster Tycoon and Dwarf Fortress. While working on RubyDung, Persson experimented with a first-person view mode similar to that found in Dungeon Keeper. However, he felt the graphics were too pixelated and omitted this mode. In 2009 Persson found inspiration in Infiniminer, a block-based open-ended mining game. Infiniminer heavily influenced Persson's subsequent work on RubyDung, convincing him to restore the first-person mode and to embrace the "blocky" visual style and block-building fundamentals. RubyDung is the earliest known Minecraft prototype created by Persson. On 17 May 2009 Persson released the original edition (later called "Classic version") of Minecraft on the TIGSource forums. He regularly updated the game based on feedback from TIGSource users. Persson released several new versions of Minecraft throughout 2009 and 2010, going through several phases of development including Survival Test, Indev, and Infdev. On 30 June 2010 Persson released the game's Alpha version. While working on the pre-Alpha version of Minecraft, Persson continued working at jAlbum. In 2010, after the release and subsequent success of Minecraft's Alpha version, Persson moved from a full-time role to a part-time role at jAlbum. He left jAlbum later that same year.
In September 2010 Persson travelled to Valve Corporation's headquarters in Bellevue, Washington, United States, where he took part in a programming exercise and met Gabe Newell. Persson was subsequently offered a job at Valve, which he turned down in order to continue work on Minecraft. On 20 December 2010 Minecraft moved into its beta phase and began expanding to other platforms, including mobile. In January 2011 Minecraft reached one million registered accounts. Six months afterwards, it reached ten million. The game had sold over four million copies by 7 November 2011. Mojang held the first Minecon from 18 to 19 November 2011 to celebrate its full release, and subsequently made it an annual event. Following this, on 11 December 2011, Persson transferred creative control of Minecraft to Jens Bergensten and began working on another game title, 0x10c, although he reportedly abandoned the project around 2013. In 2013 Mojang recorded revenues of $330 million and profits of $129 million. Persson has stated that, due to the intense media attention and public pressure, he became exhausted with running Minecraft and Mojang. In a September 2014 blog post he shared his realization that he "didn't have the connection to my fans I thought I had", that he had "become a symbol", and that he did not wish to be responsible for Mojang's increasingly large operation. In June 2014 Persson tweeted "Anyone want to buy my share of Mojang so I can move on with my life? Getting hate for trying to do the right thing is not my gig", reportedly partly as a joke. Persson controlled a 71% stake in Mojang at the time. The offer attracted significant interest from Activision Blizzard, EA, and Microsoft. Forbes later reported that Microsoft wanted to purchase the game as a "tax dodge" to turn their taxable excess liquid cash into other assets. In September 2014 Microsoft agreed to purchase Mojang for $2.5 billion, making Persson a billionaire.
He then left the company after the deal was finalised in November. Since leaving Mojang, Persson has worked on several small projects. On 23 June 2014 he founded a company with Jakob Porsér called Rubberbrain AB; the company had released no games by 2021, despite spending SEK 60 million. The company was relaunched as Bitshift Entertainment, LLC on 28 March 2024. Persson expressed interest in creating a new video game studio in 2020, and in developing virtual reality games. He has also since created a series of narrative-driven immersive events called ".party()", which uses extensive visual effects and has been hosted in multiple cities. At the beginning of 2025 Persson decided to create a spiritual successor to Minecraft, referred to as "Minecraft 2", in response to the results of a poll on X. However, after speaking to his team, he soon reversed course in favour of developing the other choice on his Twitter poll, a roguelike titled Levers and Chests. Games Persson's most popular creation is the survival sandbox game Minecraft, which was first made publicly available on 17 May 2009 and fully released on 18 November 2011. Persson left his job as a game developer to work on Minecraft full-time until completion. In early 2011, Mojang AB sold the one millionth copy of the game, the two millionth several months later, and the three millionth several months after that. Mojang hired several new staff members for the Minecraft team, while Persson passed the lead developer role to Jens Bergensten. He stopped working on Minecraft after a deal with Microsoft to sell Mojang for $2.5 billion. This brought his net worth to US$1.5 billion. Persson and Jakob Porsér came up with the idea for Scrolls, including elements from board games and collectible card games. Persson noted that he would not be actively involved in development of the game and that Porsér would develop it.
Persson revealed on his Tumblr blog on 5 August 2011 that he was being sued by a Swedish law firm representing Bethesda Softworks over the trademarked name of Scrolls, claiming that it conflicted with their The Elder Scrolls series of games. On 17 August 2011 Persson challenged Bethesda to a Quake 3 tournament to decide the outcome of the naming dispute. On 27 September 2011 Persson confirmed that the lawsuit was going to court. ZeniMax Media, owner of Bethesda Softworks, announced the lawsuit's settlement in March 2012. The settlement allowed Mojang to continue using the Scrolls trademark. In 2018, Scrolls was made available free of charge and renamed to Caller's Bane. Cliffhorse is a humorous game programmed in two hours using the Unity game engine and free assets. The game took inspiration from Skyrim's physics engine, "the more embarrassing minimum-effort Greenlight games", Goat Simulator, and Big Rigs: Over the Road Racing. The game was released to Microsoft Windows systems as an early access and honourware game on the first day of E3 2014, instructing users to donate Dogecoin to "buy" the game before downloading it. The game accumulated over 280,000 dogecoins. Following the end to his involvement with Minecraft, Persson began pre-production of an alternate reality space game set in the distant future in March 2012. On April Fools' Day Mojang launched a satirical website for Mars Effect (parody of Mass Effect), citing the lawsuit with Bethesda as an inspiration. However, the gameplay elements remained true and on 4 April, Mojang revealed 0x10c (pronounced "Ten to the C") as a space sandbox title. Persson officially halted game production in August 2013. However, C418, the composer of the game's soundtrack (as well as that of Minecraft), released an album of the work he had made for the game. In 2013, Persson made a free game called Shambles in the Unity game engine. Persson has also participated in several Ludum Dare 48-hour game making competitions. 
Personal life In 2011 Persson married Elin Zetterstrand, whom he had dated for four years. Zetterstrand was a former moderator on the Minecraft forums. They had a daughter together, but by mid-2012 he was seeing little of her. On 15 August 2012 he announced that he and his wife had filed for divorce. The divorce was finalised later that year. On 14 December 2011 Persson's father committed suicide with a handgun after drinking heavily. In an interview with The New Yorker, Persson said of his father: When I decided I wanted to quit my day job and work on my own games, he was the only person who supported my decision. He was proud of me and made sure I knew. When I added the monsters to Minecraft, he told me that the dark caves became too scary for him. But I think that was the only true criticism I ever heard from him. Persson later admitted that he himself suffered from depression and pronounced swings in his mood. Persson has criticised the stance of large game companies on piracy. He once stated that "piracy is not theft", viewing unauthorised downloads as potential future customers. In 2011 Persson said he was a member of the Pirate Party of Sweden. He is also a member of Mensa. He has donated to numerous charities, including Médecins Sans Frontières (Doctors Without Borders). Under his direction, Mojang spent a week developing Catacomb Snatch for the Humble Indie Bundle and raised US$458,248 for charity. He also donated $250,000 to the Electronic Frontier Foundation in 2012. In 2011 he gave $3 million in dividends back to Mojang employees. According to Forbes, his net worth in 2023 was around $1.2 billion. In 2014 Persson was one of the biggest taxpayers in Sweden. Around 2014, he lived in a multi-level penthouse in Östermalm, Stockholm, an area he described as "where the rich people live".
In December 2014 Persson purchased a home in Trousdale Estates, a neighbourhood in Beverly Hills, California, in the United States, for $70 million, a record sales price for Beverly Hills at the time. Persson reportedly outbid Beyoncé and Jay-Z for the property. Persson began receiving criticism for political and social opinions he expressed on social media as early as 2016. In 2017, he proposed a heterosexual pride holiday, and wrote that those who opposed the idea "deserve to be shot." After facing backlash, he deleted the tweets and rescinded his statements, writing, "So yeah, it's about pride of daring to express, not about pride of being who you are. I get it now." Later in the year, he wrote that feminism is a "social disease" and called the video game developer and feminist Zoë Quinn a "cunt", although he was generally critical of the GamerGate movement. He has described intersectional feminism as a "framework for bigotry" and the use of the word mansplaining as being sexist. Also in 2017, Persson tweeted that "It's okay to be white". Later that year, he stated that he believed in the Pizzagate conspiracy theory. In 2019, he tweeted referencing QAnon, saying "Q is legit. Don't trust the media." Later in 2019, he tweeted in response to a pro-transgender internet meme that, "You are absolutely evil if you want to encourage delusion. What happened to not stigmatizing mental illness?" He then also promoted claims that people were fined for "using the wrong pronoun". However, after facing backlash, he tweeted a day afterwards that he had "no idea what [being trans is] like of course, but it's inspiring as hell when people open up and choose to actually be who they know themselves as. Not because it's a cool choice, because it's a big step. I gues [sic] that's actually cool nvm".
Later that year, Microsoft removed two mentions of Persson's name in the "19w13a" snapshot of Minecraft and did not invite him to the 10-year anniversary celebration of the game. A spokesperson for Microsoft stated that his views "do not reflect those of Microsoft or Mojang". He is still mentioned in the End Poem ("a flat, infinite world created by a man called Markus").
======================================== |
[SOURCE: https://arstechnica.com/gadgets/2026/02/googles-pixel-10a-arrives-on-march-5-for-499-with-specs-and-design-of-yesteryear/#comments] | [TOKENS: 2597] |
Stay the course Google's Pixel 10a arrives on March 5 for $499 with specs and design of yesteryear Google's new budget phone is here, but don't expect a big upgrade. Ryan Whitwam – Feb 18, 2026 10:00 am | 113 Credit: Google It's that time of year—a new budget Pixel phone is about to hit virtual shelves. The Pixel 10a will be available on March 5, and pre-orders go live today. The 9a will still be on sale for a while, but the 10a will be headlining Google's store. However, you might not notice unless you keep up with the Pixel numbering scheme. This year's A-series Pixel is virtually identical to last year's, both inside and out. Last year's Pixel 9a was a notable departure from the older design language, but Google made few changes for 2026. We liked that the Pixel 9a emphasized battery capacity and moved to a flat camera bump, and this time, it's really flat. Google says the camera now sits totally flush with the back panel. This is probably the only change you'll be able to identify visually.
Specs at a glance: Google Pixel 9a vs. Pixel 10a
SoC: Google Tensor G4 (both)
Memory: 8GB (both)
Storage: 128GB or 256GB (both)
Display: 1080×2424 6.3″ pOLED, 60–120 Hz (both); Gorilla Glass 3, 2,700 nits peak (9a) vs. Gorilla Glass 7i, 3,000 nits peak (10a)
Cameras: 48 MP primary (f/1.7, OIS), 13 MP ultrawide (f/2.2), 13 MP selfie (f/2.2) (both)
Software: Android 15 at launch (9a) vs. Android 16 (10a); 7 years of OS updates for both
Battery: 5,100 mAh (both); 23 W wired, 7.5 W wireless charging (9a) vs. 30 W wired, 10 W wireless (10a)
Connectivity: Wi-Fi 6e, NFC, sub-6 GHz 5G, USB-C 3.2 (both); Bluetooth 5.3 (9a) vs. Bluetooth 6.0 (10a)
Measurements: 154.7×73.3×8.9 mm, 185 g (9a) vs. 153.9×73×9 mm, 183 g (10a)
Google also says the new Pixel will have a slightly upgraded screen. The resolution, size, and refresh rate are unchanged, but peak brightness has been bumped from 2,700 nits to 3,000 nits (the same as the base model Pixel 10). Plus, the cover glass has finally moved beyond Gorilla Glass 3 to Gorilla Glass 7i, which supposedly has improved scratch and drop protection. Credit: Google Google notes that more of the phone is constructed from recycled material, 100 percent for the aluminum frame and 81 percent for the plastic back. There's also recycled gold, tungsten, cobalt, and copper inside, amounting to about 36 percent of the phone's weight. The phone also continues to have a physical SIM slot, which was removed from the Pixel 10 series last year. The device's USB-C 3.2 port can also charge slightly faster than the 9a's (30 W versus 23 W), and wireless charging has gone from 7.5 W to 10 W. There are no Qi2 magnets inside, though. Internally, the Pixel 10a is even more like its predecessor. Unlike past A-series phones, this one doesn't have the latest Tensor chip—it's sticking with the same Tensor G4 from the 9a.
That's a bummer, as the G5 was a bigger leap than most of Google's chip upgrades. The company says it stuck with the G4 to "balance affordability and performance." In fairness, Google has managed to keep the price steady at $499. With components like RAM and storage in short supply this year, prices could rise for many device refreshes. Why the sidegrade? Google's position is that the Pixel 10a still offers a good value despite the middling upgrades compared to last year's phone. It keeps the Pixel camera experience (which is admittedly great) available at a lower price than the flagship phones. While you get better results with the more expensive Pixels, the 9a was still one of the best mobile photography options in 2025. With identical camera hardware in 2026, we expect the 10a to be the same. But why not make the Pixel 10a a bigger upgrade? Making a better phone theoretically means you sell more of them, right? There are a few possibilities. By slowing the improvement of the A-series Pixels, Google is making the Pixel 10 look like a better option for buyers. Upgrading the processor and adding features like PixelSnap could make the 10a too appealing compared to the $800 Pixel 10—Google's A-series phones have been regularly recommended over the flagships due to the lower price and similar capabilities. The Pixel 10's camera is also less capable than the Pixel 9 was for that generation, making it that much more similar to the A-series. Component prices are also a concern for 2026. While smartphone development cycles can easily range from 18 to 24 months, Google may have decided late in the game to stick with the Tensor G4 in this phone to offset the higher cost of storage and memory. Google has stressed that it wanted to keep the A-series at its traditional $499 price. As a major player in AI, an industry that is vacuuming up all those parts, Google may have had insight into the coming shortage before most.
Credit: Google Whatever the reason, the Pixel 10a is looking like a very modest upgrade. If you still think it's the right phone for you, Google will take your money starting today. The device is at the Google Store for $499 with 128GB of storage, and some carriers should begin offering the phone soon. The 256GB version runs $100 more. The Pixel 10a is available in Lavender, Fog, Obsidian, and the new Berry color. Ryan Whitwam, Senior Technology Reporter: Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he's written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.
======================================== |
[SOURCE: https://arstechnica.com/ars-technica-posting-guidelines-v3-0/] | [TOKENS: 1032] |
Orbiting HQ Ars Technica Posting Guidelines (v3.0) Ars Technica's forum and news comment posting guidelines, last updated July 16, 2025. The Ars OpenForum and article discussion threads are moderated by a group of Ars staff and volunteer moderators who make complex judgment calls for the benefit of the community as a whole. These Posting Guidelines form the core of this effort, establishing a set of rules to foster open, frank, and diverse discussions. These Posting Guidelines work in concert with the Conde Nast User Agreement and the Ars Technica Addendum (protecting your rights to your own posted content). Most infractions will come with a warning. Stacking warnings or ignoring moderation directives will result in longer temporary bans and ultimately a permanent ban. Particularly egregious violations may result in the omission of the warning process altogether, and accounts without an established posting history may be banned outright at our discretion. Our goal at all times is to promote the well-being of our community and maintain conversations that are respectful and productive. The Posting Guidelines Ad hominem or personal attacks against other community members (including Ars staff) are prohibited. No posting content that is hateful, violent, or that victimizes, degrades, defiles or disparages any group based on race, gender, gender identity, religion, national origin, disability, sexual orientation, or age, or otherwise engage in what we deem to be racism, sexism, ageism, religious intolerance, bigotry, ethnic slurs, or homophobia. No trolling. In general, this means avoiding posts that are inflammatory and intended to rile people up, but it will always be a matter of moderator judgment. No pornographic, sexually offensive, sexually explicit, or objectifying material is allowed. Respect the privacy of others.
Do not post others' private phone numbers, addresses, pictures, etc., without permission. Abuse of editing privileges is not permitted. Users may not delete/edit content to evade possible moderation (removing flames, trolls, etc.). Do not edit quotes in your own posts to change what people said. Each forum member is limited to one account. Do not use Ars to spread misinformation. Moderators are not the argument police, but promoting unscientific or false narratives about things like vaccinations, water fluoridation, or climate change may lead to bans, possibly permanent. Pay attention to moderation directives, and refrain from arguing moderation in-thread. No armchair moderating. Start a thread in the Feedback forum if you wish to debate a matter of moderation. Even with all of this spelled out, there is still plenty of gray area. Moderators will always use their best judgment in moderation. How moderation works The moderators have the following tools: Moderation directives in threads. Directives might be to stop going off-topic, to refrain from hostility, or to cease some other ongoing concern that has not yet warranted outright moderation. Issuing private warnings that are logged to your account. Warnings can be permanent or set to expire at a specific time. Accumulating multiple warnings will result in increased ban lengths. Thread Eject can be used to eject someone from a specific thread, making it impossible for them to continue contributing to said thread. This does not constitute a warning and will not be recorded on your permanent record. Ejections can be temporary or permanent. Temporary Bans can be issued for periods ranging from a few hours to as long as a month. Permanent Bans can be issued, which effectively bar someone from future participation in the community.
======================================== |
[SOURCE: https://arstechnica.com/science/2026/02/what-the-chinese-art-of-tian-tsui-has-to-do-with-kingfishers/] | [TOKENS: 2625] |
birds of a feather X-rays reveal kingfisher feather structure in unprecedented detail Synchrotron radiation imaging revealed a porous, almost sponge-like nanostructure that creates bright hues Jennifer Ouellette – Feb 18, 2026 9:24 am | 18 Credit: Jim Bendon/CC BY-SA 2.0 In Qing dynasty China, artisans augmented decorative pieces by incorporating iridescent kingfisher feathers—a technique known as tian-tsui. Scientists at Northwestern University's Center for Scientific Studies in the Arts have used high-energy X-ray imaging to achieve unprecedented nanoscale resolution of the unique structure of those feathers, presenting their findings at the annual meeting of the American Association for the Advancement of Science. As previously reported, nature is the ultimate nanofabricator. The bright iridescent colors in butterfly wings, soap bubbles, opals, or beetle shells don't come from any pigment molecules but from how they are structured—naturally occurring photonic crystals. In nature, scales of chitin (a polysaccharide common to insects), for example, are arranged like roof tiles. Essentially, they form a diffraction grating, except photonic crystals only produce specific colors, or wavelengths, of light, while a diffraction grating will produce the entire spectrum, much like a prism. In the case of kingfisher feathers, the color is due to the microscopic ridges that cover the parallel rows of keratin strands that grow along the central shaft. Also known as photonic band-gap materials, photonic crystals are "tunable," which means they are precisely ordered to block certain wavelengths of light while letting others through. Alter the structure by changing the size of the tiles, and the crystals become sensitive to a different wavelength.
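As a rough illustration of that tuning (a textbook approximation, not a calculation from the study), the wavelength a periodic structure reflects most strongly scales with its repeat spacing, in the spirit of the Bragg condition:

\[
\lambda_{\text{peak}} \approx 2 n d
\]

where \(d\) is the spacing of the repeating units and \(n\) is the effective refractive index. For keratin (\(n \approx 1.5\)), a spacing of about 160 nm would put the peak near \(2 \times 1.5 \times 160\,\text{nm} \approx 480\,\text{nm}\), in the blue; enlarging the spacing shifts the reflected color toward the red.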
They are used in optical communications as waveguides and switches, as well as in filters, lasers, mirrors, and various anti-reflection stealth devices. The 19th century poet Gerard Manley Hopkins paid homage to the kingfisher's brilliant plumage in his poem "As Kingfishers Catch Fire," but Chinese poets and artists were extolling their praises long before that. Tian-tsui ("dotting with kingfishers") is a prime example of how much the feathers were valued. The feathers were cut and glued onto gilt silver and used as inlays for things like fans, hairpins, screens, and panels, or headdresses—carefully oriented in intricate patterns to enhance the dazzling hues. The feathers were so popular, in fact, that kingfisher populations were declared endangered following the Chinese Communist Revolution. The last tian-tsui studio closed in 1933. A spongy nanostructure [Image: Scanning a Qing dynasty screen with X-ray fluorescence spectroscopy. Northwestern University] [Image: China Cap, Qing dynasty (1644–1912), 18th–19th century. Gold wire, kingfisher feathers, amber, coral, jadeite, ivory, glass and silk. Courtesy of The Art Institute of Chicago] [Image: A scanning electron microscopy image of kingfisher feathers reveals the semi-ordered nanostructure. Maria Kokkori/Northwestern University] [Image: By increasing the magnification of the scanning electron microscopy image, researchers discovered a nanoscale, spongy architecture. Maria Kokkori/Northwestern University] The Northwestern team started looking at kingfisher feathers in tian-tsui objects via postdoc Madeline Meier, who has a background in chemistry and nanostructures and was interested in combining that expertise with studies of cultural heritage. The first step was to identify the bird species whose feathers were used in Qing dynasty screens and panels, as well as other materials used. Researchers carefully scraped away the topmost layers and imaged the feathers with scanning electron microscopy to get a better look at the underlying nanostructure. Hyperspectral imaging revealed how different areas of the screens absorbed and reflected light. The team also made use of the center's partnership with Chicago's Field Museum, comparing the screen feathers with the museum's vast collection of taxidermied bird species. The screens and panels contained feathers from common kingfishers and black-capped kingfishers, as well as mallard ducks (used to add green hues). Finally, X-ray fluorescence and Fourier-transform infrared spectroscopy enabled them to create a map of the various chemicals used in the gilding, pigments, glues, and other materials. Most recently, the lab has partnered with Argonne National Laboratory and used synchrotron radiation to get an ever-better look at the nanostructure of kingfisher feathers.
Synchrotron radiation differs from conventional X-rays in that it's a thin beam of very high-intensity X-rays generated within a particle accelerator. Electrons are fired into a linear accelerator (linac), get a speed boost in a small synchrotron, and are injected into a storage ring, where they zoom along at near-light speed. A series of magnets bends and focuses the electrons, and in the process, they give off X-rays, which can then be focused down beam lines. That makes it ideal for noninvasive imaging, since, in general, the shorter the wavelength used (and the higher the light's energy), the finer the details one can image and/or analyze. It has become a popular technique for imaging fragile archaeological artifacts without damaging them—like Qing dynasty headdresses with inlays of kingfisher feathers. In this case, the imaging revealed that the feathers' microscopic ridges have an underlying semi-ordered, porous, sponge-like shape that reflects and scatters light, thereby giving the feathers their gloriously brilliant hues. "Long admired in Chinese poetry and art, kingfisher feathers have amazing optical properties," co-author Maria Kokkori said. "Our discoveries not only enhance our understanding of historical materials but also reshape how we think about artistic and scientific innovation, and the future of sustainable materials." Jennifer Ouellette, Senior Writer: Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
======================================== |
[SOURCE: https://arstechnica.com/rss-feeds/] | [TOKENS: 570] |
Orbiting HQ RSS Feeds

Really Simple Syndication (RSS) is a popular XML-based format designed for powerful content distribution. RSS allows you to easily keep track of news and happenings at your favorite RSS-savvy sites, and Ars Technica offers RSS feeds for all of its content. Just visit a feed and you’ll be presented with several subscription options, or use the link provided to add our RSS feed to your favorite feed reader.

Main Feeds
All News: Every article from every section of the site
Ars Features: All our long-form feature articles

Section Feeds
Technology Lab: Information Technology
Gear & Gadgets: Product News & Reviews
Law & Disorder: Civilization & Discontents
Infinite Loop: The Apple Ecosystem
Opposable Thumbs: Gaming & Entertainment
The Scientific Method: Science & Exploration
Cars Technica: All Things Automotive
Staff Blogs: From the Minds of Ars
Ars Cardboard: Board Games News & Reviews

Subscriber Feeds
Logged-in subscribers can find their customized feeds here: https://arstechnica.com/civis/pages/full-text-rss-feeds/.

Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important. |
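Because RSS is plain XML, any XML parser can read a feed. Here is a minimal sketch using only the Python standard library, parsing an inline sample feed; the sample titles and links are made up for illustration, and a real feed reader would fetch one of the feed URLs above over HTTP instead:

```python
# Parse a minimal RSS 2.0 document and list its items.
import xml.etree.ElementTree as ET

# A tiny hand-written sample feed (hypothetical content, for demonstration).
SAMPLE_FEED = """<rss version="2.0">
  <channel>
    <title>Ars Technica - All content</title>
    <item><title>First headline</title><link>https://example.com/1</link></item>
    <item><title>Second headline</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

root = ET.fromstring(SAMPLE_FEED)
# Each <item> in an RSS channel is one article entry.
for item in root.iter("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

Swapping the inline string for the bytes returned by an HTTP request to a real feed URL is all a basic reader needs.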
======================================== |
[SOURCE: https://arstechnica.com/science/2026/02/what-the-chinese-art-of-tian-tsui-has-to-do-with-kingfishers/] | [TOKENS: 2625] |
birds of a feather

X-rays reveal kingfisher feather structure in unprecedented detail

Synchrotron radiation imaging revealed a porous, almost sponge-like nanostructure that creates bright hues

Jennifer Ouellette – Feb 18, 2026 9:24 am | 18

Credit: Jim Bendon/CC BY-SA 2.0

In Qing dynasty China, artisans augmented decorative pieces by incorporating iridescent kingfisher feathers—a technique known as tian-tsui. Scientists at Northwestern University’s Center for Scientific Studies in the Arts have used high-energy X-ray imaging to achieve unprecedented nanoscale resolution of the unique structure of those feathers, presenting their findings at the annual meeting of the American Association for the Advancement of Science.

As previously reported, nature is the ultimate nanofabricator. The bright iridescent colors in butterfly wings, soap bubbles, opals, or beetle shells don’t come from any pigment molecules but from how they are structured—naturally occurring photonic crystals. In nature, scales of chitin (a polysaccharide common to insects), for example, are arranged like roof tiles. Essentially, they form a diffraction grating, except photonic crystals only produce specific colors, or wavelengths, of light, while a diffraction grating will produce the entire spectrum, much like a prism. In the case of kingfisher feathers, the color is due to the microscopic ridges that cover the parallel rows of keratin strands that grow along the central shaft.

Also known as photonic band-gap materials, photonic crystals are “tunable,” which means they are precisely ordered to block certain wavelengths of light while letting others through. Alter the structure by changing the size of the tiles, and the crystals become sensitive to a different wavelength.
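That tunability can be sketched with a toy model. This is purely illustrative and not the researchers' analysis: for a periodic structure of period d, the first-order Bragg condition puts the peak reflected wavelength at roughly 2 × n_eff × d, where n_eff is an effective refractive index of the composite. The spacings and the n_eff value below are hypothetical round numbers:

```python
# Toy model of structural color: first-order Bragg reflection from a
# periodic nanostructure. Illustrative only; spacing and n_eff values
# are hypothetical, not measurements from the study.

def peak_wavelength_nm(spacing_nm: float, n_eff: float = 1.25) -> float:
    """Peak reflected wavelength (nm) for a structure of period spacing_nm."""
    return 2 * n_eff * spacing_nm

# Changing the size of the structural "tiles" shifts the reflected color:
for d in (180, 200, 230):  # hypothetical periods in nanometers
    print(f"period {d} nm -> peak ~{peak_wavelength_nm(d):.0f} nm")
```

The point of the sketch is the scaling: grow or shrink the structure and the reflected wavelength moves with it, which is exactly the "tuning" described above.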
Photonic crystals are used in optical communications as waveguides and switches, as well as in filters, lasers, mirrors, and various anti-reflection stealth devices.

The 19th-century poet Gerard Manley Hopkins paid homage to the kingfisher’s brilliant plumage in his poem “As Kingfishers Catch Fire,” but Chinese poets and artists were singing the birds’ praises long before that. Tian-tsui (“dotting with kingfishers”) is a prime example of how much the feathers were valued. The feathers were cut and glued onto gilt silver and used as inlays for things like fans, hairpins, screens, panels, and headdresses—carefully oriented in intricate patterns to enhance the dazzling hues. The feathers were so popular, in fact, that kingfisher populations were declared endangered following the Chinese Communist Revolution. The last tian-tsui studio closed in 1933.

A spongy nanostructure

Scanning a Qing dynasty screen with X-ray fluorescence spectroscopy. Credit: Northwestern University
China Cap, Qing dynasty (1644–1912), 18th–19th century. Gold wire, kingfisher feathers, amber, coral, jadeite, ivory, glass, and silk. Courtesy of The Art Institute of Chicago
A scanning electron microscopy image of kingfisher feathers reveals the semi-ordered nanostructure. Credit: Maria Kokkori/Northwestern University
By increasing the magnification of the scanning electron microscopy image, researchers discovered a nanoscale, spongy architecture. Credit: Maria Kokkori/Northwestern University

The Northwestern team started looking at kingfisher feathers in tian-tsui objects via postdoc Madeline Meier, who has a background in chemistry and nanostructures and was interested in combining that expertise with studies of cultural heritage. The first step was to identify the bird species whose feathers were used in Qing dynasty screens and panels, as well as the other materials used. Researchers carefully scraped away the topmost layers and imaged the feathers with scanning electron microscopy to get a better look at the underlying nanostructure. Hyperspectral imaging revealed how different areas of the screens absorbed and reflected light. The team also made use of the center’s partnership with Chicago’s Field Museum, comparing the screen feathers with the museum’s vast collection of taxidermied bird species. The screens and panels contained feathers from common kingfishers and black-capped kingfishers, as well as mallard ducks (used to add green hues). Finally, X-ray fluorescence and Fourier-transform infrared spectroscopy enabled them to create a map of the various chemicals used in the gilding, pigments, glues, and other materials.

Most recently, the lab has partnered with Argonne National Laboratory and used synchrotron radiation to get an even better look at the nanostructure of kingfisher feathers.
Synchrotron radiation differs from conventional X-rays in that it’s a thin beam of very high-intensity X-rays generated within a particle accelerator. Electrons are fired into a linear accelerator (linac), get a speed boost in a small synchrotron, and are injected into a storage ring, where they zoom along at near-light speed. A series of magnets bends and focuses the electrons, and in the process, they give off X-rays, which can then be focused down beam lines. That makes it ideal for noninvasive imaging, since, in general, the shorter the wavelength used (and the higher the light’s energy), the finer the details one can image and/or analyze. It has become a popular technique for imaging fragile archaeological artifacts without damaging them—like Qing dynasty headdresses with inlays of kingfisher feathers.

In this case, the imaging revealed that the feathers’ microscopic ridges have an underlying semi-ordered, porous, sponge-like structure that reflects and scatters light, thereby giving the feathers their gloriously brilliant hues. “Long admired in Chinese poetry and art, kingfisher feathers have amazing optical properties,” co-author Maria Kokkori said. “Our discoveries not only enhance our understanding of historical materials but also reshape how we think about artistic and scientific innovation, and the future of sustainable materials.”

Jennifer Ouellette, Senior Writer

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
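The shorter-wavelength/finer-detail tradeoff follows directly from the relation λ = hc / E: a photon's wavelength is inversely proportional to its energy. A quick back-of-the-envelope check (the specific photon energies here are illustrative, not figures from the Argonne beamline):

```python
# Photon wavelength from photon energy, lambda = h*c / E.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def wavelength_nm(energy_ev: float) -> float:
    """Photon wavelength in nanometers for a photon energy in electron-volts."""
    return HC_EV_NM / energy_ev

# A few electron-volts is visible light; tens of thousands is a hard X-ray.
print(f"2.5 eV (green light): {wavelength_nm(2.5):.1f} nm")    # ~496 nm
print(f"10 keV (hard X-ray): {wavelength_nm(10_000):.4f} nm")  # ~0.124 nm
```

A 10 keV X-ray photon's wavelength is thousands of times shorter than visible light's, comparable to atomic spacings, which is why it can resolve nanostructure that optical microscopy cannot.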
Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important. |
======================================== |
[SOURCE: https://arstechnica.com/about-us/] | [TOKENS: 2447] |
Orbiting HQ About Us

Serving the Technologist since 1998. News, reviews, and analysis.

Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

And thanks to its readership, Ars Technica also accomplished a number of industry-leading moves. In 2001, Ars launched a digital subscription service when such things were nonexistent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long-form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).
The Ars editorial team didn’t fret over journalistic innovation, however. Ars fused opinion, analysis, and straight-laced reporting into an editorial product long before commercial “blogs” arrived on the scene and claimed to reinvent journalism by doing the same. The company pushed the ideals of transparency and community before these were buzzwords. It is these ideals that have kept the company growing since its birth, and readers can expect more of the same in the future. Ars Technica was founded in Cambridge, Massachusetts. Amongst those joining Ars Technica in its infancy was Jon Stokes, co-founder and renowned CPU Editor for Ars Technica’s first 12 years (Jon served also as Deputy Editor from 2008-2011). Eric Bangeman, co-founder and Managing Editor, joined the site during its earliest years and remains in the thick of the Ars Technica newsroom. Acquired in 2008 by Advance, the parent company of Condé Nast, Ars Technica has offices in Boston, New York, Chicago, and San Francisco. Today, Ars Technica operates as Condé Nast’s only 100% digitally native editorial publication. The Ars Technica Ethos Ars longa, vita brevis, occasio praeceps, experimentum periculosum, iudicium difficile. —Hippocrates When Hippocrates said that “life is short, art is long,” he did not mean that art outlives the artist. The “father of medicine” instead diagnosed a basic fact of life: true art or skill takes a lifetime of effort to perfect, and the path is fraught with “occasional crises, perilous experiences, and difficult judgments.” Technology is the “art” at the forefront of our changing world, and we’re here to chronicle that story and even help with the difficult judgments. At Ars Technica—the name is Latin-derived for the “art of technology”—we specialize in news and reviews, analysis of technology trends, and expert advice on topics ranging from the most fundamental aspects of technology to the many ways technology is helping us discover our world. 
We work for the reader who not only needs to keep up on technology, but is passionate about it. We at Ars take great pride in our unique combination of technical savvy and wide-ranging interest in the human arts and sciences. Our editorial team is at home on Linux, Mac, and Windows; they know both the home and the enterprise; they understand law and politics; and they specialize in bringing readers the right answer, the first time. It’s no wonder that Ars has become a “go-to” destination for those who need to sift the wheat from the chaff. Ars Technica is also unique in a number of ways. We are a proud leader in conversational media, a new and exciting answer to the reader’s need and desire for fresh voices, informed reporting, and reader engagement. Ars writers aren’t afraid of wit or strongly-held opinions, and readers find both on display throughout our work. But at Ars, “opinion” never devolves into dogma; we strive for measured judgments and carefully relayed contexts. Those who come to Ars looking for computing religion won’t find it, and that’s why millions of readers trust our take on the day’s tech news and look forward to our original reporting. Then there’s our formidable community. While “community” has lately become a Web buzzword, Ars has been building a real online community since its founding in 1998. We encourage reader feedback and participation in conversation via discussion on every article, as well as in the renowned Ars OpenForum—one of the Internet’s true treasure troves, and one of the largest, documented community databases of tips, technical help, and camaraderie on the planet. It was once said that sine scientia ars nihil est, that is, “without knowledge, art is nothing.” We agree, but there’s also a corollary: sine Ars, scientia nihil est. Welcome to Ars Technica! Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important.
======================================== |
[SOURCE: https://arstechnica.com/amendment-to-conde-nast-user-agreement-privacy-policy/] | [TOKENS: 1067] |
Amendment to Conde Nast User Agreement & Privacy Policy Applicable Only to Use of ArsTechnica.com For ArsTechnica.com only, Section VI(2)(B) of the Conde Nast User Agreement is deleted in its entirety and replaced with the following: Except as expressly provided otherwise in the Agreement, you or the owner of any Content you post, upload, transmit, send or otherwise make available on or through the Service retains ownership of all rights, title, and interests in such Content. However, by posting, uploading, transmitting, sending or otherwise making available Content, registering for the Service, entering a sweepstakes or contest, or engaging in any other form of communication with us (on or through the Service or otherwise) you irrevocably grant us a royalty-free, perpetual, non-exclusive, unrestricted, worldwide right and license to copy, reproduce, modify, edit, crop, alter, revise, adapt, translate, enhance, reformat, remix, rearrange, resize, create derivative works of, move, remove, delete, erase, reverse-engineer, store, cache, aggregate, publish, post, display, distribute, broadcast, perform, transmit, rent, sell, share, sublicense, syndicate, or otherwise provide to others, use, or change all such Content and communications, in any medium (now in existence or hereinafter developed) and for any purpose on or in connection with the Service, or the promotion thereof, including commercial purposes, and to authorize others to do so. 
Among other things, this means that we may use any ideas, suggestions, developments, and/or inventions that you post, upload, transmit, send or otherwise make available in any manner as we see fit on or in connection with the Service, or the promotion thereof without any compensation or attribution to you. In any event, you should make copies of or otherwise back-up any and all Content, personal data or communications you post, upload, transmit, send or otherwise make available on or through the Service that you may wish to retain. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Deep_learning] | [TOKENS: 10401] |
Deep learning In machine learning, deep learning focuses on utilizing multilayered neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and revolves around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be supervised, semi-supervised or unsupervised. Some common deep learning network architectures include fully connected networks, deep belief networks, recurrent neural networks, convolutional neural networks, generative adversarial networks, transformers, and neural radiance fields. These architectures have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance. Early forms of neural networks were inspired by information processing and distributed communication nodes in biological systems, particularly the human brain. However, current neural networks do not intend to model the brain function of organisms, and are generally seen as low-quality models for that purpose. Overview Most modern deep learning models are based on multi-layered neural networks such as convolutional neural networks and transformers, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines. Fundamentally, deep learning refers to a class of machine learning algorithms in which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation. 
For example, in an image recognition model, the raw input may be an image (represented as a tensor of pixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place at which level on its own. Prior to deep learning, machine learning techniques often involved hand-crafted feature engineering to transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted and the model discovers useful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction. The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function. Beyond that, more layers do not add to the function approximator ability of the network. 
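The layered transformation and CAP-depth accounting described above can be sketched in a few lines of numpy. This is a purely illustrative toy (random weights, no training, hypothetical sizes): two hidden layers plus the parameterized output layer give a feedforward CAP depth of three.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One representational layer: affine map followed by a nonlinearity.
    return np.tanh(x @ w + b)

x = rng.normal(size=(4, 8))                        # batch of 4 inputs, 8 raw features each

w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)    # hidden layer 1
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)   # hidden layer 2
w3, b3 = rng.normal(size=(16, 3)), np.zeros(3)     # output layer (also parameterized)

h1 = layer(x, w1, b1)       # first, low-level representation (e.g. "edges")
h2 = layer(h1, w2, b2)      # more abstract composition of the first layer
out = h2 @ w3 + b3          # class scores; CAP depth = 2 hidden layers + 1

print(out.shape)
```

Each intermediate array is one level of the hierarchy; stacking more `layer` calls increases the CAP depth without changing the pattern.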
Deep models (CAP > two) are able to extract better features than shallow models and hence extra layers help in learning the features effectively. Deep learning architectures can be constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features improve performance. Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data is more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are deep belief networks. The term deep learning was introduced to the machine learning community by Rina Dechter in 1986, and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons, although the history of the term's appearance is apparently more complicated. Interpretations Deep neural networks are generally interpreted in terms of the universal approximation theorem or probabilistic inference. The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions. In 1989, the first proof was published by George Cybenko for sigmoid activation functions and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik. Recent work showed that universal approximation also holds for non-bounded activation functions such as Kunihiko Fukushima's rectified linear unit. The universal approximation theorem for deep neural networks concerns the capacity of networks with bounded width where the depth is allowed to grow. Lu et al. proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller than or equal to the input dimension, then a deep neural network is not a universal approximator. 
The probabilistic interpretation derives from the field of machine learning. It features inference, as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function. The probabilistic interpretation led to the introduction of dropout as a regularizer in neural networks. The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop. History There are two types of artificial neural network (ANN): the feedforward neural network (FNN) or multilayer perceptron (MLP) and recurrent neural networks (RNN). RNNs have cycles in their connectivity structure; FNNs don't. In the 1920s, Wilhelm Lenz and Ernst Ising created the Ising model, which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive. His learning RNN was republished by John Hopfield in 1982. Other early recurrent neural networks were published by Kaoru Nakano in 1971. Already in 1948, Alan Turing produced work on "Intelligent Machinery" that was not published in his lifetime, containing "ideas related to artificial evolution and learning RNNs". Frank Rosenblatt (1958) proposed the perceptron, an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" where the last two layers have learned weights (here he credits H. D. Block and B. W. Knight) (section 16). The book cites an earlier network by R. D. Joseph (1960) "functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times). 
Should Joseph therefore be considered the originator of proper adaptive multilayer perceptrons with learning hidden units? Unfortunately, the learning algorithm was not a functional one, and fell into oblivion. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in 1965. They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron to handle more complex, nonlinear, and hierarchical relationships. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer-by-layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates". The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tunings have made end-to-end stochastic gradient descent the currently dominant training technique. In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning. Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation. Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. 
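The rectifier mentioned above is simple to state: it passes positive inputs through unchanged and clamps negative inputs to zero. A minimal numpy sketch:

```python
import numpy as np

def relu(x):
    # Rectified linear unit: max(0, x), applied elementwise.
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 3.0])))  # [0. 0. 0. 3.]
```

Because its derivative is exactly 1 for positive inputs (rather than saturating like sigmoid or tanh), ReLU passes gradients through active units undiminished, which is one reason it became the default choice for deep networks.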
The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. The modern form of backpropagation was first published in Seppo Linnainmaa's master thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work. The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition. In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images. Recurrent neural networks (RNN) were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward layer. Consequently, they have similar properties and issues, and their developments had mutual influences. In RNN, two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study problems in cognitive psychology. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. 
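The backpropagation idea sketched above, the chain rule applied layer by layer through a network of differentiable nodes, can be shown end to end on a toy two-layer regression. All sizes, the target function, and the learning rate here are illustrative choices, not from the source:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: learn y = 2x on a handful of points.
x = rng.normal(size=(16, 1))
y = 2.0 * x

w1 = rng.normal(size=(1, 8)) * 0.5   # input -> hidden weights
w2 = rng.normal(size=(8, 1)) * 0.5   # hidden -> output weights
lr = 0.1
losses = []

for step in range(200):
    # Forward pass.
    h = np.tanh(x @ w1)
    pred = h @ w2
    losses.append(np.mean((pred - y) ** 2))

    # Backward pass: chain rule, from the loss back toward the input.
    d_pred = 2.0 * (pred - y) / len(x)     # dL/dpred
    d_w2 = h.T @ d_pred                    # dL/dw2
    d_h = d_pred @ w2.T                    # dL/dh
    d_w1 = x.T @ (d_h * (1.0 - h ** 2))    # tanh'(z) = 1 - tanh(z)^2

    # Gradient descent update.
    w1 -= lr * d_w1
    w2 -= lr * d_w2

print(losses[0], losses[-1])  # loss shrinks over training
```

Each backward line is one application of the chain rule; frameworks automate exactly this bookkeeping, but nothing beyond it.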
To overcome this problem, in 1991, Jürgen Schmidhuber proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning where each RNN tries to predict its own next input, which is the next unexpected input of the RNN below. This "neural history compressor" uses predictive coding to learn internal representations at multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN by distilling a higher level chunker network into a lower level automatizer network. In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. The "P" in ChatGPT refers to such pre-training. Sepp Hochreiter's diploma thesis (1991) implemented the neural history compressor, and identified and analyzed the vanishing gradient problem. Hochreiter proposed recurrent residual connections to solve the vanishing gradient problem. This led to the long short-term memory (LSTM), published in 1995. LSTM can learn "very deep learning" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture; the modern form, which added a "forget gate" in 1999, became the standard RNN architecture. In 1991, Jürgen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called "artificial curiosity". In 2014, this principle was used in generative adversarial networks (GANs). 
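The vanishing gradient problem and the residual-connection remedy mentioned above can be illustrated with a scalar toy. The weight value and depth below are arbitrary illustrative choices: a backpropagated gradient through a plain chain of saturating layers is a product of factors smaller than one, while a skip connection adds an identity path whose factor is at least one.

```python
import numpy as np

rng = np.random.default_rng(2)
depth = 50

grad_plain = 1.0
grad_residual = 1.0
for _ in range(depth):
    z = rng.normal()                  # pre-activation at this layer
    local = 1.0 - np.tanh(z) ** 2     # tanh derivative, in (0, 1]
    w = 0.8                           # a fixed weight with |w| < 1
    grad_plain *= w * local           # shrinks multiplicatively with depth
    grad_residual *= 1.0 + w * local  # identity path keeps the factor >= 1

print(grad_plain, grad_residual)      # plain gradient is vanishingly small
```

With 50 layers the plain gradient is bounded above by 0.8^50 (about 1e-5) before the tanh derivatives shrink it further, which is exactly the long-credit-assignment-path failure Hochreiter analyzed; the residual path never falls below 1.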
During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, etc., including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models. However, they were more computationally expensive than backpropagation. The Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986. A 1988 network became state of the art in protein structure prediction, an early application of deep learning to bioinformatics. Both shallow and deep learning (e.g., recurrent nets) of ANNs for speech recognition have been explored for many years. These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. Key difficulties have been analyzed, including gradient diminishing and weak temporal correlation structure in neural predictive models. Additional difficulties were the lack of training data and limited computing power. Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI researched speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 NIST Speaker Recognition benchmark. It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning. 
The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linear filter-bank features in the late 1990s, showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results. Neural networks entered a lull, and simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks. In 2003, LSTM became competitive with traditional speech recognizers on certain tasks. In 2006, Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it with connectionist temporal classification (CTC) in stacks of LSTMs. In 2009, it became the first RNN to win a pattern recognition contest, in connected handwriting recognition. In 2006, in publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero, and Teh, deep belief networks were developed for generative modeling. They are trained by training one restricted Boltzmann machine, then freezing it and training another one on top of the first one, and so on, then optionally fine-tuned using supervised backpropagation. They could model high-dimensional probability distributions, such as the distribution of MNIST images, but convergence was slow. The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun. Industrial applications of deep learning to large-scale speech recognition started around 2010. 
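The greedy layer-wise recipe described above, train one restricted Boltzmann machine, freeze it, then train the next one on its hidden activations, can be sketched with a heavily simplified one-step contrastive divergence (CD-1) update. This is an illustrative toy, not the full algorithm: no bias terms, no sampling (probabilities are used directly), full-batch updates, and arbitrary layer sizes.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, steps=200, lr=0.05):
    # Simplified CD-1: positive phase uses the data, negative phase uses
    # a one-step reconstruction; their difference drives the update.
    n_visible = data.shape[1]
    w = rng.normal(size=(n_visible, n_hidden)) * 0.1
    for _ in range(steps):
        h0 = sigmoid(data @ w)          # hidden probabilities (positive phase)
        v1 = sigmoid(h0 @ w.T)          # one-step reconstruction
        h1 = sigmoid(v1 @ w)            # hidden probabilities (negative phase)
        w += lr * (data.T @ h0 - v1.T @ h1) / len(data)
    return w

# Greedy stacking: train the first RBM, freeze it, train the next
# on the frozen layer's hidden activations.
v = (rng.random(size=(64, 20)) > 0.5).astype(float)  # toy binary data
w1 = train_rbm(v, 12)
h = sigmoid(v @ w1)                     # features from the frozen first layer
w2 = train_rbm(h, 6)
print(w1.shape, w2.shape)
```

The full deep belief network procedure adds biases, stochastic sampling of hidden states, and an optional supervised backpropagation fine-tuning pass over the whole stack; the skeleton above only shows the layer-by-layer structure.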
The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and by the possibility that, given more capable hardware and large-scale data sets, deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation, when using DNNs with large, context-dependent output layers, produced error rates dramatically lower than the then-state-of-the-art Gaussian mixture model (GMM)/hidden Markov model (HMM) systems, and also lower than more advanced generative-model-based systems. The nature of the recognition errors produced by the two types of systems was characteristically different, offering technical insights into how to integrate deep learning into the existing highly efficient run-time speech decoding systems deployed by all major speech recognition vendors. Analysis around 2009–2010, contrasting the GMM (and other generative speech models) with DNN models, stimulated early industrial investment in deep learning for speech recognition. That analysis was done with comparable performance (a difference of less than 1.5% in error rate) between discriminative DNNs and generative models. In 2010, researchers extended deep learning from TIMIT to large-vocabulary speech recognition by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees. The deep learning revolution started around CNN- and GPU-based computer vision. Although CNNs trained by backpropagation had been around for decades, and GPU implementations of NNs, including CNNs, for years, faster implementations of CNNs on GPUs were needed to progress on computer vision.
Later, as deep learning became widespread, specialized hardware and algorithm optimizations were developed specifically for it. A key enabler of the deep learning revolution was progress in hardware, especially GPUs. Some early work dated back to 2004. In 2009, Raina, Madhavan, and Andrew Ng reported a 100M-parameter deep belief network trained on 30 Nvidia GeForce GTX 280 GPUs, an early demonstration of GPU-based deep learning. They reported up to 70 times faster training. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPUs improved performance significantly. In 2012, Andrew Ng and Jeff Dean created an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken from YouTube videos. In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inception-v3. The success in image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs. In 2014, the state of the art was training "very deep neural networks" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the highway network, published in May 2015, and the residual neural network (ResNet), published in December 2015. ResNet behaves like an open-gated highway network. Around the same time, deep learning started impacting the field of art.
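The relationship between the two 2015 techniques can be made concrete. Below is a toy numpy sketch (the weights are random stand-ins, not trained values): a highway block mixes a transformed signal with the unchanged input through a learned gate, while a residual block simply adds the transformation to the input, which is roughly a highway block whose identity path is held open.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 8
W_h = rng.normal(0, 0.1, (d, d))   # transform weights (hypothetical values)
W_t = rng.normal(0, 0.1, (d, d))   # gate weights
b_t = np.full(d, -2.0)             # bias the gate toward carrying the input

def highway_block(x):
    h = relu(x @ W_h)              # candidate transformation H(x)
    t = sigmoid(x @ W_t + b_t)     # transform gate T(x)
    return t * h + (1.0 - t) * x   # gated mix of transform and identity

def residual_block(x):
    # ResNet drops the gate entirely: identity plus a learned residual.
    return x + relu(x @ W_h)

x = rng.normal(size=(4, d))
y_hw = highway_block(x)
y_res = residual_block(x)
```

Because the identity path passes gradients through unchanged, both designs avoid the degradation problem that plagued plain 20-to-30-layer stacks.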
Early examples included Google DeepDream (2015) and neural style transfer (2015), both of which were based on pretrained image classification networks such as VGG-19. The generative adversarial network (GAN) (Ian Goodfellow et al., 2014), based on Jürgen Schmidhuber's principle of artificial curiosity, became state of the art in generative modeling during the 2014–2018 period. Excellent image quality was achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022). In 2015, Google's speech recognition improved by 49% through an LSTM-based model, which the company made available through Google Voice Search on smartphones. Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks, have steadily improved. Convolutional neural networks were superseded for ASR by LSTMs, but remained more successful in computer vision. Yoshua Bengio, Geoffrey Hinton, and Yann LeCun were awarded the 2018 Turing Award for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".

Neural networks

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming.
For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming. An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal they send downstream. Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times. The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation: passing information in the reverse direction and adjusting the network to reflect that information. Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. As of 2017, neural networks typically had a few thousand to a few million units and millions of connections.
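A layered forward pass as just described fits in a few lines of numpy. The layer sizes and random weights below are illustrative only; each layer multiplies its input by a weight matrix, applies a squashing function, and passes the signal downstream.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Three layers: 4 inputs -> 5 hidden units -> 2 outputs.
# Weights and biases are the learnable connection strengths.
W1, b1 = rng.normal(0, 0.5, (4, 5)), np.zeros(5)
W2, b2 = rng.normal(0, 0.5, (5, 2)), np.zeros(2)

def forward(x):
    hidden = sigmoid(x @ W1 + b1)       # each layer transforms its input...
    output = sigmoid(hidden @ W2 + b2)  # ...and signals the next layer
    return output

activations = forward(rng.normal(size=(3, 4)))
```

The sigmoid keeps every unit's state between 0 and 1, matching the convention mentioned above; training would adjust W1, W2, b1, and b2.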
Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces or playing "Go"). A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers. There are different types of neural networks, but they always consist of the same components: neurons, synapses, weights, biases, and functions. These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm.[citation needed] For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex DNNs have many layers, hence the name "deep" networks. DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives. The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network. For instance, it was proved that sparse multivariate polynomials are exponentially easier to approximate with DNNs than with shallow networks. Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures unless they have been evaluated on the same data sets. DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back.
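The dog-breed example above can be sketched concretely. The breed names and scores below are invented; the point is how a softmax final layer turns raw scores into probabilities, which are then filtered by a user-chosen threshold.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical final-layer scores for four dog breeds.
breeds = ["beagle", "poodle", "husky", "corgi"]
logits = np.array([2.0, 0.5, 1.5, -1.0])
probs = softmax(logits)

# Only surface labels whose probability clears a user-chosen threshold.
threshold = 0.2
proposed = [b for b, p in zip(breeds, probs) if p > threshold]
```

With these made-up scores, two breeds clear the 0.2 threshold; lowering the threshold would surface more candidate labels for the user to review.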
At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network does not accurately recognize a particular pattern, an algorithm adjusts the weights. In this way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data. Recurrent neural networks, in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use. Convolutional neural networks (CNNs) are used in computer vision. CNNs have also been applied to acoustic modeling for automatic speech recognition (ASR). As with ANNs, many issues can arise with naively trained DNNs. Two common issues are overfitting and computation time. DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data. Regularization methods such as Ivakhnenko's unit pruning, weight decay (ℓ₂ regularization), or sparsity (ℓ₁ regularization) can be applied during training to combat overfitting. Alternatively, dropout regularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies. Another interesting recent development is research into models of just enough complexity through an estimation of the intrinsic complexity of the task being modelled. This approach has been successfully applied to multivariate time series prediction tasks such as traffic prediction. Finally, data can be augmented via methods such as cropping and rotating, so that smaller training sets can be increased in size to reduce the chances of overfitting.
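The two regularizers named above are simple to state in code. This is a hedged numpy sketch with arbitrary stand-in tensors and hyperparameters: weight decay adds a λW term to the gradient update, shrinking weights toward zero, while "inverted" dropout zeroes random hidden units and rescales the survivors so the expected activation is unchanged.

```python
import numpy as np

rng = np.random.default_rng(3)

W = rng.normal(0, 1.0, (6, 4))
grad = rng.normal(0, 1.0, (6, 4))   # gradient from some loss (stand-in values)
lr, lam = 0.1, 0.01

# L2 weight decay: shrink every weight toward zero on each update.
W_decayed = W - lr * (grad + lam * W)

# Dropout: randomly silence hidden units during training, scaling
# the survivors by 1/keep so the expected activation is unchanged.
h = rng.normal(0, 1.0, (5, 6))      # hidden-layer activations
keep = 0.8
mask = (rng.random(h.shape) < keep) / keep
h_dropped = h * mask
```

At test time dropout is switched off and the full network is used; the 1/keep rescaling during training is what makes that consistent.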
DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), the learning rate, and initial weights. Sweeping through the parameter space for optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such as batching (computing the gradient on several training examples at once rather than on individual examples), speed up computation. The large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such architectures for matrix and vector computations. Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It requires no learning rates or randomized initial weights. The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.

Hardware

Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI. OpenAI estimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months. Special electronic circuits called deep learning processors were designed to speed up deep learning algorithms.
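The batching trick mentioned above and the hardware speedups share one root cause: a loop over individual examples can be recast as a single matrix computation, which is exactly the workload GPUs and other many-core architectures execute well. A small numpy check of the equivalence, using a least-squares gradient as a stand-in loss:

```python
import numpy as np

rng = np.random.default_rng(4)

X = rng.normal(size=(32, 3))        # a mini-batch of 32 training examples
y = rng.normal(size=32)
w = rng.normal(size=3)

# Per-example: one gradient at a time (slow in practice).
g_loop = np.zeros(3)
for xi, yi in zip(X, y):
    g_loop += 2 * (xi @ w - yi) * xi
g_loop /= len(X)

# Batched: one matrix product over the whole mini-batch at once.
g_batch = 2 * X.T @ (X @ w - y) / len(X)
```

Both paths compute the same mean gradient; the batched form simply expresses it as dense linear algebra, the primitive that specialized hardware accelerates.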
Deep learning processors include neural processing units (NPUs) in Huawei cellphones and cloud computing servers such as tensor processing units (TPU) in the Google Cloud Platform. Cerebras Systems has also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2). Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs). In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing. The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds. Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.

Applications

Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates is competitive with traditional speech recognizers on certain tasks. The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT. The data set contains 630 speakers from eight major dialects of American English, where each speaker reads 10 sentences. Its small size lets many configurations be tried.
More importantly, the TIMIT task concerns phone-sequence recognition, which, unlike word-sequence recognition, allows weak phone bigram language models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. Error rates, including these early results and measured as percent phone error rates (PER), have been summarized since 1991. The debut of DNNs for speaker recognition in the late 1990s, for speech recognition around 2009–2011, and of LSTM around 2003–2007 accelerated progress in eight major areas. More recent speech recognition models use Transformers or temporal convolutional networks with significant success and widespread applications. All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products) are based on deep learning. A common evaluation set for image classification is the MNIST database data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available. Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 in recognition of traffic signs, and in 2014 with recognition of human faces. Deep learning-trained vehicles now interpret 360° camera views. Another example is Facial Dysmorphology Novel Analysis (FDNA), used to analyze cases of human malformation connected to a large database of genetic syndromes. Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks.
Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling. Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN. Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing. Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition, named-entity recognition (token classification), text classification, and others. Recent developments generalize word embedding to sentence embedding. Google Translate (GT) uses a large end-to-end long short-term memory (LSTM) network. Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system "learns from millions of examples". It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages. The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations". GT uses English as an intermediate between most language pairs. A large percentage of candidate drugs fail to win regulatory approval.
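The idea of word embeddings as points in a shared vector space can be illustrated with a toy table. The vectors below are invented, not trained word2vec output; the point is that related words sit near each other under a similarity measure such as cosine similarity.

```python
import numpy as np

# A toy embedding table: each word is a point in a 3-dimensional vector
# space (real embeddings use hundreds of dimensions and are learned).
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.85, 0.75, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine of the angle between two embedding vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sim_royal = cosine(emb["king"], emb["queen"])
sim_fruit = cosine(emb["king"], emb["apple"])
```

Feeding such vectors into an RNN input layer, as the text describes, gives the network a geometry in which "king" and "queen" are already close before any sentence-level training happens.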
These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects. Research has explored the use of deep learning to predict the biomolecular targets, off-targets, and toxic effects of environmental chemicals in nutrients, household products, and drugs. AtomNet is a deep learning system for structure-based rational drug design. AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis. In 2017, graph neural networks were used for the first time to predict various properties of molecules in a large toxicology data set. In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice. Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks. An autoencoder ANN was used in bioinformatics to predict gene ontology annotations and gene-function relationships. In medical informatics, deep learning was used to predict sleep quality based on data from wearables and to predict health complications from electronic health record data. Deep neural networks have shown unparalleled performance in predicting protein structure from the sequence of the amino acids that make it up. In 2020, AlphaFold, a deep-learning-based system, achieved a level of accuracy significantly higher than all previous computational methods. Deep neural networks can be used to estimate the entropy of a stochastic process through an arrangement called a Neural Joint Entropy Estimator (NJEE).
Such an estimation provides insights into the effects of input random variables on an independent random variable. Practically, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of random variable Y, given input X. For example, in image classification tasks, the NJEE maps a vector of pixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, such that the conditions for the universal approximation theorem hold. This method has been shown to provide a strongly consistent estimator and to outperform other methods in cases of large alphabet sizes. Deep learning has been shown to produce competitive results in medical applications such as cancer cell classification, lesion detection, organ segmentation, and image enhancement. Modern deep learning tools demonstrate high accuracy in detecting various diseases, and their use by specialists can improve diagnostic efficiency. Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection. Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization.
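A heavily simplified sketch of the entropy-estimation idea: a well-calibrated classifier's average cross-entropy log-loss over samples of Y approximates the entropy of Y. In the toy below, no network is trained; the empirical class frequencies stand in for the trained softmax head, and the alphabet has only three symbols.

```python
import numpy as np

rng = np.random.default_rng(5)

# Samples of a discrete random variable Y with an unknown distribution.
p_true = np.array([0.5, 0.3, 0.2])
y = rng.choice(3, size=20000, p=p_true)

# Stand-in for the trained classifier: predict the empirical frequencies.
p_hat = np.bincount(y, minlength=3) / len(y)

# Average log-loss of those predictions over the samples.
cross_entropy = -np.mean(np.log(p_hat[y]))

# Ground truth for comparison: H(Y) = -sum p log p.
true_entropy = -np.sum(p_true * np.log(p_true))
```

With enough samples the two numbers agree closely; the full NJEE construction replaces the frequency table with a DNN softmax head, which is what makes the approach scale to large alphabets and conditional settings.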
These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration", which trains on an image dataset, and Deep Image Prior, which trains on the image that needs restoration. Deep learning is being successfully applied to financial fraud detection, tax evasion detection, and anti-money laundering. In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that they had developed an AI system known as GNoME. This system has contributed to materials science by discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. The data of newly discovered materials is publicly available through the Materials Project database, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in materials science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds. The United States Department of Defense applied deep learning to train robots in new tasks through observation. Physics-informed neural networks have been used to solve partial differential equations in both forward and inverse problems in a data-driven manner. One example is reconstructing fluid flow governed by the Navier–Stokes equations.
Using physics-informed neural networks does not require the often expensive mesh generation that conventional CFD methods rely on. Geometric and physical constraints have a synergistic effect on neural PDE surrogates, enhancing their efficacy in predicting stable and very long rollouts. The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs). This method is particularly useful for solving high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. In addition, the integration of physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture. This ensures that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems. Image reconstruction is the reconstruction of the underlying images from image-related measurements.
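The physics-informed idea, penalizing violations of the governing equation alongside boundary terms, can be shown in miniature. In this toy no network is trained; two candidate functions stand in for the learned surrogate. The loss for the ODE u'(x) = cos(x) with u(0) = 0, whose exact solution is u = sin, is near zero for the true solution and large for a candidate that ignores the physics.

```python
import numpy as np

# Collocation points where the governing equation is enforced.
xs = np.linspace(0.0, np.pi, 200)

def pinn_loss(u, eps=1e-5):
    # Residual of the governing equation u'(x) = cos(x), with the
    # derivative taken by a central finite difference.
    du = (u(xs + eps) - u(xs - eps)) / (2 * eps)
    residual = np.mean((du - np.cos(xs)) ** 2)
    # Penalty enforcing the boundary condition u(0) = 0.
    boundary = u(np.array([0.0]))[0] ** 2
    return residual + boundary

loss_exact = pinn_loss(np.sin)       # the true solution
loss_wrong = pinn_loss(lambda x: x)  # a candidate that ignores the physics
```

In an actual PINN, the candidate function is a neural network, the derivative comes from automatic differentiation rather than finite differences, and this same loss is minimized by gradient descent over the network's weights.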
Several works have shown that deep learning methods outperform analytical methods for various applications, e.g., spectral imaging and ultrasound imaging. Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep-learning-based model, trained on a long history of weather data, that predicts how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level, in under a minute, with precision similar to state-of-the-art systems. An epigenetic clock is a biochemical test that can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using more than 6,000 blood samples. The clock uses information from 1,000 CpG sites and predicts that people with certain conditions (IBD, frontotemporal dementia, ovarian cancer, obesity) are older than healthy controls. The aging clock was planned to be released for public use in 2021 by Deep Longevity, an Insilico Medicine spinoff company.

Relation to human cognitive and brain development

Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s. These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support self-organization somewhat analogous to that in the neural networks utilized in deep learning models. Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input) to other layers.
This process yields a self-organizing stack of transducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ... different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature". A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism. Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality. In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex. Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons and neural populations. Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system both at the single-unit and at the population levels.

Commercial activity

Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them. Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015, they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player. Google Translate uses a neural network to translate between more than 100 languages.
In 2017, Covariant.ai was launched, focusing on integrating deep learning into factories. Around 2008, researchers at The University of Texas at Austin (UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor. Building on TAMER, a new algorithm called Deep TAMER was introduced in 2018 during a collaboration between the U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation. Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job".

Criticism and comment

Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science. A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence, is less clear.[citation needed] (E.g., does it converge? If so, how fast? What is it approximating?) Deep learning methods are often looked at as a black box, with most confirmations done empirically rather than theoretically.
In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layer) neural networks, attempting to discern within essentially random data the images on which they were trained, demonstrates a visual appeal: the original research notice received well over 1,000 comments and was the subject of what was for a time the most frequently accessed article on The Guardian's website. With the support of Innovation Diffusion Theory (IDT), a study analyzed the diffusion of deep learning in BRICS and OECD countries using data from Google Trends.

Some deep learning architectures display problematic behaviors, such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014) and misclassifying minuscule perturbations of correctly classified images (2013). Goertzel hypothesized that these behaviors are due to limitations in their internal representations, and that these limitations would inhibit integration into heterogeneous multi-component artificial general intelligence (AGI) architectures. These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar decompositions of observed entities and events. Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules, and is a basic goal of both human language acquisition and artificial intelligence (AI).

As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception. By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize.
For example, an attacker can make subtle changes to an image such that the ANN finds a match even though, to a human, the image looks nothing like the search target. Such manipulation is termed an "adversarial attack". In 2016, researchers used one ANN to doctor images in trial-and-error fashion, identify another ANN's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images, then photographed, successfully tricked an image classification system.

One defense is reverse image search, in which a possible fake image is submitted to a site such as TinEye that can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.

Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017, researchers added stickers to stop signs and caused an ANN to misclassify them.

ANNs can, however, be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the kind that already defines the malware defense industry. ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target. In 2016, another group demonstrated that certain sounds could make the Google Now voice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)". In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.
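The perturbation trick behind these attacks can be illustrated numerically. The sketch below is in the spirit of the fast gradient sign method, using a made-up linear score w·x as a stand-in for a real network's input gradient; every name and number here is illustrative, not drawn from any deployed system:

```python
import numpy as np

# Stand-in for a trained model: a linear score s(x) = w . x, where w also
# plays the role of the gradient of the score with respect to the input.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # "gradient" of the model's score
x = rng.normal(size=100)   # an input the model currently scores as intended
eps = 0.05                 # per-feature perturbation budget

# Nudge each feature by at most eps, in the direction that lowers the score.
x_adv = x - eps * np.sign(w)

# The input barely changes, but the score drops by exactly eps * sum(|w|).
per_feature_change = np.max(np.abs(x_adv - x))   # <= eps
score_drop = w @ x - w @ x_adv                   # = eps * np.abs(w).sum()
```

Because each feature moves by at most eps, the doctored input can stay visually indistinguishable from the original while the score shifts substantially; real attacks apply the same idea using the network's actual input gradient rather than a fixed w.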
The deep learning systems that are trained using supervised learning often rely on data that is created or annotated by humans, or both. It has been argued that not only low-paid clickwork (such as on Amazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of human microwork that are often not recognized as such. The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork to generate training data: (1) gamification (the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g. CAPTCHAs for image recognition or click-tracking on Google search results pages), (3) exploitation of social motivations (e.g. tagging faces on Facebook to obtain labeled facial images), (4) information mining (e.g. by leveraging quantified-self devices such as activity trackers) and (5) clickwork. |
======================================== |
[SOURCE: https://arstechnica.com/general-faq/] | [TOKENS: 1117] |
Orbiting HQ General FAQ

Welcome to our FAQ. Below are some of the more common questions we get asked, and their answers. Feel free to contact us if your answer is not addressed below, but we do ask that you check the FAQ first, to ensure the best service possible.

Do you accept news tips / suggestions? We love tips, we really do. Steak tips, news tips, you name it. Hit our contact page to let us know what you’d like to share. If you want credit for passing along a tip, be sure to let us know.

Where can I get help with my subscription? We’re glad you asked. Check out our subscriber support page for more information!

How do I contact the OpenForum moderators? Information on contacting our moderation team is available here. Please note that we receive many moderation requests, often stemming from the same incident or post. We do not always respond to reports, but we do investigate all of them.

I sent Ars an email, and never heard back. Are you ignoring me? The spam filtering on the Ars mail server is set to reject anything vaguely suspicious. Kill the HTML and eliminate any words frequently associated with porn, drugs, or lottery winnings, and your mail has a better chance to get through. That said, we collectively receive thousands of emails every day, and simply cannot respond to all of them. We read your messages, we really do. If you are looking for interaction, we do recommend that you use the discussion threads for articles and reports, and that you turn to the OpenForum for general technology questions.

How can I correct the record? a.k.a., You made an error! If you think you’ve spotted an error of syntax, style, or fact, please let us know using the contact form and selecting “Corrections.” Generally speaking, posting in the comments is not a reliable way to notify our editors of critical problems.

Why was my comment edited or erased? While the OpenForum is a wild and fun place, we take the discussion threads of our articles quite seriously. To keep them on topic, we routinely remove comments that point out errors that we can fix. No one wants to see “hey dude, you made a typo” as the first post in a discussion if it has already been corrected. We also reserve the right to remove any and all spam, and we will also remove posts that are highly offensive in nature.

Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important. |
======================================== |