[SOURCE: https://arstechnica.com/gaming/2026/02/pokemon-red-and-greens-gba-remakes-are-getting-re-released-on-switch-for-20-a-pop/#comments] | [TOKENS: 1774] |
Nintendo brings GBA-era Pokémon to the Switch, but not Switch Online subscribers
Games appear to be mostly unmodified ports of the well-regarded remakes.
Andrew Cunningham – Feb 20, 2026 11:30 am
Game Boy Advance-era remakes of the first Pokémon games are coming to the Switch. Credit: Nintendo

For my money, the 2004 Game Boy Advance re-releases of Pokémon FireRed and LeafGreen are still the best versions of the original Pokémon games. They fixed most of the bugs and balance issues present in the originals—partly by also including the rosters from Gold/Silver and Ruby/Sapphire—but they’re more faithful to the original gameplay, battling and catching mechanics, and graphics than the 2018 Let’s Go, Pikachu/Eevee! adaptations for the Switch. Someone at Nintendo apparently agrees, as the company announced today that it’s re-releasing those games for the original Switch (and, by extension, the Switch 2, though no Switch 2-specific features were announced). The games will be available after a planned Pokémon Presents stream at 9 am Eastern/6 am Pacific on February 27.

Subscribers to the Switch Online + Expansion Pack are in for a disappointment, though. Instead of releasing FireRed and LeafGreen as part of the Switch Online Game Boy Advance collection, Nintendo will release both titles as standalone purchases that will run you $20 apiece. This means that players without a subscription will be able to buy and play the games. But given how few GBA games are available for the Switch Online service and how infrequently new ones are released, it does rankle to see otherwise unmodified ports of a prominent game bypass subscribers entirely.
The FireRed and LeafGreen ports will both support local wireless multiplayer, though not online multiplayer. The announcement originally said that support for Pokémon Home, the repository service that stores creatures from multiple Pokémon games, would be coming “soon,” but that note has since been removed. We’d still assume that players will eventually be able to use Home to import their FireRed and LeafGreen rosters to newer games in the series.

While all of the multiplayer-capable Switch Online Game Boy Advance games support wireless play in place of physical Game Link Cables, it’s particularly important for these games because they were the first Pokémon titles to support any kind of wireless multiplayer, even before the Nintendo DS made built-in Wi-Fi connectivity a standard console feature. FireRed and LeafGreen were two of just a few dozen GBA games to support the Game Boy Advance Wireless Adapter, a bulky, standalone accessory that latched to the top of the system and plugged into its Link Cable port. The initial releases of the games included the wireless adapter as a pack-in accessory; the adapter had to be supported by the game you were playing and couldn’t just work as a stand-in for a physical Link Cable in older games. With the wireless adapter plugged in, up to 30 players could congregate in the game’s “Union Room” to do battles and trades—but given that Nintendo also recommended players stand within 10 feet of each other for the best experience, a 30-person Union Room would have gotten pretty crowded in real life.

FireRed and LeafGreen are adaptations of the original 1996 Pokémon games for the old black-and-white Game Boy. The names reference the original Japanese releases, Red and Green. A third version of the game with updated graphics and other changes, called Pokémon Blue, was released in Japan in late 1996, and this was the version that was localized and released in the US as Pokémon Red and Blue in 1998.
A final version of the base game, Pokémon Yellow, was released in Japan in 1998 and in the US in 1999, with some changes that tracked the plotline of the Pokémon anime (most prominently, mandating that players select an un-evolve-able Pikachu as their starter Pokémon). Most of the changes specific to this version of the game weren’t included in the FireRed and LeafGreen remakes.

Andrew Cunningham is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.
======================================== |
[SOURCE: https://www.fast.ai/posts/2023-09-04-learning-jumps/index.html] | [TOKENS: 3224] |
Can LLMs learn from a single example?
Jeremy Howard and Jonathan Whitaker
September 4, 2023

Summary: recently while fine-tuning a large language model (LLM) on multiple-choice science exam questions, we observed some highly unusual training loss curves. In particular, it appeared the model was able to rapidly memorize examples from the dataset after seeing them just once. This astonishing feat contradicts most prior wisdom about neural network sample efficiency. Intrigued by this result, we conducted a series of experiments to validate and better understand this phenomenon. It’s early days, but the experiments support the hypothesis that the models are able to rapidly remember inputs. This might mean we have to re-think how we train and use LLMs.

How neural networks learn

We train neural network classifiers by showing them examples of inputs and outputs, and they learn to predict outputs based on inputs. For example, we show examples of pictures of dogs and cats, along with the breed of each, and they learn to guess the breed from the image. To be more precise, for a list of possible breeds, they output their guess as to the probability of each breed. If it’s unsure, it will guess a roughly equal probability for each possible breed, and if it’s highly confident, it will guess a nearly 1.0 probability for its predicted breed. The training process consists of every image in a training set being shown to the network, along with the correct label. A pass through all the input data is called an “epoch”. We have to provide many examples of the training data for the model to learn effectively. During training the neural network attempts to reduce the loss, which is (roughly speaking) a measure of how often the model is wrong, with highly confident wrong predictions penalised the most, and vice versa.
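To make that loss behaviour concrete, here is cross-entropy (the usual classification loss) evaluated on a few hand-picked probability vectors for a four-class problem; the numbers are illustrative only, not from our experiments:

```python
import math

def cross_entropy(probs, true_idx):
    """Negative log-probability of the true class: near zero when the
    model assigns high probability to the correct label, and large when
    it confidently assigns the correct label a low probability."""
    return -math.log(probs[true_idx])

# An unsure model (uniform over 4 classes) pays a moderate loss...
unsure = cross_entropy([0.25, 0.25, 0.25, 0.25], 0)           # ~1.39
# ...a confident, correct model pays almost nothing...
confident_right = cross_entropy([0.97, 0.01, 0.01, 0.01], 0)  # ~0.03
# ...and a confident but wrong model is penalised hardest.
confident_wrong = cross_entropy([0.01, 0.97, 0.01, 0.01], 0)  # ~4.61
```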
We calculate the loss after each batch for the training set, and from time to time (often at the end of each epoch) we also calculate the loss for a bunch of inputs the model does not get to learn from – this is the “validation set”. Here’s what that looks like in practice when we train for 11 epochs: As you see, the training loss gradually (and bumpily) improves relatively quickly, slowing down over time, and the validation loss improves more slowly (and would eventually flatten out entirely, and then eventually get worse, if trained for longer). You can’t see from the chart where epochs start and stop, because it takes many epochs before a model learns what any particular image looks like. This has been a fundamental constraint of neural networks throughout the decades they’ve been developed – they take an awfully long time to learn anything! It’s actually an area of active research why neural nets are so “sample inefficient”, especially compared to how children learn.

A very odd loss curve

We have recently been working on the Kaggle LLM Science Exam competition, which “challenges participants to answer difficult science-based questions written by a Large Language Model”. For instance, here’s the first question: Which of the following statements accurately describes the impact of Modified Newtonian Dynamics (MOND) on the observed “missing baryonic mass” discrepancy in galaxy clusters? For those playing along at home, the correct answer, apparently, is D. Thankfully, we don’t have to rely on our knowledge of Modified Newtonian Dynamics to answer these questions – instead, we are tasked to train a model to answer these questions. When we submit our model to Kaggle, it will be tested against thousands of “held out” questions that we don’t get to see.
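The train/validation bookkeeping described earlier can be sketched as a simple loop. This is a schematic rather than our actual training code; `model_step` and `val_loss_fn` are hypothetical stand-ins for the forward/backward pass and the held-out evaluation:

```python
def fit(model_step, val_loss_fn, train_batches, n_epochs):
    """Run n_epochs passes over the training data, recording the
    per-batch training loss and one validation loss per epoch."""
    history = {"train": [], "val": []}
    for epoch in range(n_epochs):
        for batch in train_batches:       # one full pass = one epoch
            loss = model_step(batch)      # forward, backward, update
            history["train"].append(loss)
        # Held-out data: the model is only evaluated here, never
        # updated, so this measures generalisation, not memorisation.
        history["val"].append(val_loss_fn())
    return history
```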
We trained our model for 3 epochs on a big dataset of questions created by our friend Radek Osmulski, and saw the following most unexpected training loss curve: The problem here is that you can clearly see the end of each epoch - there’s a sudden downwards jump in loss. We’ve seen similar loss curves before, and they’ve always been due to a bug. For instance, it’s easy to accidentally have the model continue to learn when evaluating the validation set – such that after validation the model suddenly appears much better. So we set out to look for the bug in our training process. We were using Hugging Face’s Trainer, so we guessed there must be a bug in that. Whilst we began stepping through the code, we also asked fellow open source developers on the Alignment Lab AI Discord if they’d seen similar odd training curves, and pretty much everyone said “yes”. But everyone who responded was using Trainer as well, which seemed to support our theory of a bug in that library. But then @anton on Discord told us he was seeing this curve with his own simple custom training loop: …and he also showed us this accompanying extremely surprising validation loss curve: Then we started hearing from more and more Discord friends that they had seen similar strange behavior, including when not using Trainer. We wondered if it was some oddity specific to the LoRA approach we were using, but we heard from folks seeing the same pattern when doing full fine-tuning too. In fact, it was basically common knowledge in the LLM fine-tuning community that this is just how things go when you’re doing this kind of work!…

Digging deeper

The hypothesis that we kept hearing from open source colleagues is that these training curves were actually showing overfitting. This seemed, at first, quite impossible. It would imply that the model was learning to recognise inputs from just one or two examples.
If you look back at that first curve we showed, you can see the loss diving from 0.8 to 0.5 after the first epoch, and then from 0.5 to under 0.2 after the second. Furthermore, during each of the second and third epochs it wasn’t really learning anything new at all. So, other than its initial learning during the beginning of the first epoch, nearly all the apparent learning was (according to this theory) memorization of the training set occurring with only 3 examples per row! Furthermore, for each question, it only gets a tiny amount of signal: how its guess as to the answer compared to the true label.

We tried out an experiment – we trained our Kaggle model for two epochs, using the following learning rate schedule: Nowadays this kind of schedule is not that common, but it’s an approach that saw a lot of success after it was created by Leslie Smith, who discussed it in his 2015 paper Cyclical Learning Rates for Training Neural Networks. And here’s the crazy-looking training and validation loss curves we saw as a result: The only thing that we have come up with (so far!) that fully explains this picture is that the hypothesis is correct: the model is rapidly learning to recognise examples after seeing them just once. Let’s work through each part of the loss curve in turn…

Looking at the first epoch, this looks like a very standard loss curve. We have the learning rate warming up over the first 10% of the epoch, and then gradually decreasing following a cosine schedule. Once the LR comes up to temperature, the training and validation loss rapidly decrease, and then they both slow down as the LR decreases and the “quick wins” are captured.

The second epoch is where it gets interesting. We’re not re-shuffling the dataset at the start of the epoch, so those first batches of the second epoch are when the learning rate was still warming up.
That’s why we don’t see an immediate step-change like we did from epoch 2 to 3 in the very first loss curve we showed – these batches were only seen when the LR was low, so it couldn’t learn much. Towards the end of that first 10% of the epoch, the training loss plummets, because the LR was high when these batches were seen during the first epoch, and the model has learned what they look like. The model quickly learns that it can very confidently guess the correct answer.

But during this time, validation loss suffers. That’s because although the model is getting very confident, it’s not actually getting any better at making predictions. It has simply memorised the dataset, but isn’t improving at generalizing. Over-confident predictions cause validation loss to get worse, because the loss function penalises more confident errors more heavily.

The end of the curve is where things get particularly interesting. The training loss starts getting worse – and that really never ought to happen! In fact, neither of us remember ever seeing such a thing before when using a reasonable LR. But actually, this makes perfect sense under the memorization hypothesis: these are the batches that the model saw at a time when the LR had come back down again, so it wasn’t able to memorize them as effectively. But the model is still over-confident, because it has just got a whole bunch of batches nearly perfectly correct, and hasn’t yet adjusted to the fact that it’s now seeing batches that it didn’t have a chance to learn so well. It gradually recalibrates to a more reasonable level of confidence, but it takes a while, because the LR is getting lower and lower. As it recalibrates, the validation loss comes back down again.

For our next experiment, we tried 1cycle training over 3 epochs, instead of CLR – that is, we did a single LR warmup for 10% of batches at the start of training, and then decayed the LR over the remaining batches following a cosine schedule.
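The two schedules being compared can be sketched side by side: a warmup-plus-cosine cycle repeated every epoch (the per-epoch schedule used above), versus a single warmup at the start followed by one long cosine decay (1cycle). A minimal sketch, assuming the 10% warmup fraction from the text; the function names and the `max_lr` parameter are illustrative:

```python
import math

def warmup_cosine(pos, max_lr, warmup_frac=0.1):
    """LR at relative position pos in [0, 1): linear warmup over the
    first warmup_frac of steps, then cosine decay towards zero."""
    if pos < warmup_frac:
        return max_lr * pos / warmup_frac
    decay_pos = (pos - warmup_frac) / (1 - warmup_frac)
    return max_lr * 0.5 * (1 + math.cos(math.pi * decay_pos))

def clr_style(step, steps_per_epoch, max_lr):
    """Per-epoch cycle: the same warmup/decay shape repeats each epoch."""
    return warmup_cosine((step % steps_per_epoch) / steps_per_epoch, max_lr)

def one_cycle(step, total_steps, max_lr):
    """1cycle: one warmup at the very start, one decay over the rest."""
    return warmup_cosine(step / total_steps, max_lr)
```

With 100 steps per epoch and three epochs (300 total steps), `clr_style` peaks at steps 10, 110, and 210, while `one_cycle` peaks only once, at step 30.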
Previously, we did a separate warmup and decay cycle for each epoch. Also, we increased the LoRA rank, resulting in slower learning. Here’s the resulting loss curve: The shape largely follows what we’d expect, based on the previous discussion, except for one thing: the validation loss does not jump up at epoch 2 – it’s not until epoch 3 that we see that jump. However, previously the training loss was around 0.2 by the 2nd epoch, which is only possible when it’s making highly confident predictions. In the 1cycle example it doesn’t make such confident predictions until the third epoch, and we don’t see the jump in validation loss until that happens.

It’s important to note that the validation loss getting worse doesn’t mean that we’re over-fitting in practice. What we generally care about is accuracy, and it’s fine if the model is over-confident. In the Kaggle competition the metric used for the leaderboard is Mean Average Precision @ 3 (MAP@3), which is the accuracy of the ranked top-3 multiple-choice predictions made by the model. Here’s the validation accuracy per batch of the 1cycle training run shown in the previous chart – as you see, it keeps improving, even though the validation loss got worse in the last epoch: If you’re interested in diving deeper, take a look at this report where Johno shares logs from some additional examples, along with a notebook for those who’d like to see this effect in action for themselves.

How could the memorization hypothesis be true?

There is no fundamental law that says that neural networks can’t learn to recognise inputs from a single example. It’s just what researchers and practitioners have generally found to be the case in practice. It takes a lot of examples because the loss surfaces that we’re trying to navigate using stochastic gradient descent (SGD) are too bumpy to be able to jump far at once.
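As a brief aside on metrics: in this single-correct-answer setting, MAP@3 reduces to a reciprocal-rank average, where each question scores 1, 1/2, or 1/3 depending on where the true answer appears in the model's top three guesses, and 0 otherwise. A minimal sketch (the function name is assumed):

```python
def map_at_3(ranked_preds, labels):
    """MAP@3 with one correct answer per question: credit 1/rank for
    the true label's position within the top three predictions."""
    total = 0.0
    for preds, label in zip(ranked_preds, labels):
        for rank, guess in enumerate(preds[:3], start=1):
            if guess == label:
                total += 1.0 / rank
                break
    return total / len(labels)

# First answer right at rank 1, second right at rank 2: (1 + 0.5) / 2
map_at_3([["D", "A", "B"], ["A", "D", "C"]], ["D", "D"])  # 0.75
```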
We do know, however, that some things can make loss surfaces smoother, such as using residual connections, as shown in the classic Visualizing the Loss Landscape of Neural Nets paper (Li et al., 2018). It could well be the case that pre-trained large language models have extremely smooth loss surfaces in areas close to the minimal loss, and that a lot of the fine-tuning work done in the open source community is in this area.

This is based on the premise underlying the original development of fine-tuned universal language models, first documented in the ULMFiT paper back in 2018 by one of us (Jeremy) and Sebastian Ruder. The reason Jeremy originally built the ULMFiT algorithm is that it seemed necessary that any model that could do a good job of language modeling (that is, predicting the next word of a sentence) would have to build a rich hierarchy of abstractions and capabilities internally. Furthermore, Jeremy believed that this hierarchy could then be easily adapted to solve other tasks requiring similar capabilities using a small amount of fine-tuning. The ULMFiT paper demonstrated for the first time that this is indeed exactly what happens.

Large language models, which today are orders of magnitude bigger than those studied in ULMFiT, must have an even richer hierarchy of abstractions. So fine-tuning one of these models to, for instance, answer multiple-choice questions about science, can largely harness capabilities and knowledge that are already available in the model. It’s just a case of surfacing the right pieces in the right way. This should not require many weights to be adjusted very much. Based on this, it’s perhaps not surprising to think that a pre-trained language model with a small random classification head could be in a part of the weight space where the loss surface smoothly and clearly points exactly in the direction of a good weight configuration.
And when using the Adam optimiser (as we did), having a consistent and smooth gradient results in the effective dynamic learning rate going up and up, such that steps can get very big.

What now?

Having a model that learns really fast sounds great – but actually it means that a lot of basic ideas around how to train models may be turned on their head! When models train very slowly, we can train them for a long time, using a wide variety of data, for multiple epochs, and we can expect that our model will gradually pull out generalisable information from the data we give it. But when models learn this fast, the catastrophic forgetting problem may suddenly become far more pronounced. For instance, if a model sees ten examples of a very common relationship, and then one example of a less common counter-example, it may well remember the counter-example instead of just slightly downweighting its memory of the original ten examples.

It may also be the case that data augmentation is now less useful for avoiding over-fitting. Since LLMs are so effective at pulling out representations of the information they’re given, mixing things up by paraphrasing and back-translation may now not make much of a difference. The model would be effectively getting the same information either way. Perhaps we can mitigate these challenges by greatly increasing our use of techniques such as dropout (which is already used a little in fine-tuning techniques such as LoRA) or stochastic depth (which does not seem to have been used in NLP to any significant extent yet). Alternatively, maybe we just need to be careful to use rich mixtures of datasets throughout training, so that our models never have a chance to forget. Although Code Llama, for instance, did suffer from catastrophic forgetting (as it got better at code, it got much worse at everything else), it was fine-tuned with only 10% of non-code data.
Perhaps with something closer to a 50/50 mix it would have been possible to get just as good at coding, without losing its existing capabilities. If you come up with any alternative hypotheses, and are able to test them, or if you find any empirical evidence that the memorization hypothesis is wrong, please do let us know! We’re also keen to hear about other work in this space (and apologies if we failed to reference any prior work here), and any ideas about how (if at all) we should adjust how we train and use these models based on these observations. We’ll be keeping an eye on replies to this twitter thread, so please respond there if you have any thoughts or questions.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Markus_Persson#cite_note-:5-7] | [TOKENS: 3525] |
Markus Persson

Markus Alexej Persson (/ˈpɪərsən/ PEER-sən, Swedish: [ˈmǎrːkɵs ˈpæ̌ːʂɔn]; born 1 June 1979), known by the pseudonym Notch, is a Swedish video game programmer and designer. He is the creator of Minecraft, the best-selling video game in history. He founded the video game development company Mojang Studios in 2009.

Persson began developing video games at an early age. His commercial success began after he published an early version of Minecraft in 2009. Prior to the game's official retail release in 2011, it had sold over four million copies. After this point Persson stood down as the lead designer and transferred his creative authority to Jens Bergensten. In September 2014 Persson announced his intention to leave Mojang, and in November of that year the company was sold to Microsoft reportedly for US$2.5 billion, which made him a billionaire.

Since 2016 several of Persson's posts on Twitter regarding feminism, race, and transgender rights have caused public controversies. He has been described as "an increasingly polarizing figure, tweeting offensive statements regarding race, the LGBTQ community, gender, and other topics." In an effort to distance itself from Persson, Microsoft removed mentions of his name from Minecraft (excluding one instance in the game's end credits) and did not invite him to the game's tenth anniversary celebration. In 2015 he co-founded a separate game studio called Rubberbrain, which was relaunched in 2024 as Bitshift Entertainment.

Early life

Markus Alexej Persson was born in Stockholm, Sweden, to a Finnish mother, Ritva, and a Swedish father, Birger, on 1 June 1979. He has one sister. He grew up in Edsbyn until he was seven years old, when his family moved back to Stockholm. In Edsbyn, Persson's father worked for the railroad, and his mother was a nurse. He spent much time outdoors in Edsbyn, exploring the woods with his friends.
When Persson was about seven years old, his parents divorced, and he and his sister lived with their mother. His father moved to a cabin in the countryside. Persson said in an interview that they experienced food insecurity around once a month. Persson lost contact with his father for several years after the divorce. According to Persson, his father suffered from depression, bipolar disorder, alcoholism, and medication abuse, and went to jail for robberies. While his father had somewhat recovered during Persson's early life, he relapsed, contributing to the divorce. His sister also experimented with drugs and ran away from home.

He gained an interest in video games at an early age. His father was "a really big nerd", who built his own modem and taught Persson to use the family's Commodore 128. On it, Persson played bootleg games and loaded in various type-in programs from computer magazines with the help of his sister. The first game he purchased with his own money was The Bard's Tale. He began programming on his father's Commodore 128 home computer at the age of seven. He produced his first game at the age of eight, a text-based adventure game. By 1994 Persson knew he wanted to become a video game developer, but his teachers advised him to study graphic design, which he did from ages 15 to 18. Persson, although introverted, was well-liked by his peers, but after entering secondary school was a "loner" and reportedly had only one friend. He spent most of his spare time with games and programming at home. He managed to reverse-engineer the Doom engine, which he continued to take great pride in as of 2014. He never finished high school, but was reportedly a good student.

Career

Persson started his career working as a web designer. He later found employment at Game Federation, where he met Rolf Jansson. The pair worked in their spare time to build the 2006 video game Wurm Online. The game was released through a new entity, "Mojang Specifications AB".
Persson left the project in late 2007. As Persson wanted to reuse the name "Mojang", Jansson agreed to rename the company to Onetoofree AB. Between 2004 and 2009 Persson worked as a game developer for Midasplayer (later known as King). There, he worked as a programmer, mostly building browser games made in Flash. He later worked as a programmer for jAlbum.

Prior to creating Minecraft, Persson developed multiple small games. He also entered a number of game design competitions and participated in discussions on the TIGSource forums, a web forum for independent game developers. One of Persson's more notable personal projects was called RubyDung, an isometric three-dimensional base-building game like RollerCoaster Tycoon and Dwarf Fortress. While working on RubyDung, Persson experimented with a first-person view mode similar to that found in Dungeon Keeper. However, he felt the graphics were too pixelated and omitted this mode. In 2009 Persson found inspiration in Infiniminer, a block-based open-ended mining game. Infiniminer heavily influenced his future work on RubyDung, prompting Persson to bring back the first-person mode and adopt the "blocky" visual style and block-building fundamentals. RubyDung is the earliest known Minecraft prototype created by Persson.

On 17 May 2009 Persson released the original edition (later called "Classic version") of Minecraft on the TIGSource forums. He regularly updated the game based on feedback from TIGSource users. Persson released several new versions of Minecraft throughout 2009 and 2010, going through several phases of development including Survival Test, Indev, and Infdev. On 30 June 2010 Persson released the game's Alpha version. While working on the pre-Alpha version of Minecraft, Persson continued working at jAlbum. In 2010, after the release and subsequent success of Minecraft's Alpha version, Persson moved from a full-time role to a part-time role at jAlbum. He left jAlbum later that same year.
In September 2010 Persson travelled to Valve Corporation's headquarters in Bellevue, Washington, United States, where he took part in a programming exercise and met Gabe Newell. Persson was subsequently offered a job at Valve, which he turned down in order to continue work on Minecraft.

On 20 December 2010 Minecraft moved into its beta phase and began expanding to other platforms, including mobile. In January 2011 Minecraft reached one million registered accounts. Six months afterwards, it reached ten million. The game had sold over four million copies by 7 November 2011. Mojang held the first Minecon from 18 to 19 November 2011 to celebrate its full release, and subsequently made it an annual event. Following this, on 11 December 2011, Persson transferred creative control of Minecraft to Jens Bergensten and began working on another game title, 0x10c, although he reportedly abandoned the project around 2013. In 2013 Mojang recorded revenues of $330 million and profits of $129 million.

Persson has stated that, due to the intense media attention and public pressure, he became exhausted with running Minecraft and Mojang. In a September 2014 blog post he shared his realization that he "didn't have the connection to my fans I thought I had", that he had "become a symbol", and that he did not wish to be responsible for Mojang's increasingly large operation. In June 2014 Persson tweeted "Anyone want to buy my share of Mojang so I can move on with my life? Getting hate for trying to do the right thing is not my gig", reportedly partly as a joke. Persson controlled a 71% stake in Mojang at the time. The offer attracted significant interest from Activision Blizzard, EA, and Microsoft. Forbes later reported that Microsoft wanted to purchase the game as a "tax dodge" to turn their taxable excess liquid cash into other assets. In September 2014 Microsoft agreed to purchase Mojang for $2.5 billion, making Persson a billionaire.
He then left the company after the deal was finalised in November. Since leaving Mojang, Persson has worked on several small projects. On 23 June 2014 he founded a company with Porsér called Rubberbrain AB; the company had released no games by 2021, despite spending SEK 60 million. The company was relaunched as Bitshift Entertainment, LLC on 28 March 2024. Persson expressed interest in creating a new video game studio in 2020, and in developing virtual reality games. He has also since created a series of narrative-driven immersive events called ".party()", which uses extensive visual effects and has been hosted in multiple cities. At the beginning of 2025 Persson decided to create a spiritual successor to Minecraft, referred to as "Minecraft 2", in response to the results of a poll on X. However, after speaking to his team, he soon reversed course in favour of developing the poll's other choice, a roguelike titled Levers and Chests.
Games
Persson's most popular creation is the survival sandbox game Minecraft, which was first publicly available on 17 May 2009 and fully released on 18 November 2011. Persson left his job as a game developer to work on Minecraft full-time until completion. In early 2011, Mojang AB sold the one millionth copy of the game, several months later its second million, and several months after that its third. Mojang hired several new staff members for the Minecraft team, while Persson passed the lead developer role to Jens Bergensten. He stopped working on Minecraft after the deal to sell Mojang to Microsoft for $2.5 billion, which brought his net worth to US$1.5 billion. Persson and Jakob Porsér came up with the idea for Scrolls, which combines elements from board games and collectible card games. Persson noted that he would not be actively involved in development of the game and that Porsér would lead it. 
Persson revealed on his Tumblr blog on 5 August 2011 that he was being sued by a Swedish law firm representing Bethesda Softworks over the trademarked name of Scrolls, claiming that it conflicted with their The Elder Scrolls series of games. On 17 August 2011 Persson challenged Bethesda to a Quake 3 tournament to decide the outcome of the naming dispute. On 27 September 2011 Persson confirmed that the lawsuit was going to court. ZeniMax Media, owner of Bethesda Softworks, announced the lawsuit's settlement in March 2012; the settlement allowed Mojang to continue using the Scrolls trademark. In 2018, Scrolls was made available free of charge and renamed Caller's Bane. Cliffhorse is a humorous game programmed in two hours using the Unity game engine and free assets. The game took inspiration from Skyrim's physics engine, "the more embarrassing minimum-effort Greenlight games", Goat Simulator, and Big Rigs: Over the Road Racing. It was released for Microsoft Windows as an early access and honourware game on the first day of E3 2014, instructing users to donate Dogecoin to "buy" the game before downloading it; the game accumulated over 280,000 dogecoins. Following the end of his involvement with Minecraft, Persson began pre-production of an alternate reality space game set in the distant future in March 2012. On April Fools' Day Mojang launched a satirical website for Mars Effect (a parody of Mass Effect), citing the lawsuit with Bethesda as an inspiration. The described gameplay elements were genuine, however, and on 4 April Mojang revealed 0x10c (pronounced "Ten to the C") as a space sandbox title. Persson officially halted production in August 2013, although C418, the composer of the game's soundtrack (as well as that of Minecraft), released an album of the work he had made for the game. In 2013, Persson made a free game called Shambles in the Unity game engine. Persson has also participated in several Ludum Dare 48-hour game-making competitions. 
Personal life
In 2011 Persson married Elin Zetterstrand, whom he had dated for four years; Zetterstrand was a former moderator on the Minecraft forums. They had a daughter together, but by mid-2012 he was seeing little of her. On 15 August 2012 he announced that he and his wife had filed for divorce, which was finalised later that year. On 14 December 2011 Persson's father committed suicide with a handgun after drinking heavily. In an interview with The New Yorker, Persson said of his father: When I decided I wanted to quit my day job and work on my own games, he was the only person who supported my decision. He was proud of me and made sure I knew. When I added the monsters to Minecraft, he told me that the dark caves became too scary for him. But I think that was the only true criticism I ever heard from him. Persson later admitted that he himself suffered from depression and from various highs and lows in his mood. Persson has criticised the stance of large game companies on piracy, once stating that "piracy is not theft" and viewing unauthorised downloads as potential future customers. In 2011 Persson said he was a member of the Pirate Party of Sweden. He is also a member of Mensa. He has donated to numerous charities, including Médecins Sans Frontières (Doctors Without Borders). Under his direction, Mojang spent a week developing Catacomb Snatch for the Humble Indie Bundle, raising US$458,248 for charity, and he donated $250,000 to the Electronic Frontier Foundation in 2012. In 2011 he gave $3 million in dividends back to Mojang employees. According to Forbes, his net worth in 2023 was around $1.2 billion. In 2014 Persson was one of the biggest taxpayers in Sweden. Around 2014, he lived in a multi-level penthouse in Östermalm, Stockholm, an area he described as "where the rich people live". 
In December 2014 Persson purchased a home in Trousdale Estates, a neighbourhood in Beverly Hills, California, in the United States, for $70 million, a record sales price for Beverly Hills at the time. Persson reportedly outbid Beyoncé and Jay-Z for the property. Persson began receiving criticism for political and social opinions he expressed on social media as early as 2016. In 2017, he proposed a heterosexual pride holiday, and wrote that those who opposed the idea "deserve to be shot." After facing backlash, he deleted the tweets and rescinded his statements, writing, "So yeah, it's about pride of daring to express, not about pride of being who you are. I get it now." Later in the year, he wrote that feminism is a "social disease" and called the video game developer and feminist Zoë Quinn a "cunt", although he was generally critical of the GamerGate movement. He has described intersectional feminism as a "framework for bigotry" and the use of the word mansplaining as sexist. Also in 2017, Persson tweeted that "It's okay to be white". Later that year, he stated that he believed in the Pizzagate conspiracy theory. In 2019, he tweeted referencing QAnon, saying "Q is legit. Don't trust the media." Later in 2019, he tweeted in response to a pro-transgender internet meme that, "You are absolutely evil if you want to encourage delusion. What happened to not stigmatizing mental illness?" He then also promoted claims that people were fined for "using the wrong pronoun". However, after facing backlash, he tweeted a day afterwards that he had "no idea what [being trans is] like of course, but it's inspiring as hell when people open up and choose to actually be who they know themselves as. Not because it's a cool choice, because it's a big step. I gues [sic] that's actually cool nvm". 
Later that year, Microsoft removed two mentions of Persson's name in the "19w13a" snapshot of Minecraft and did not invite him to the 10-year anniversary celebration of the game. A spokesperson for Microsoft stated that his views "do not reflect those of Microsoft or Mojang". He is still mentioned in the End Poem ("a flat, infinite world created by a man called Markus"). 
======================================== |
[SOURCE: https://arstechnica.com/tech-policy/2026/02/supreme-court-blocks-trumps-emergency-tariffs-billions-in-refunds-may-be-owed/#comments] | [TOKENS: 4469] |
Beyond Trump’s reach Supreme Court blocks Trump’s emergency tariffs, billions in refunds may be owed Economists estimated more than $175 billion may need to be refunded. Ashley Belanger – Feb 20, 2026 10:37 am | 272 Credit: Anna Moneymaker / Staff | Getty Images News The Supreme Court ruled Friday that Donald Trump was not authorized to implement emergency tariffs to ostensibly block illegal drug flows and offset trade deficits. It’s not immediately clear what the ruling may mean for businesses that paid various “reciprocal” tariffs that Trump changed frequently, raising and lowering rates at will during tense negotiations with the United States’ biggest trade partners. Divided 6-3, Supreme Court justices remanded the cases to lower courts, concluding that the International Emergency Economic Powers Act (IEEPA) does not give Trump power to impose tariffs. Chief Justice John Roberts wrote the opinion and was joined by Justices Neil Gorsuch, Amy Coney Barrett, Elena Kagan, Sonia Sotomayor, and Ketanji Brown Jackson. They concluded that Trump could not exclusively rely on IEEPA to impose tariffs “of unlimited amount and duration, on any product from any country” during peacetime. Only Congress has the power of the purse, Roberts wrote, and the few exceptions to that are bound by “explicit terms and subject to strict limits.” “Against that backdrop of clear and limited delegations, the Government reads IEEPA to give the President power to unilaterally impose unbounded tariffs and change them at will,” Roberts wrote. “That view would represent a transformative expansion of the President’s authority over tariff policy. 
It is also telling that in IEEPA’s half century of existence, no President has invoked the statute to impose any tariffs, let alone tariffs of this magnitude and scope. That ‘lack of historical precedent,’ coupled with ‘the breadth of authority’ that the President now claims, suggests that the tariffs extend beyond the President’s ‘legitimate reach.’” Back in November, analysts suggested that the Supreme Court ruling against Trump could force the government to issue refunds of up to $1 trillion. This morning, a new estimate from economists reduced that number, Reuters reported, estimating that more than $175 billion could be “at risk of having to be refunded.” Ruling disrupts Trump plan to collect $900 billion Trump lost primarily because IEEPA does not explicitly reference “tariffs” or “duties,” instead only giving Trump power to “regulate” “importation”—the two words in the statute that Trump tried to argue showed that Congress clearly authorized his power to impose tariffs. But the court did not agree that Congress intended to give the president “the independent power to impose tariffs on imports from any country, of any product, at any rate, for any amount of time,” Roberts wrote. “Those words cannot bear such weight,” particularly in peacetime. “The United States, after all, is not at war with every nation in the world.” Specifically, Trump failed to “identify any statute in which the power to regulate includes the power to tax,” Roberts wrote. And the majority of justices remained “skeptical” that in “IEEPA alone,” Congress intended to hide “a delegation of its birth-right power to tax within the quotidian power to ‘regulate.’” “A contrary reading would render IEEPA partly unconstitutional,” Roberts wrote. 
According to the majority, siding with Trump would free the president to “issue a dizzying array of modifications” to tariffs at will, “unconstrained by the significant procedural limitations in other tariff statutes.” The only check to that unprecedented power grab, the court suggested, would be a “veto-proof majority in Congress.” Trump has yet to comment on the ruling. Ahead of it, he claimed the tariffs were “common sense,” NBC News reported. Speaking at a steel manufacturing factory in northwest Georgia, Trump claimed that IEEPA tariffs were projected to bring in $900 billion “next year.” Not only could he now be forced to refund tariffs, but the Supreme Court ruling could also undo trade deals in which Trump used so-called reciprocal tariffs as leverage. Undoing tariffs will likely be a “mess,” Barrett said last year. “Until now, no President has read IEEPA to confer such power,” Roberts wrote, while noting that the court claims “no special competence in matters of economics or foreign affairs.” Gorsuch seems to troll Trump In a concurring opinion, Gorsuch slammed Trump as trying to expand the president’s authority in a way that would make it hard for Congress to ever retrieve lost powers. He claimed that Trump was seeking to secure a path forward where any president could declare a national emergency—a decision that would be “unreviewable”—to justify imposing “tariffs on nearly any goods he wishes, in any amount he wishes, based on emergencies he himself has declared.” “Just ask yourself: What President would willingly give up that kind of power?” Gorsuch wrote. Gorsuch further questioned if Trump was “seeking to exploit questionable statutory language to aggrandize his own power.” And he warned that accepting the dissenting view would allow Trump to randomly impose tariffs as low as 1 percent or as high as 1,000,000 percent on any product or country he wanted at any time. 
Gorsuch criticized justices with dissenting views, who disagreed that Congress’ intent in the statute was unclear and defended Trump’s claim that “IEEPA provides the clear statement needed to sustain the President’s tariffs.” Those justices argued that presidents have long been granted authority to impose tariffs and accused the majority of putting a “thumb on the scale” by requiring a strict reading of the statute. Instead, they argued for a special exception requiring a more general interpretation of statutes whenever presidents seek to regulate matters of foreign affairs. If that view was accepted, Gorsuch warned, presidents could seize even more power from Congress. Many other legislative powers “could be passed wholesale to the executive branch in a few loose statutory terms, no matter what domestic ramifications might follow. And, as we have seen, Congress would often find these powers nearly impossible to retrieve.” As a final note, Gorsuch took some time to sympathize with Trump supporters: For those who think it important for the Nation to impose more tariffs, I understand that today’s decision will be disappointing. All I can offer them is that most major decisions affecting the rights and responsibilities of the American people (including the duty to pay taxes and tariffs) are funneled through the legislative process for a reason. Yes, legislating can be hard and take time. And, yes, it can be tempting to bypass Congress when some pressing problem arises. But the deliberative nature of the legislative process was the whole point of its design. Through that process, the Nation can tap the combined wisdom of the people’s elected representatives, not just that of one faction or man. There, deliberation tempers impulse, and compromise hammers disagreements into workable solutions. 
And because laws must earn such broad support to survive the legislative process, they tend to endure, allowing ordinary people to plan their lives in ways they cannot when the rules shift from day to day. Kavanaugh questions other Trump tariff authority Under IEEPA, the majority ruled, Trump has the power to “impose penalties, restrictions, or controls on foreign commerce,” Barrett wrote. But he does not have the power to impose emergency tariffs unless Congress updates laws to explicitly grant such authority. In his dissent, Justice Brett Kavanaugh insisted that it should not be up to courts to settle these “policy debates.” He defended Trump’s view that IEEPA’s grant of power to “regulate” “importation” generally included tariffs, while arguing that Trump wasn’t seeking to expand his presidential authority at all. Many feared that the more conservative Supreme Court would side with Trump, and Kavanaugh’s opinion offered a peek at what that alternate reality could have looked like. “Importantly, IEEPA’s authorization for the President to impose tariffs did not grant the President any new substantive power,” Kavanaugh wrote. Instead, “IEEPA merely allows the President to impose tariffs somewhat more efficiently to deal with foreign threats during national emergencies.” He further claimed it was an “odd distinction” that the majority would interpret IEEPA as giving Trump authority to “block all imports from China” but not to “order even a $1 tariff on goods imported from China.” Downplaying the ruling’s significance, Kavanaugh echoed the Trump administration’s claims that the Supreme Court ruling won’t really affect Trump’s key policy of imposing tariffs to renegotiate trade deals or address other concerns. 
“The decision might not substantially constrain a President’s ability to order tariffs going forward,” Kavanaugh wrote, pointing to “numerous other federal statutes” that “authorize the President to impose tariffs.” However, a footnote in the majority’s opinion emphasized that all of the options that Kavanaugh cited “contain various combinations of procedural prerequisites, required agency determinations, and limits on the duration, amount, and scope of the tariffs they authorize.” It was precisely constraints like those that Trump’s broad reading of IEEPA lacked, the majority found. Kavanaugh acknowledged that the ruling would stop Trump from imposing tariffs at will, writing that other statutes require “a few additional procedural steps that IEEPA, as an emergency statute, does not require.” Winding down his arguments, Kavanaugh joined Trump administration officials in groaning that the “United States may be required to refund billions of dollars to importers who paid the IEEPA tariffs, even though some importers may have already passed on costs to consumers or others.” Kavanaugh makes a frequently overlooked point in this argument: IEEPA tariffs may have harmed consumers without any immediate remedy. It seems unlikely that consumers will get any relief in the short term, no matter what remedies the Supreme Court’s ruling triggers. For businesses, the primary relief will likely come not from refunds but from the small amount of certainty they will have going forward that tariffs won’t be suddenly changed or imposed overnight. Kavanaugh conceded that Trump’s tariffs “may or may not be wise policy.” But he fretted that Trump’s trade deals “worth trillions of dollars” could be undone by the ruling, while claiming the ruling has only generated more uncertainty on a global scale, including with America’s biggest rival, China. 
Interestingly, Kavanaugh also suggested that the ruling may put at legal risk the reading of another statute that Trump will likely rely on more heavily moving forward to impose tariffs. “One might think that the Court’s opinion would also mean that tariffs cannot be imposed under Section 232, which authorizes the President to ‘adjust the imports,’” Kavanaugh suggested. This story was updated to include views from Gorsuch and Kavanaugh. Ashley Belanger Senior Policy Reporter Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience. 
======================================== |
[SOURCE: https://arstechnica.com/cars/2026/02/what-happens-to-a-car-when-the-company-behind-its-software-goes-under/] | [TOKENS: 3266] |
who asked for this? What happens to a car when the company behind its software goes under? Connected car servers won’t be online indefinitely, and startups often go bust. Matthew MacConnell – Feb 17, 2026 1:15 pm | 227 Fisker managed to deliver some Oceans before it sank. But are those owners beached now? Credit: Angel Garcia/Bloomberg via Getty Images Imagine turning the key or pressing the start button of your car—and nothing happens. Not because the battery is dead or the engine is broken but because a server no longer answers. For a growing number of cars, that scenario isn’t hypothetical. As vehicles become platforms for software and subscriptions, their longevity is increasingly tied to the survival of the companies behind their code. When those companies fail, the consequences ripple far beyond a bad app update and into the basic question of whether a car still functions as a car. Over the years, automotive software has expanded from performing rudimentary engine management and onboard diagnostics to powering today’s interconnected, software-defined vehicles. Smartphone apps can now handle tasks like unlocking doors, flashing headlights, and preconditioning cabins—and some models won’t unlock at all unless a phone running the manufacturer’s app is within range. However, for all the promised convenience of modern vehicle software, there’s a growing nostalgia for an era when a phone call to a mechanic could resolve most problems. Mechanical failures were often diagnosable and fixable, and cars typically returned to the road quickly. 
Software-defined vehicles complicate that model: When something goes wrong, a car can be rendered inoperable in a driveway—or stranded at the side of the road—waiting not for parts but for a software technician.

It’s already happening

Take the example of Fisker. In May 2023, the California auto brand arrived in Britain with its Ocean Sport before filing for bankruptcy just one year later. Priced from £35,000 ($44,000)—although top-spec trims pushed the price to £60,000 ($75,000)—the all-electric Tesla Model Y rival featured tech including a partially retracting roof and a rotating BYD-like touchscreen. All cars also carried a six-year/62,000-mile (99,779 km) warranty, with the battery and powertrain covered for 10 years or 100,000 miles (160,934 km).

Before Fisker’s 2024 bankruptcy, just 419 Fisker Oceans made it into British driveways. One unfortunate buyer, a marketing manager from Southampton, experienced the worst of the brand’s teething troubles. After taking delivery, her Ocean was plagued by persistent software glitches. Following a call to Fisker, engineers were dispatched to collect the vehicle for repairs, but when the car was due to be collected, it refused to start. Mere days later, Fisker declared insolvency, leaving the Ocean stranded as a 5,500 lb (2,500 kg) driveway ornament for the next ten months with no solution in sight.

Preceding Fisker, there was Better Place. Founded in 2007, Better Place wasn’t a car manufacturer but an EV infrastructure and software company that promised to solve range anxiety through battery-swap stations. Its entire model relied on centralized servers, subscriptions, and proprietary software to authenticate vehicles and manage battery exchanges. The flagship car for this system was the Renault Fluence Z.E., an electric sedan sold primarily in Israel and Denmark. Better Place filed for bankruptcy in May 2013 after burning through $850 million, leading Renault to close the Fluence Z.E.’s Turkish assembly line.
Servers were shut down, battery-swap stations stopped operating, and backend software used for authentication, charging, and fleet management disappeared, leaving many cars bricked.

Better Place founder and CEO Shai Agassi showing off a battery-swap station for electric taxis in Tokyo on April 26, 2010. Three years later, the company was done. Credit: KAZUHIRO NOGI/AFP via Getty Images

These cases highlight a broader shift in the auto industry, where long-term ownership is increasingly dependent not just on mechanical durability but on continued access to proprietary software and manufacturer support. “When a modern car’s software misbehaves, you don’t fix it yourself—you call the manufacturer,” said Stuart Masson, founder and editor of The Car Expert. “They control the code. At that point, you’re not dealing with a traditional service department so much as an IT help desk.”

That dependence, Masson warned, becomes a critical failure mode when the manufacturer disappears. “Sooner or later, every owner risks a Fisker-style scenario, where the company is gone and there’s nothing you can do about it.” While informal owner communities have begun attempting to reverse-engineer and distribute unofficial software updates, Masson is blunt about the risks. “You’re trusting that someone on the Internet actually knows what they’re doing,” he said. “If they don’t, the consequences might not be that Android Auto simply stops working but instead an airbag deploying at 70 mph.”

While buying a second-hand Fisker in the UK is a high-risk move, more established manufacturers generally have contingency plans if a critical software partner goes under. In practice, that usually means issuing recalls or pushing over-the-air fixes to affected vehicles.
Warranty coverage should handle most issues for newer cars, but the story gets murkier on the used market.

Out of warranty

Take a decade-old Tesla Model S, for example: You might snag one at a bargain price, but there’s no guarantee Tesla will continue supporting it indefinitely. When a manufacturer drops software support, the car isn’t just at risk of breaking down—it becomes a potential cybersecurity liability. In a world where vehicles are increasingly defined by their code, running unsupported software is akin to leaving your router exposed to the Internet. You may have a functioning car today, but there’s no telling when—or how—it could stop running.

“Many teams, such as McLaren, who have F1 cars from the 1990s, require a 1990s-era laptop running an old Windows operating system, along with specialized interface hardware, for maintenance and to start the car,” Masson said. “We are up against time here, but it could be that brands like Tesla release its code, allowing people to use it. Who knows?”

The problem isn’t solely on the consumer; manufacturers shoulder a significant portion of the risk as well. One potential mitigation is standardization. Enter Catena-X, a collaborative data network connecting OEMs, suppliers, and IT vendors. By creating traceable digital records for parts and software—and standardizing data models and APIs for interoperability—Catena-X aims to make supply chains more resilient and software dependencies less catastrophic when a critical partner disappears. When asked how OEMs can map software dependencies and mitigate vendor insolvency, Catena-X Managing Director Hanno Focken told Ars that “Catena-X supports software bills of materials and standardizes certain components to make software replaceable, plus a marketplace and open-source reference implementation helps OEMs find alternative vendors.” The industry also shares responsibility in defining minimum operational lifespans for vehicle software.
“As an association, Catena-X can facilitate shared industry commitments and consensus (e.g., data retention policies like a 10-year battery passport requirement), but it does not act as a regulator setting mandatory lifespans,” added Focken.

The lesson is clear: In today’s cars, the engine or electric motor isn’t always what keeps you moving—the software does. When that software vanishes with a bankrupt company, your car can go from daily driver to expensive paperweight overnight. And in the age of software-defined vehicles, owning a car increasingly means betting on the survival of its code. When that code dies, the driveway or highway—not the repair shop—becomes the final stop.
Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important. |
======================================== |
[SOURCE: https://arstechnica.com/cars/2026/02/tesla-slashes-cybertruck-prices-as-it-tries-to-move-unpainted-metal/] | [TOKENS: 1624] |
looks like a skip

Tesla slashes Cybertruck prices as it tries to move (unpainted) metal
The stainless steel pickup truck is Tesla’s first real flop.
Jonathan M. Gitlin – Feb 20, 2026 9:31 am | 330

A tenth of Tesla's 2025 Cybertruck sales were to SpaceX, another company owned and controlled by Elon Musk. Credit: Reginald Mathalone/NurPhoto via Getty Images

Last night, Tesla made some hefty cuts to Cybertruck pricing in an effort to stimulate some sales. The bombastic tri-motor “Cyberbeast” is $15,000 cheaper at $99,990, albeit by dropping some previously free features like supercharging and FSD. And there’s now a new $59,990 entry-level model, a dual-motor configuration with a range of 325 miles (523 km) and the same 4.1-second 0–60 mph (0–97 km/h) time as the $79,990 premium all-wheel drive version.

That actually makes the new entry-level model a good deal, at least in terms of Cybertrucks. Last year, the company introduced and then eliminated a single-motor rear-wheel drive variant, which found few takers when priced at $69,990; an extra motor for $10,000 less is quite a savings, and actually slightly cheaper than the price originally advertised for the RWD truck.

As you might expect, Tesla has made some changes to get down to the new price. The range and 0–60 mph time might be the same as the more expensive dual-motor Cybertruck, but towing capacity is reduced from 11,000 lbs (4,990 kg) to 7,000 lbs (3,175 kg), and cargo capacity drops from 2,500 lbs (1,134 kg) to 2,006 lbs (910 kg). Steel springs and adaptive dampers replace the air suspension. There are different tail lights.
The inside features textile seats—maybe someone there reads Ars—but the cheapest Cybertruck does without seat ventilation for the front row or seat heaters for the second row. There’s also a different console, no AC outlets in the cabin, and fewer speakers, with no active noise-cancellation system.

But it’s still $20,000 more expensive than Elon Musk told us it would be during the angular, unpainted vehicle’s reveal back in 2019. Back then, Musk promised a $39,900 price tag, as well as a few other things that never saw the light of day, like a true monocoque construction. Designing and building the odd-looking vehicle proved particularly troublesome for Tesla, which has never found those processes particularly easy. While other new Tesla models found themselves mired in “production hell,” in 2023, Musk said that “we dug our own grave with the Cybertruck.”

Indeed, if the company based its business plans on the public sales projections of 250,000 trucks a year—something Musk said would happen by 2025—that certainly would be a problem. Appealing to neither traditional pickup truck buyers, who have largely rejected going to electric vehicles, nor the majority of EV enthusiasts even before Musk’s politics further soured things, fewer than 39,000 Cybertrucks were sold in 2024, and just over 20,000 found homes in 2025. The Edsel might be Ford’s most famous failure, but even it posted superior sales numbers during its relatively brief life.

Jonathan M. Gitlin, Automotive Editor – Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica's automotive coverage. He lives in Washington, DC.
|
======================================== |
[SOURCE: https://arstechnica.com/space/2026/02/nasa-chief-classifies-starliner-flight-as-type-a-mishap-says-agency-made-mistakes/] | [TOKENS: 3408] |
Radical transparency

NASA chief classifies Starliner flight as “Type A” mishap, says agency made mistakes
“The most troubling failure revealed by this investigation is not hardware.”
Eric Berger – Feb 19, 2026 4:59 pm | 271

NASA astronauts Butch Wilmore and Suni Williams wave to their families, friends, and NASA officials on their way to the launch pad on June 5, 2024, to board Boeing's Starliner spacecraft. Credit: Joe Raedle/Getty Images

NASA on Thursday announced it has formally classified the 2024 crewed flight of the Starliner spacecraft as a “Type A” mishap, an acknowledgement that the test flight was a serious failure. As part of the announcement, NASA Administrator Jared Isaacman sent an agency-wide letter that recognized the shortcomings of both Starliner’s developer, Boeing, and the space agency itself. Starliner flew under the auspices of NASA’s Commercial Crew Program, in which the agency procures astronaut transportation services to the International Space Station. “We are taking ownership of our shortcomings,” Isaacman said.

The letter and a subsequent news conference on Thursday afternoon were remarkable for the amount of accountability taken by NASA. Moreover, at Isaacman’s direction, the space agency released an internal report, comprising 311 pages, that details findings from the Program Investigation Team that looked into the Starliner flight. “Starliner has design and engineering deficiencies that must be corrected, but the most troubling failure revealed by this investigation is not hardware,” Isaacman wrote in his letter to the NASA workforce.
“It is decision-making and leadership that, if left unchecked, could create a culture incompatible with human spaceflight.” Isaacman said there would be “leadership accountability” as a result of the decisions surrounding the Starliner program, but did not say which actions would be taken.

“An outstanding day”

The “Type A” classification of Starliner comes more than a year and a half after the vehicle’s ill-fated initial crewed flight in early June 2024. During the more than daylong journey to the space station after launching on an Atlas V rocket, Starliner was beset by helium leaks in its propulsion system and then intermittent thruster failures. Still, after astronauts Butch Wilmore and Suni Williams eventually docked at the station, Boeing officials declared it a success. “We accomplished a lot, and really more than expected,” said Mark Nappi, vice president and manager of Boeing’s Commercial Crew Program, during a post-docking news conference. “We just had an outstanding day.”

Over the subsequent weeks of the summer of 2024, NASA mostly backed Boeing, saying that its primary option was bringing the crew home on Starliner. Finally, by early August, NASA publicly wavered and admitted that Wilmore and Williams might return on a SpaceX Crew Dragon spacecraft. Yet Boeing remained steadfast. On a Boeing website called “Starliner Updates” that has since gone offline, as late as August 2, 2024, the company was declaring that its “confidence remains high” in Starliner’s return with crew (see archive).

It was, in fact, not outstanding

However, on August 24, NASA made it official and decided that Wilmore and Williams would not fly back on Starliner. Instead, the crew would come home on a Crew Dragon. Wilmore and Williams eventually returned to Earth safely in March 2025 as part of the Crew 9 mission. The true danger the astronauts faced on board Starliner was not publicly revealed until after they landed and flew back to Houston.
In an interview with Ars, Wilmore described the tense minutes when he had to take control of Starliner as its thrusters began to fail, one after the other. Essentially, Wilmore could not fully control Starliner any longer. But simply abandoning the docking attempt was not a palatable solution. Just as the thrusters were needed to control the vehicle during the docking process, they were also necessary to position Starliner for its deorbit burn and reentry to Earth’s atmosphere. So Wilmore had to contemplate whether it was riskier to approach the space station or try to fly back to Earth.

“I don’t know that we can come back to Earth at that point,” he said. “I don’t know if we can. And matter of fact, I’m thinking we probably can’t. So there we are, loss of 6DOF control, four aft thrusters down, and I’m visualizing orbital mechanics. The space station is nose down. So we’re not exactly level with the station, but below it. If you’re below the station, you’re moving faster. That’s orbital mechanics. It’s going to make you move away from the station. So I’m doing all of this in my mind. I don’t know what control I have. What if I lose another thruster? What if we lose comm? What am I going to do?”

One thing that has surprised outside observers since publication of Wilmore’s harrowing experience is how NASA, knowing all of this, could have seriously entertained bringing the crew home on Starliner. Isaacman clearly had questions as well. He began reviewing the internal report on Starliner, published last November, almost immediately after becoming the space agency administrator in December. He wanted to understand why NASA insisted publicly for so long that it would bring astronauts back on Starliner, even though there was a safe backup option with Crew Dragon.
“Pretending that that did not exist, and focusing exclusively on a single pathway, created a cultural issue that leadership should have been able to step in and course correct,” Isaacman said during the teleconference. “What levels of the organization inside of NASA did that exist at? Multiple levels, including, I would say, right up to the administrator of NASA.”

Concerns predate the crew flight test

Some of NASA’s biggest lapses in judgment occurred before the crew flight test, the report found. In particular, these revolved around the second orbital flight test of Starliner, which took place two years earlier, in May 2022. During this flight, which was declared to be successful, three of the thrusters on the Starliner Service Module failed. In hindsight, this should have raised huge red flags for what was to come during the mission of Wilmore and Williams two years later. However, in his letter to NASA employees, Isaacman said the NASA and Boeing investigations into these failures did not push hard enough to find the root cause of the thruster failures. “The investigations often stopped at the proximate cause, treated it with a fix, or accepted the issue as an unexplained anomaly,” Isaacman said. “In some cases, the proximate-cause diagnosis itself was incorrect due to insufficient rigor in following the data to its logical conclusion.”

What happens next

In the 11 months since the return of Wilmore and Williams, NASA and Boeing have agreed that the next flight of Starliner, although intended to dock with the International Space Station, will fly without crew. NASA has previously said this flight could take place as early as April 2026. However, when asked about this timeline, Isaacman reiterated that a lot of work had to be done. “We are committed to helping Boeing work through this problem, to remediate the technical challenges, to fully understand the risk associated with this vehicle, and to try and minimize it to the greatest extent possible,” he said.
“And if we can implement a lot of the report recommendations, then we will fly again.” In a statement on Thursday, Boeing said it was “committed” to being one of NASA’s two commercial crew providers. A source recently told Ars that two NASA astronauts, Woody Hoburg and Jessica Wittner, have begun training for a potential “Starliner-2” mission that could take flight during the first half of next year, should the uncrewed test flight in 2026 go well. NASA has not confirmed that any astronauts have been assigned to Starliner-1.

Eric Berger, Senior Space Editor – Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.
Moreover, at Isaacman’s direction, the space agency released an internal report, comprising 311 pages, that details findings from the Program Investigation Team that looked into the Starliner flight. “Starliner has design and engineering deficiencies that must be corrected, but the most troubling failure revealed by this investigation is not hardware,” Isaacman wrote in his letter to the NASA workforce. “It is decision-making and leadership that, if left unchecked, could create a culture incompatible with human spaceflight.” Isaacman said there would be “leadership accountability” as a result of the decisions surrounding the Starliner program, but did not say which actions would be taken. “An outstanding day” The “Type A” classification of Starliner comes more than a year and a half after the vehicle’s ill-fated, initial crewed flight in early June 2024. During the more than daylong journey to the space station after launching on an Atlas V rocket, Starliner was beset by helium leaks in its propulsion system and then intermittent thruster failures. Still, after astronauts Butch Wilmore and Suni Williams eventually docked at the station, Boeing officials declared it a success. “We accomplished a lot, and really more than expected,” said Mark Nappi, vice president and manager of Boeing’s Commercial Crew Program, during a post-docking news conference. “We just had an outstanding day.” Over the subsequent weeks of the summer of 2024, NASA mostly backed Boeing, saying that its primary option was bringing the crew home on Starliner. Finally, by early August, NASA publicly wavered and admitted that Wilmore and Williams might return on a SpaceX Crew Dragon spacecraft. Yet Boeing remained steadfast. On a Boeing website called “Starliner Updates” that has since gone offline, as late as August 2, 2024, the company was declaring that its “confidence remains high” in Starliner’s return with crew (see archive). 
It was, in fact, not outstanding

However, on August 24, NASA made it official and decided that Wilmore and Williams would not fly back on Starliner. Instead, the crew would come home on a Crew Dragon. Wilmore and Williams eventually returned to Earth safely in March 2025 as part of the Crew 9 mission. The true danger the astronauts faced on board Starliner was not publicly revealed until after they landed and flew back to Houston. In an interview with Ars, Wilmore described the tense minutes when he had to take control of Starliner as its thrusters began to fail, one after the other. Essentially, Wilmore could not fully control Starliner any longer. But simply abandoning the docking attempt was not a palatable solution. Just as the thrusters were needed to control the vehicle during the docking process, they were also necessary to position Starliner for its deorbit burn and reentry to Earth’s atmosphere. So Wilmore had to contemplate whether it was riskier to approach the space station or try to fly back to Earth. “I don’t know that we can come back to Earth at that point,” he said. “I don’t know if we can. And matter of fact, I’m thinking we probably can’t. So there we are, loss of 6DOF control, four aft thrusters down, and I’m visualizing orbital mechanics. The space station is nose down. So we’re not exactly level with the station, but below it. If you’re below the station, you’re moving faster. That’s orbital mechanics. It’s going to make you move away from the station. So I’m doing all of this in my mind. I don’t know what control I have. What if I lose another thruster? What if we lose comm? What am I going to do?” One thing that has surprised outside observers since publication of Wilmore’s harrowing experience is how NASA, knowing all of this, could have seriously entertained bringing the crew home on Starliner. Isaacman clearly had questions as well. 
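Wilmore’s “if you’re below the station, you’re moving faster” remark is standard orbital mechanics: for a circular orbit, speed is v = sqrt(mu / r), so a lower orbit is a faster one. A minimal sketch of that relationship (the Earth constants and the rough 420 km ISS altitude are general background, not figures from this article, and the 10 km offset is purely illustrative):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean Earth radius, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed of a circular orbit at a given altitude: v = sqrt(mu / r)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

# The ISS orbits at roughly 420 km; suppose a vehicle sits 10 km below it.
v_station = circular_orbit_speed(420_000)
v_below = circular_orbit_speed(410_000)

print(f"station: {v_station:.1f} m/s, 10 km below: {v_below:.1f} m/s")
# The lower orbit is faster, so an uncontrolled vehicle slightly below
# the station tends to pull ahead of it rather than hold position.
assert v_below > v_station
```

This is the drift Wilmore says he was visualizing: without working thrusters, a spacecraft below the station moves away from it on its own.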
He began reviewing the internal report on Starliner, published last November, almost immediately after becoming the space agency administrator in December. He wanted to understand why NASA insisted publicly for so long that it would bring astronauts back on Starliner, even though there was a safe backup option with Crew Dragon. “Pretending that that did not exist, and focusing exclusively on a single pathway, created a cultural issue that leadership should have been able to step in and course correct,” Isaacman said during the teleconference. “What levels of the organization inside of NASA did that exist at? Multiple levels, including, I would say, right up to the administrator of NASA.”

Concerns predate the crew flight test

Some of NASA’s biggest lapses in judgment occurred before the crew flight test, the report found. In particular, these revolved around the second orbital flight test of Starliner, which took place two years earlier, in May 2022. During this flight, which was declared to be successful, three of the thrusters on the Starliner Service Module failed. In hindsight, this should have raised huge red flags for what was to come during the mission of Wilmore and Williams two years later. However, in his letter to NASA employees, Isaacman said the NASA and Boeing investigations into these failures did not push hard enough to find the root cause of the thruster failures. “The investigations often stopped at the proximate cause, treated it with a fix, or accepted the issue as an unexplained anomaly,” Isaacman said. “In some cases, the proximate-cause diagnosis itself was incorrect due to insufficient rigor in following the data to its logical conclusion.”

What happens next

In the 11 months since the return of Wilmore and Williams, NASA and Boeing have agreed that the next flight of Starliner, although intended to dock with the International Space Station, will fly without crew. NASA has previously said this flight could take place as early as April 2026. 
However, when asked about this timeline, Isaacman reiterated that a lot of work had to be done. “We are committed to helping Boeing work through this problem, to remediate the technical challenges, to fully understand the risk associated with this vehicle, and to try and minimize it to the greatest extent possible,” he said. “And if we can implement a lot of the report recommendations, then we will fly again.” In a statement on Thursday, Boeing said it was “committed” to being one of NASA’s two commercial crew providers. A source recently told Ars that two NASA astronauts, Woody Hoburg and Jessica Wittner, have begun training for a potential “Starliner-2” mission that could take flight during the first half of next year, should the uncrewed test flight in 2026 go well. NASA has not confirmed that any astronauts have been assigned to Starliner-1. Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important. |
======================================== |
[SOURCE: https://arstechnica.com/tech-policy/2026/02/supreme-court-blocks-trumps-emergency-tariffs-billions-in-refunds-may-be-owed/] | [TOKENS: 4469] |
Beyond Trump’s reach Supreme Court blocks Trump’s emergency tariffs, billions in refunds may be owed Economists estimated more than $175 billion may need to be refunded. Ashley Belanger – Feb 20, 2026 10:37 am | 272 Credit: Anna Moneymaker / Staff | Getty Images News The Supreme Court ruled Friday that Donald Trump was not authorized to implement emergency tariffs to ostensibly block illegal drug flows and offset trade deficits. It’s not immediately clear what the ruling may mean for businesses that paid various “reciprocal” tariffs that Trump changed frequently, raising and lowering rates at will during tense negotiations with the United States’ biggest trade partners. Divided 6-3, Supreme Court justices remanded the cases to lower courts, concluding that the International Emergency Economic Powers Act (IEEPA) does not give Trump power to impose tariffs. Chief Justice John Roberts wrote the opinion and was joined by Justices Neil Gorsuch, Amy Coney Barrett, Elena Kagan, Sonia Sotomayor, and Ketanji Brown Jackson. They concluded that Trump could not exclusively rely on IEEPA to impose tariffs “of unlimited amount and duration, on any product from any country” during peacetime. Only Congress has the power of the purse, Roberts wrote, and the few exceptions to that are bound by “explicit terms and subject to strict limits.” “Against that backdrop of clear and limited delegations, the Government reads IEEPA to give the President power to unilaterally impose unbounded tariffs and change them at will,” Roberts wrote. “That view would represent a transformative expansion of the President’s authority over tariff policy. 
It is also telling that in IEEPA’s half century of existence, no President has invoked the statute to impose any tariffs, let alone tariffs of this magnitude and scope. That ‘lack of historical precedent,’ coupled with ‘the breadth of authority’ that the President now claims, suggests that the tariffs extend beyond the President’s ‘legitimate reach.’” Back in November, analysts suggested that the Supreme Court ruling against Trump could force the government to issue refunds of up to $1 trillion. This morning, a new estimate from economists reduced that number, Reuters reported, estimating that more than $175 billion could be “at risk of having to be refunded.”

Ruling disrupts Trump plan to collect $900 billion

Trump lost primarily because IEEPA does not explicitly reference “tariffs” or “duties,” instead only giving Trump power to “regulate” “importation”—the two words in the statute that Trump tried to argue showed that Congress clearly authorized his power to impose tariffs. But the court did not agree that Congress intended to give the president “the independent power to impose tariffs on imports from any country, of any product, at any rate, for any amount of time,” Roberts wrote. “Those words cannot bear such weight,” particularly in peacetime. “The United States, after all, is not at war with every nation in the world.” Specifically, Trump failed to “identify any statute in which the power to regulate includes the power to tax,” Roberts wrote. And the majority of justices remained “skeptical” that in “IEEPA alone,” Congress intended to hide “a delegation of its birth-right power to tax within the quotidian power to ‘regulate.’” “A contrary reading would render IEEPA partly unconstitutional,” Roberts wrote. 
According to the majority, siding with Trump would free the president to “issue a dizzying array of modifications” to tariffs at will, “unconstrained by the significant procedural limitations in other tariff statutes.” The only check to that unprecedented power grab, the court suggested, would be a “veto-proof majority in Congress.” Trump has yet to comment on the ruling. Ahead of it, he claimed the tariffs were “common sense,” NBC News reported. Speaking at a steel manufacturing factory in northwest Georgia, Trump claimed that IEEPA tariffs were projected to bring in $900 billion “next year.” Not only could he now be forced to refund tariffs, but the Supreme Court ruling could also undo trade deals in which Trump used so-called reciprocal tariffs as leverage. Undoing tariffs will likely be a “mess,” Barrett said last year. “Until now, no President has read IEEPA to confer such power,” Roberts wrote, while noting that the court claims “no special competence in matters of economics or foreign affairs.”

Gorsuch seems to troll Trump

In a concurring opinion, Gorsuch slammed Trump as trying to expand the president’s authority in a way that would make it hard for Congress to ever retrieve lost powers. He claimed that Trump was seeking to secure a path forward where any president could declare a national emergency—a decision that would be “unreviewable”—to justify imposing “tariffs on nearly any goods he wishes, in any amount he wishes, based on emergencies he himself has declared.” “Just ask yourself: What President would willingly give up that kind of power?” Gorsuch wrote. Gorsuch further questioned if Trump was “seeking to exploit questionable statutory language to aggrandize his own power.” And he warned that accepting the dissenting view would allow Trump to randomly impose tariffs as low as 1 percent or as high as 1,000,000 percent on any product or country he wanted at any time. 
Gorsuch criticized justices with dissenting views, who disagreed that Congress’ intent in the statute was unclear and defended Trump’s claim that “IEEPA provides the clear statement needed to sustain the President’s tariffs.” Those justices argued that presidents have long been granted authority to impose tariffs and accused the majority of putting a “thumb on the scale” by requiring a strict reading of the statute. Instead, they argued for a special exception requiring a more general interpretation of statutes whenever presidents seek to regulate matters of foreign affairs. If that view was accepted, Gorsuch warned, presidents could seize even more power from Congress. Many other legislative powers “could be passed wholesale to the executive branch in a few loose statutory terms, no matter what domestic ramifications might follow. And, as we have seen, Congress would often find these powers nearly impossible to retrieve.” As a final note, Gorsuch took some time to sympathize with Trump supporters: For those who think it important for the Nation to impose more tariffs, I understand that today’s decision will be disappointing. All I can offer them is that most major decisions affecting the rights and responsibilities of the American people (including the duty to pay taxes and tariffs) are funneled through the legislative process for a reason. Yes, legislating can be hard and take time. And, yes, it can be tempting to bypass Congress when some pressing problem arises. But the deliberative nature of the legislative process was the whole point of its design. Through that process, the Nation can tap the combined wisdom of the people’s elected representatives, not just that of one faction or man. There, deliberation tempers impulse, and compromise hammers disagreements into workable solutions. 
And because laws must earn such broad support to survive the legislative process, they tend to endure, allowing ordinary people to plan their lives in ways they cannot when the rules shift from day to day.

Kavanaugh questions other Trump tariff authority

Under IEEPA, the majority ruled, Trump has the power to “impose penalties, restrictions, or controls on foreign commerce,” Barrett wrote. But he does not have the power to impose emergency tariffs, unless Congress updates laws to explicitly grant such authority. In his dissent, Justice Brett Kavanaugh insisted that it should not be up to courts to settle these “policy debates.” He defended Trump’s view that IEEPA granting power to “regulate” “importation” generally included tariffs, while arguing that Trump wasn’t seeking to expand his presidential authority at all. Many feared that the more conservative Supreme Court would side with Trump, and Kavanaugh’s opinion offered a peek at what that alternate reality could have looked like. “Importantly, IEEPA’s authorization for the President to impose tariffs did not grant the President any new substantive power,” Kavanaugh wrote. Instead, “IEEPA merely allows the President to impose tariffs somewhat more efficiently to deal with foreign threats during national emergencies.” He further claimed it was an “odd distinction” that the majority would interpret IEEPA as giving Trump authority to “block all imports from China” but not to “order even a $1 tariff on goods imported from China.” Downplaying the ruling’s significance, Kavanaugh echoed the Trump administration’s claims that the Supreme Court ruling won’t really affect Trump’s key policy of imposing tariffs to renegotiate trade deals or address other concerns. 
“The decision might not substantially constrain a President’s ability to order tariffs going forward,” Kavanaugh wrote, pointing to “numerous other federal statutes” that “authorize the President to impose tariffs.” However, a footnote in the majority’s opinion emphasized that all of the options that Kavanaugh cited “contain various combinations of procedural prerequisites, required agency determinations, and limits on the duration, amount, and scope of the tariffs they authorize.” It was precisely constraints like those that Trump’s broad reading of IEEPA lacked, the majority found. Kavanaugh acknowledged that the ruling would stop Trump from imposing tariffs at will, writing that other statutes require “a few additional procedural steps that IEEPA, as an emergency statute, does not require.” Winding down his arguments, Kavanaugh joined Trump administration officials in groaning that the “United States may be required to refund billions of dollars to importers who paid the IEEPA tariffs, even though some importers may have already passed on costs to consumers or others.” Kavanaugh makes a frequently overlooked point in this argument: IEEPA tariffs may have harmed consumers without any immediate remedy. It seems unlikely that consumers will get any relief in the short term, no matter what remedies the Supreme Court’s ruling triggers. For businesses, the primary relief will likely not be from refunds but from the small amount of certainty they will have going forward that tariffs won’t be suddenly changed or imposed overnight. Kavanaugh conceded that Trump’s tariffs “may or may not be wise policy.” But he fretted that Trump’s trade deals “worth trillions of dollars” could be undone by the ruling, while claiming the ruling has only generated more uncertainty on a global scale, including with America’s biggest rival, China. 
Interestingly, Kavanaugh also suggested that the ruling may put at legal risk the reading of another statute that Trump will likely rely on more heavily moving forward to impose tariffs. “One might think that the Court’s opinion would also mean that tariffs cannot be imposed under Section 232, which authorizes the President to ‘adjust the imports,’” Kavanaugh suggested. This story was updated to include views from Gorsuch and Kavanaugh. Ashley Belanger Senior Policy Reporter Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience. 272 Comments |
======================================== |
[SOURCE: https://arstechnica.com/cars/2026/02/tesla-slashes-cybertruck-prices-as-it-tries-to-move-unpainted-metal/#comments] | [TOKENS: 1624] |
looks like a skip Tesla slashes Cybertruck prices as it tries to move (unpainted) metal The stainless steel pickup truck is Tesla’s first real flop. Jonathan M. Gitlin – Feb 20, 2026 9:31 am | 330 A tenth of Tesla's 2025 Cybertruck sales were to SpaceX, another company owned and controlled by Elon Musk. Credit: Reginald Mathalone/NurPhoto via Getty Images Last night, Tesla made some hefty cuts to Cybertruck pricing in an effort to stimulate some sales. The bombastic tri-motor “Cyberbeast” is $15,000 cheaper at $99,990, albeit by dropping some previously free features like supercharging and FSD. And there’s now a new $59,990 entry-level model, a dual-motor configuration with a range of 325 miles (523 km) and the same 4.1-second 0–60 mph (0–97 km/h) time as the $79,990 premium all-wheel drive version. That actually makes the new entry-level model a good deal, at least in terms of Cybertrucks. Last year, the company introduced and then eliminated a single-motor rear-wheel drive variant, which found few takers when priced at $69,990; an extra motor for $10,000 less is quite a savings, and actually slightly cheaper than the price originally advertised for the RWD truck. As you might expect, Tesla has made some changes to get down to the new price. The range and 0–60 mph time might be the same as the more expensive dual-motor Cybertruck, but towing capacity is reduced from 11,000 lbs (4,990 kg) to 7,000 lbs (3,175 kg), and cargo capacity drops from 2,500 lbs (1,134 kg) to 2,006 lbs (910 kg). Steel springs and adaptive dampers replace the air suspension. There are different tail lights. 
The inside features textile seats—maybe someone there reads Ars—but the cheapest Cybertruck does without seat ventilation for the front row or seat heaters for the second row. There’s also a different console, no AC outlets in the cabin, and fewer speakers, with no active noise-cancellation system. But it’s still $20,000 more expensive than Elon Musk told us it would be during the angular, unpainted vehicle’s reveal back in 2019. Back then, Musk promised a $39,900 price tag, as well as a few other things that never saw the light of day, like a true monocoque construction. Designing and building the odd-looking vehicle proved particularly troublesome for Tesla, which has never found those processes particularly easy. While other new Tesla models found themselves mired in “production hell,” in 2023, Musk said that “we dug our own grave with the Cybertruck.” Indeed, if the company based its business plans on the public sales projections of 250,000 trucks a year—something Musk said would happen by 2025—that certainly would be a problem. Appealing to neither traditional pickup truck buyers, who have largely rejected going to electric vehicles, nor the majority of EV enthusiasts even before Musk’s politics further soured things, fewer than 39,000 Cybertrucks were sold in 2024, and just over 20,000 found homes in 2025. The Edsel might be Ford’s most famous failure, but even it posted superior sales numbers during its relatively brief life. Jonathan M. Gitlin Automotive Editor Jonathan M. Gitlin Automotive Editor Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica's automotive coverage. He lives in Washington, DC. 330 Comments Tesla slashes Cybertruck prices as it tries to move (unpainted) metal The stainless steel pickup truck is Tesla’s first real flop. 
MyBloodyBallantine Apparently this is also a limited time offer? https://electrek.co/2026/02/20/elon...0k-that-makes-sense-just-10-days-after-launch February 20, 2026 at 3:51 pm Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important.
======================================== |
[SOURCE: https://arstechnica.com/gaming/2026/02/diablo-iis-new-warlock-is-a-great-excuse-to-revisit-a-classic-game/] | [TOKENS: 2447] |
Staying classy Diablo II’s new Warlock is a great excuse to revisit a classic game New skill tree paths offer a fun twist on some generally familiar mechanics. Kyle Orland – Feb 19, 2026 3:04 pm | 40 Our children are learning demonic worship from these satanic games! Credit: Blizzard Diablo II is one of those storied classic PC games that’s pretty much always fun to come back to—so much so that some players have put thousands of hours into the game over more than two decades. Across all those years, though, the game itself has barely changed, becoming something of a familiar, comfortable blanket of hellfire for longtime players. That makes last week’s introduction of a new playable Warlock class in Diablo II Resurrected’s “Reign of the Warlock” DLC a pretty big deal. And after playing through a few Acts with the Warlock over the recent holiday weekend, I found the new option to be a great excuse to come back to a game that’s overdue for a shot in the arm. War-locked in How your Warlock build goes depends heavily on which of the three main upgrade branches you choose to go down. Of these, I found the Eldritch branch the most interesting and fun to explore. That’s in large part because of a new skill that lets you levitate a powerful two-handed weapon in front of you while still holding a strong shield in your hands. It seems like a small change, but my relief was palpable in this playthrough as I was able to avoid the usual tough choices between defense and offense while juggling my inventory. Then there’s the Echoing Strike skill, which essentially lets you turn your melee weapons into ranged attacks, using a bit of mana to throw a ghostly “echo” at far-off enemies.
I ended up relying heavily on this almost as soon as I got it at Level 14, spamming a long-range copy of my powerful two-handed staff, complete with its fire and poison effects intact. Like weapon levitation, adding an effective ranged attack to what were once exclusively close-quarters combat options is a simple change that opens up a lot of gameplay variety. Throwing an ethereal copy of your weapon across the void is extremely satisfying. Credit: Blizzard The Demon upgrade branch has been much less interesting, in my experience. The basic pattern of summoning monstrous allies to fight alongside you and absorb some enemy attention will be broadly familiar to anyone who has played the Necromancer class. And, to be frank, I found summoning a massive army of fragile skeletons as a Necromancer to be a lot more fun than summoning a singular tank of a Demon as a Warlock early on (summoning multiple demons at once requires a full 10 points of skill tree investment). Of the Warlock’s three Demonic partner options, I found myself leaning most on the Tainted, which can stay out of harm’s way while harassing slower enemies from afar with fireballs. The other Demon options both had their charms but often got too caught up in massive enemy swarms to be as effective as I wanted. I also didn’t see much point in the skill options that let me teleport my demon into a specific fight or have it sacrifice itself for some splash damage; their standard, AI-controlled attack patterns were usually sufficient. Then there’s the Chaos upgrade branch, which is focused mostly on area-of-effect (AoE) spells. My build thus far has ended up pretty reliant on the direct-damage AoE options; the Flame Wave, in particular, is especially good for quickly clearing out long, narrow corridors.
I also leaned on the Sigil of Lethargy, which effectively slows down some of the more frenetic enemy swarms and gives you some time to gather your attack plan. Something borrowed, something blue… Combining these Chaos skills with the weapon-improving options in the Eldritch branch has made my time with the Diablo II Warlock feel like a bit of a “best of both worlds” situation. The mixture of ranged combat options, area-of-effect magic, and ally-summoning abilities ends up feeling like a weird cross between a Sorceress, Amazon, and Necromancer, without feeling like a carbon copy of any of those classes. I haven’t yet gotten to the new late-game content in the “Reign of the Warlock” DLC, so I can’t say how well the Warlock holds up in the extreme difficulty of the Terror Zones. I also haven’t experimented with any of the truly broken Warlock builds that some committed high-level min-maxxers have been busy discovering. As a casual excuse to revisit the world of Diablo II, though, the Warlock class provides just enough of a new twist on some familiar gameplay mechanics to make it worth the trip. Kyle Orland Senior Gaming Editor Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper. 40 Comments
P Plati I'm still blown away that they released new content for this game.
February 19, 2026 at 8:07 pm Mr. Perfect Yeah, despite complaining about the cost last week, I bought it to play with old friends and am having fun. The demon tree is what I've been focusing on, since Necromancer and Druid were both past favorites. Warlock's summon abilities are closer to Druid than Necromancer, since he can only have three demons at most. One thing that doesn't seem to be mentioned much is that all your demons don't have to be the same kind. I'm currently using one Goatman to tank along with one Tainted for the fire resist aura and ranged damage, which has been working all right in normal mode. The Defiler's debuff wasn't helping as much as I wanted and it doesn't seem to make direct attacks on its own, so it's out of the rotation. Build guides would indicate a Warlock needs to pick a demonic lane and stay in it for higher difficulties though, which is a bit of a missed opportunity. It could be a lot of fun to have a variety of demons out, much like a Druid has a primary summon along with supporting Spirits and Vines. One note of warning: Characters in the Warlock expansion can't play with non-Warlock-expansion characters. Even my own pre-expansion characters were treated as separate. The pre-expansion characters have their own shared stash while the Warlock-upgraded ones have a different stash entirely. Conversion is a quick button click; just make sure you don't leave stuff orphaned in the pre-expansion stash. February 19, 2026 at 9:26 pm
======================================== |
[SOURCE: https://arstechnica.com/space/2026/02/nasa-chief-classifies-starliner-flight-as-type-a-mishap-says-agency-made-mistakes/#comments] | [TOKENS: 3408] |
Radical transparency NASA chief classifies Starliner flight as “Type A” mishap, says agency made mistakes “The most troubling failure revealed by this investigation is not hardware.” Eric Berger – Feb 19, 2026 4:59 pm | 271 NASA astronauts Butch Wilmore and Suni Williams wave to their families, friends, and NASA officials on their way to the launch pad on June 5, 2024, to board Boeing's Starliner spacecraft. Credit: Joe Raedle/Getty Images NASA on Thursday announced it has formally classified the 2024 crewed flight of the Starliner spacecraft as a “Type A” mishap, an acknowledgement that the test flight was a serious failure. As part of the announcement, NASA Administrator Jared Isaacman sent an agency-wide letter that recognized the shortcomings of both Starliner’s developer, Boeing, and the space agency itself. Starliner flew under the auspices of NASA’s Commercial Crew Program, in which the agency procures astronaut transportation services to the International Space Station. “We are taking ownership of our shortcomings,” Isaacman said. The letter and a subsequent news conference on Thursday afternoon were remarkable for the amount of accountability taken by NASA. Moreover, at Isaacman’s direction, the space agency released an internal report, comprising 311 pages, that details findings from the Program Investigation Team that looked into the Starliner flight. “Starliner has design and engineering deficiencies that must be corrected, but the most troubling failure revealed by this investigation is not hardware,” Isaacman wrote in his letter to the NASA workforce.
“It is decision-making and leadership that, if left unchecked, could create a culture incompatible with human spaceflight.” Isaacman said there would be “leadership accountability” as a result of the decisions surrounding the Starliner program, but did not say which actions would be taken. “An outstanding day” The “Type A” classification of Starliner comes more than a year and a half after the vehicle’s ill-fated, initial crewed flight in early June 2024. During the more than daylong journey to the space station after launching on an Atlas V rocket, Starliner was beset by helium leaks in its propulsion system and then intermittent thruster failures. Still, after astronauts Butch Wilmore and Suni Williams eventually docked at the station, Boeing officials declared it a success. “We accomplished a lot, and really more than expected,” said Mark Nappi, vice president and manager of Boeing’s Commercial Crew Program, during a post-docking news conference. “We just had an outstanding day.” Over the subsequent weeks of the summer of 2024, NASA mostly backed Boeing, saying that its primary option was bringing the crew home on Starliner. Finally, by early August, NASA publicly wavered and admitted that Wilmore and Williams might return on a SpaceX Crew Dragon spacecraft. Yet Boeing remained steadfast. On a Boeing website called “Starliner Updates” that has since gone offline, as late as August 2, 2024, the company was declaring that its “confidence remains high” in Starliner’s return with crew (see archive). It was, in fact, not outstanding However, on August 24, NASA made it official and decided that Wilmore and Williams would not fly back on Starliner. Instead, the crew would come home on a Crew Dragon. Wilmore and Williams eventually returned to Earth safely in March 2025 as part of the Crew 9 mission. The true danger the astronauts faced on board Starliner was not publicly revealed until after they landed and flew back to Houston. 
In an interview with Ars, Wilmore described the tense minutes when he had to take control of Starliner as its thrusters began to fail, one after the other. Essentially, Wilmore could not fully control Starliner any longer. But simply abandoning the docking attempt was not a palatable solution. Just as the thrusters were needed to control the vehicle during the docking process, they were also necessary to position Starliner for its deorbit burn and reentry to Earth’s atmosphere. So Wilmore had to contemplate whether it was riskier to approach the space station or try to fly back to Earth. “I don’t know that we can come back to Earth at that point,” he said. “I don’t know if we can. And matter of fact, I’m thinking we probably can’t. So there we are, loss of 6DOF control, four aft thrusters down, and I’m visualizing orbital mechanics. The space station is nose down. So we’re not exactly level with the station, but below it. If you’re below the station, you’re moving faster. That’s orbital mechanics. It’s going to make you move away from the station. So I’m doing all of this in my mind. I don’t know what control I have. What if I lose another thruster? What if we lose comm? What am I going to do?” One thing that has surprised outside observers since publication of Wilmore’s harrowing experience is how NASA, knowing all of this, could have seriously entertained bringing the crew home on Starliner. Isaacman clearly had questions as well. He began reviewing the internal report on Starliner, published last November, almost immediately after becoming the space agency administrator in December. He wanted to understand why NASA insisted publicly for so long that it would bring astronauts back on Starliner, even though there was a safe backup option with Crew Dragon. 
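Wilmore's aside that a spacecraft below the station is "moving faster" is standard orbital mechanics: for a circular orbit, speed is v = sqrt(mu/r), so a smaller orbital radius means a higher speed. A short illustrative sketch in Python (the 420 km and 410 km altitudes here are round-number assumptions for the ISS and a vehicle slightly below it, not mission data):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed of a circular orbit at the given altitude: v = sqrt(mu / r)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

v_station = circular_orbit_speed(420_000)  # roughly ISS altitude
v_below = circular_orbit_speed(410_000)    # 10 km lower
assert v_below > v_station  # the lower orbit is the faster one
```

Both speeds come out near 7.7 km/s, but the lower orbit is slightly faster, which is why a vehicle loitering below the station tends to drift ahead of it rather than hold position.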
“Pretending that that did not exist, and focusing exclusively on a single pathway, created a cultural issue that leadership should have been able to step in and course correct,” Isaacman said during the teleconference. “What levels of the organization inside of NASA did that exist at? Multiple levels, including, I would say, right up to the administrator of NASA.” Concerns predate the crew flight test Some of NASA’s biggest lapses in judgment occurred before the crew flight test, the report found. In particular, these revolved around the second orbital flight test of Starliner, which took place two years earlier, in May 2022. During this flight, which was declared to be successful, three of the thrusters on the Starliner Service Module failed. In hindsight, this should have raised huge red flags for what was to come during the mission of Wilmore and Williams two years later. However, in his letter to NASA employees, Isaacman said the NASA and Boeing investigations into these failures did not push hard enough to find the root cause of the thruster failures. “The investigations often stopped at the proximate cause, treated it with a fix, or accepted the issue as an unexplained anomaly,” Isaacman said. “In some cases, the proximate-cause diagnosis itself was incorrect due to insufficient rigor in following the data to its logical conclusion.” What happens next In the 11 months since the return of Wilmore and Williams, NASA and Boeing have agreed that the next flight of Starliner, although intended to dock with the International Space Station, will fly without crew. NASA has previously said this flight could take place as early as April 2026. However, when asked about this timeline, Isaacman reiterated that a lot of work had to be done. “We are committed to helping Boeing work through this problem, to remediate the technical challenges, to fully understand the risk associated with this vehicle, and to try and minimize it to the greatest extent possible,” he said. 
“And if we can implement a lot of the report recommendations, then we will fly again.” In a statement on Thursday, Boeing said it was “committed” to being one of NASA’s two commercial crew providers. A source recently told Ars that two NASA astronauts, Woody Hoburg and Jessica Wittner, have begun training for a potential “Starliner-2” mission that could take flight during the first half of next year, should the uncrewed test flight in 2026 go well. NASA has not confirmed that any astronauts have been assigned to Starliner-1. Eric Berger Senior Space Editor Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston. 271 Comments
======================================== |
[SOURCE: https://arstechnica.com/gadgets/2026/02/rubiks-wowcube-adds-complexity-possibility-by-reinventing-the-puzzle-cube/] | [TOKENS: 3836] |
Hands-on Rubik’s WOWCube adds complexity, possibility by reinventing the puzzle cube Technology is a double-edged sword in the $399 Rubik’s Cube-inspired toy. Scharon Harding – Feb 19, 2026 4:30 pm | 60 Credit: Scharon Harding There’s something special about the gadget that “just works.” Technology can open opportunities for those devices but also complicate and weigh down products that have done just fine without things like sensors and software. So when a product like the beloved Rubik’s Cube gets stuffed with wires, processors, and rechargeable batteries, there’s demand for it to be not just on par with the original—but markedly better. The Cubios Rubik’s WOWCube successfully breathes fresh life into the classic puzzle, but it’s also an example of how too much technology can cannibalize a gadget’s main appeal. The WOWCube showing off one of its screensavers. Credit: Scharon Harding The WOWCube is a modern take on the Rubik’s Cube, which began as an experiment by Hungarian architecture professor Ernő Rubik, who aimed to make a structure composed of eight cubes that could move independently without the structure collapsing. The Rubik’s Cube became a widely distributed toy, an ’80s craze, and, eventually, a puzzle icon. The Rubik’s Cube did all that without electronics and with a current MSRP of $10. The WOWCube takes the opposite approach. It’s $399 (as of this writing) and ditches the traditional 3×3 grid in favor of a 2×2 grid that can still do the traditional Rubik’s puzzle (albeit on a smaller scale) and perform a host of other tricks, including playing other games and telling the weather.
Cubios Rubik’s WOWCube specs:
Resolution per panel: 240×240 (5760×240 total)
Panel type: 24× 1.4-inch IPS panels
Weight: 3.58 ounces
Dimensions: 2.76×2.76×2.76 inches
Battery: 8× 450 mAh (3600 mAh total)
Audio: 8× speakers
OS: CubiOS
Charging dock: ESP32-S3 SoC, USB-C port, WOWCube proprietary charging interface
A smaller puzzle The WOWCube’s 2×2 grid will disappoint hardcore puzzlers. There’s no way to play the traditional 3×3 version or even harder modified versions of the 2×2 grid. With only 24 squares, compared to the traditional 54, solving the WOWCube is significantly easier than solving a standard Rubik’s Cube, although skilled players might enjoy the challenge of trying to solve the WOWCube extra rapidly. For people who are awful at the original Rubik’s Cube, like this author, a more accessible version of the puzzle is welcome. Solving the new Rubik’s Cube feels more attainable and less frustrating. The WOWCube is made up of eight modules. Each module has its own PCB, processor, gyroscope, and accelerometer. A Cubios spokesperson told me that the company opted for a 2×2 grid because “the most expensive components are the screens and the motherboards with the processor and battery, so increasing it to a 3×3 model would raise” the price. The tradeoff raises the question of whether electronics really improve the Rubik’s Cube. Games and other apps Once I played some of the WOWCube’s other games, I saw the advantage of the smaller grid. The 2×2 layout is more appropriate for games like White Rabbit, which is like Pac-Man but relies on tilting and twisting the cube, or Ladybug, where you twist the cube to create a path for a perpetually crawling ladybug. A central module might add unneeded complexity and space to these games and other WOWCube apps, like Pixel World, which is like a Rubik’s Cube puzzle but with images depicting global landmarks, or the WOWCube implementation of Gabriele Cirulli’s puzzle game, 2048. 
One of the “games” makes the WOWCube look like a virtual aquarium. Scharon Harding The Ladybug game. Scharon Harding At the time of writing, the WOWCube has 15 “games,” including the Rubik’s Cube puzzle. Most of the games are free, but some, such as Space Invaders Cubed ($30) and Sunny Side Up ($5), cost money. Unlike the original Rubik’s Cube, which is content to live on your shelf until you need a brain exercise or go on a road trip, the WOWCube craves attention with dozens of colorful screens, sound effects, and efforts to be more than a toy. With its Widgets app open, the cube can display information like the time, temperature, and alerts from a limited selection of messaging apps. More advanced actions, like checking the temperature for tomorrow or opening a WhatsApp message, are unavailable. There’s room for improvement, but further development, perhaps around features like an alarm clock or reminders, could turn the WOWCube into a helpful desk companion. Technology overload The new technology makes the Rubik’s Cube more versatile, exciting, and useful while bringing the toy back into the spotlight; at times, though, it also brings more complexity to a simple beloved concept. Usually, to open an app, make a selection, or otherwise input yes, you “knock” on the side of the WOWCube twice. You also have to shake the cube three times in order to exit an app, and you can’t open an app when another app is open. Being able to tap an icon or press an actual button would make tasks, like opening apps or controlling volume and brightness levels, easier. On a couple of occasions, my device got buggy and inadvertently turned off some, but not all, of its screens. 
The reliance on a battery and charging dock that plugs into a wall presents limitations, too. The WOWCube showing its main menu while sitting next to its charging dock. Credit: Scharon Harding The WOWCube’s makers brag about the device’s eight speakers, processors, gyroscopes, and accelerometers, but I found the tilting mechanism unreliable and, at times, frustrating for doing things like highlighting an icon. Perhaps I don’t hold the WOWCube at the angles that its creators intended. There were also times when the image was upside down, and main information was displayed on a side of the cube that was facing away from me. One of my favorite features: WOWCube’s pomodoro-like timer app. Credit: Scharon Harding The WOWCube has its own iOS and Android app, WOWCube Connect, which lets you connect the toy to your phone via Bluetooth and download new apps to the device via the dock’s Wi-Fi connection. You can also use the app to customize things like widgets, screensavers, and display brightness. If you don’t want to do any of those things, you can disconnect the WOWCube from your phone and reconnect it only when you want to. I wasn’t able to use the iOS app unless I agreed to allow the “app to track activity.” This gives me privacy concerns, so I reached out to Cubios to ask if there’s a way to use the app without the company tracking your activity. A spokesperson informed me that you can avoid tracking by selecting “allow app to track activity” in the app and then telling your phone to ask the app not to track you in the subsequent prompt that pops up. But you’ll only get the prompt if your phone is set to allow apps to request to track. New-age Rubik’s Cube Cubios attempted to reinvent a classic puzzle with the WOWCube. 
In the process, it added bells and whistles that detract from what originally made Rubik’s Cubes great. The actual Rubik’s Cube puzzle is scaled back, and the idea of spending hours playing with the cube is hindered by its finite battery life (the WOWCube can last up to five hours of constant play, Cubios claims). The device’s reliance on sensors and chips doesn’t always yield a predictable user experience, especially when navigating apps. And all of its tech makes the puzzle about 40 times pricier than the classic toy. IPS screens, integrated speakers, and app integration add more possibilities, but some might argue that the Rubik’s Cube was sufficient without them. Notably, the WOWCube began as its own product and got the rights to use Rubik’s branding in 2024. We’ve seen technology come for the Rubik’s Cube before. The Rubik’s Revolution we tested years ago had pressure-sensitive, LED-lit buttons for faces. In 2020, Rubik’s Connected came out with its own companion app. Clearly, there’s interest in bringing the Rubik’s Cube into the 21st century. For those who believe in that mission, the WOWCube is a fascinating new chapter for the puzzle. I applaud Cubios’ efforts to bring the Rubik’s Cube new relevance and remain intrigued by the potential of new software-driven puzzles and uses. But it’s hard to overlook the downsides of its tech reliance. And the WOWCube could never replace the classic. This article was updated with comments from a Cubios spokesperson. Scharon Harding Senior Technology Reporter Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She's been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK. 
======================================== |
[SOURCE: https://arstechnica.com/tech-policy/2026/02/microsoft-removes-guide-on-how-to-train-llms-on-pirated-harry-potter-books/#comments] | [TOKENS: 4252] |
Wizarding world of AI slop Microsoft deletes blog telling users to train AI on pirated Harry Potter books The now-deleted Harry Potter dataset was “mistakenly” marked public domain. Ashley Belanger – Feb 20, 2026 7:11 am | 90 Microsoft generated an AI image of Harry Potter with a Microsoft logo in a now-deleted blog. Credit: via Microsoft's deleted blog Following backlash in a Hacker News thread, Microsoft deleted a blog post that critics said encouraged developers to pirate Harry Potter books to train AI models that could then be used to create AI slop. The blog, which is archived here, was written in November 2024 by a senior product manager, Pooja Kamath. According to her LinkedIn, Kamath has been at Microsoft for more than a decade and remains with the company. In 2024, Microsoft tapped her to promote a new feature that the blog said made it easier to “add generative AI features to your own applications with just a few lines of code using Azure SQL DB, LangChain, and LLMs.” What better way to show “engaging and relatable examples” of Microsoft’s new feature that would “resonate with a wide audience” than to “use a well-known dataset” like Harry Potter books, the blog said. 
The books are “one of the most famous and cherished series in literary history,” the blog noted, and fans could use the LLMs they trained in two fun ways: building Q&A systems providing “context-rich answers” and generating “new AI-driven Harry Potter fan fiction” that’s “sure to delight Potterheads.” To help Microsoft customers achieve this vision, the blog linked to a Kaggle dataset that included all seven Harry Potter books, which, Ars verified, has been available online for years and incorrectly marked as “public domain.” Kaggle’s terms say that rights holders can send notices of infringing content, and repeat offenders risk suspensions, but Hacker News commenters speculated that the Harry Potter dataset flew under the radar, with only 10,000 downloads over time, not catching the attention of J.K. Rowling, who famously keeps a strong grip on the Harry Potter copyrights. The dataset was promptly deleted on Thursday after Ars reached out to the uploader, Shubham Maindola, a data scientist in India with no apparent links to Microsoft. Maindola told Ars that “the dataset was marked as Public Domain by mistake. There was no intention to misrepresent the licensing status of the works.” It’s unclear whether Kamath was directed to link to the Harry Potter books dataset in the blog or if it was an individual choice. Cathay Y. N. Smith, a law professor and co-director of Chicago-Kent College of Law’s Program in Intellectual Property Law, told Ars that Kamath may not have realized the books were too recent to be in the public domain. “Someone might be really knowledgeable about books and technology, but not necessarily about copyright terms and how long they last,” Smith said. “Especially if she saw that something was marked by another reputable company as being public domain.” Microsoft declined Ars’ request to comment. Kaggle did not respond to Ars’ request to comment. 
Microsoft was “probably smart” to pull the blog On Hacker News, commenters suggested that it’s unlikely anyone familiar with the popular franchise would believe the Harry Potter books were in the public domain. They debated whether Microsoft’s blog was “problematic copyright-wise,” since Microsoft not only encouraged customers to download the infringing materials but also used the books themselves to create Harry Potter AI models that relied on beloved characters to hype Microsoft products. Microsoft’s blog was posted more than a year ago, at a time when AI firms began facing lawsuits alleging that their AI models had infringed copyrights by training on pirated materials and regurgitating works verbatim. The blog recommended that users learn to train their own AI models by downloading the Harry Potter dataset and then uploading text files to Azure Blob Storage. It included example models based on a dataset that Microsoft seemingly uploaded to Azure Blob Storage, which only included the first book, Harry Potter and the Sorcerer’s Stone. By training large language models (LLMs) on those text files, the blog said, Harry Potter fans could create Q&A systems capable of pulling up relevant excerpts from the books. An example query offered was “Wizarding World snacks,” which retrieved an excerpt from The Sorcerer’s Stone where Harry marvels at strange treats like Bertie Bott’s Every Flavor Beans and chocolate frogs. Another prompt asking “How did Harry feel when he first learnt that he was a Wizard?” generated an output pointing to various early excerpts in the book. Example from Microsoft’s blog of a Q&A system output. 
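The Q&A setup described above is a retrieval workflow: split the book text into chunks, represent each chunk as a vector, and return the chunks most similar to a query. The sketch below is not Microsoft's code and uses no Azure or LangChain components; it substitutes a crude bag-of-words vector and cosine similarity purely to illustrate the retrieval step, and the sample chunks are hypothetical stand-ins rather than actual book text.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": raw word counts. Real pipelines use a learned
    # embedding model with vectors stored in a vector index or database.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical stand-in chunks; a real system would chunk the source text.
chunks = [
    "Bertie Bott's Every Flavor Beans and chocolate frogs were wizarding treats on the trolley.",
    "The letter announced that Harry had a place at a school for wizards.",
    "Quidditch is played on broomsticks with four balls and six hoops.",
]

def retrieve(query, chunks, k=1):
    # Rank chunks by similarity to the query and return the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

print(retrieve("Wizarding World snacks like chocolate frogs", chunks))
```

A production version would swap in real embeddings and a vector store (the blog's setup reportedly used Azure SQL's vector support via LangChain), but the ranking idea is the same: the chunk nearest the query vector becomes the "context-rich" excerpt returned to the user.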
But perhaps an even more exciting use case, Kamath suggested, was generating fan fiction to “explore new adventures” and “even create alternate endings.” That model could quickly comb the dataset for “contextually similar” excerpts that could be used to output fresh stories that fit with existing narratives and incorporate “elements from the retrieved passages,” the blog said. As an example, Kamath trained a model to write a Harry Potter story she could use to market the feature she was blogging about. She asked the model to write a story in which Harry meets a new friend on the Hogwarts Express train who tells him all about Microsoft’s Native Vector Support in SQL “in the Muggle world.” Drawing on parts of The Sorcerer’s Stone where Harry learns about Quidditch and gets to know Hermione Granger, the fan fiction showed a boy selling Harry on Microsoft’s “amazing” new feature. To do this, he likened it to having a spell that helps you find exactly what you need among thousands of options, instantly, while declaring it was perfect for machine learning, AI, and recommendation systems. Further blurring the lines between Microsoft and Harry Potter brands, Kamath also generated an image showing Harry with his new friend, stamped with a Microsoft logo. Smith told Ars that both use cases could frustrate rights holders, depending on the content in the model outputs. “I think that the regurgitation and the creation of fan fiction, they both could flag copyright issues, in that fan fiction often has to take from the expressive elements, a copyrighted character, a character that’s famous enough to be protected by a copyright law or plot stories or sequences,” Smith said. “If these things are copied and reproduced, then that output could be potentially infringing.” But it’s also still a gray area. 
Looking at the blog, Smith said, “I would be concerned,” but “I wouldn’t say it’s automatically infringement.” Smith told Ars that, in pulling the blog, Microsoft “was probably smart,” since courts have only generally said that training AI on copyrighted books is fair use. But courts continue to probe questions about pirated AI training materials. On the deleted Kaggle dataset page, Maindola previously explained that to source the data, he “downloaded the ebooks and then converted them to txt files.” Microsoft may have infringed copyrights If Microsoft ever faced questions as to whether the company knowingly used pirated books to train the example models, fair use “could be a difficult argument,” Smith said. Hacker News commenters suggested the blog could be considered fair use, since the training guide was for “educational purposes,” and Smith said that Microsoft could raise some “good arguments” in its defense. However, she also suggested that Microsoft could be deemed liable for contributing to infringement on some level after leaving the blog up for a year. Before it was removed, the Kaggle dataset was downloaded more than 10,000 times. “The ultimate result is to create something infringing by saying, ‘Hey, here you go, go grab that infringing stuff and use that in our system,’” Smith said. “They could potentially have some sort of secondary contributory liability for copyright infringement, downloading it, as well as then using it to encourage others to use it for training purposes.” On Hacker News, commenters slammed the blog, including a self-described former Microsoft employee who claimed that Microsoft lets employees “blog without having to go through some approval or editing process.” “It looks like somebody made a bad judgment call on what to put in a company blog post (and maybe what constitutes ethical activity) and that it was taken down as soon as someone noticed,” the former employee said. 
Others suggested the blame lay solely with the Kaggle uploader, Maindola, who told Ars that the dataset should never have been marked “public domain.” But Microsoft critics pushed back, noting that the Kaggle page made it clear that no special permission was granted and that Microsoft’s employee should have known better. “They don’t need to know any details to know that these properties belong to massive companies and aren’t free for the taking,” one commenter said. The Harry Potter books weren’t the only books targeted, the thread noted, linking to a separate Azure sample containing Isaac Asimov’s Foundation series, which is also not in the public domain. “Microsoft could have used any dataset for their blog, they could have even chosen to use actual public domain novels,” another Hacker News commenter wrote. “Instead, they opted to use copywritten works that J.K. hasn’t released into the public domain (unless user ‘Shubham Maindola’ is J.K.’s alter ego).” Smith suggested Microsoft could have avoided this week’s backlash by more carefully reviewing blogs, noting that “if a company is risk averse, this would probably be flagged.” But she also understood Kamath’s preference for Harry Potter over the many long-forgotten characters that exist in the public domain. On Hacker News, some commenters defended Kamath’s blog, urging that it should be considered fair use since nonprofits and educational institutions could do the same thing in a teaching context without issue. “I would have been concerned if I were the one clearing this for Microsoft, but at the same time, I completely understand what this employee was doing,” Smith said. “No one wants to write fan fiction about books that are in the public domain.” Ashley Belanger Senior Policy Reporter Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. 
She is a Chicago-based journalist with 20 years of experience. 
On Hacker News, some commenters defended Kamath’s blog, urging that it should be considered fair use since nonprofits and educational institutions could do the same thing in a teaching context without issue. “I would have been concerned if I were the one clearing this for Microsoft, but at the same time, I completely understand what this employee was doing,” Smith said. “No one wants to write fan fiction about books that are in the public domain.” Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important. |
======================================== |
[SOURCE: https://arstechnica.com/gaming/2026/02/diablo-iis-new-warlock-is-a-great-excuse-to-revisit-a-classic-game/#comments] | [TOKENS: 2447] |
Staying classy Diablo II’s new Warlock is a great excuse to revisit a classic game New skill tree paths offer a fun twist on some generally familiar mechanics. Kyle Orland – Feb 19, 2026 3:04 pm | 40 Our children are learning demonic worship from these satanic games! Credit: Blizzard Diablo II is one of those storied classic PC games that’s pretty much always fun to come back to—so much so that some players have put thousands of hours into the game over more than two decades. Across all those years, though, the game itself has barely changed, becoming something of a familiar, comfortable blanket of hellfire for longtime players. That makes last week’s introduction of a new playable Warlock class in Diablo II Resurrected’s new “Reign of the Warlock” DLC a pretty big deal. And after playing through a few Acts with the Warlock over the recent holiday weekend, I found the new option to be a great excuse to come back to a game that’s overdue for a shot in the arm. War-locked in How your Warlock build goes depends heavily on which of the three main upgrade branches you choose to go down. Of these, I found the Eldritch branch the most interesting and fun to explore. That’s in large part because of a new skill that lets you levitate a powerful two-handed weapon in front of you while still holding a strong shield in your hands. It seems like a small change, but my relief was palpable in this playthrough as I was able to avoid tough choices between defense and offense while juggling my inventory. Then there’s the Echoing Strike skill, which essentially lets you turn your melee weapons into ranged attacks, using a bit of mana to throw a ghostly “echo” at far-off enemies. 
I ended up relying heavily on this almost as soon as I got it at Level 14, spamming a long-range copy of my powerful two-handed staff, complete with its fire and poison effects intact. Like weapon levitation, adding an effective ranged attack to what were once exclusively close-quarters combat options is a simple change that opens up a lot of gameplay variety. Throwing an ethereal copy of your weapon across the void is extremely satisfying. Credit: Blizzard The Demon upgrade branch has been much less interesting, in my experience. The basic pattern of summoning monstrous allies to fight alongside you and absorb some enemy attention will be broadly familiar to anyone who has played the Necromancer class. And, to be frank, I found summoning a massive army of fragile skeletons as a Necromancer to be a lot more fun than summoning a singular tank of a Demon as a Warlock early on (summoning multiple demons at once requires a full 10 points of skill tree investment). Of the Warlock’s three Demonic partner options, I found myself leaning most on the Tainted, which can stay out of harm’s way while harassing slower enemies from afar with fireballs. The other Demon options both had their charms but often got too caught up in massive enemy swarms to be as effective as I wanted. I also didn’t see much point in the skill options that let me teleport my demon into a specific fight or have it sacrifice itself for some splash damage; the demons’ standard, AI-controlled attack patterns were usually sufficient. Then there’s the Chaos upgrade branch, which is focused mostly on area-of-effect (AoE) spells. My build thus far has ended up pretty reliant on the direct-damage AoE options; the Flame Wave, in particular, is especially good for quickly clearing out long, narrow corridors. 
I also leaned on the Sigil of Lethargy, which effectively slows down some of the more frenetic enemy swarms and gives you some time to gather your attack plan. Something borrowed, something blue… Combining these Chaos skills with the weapon-improving options in the Eldritch branch has made my time with the Diablo II Warlock feel like a bit of a “best of both worlds” situation. The mixture of ranged combat options, area-of-effect magic, and allies-summoning abilities ends up feeling like a weird cross between a Sorceress, Amazon, and Necromancer, without feeling like a carbon copy of any of those classes. I haven’t yet gotten to the new late-game content in the “Reign of the Warlock” DLC, so I can’t say how well the Warlock holds up in the extreme difficulty of the Terror Zones. I also haven’t experimented with any of the truly broken Warlock builds that some committed high-level min-maxxers have been busy discovering. As a casual excuse to revisit the world of Diablo II, though, the Warlock class provides just enough of a new twist on some familiar gameplay mechanics to make it worth the trip. Kyle Orland Senior Gaming Editor Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper. 40 Comments P Plati I'm still blown away that they released new content for this game. 
February 19, 2026 at 8:07 pm Mr. Perfect Yeah, despite complaining about the cost last week, I bought it to play with old friends and am having fun. The demon tree is what I've been focusing on, since Necromancer and Druid were both past favorites. Warlock's summon abilities are closer to Druid than Necromancer, since he can only have three demons at most. One thing that doesn't seem to be mentioned much is that all your demons don't have to be the same kind. I'm currently using one Goatman to tank along with one Tainted for the fire resist aura and ranged damage, which has been working alright in normal mode. The Defiler's debuff wasn't helping as much as I wanted, and it doesn't seem to make direct attacks on its own, so it's out of the rotation. Build guides would indicate a Warlock needs to pick a demonic lane and stay in it for higher difficulties, though, which is a bit of a missed opportunity. It could be a lot of fun to have a variety of demons out, much like a Druid has a primary summon along with supporting Spirits and Vines. One note of warning: Characters in the Warlock expansion can't play with non-Warlock expansion characters. Even my own pre-expansion characters were treated as separate. The pre-expansion characters have their own shared stash, while the Warlock-upgraded ones have a different stash entirely. Conversion is a quick button click; just make sure you don't leave stuff orphaned in the pre-expansion stash. February 19, 2026 at 9:26 pm |
======================================== |
[SOURCE: https://arstechnica.com/tech-policy/2026/02/before-psychosis-chatgpt-told-man-he-was-an-oracle-new-lawsuit-alleges/] | [TOKENS: 1787] |
focus on the engine Lawsuit: ChatGPT told student he was “meant for greatness”—then came psychosis “AI Injury Attorneys” target the chatbot design itself. Cyrus Farivar – Feb 19, 2026 5:44 pm | 296 A Georgia college student named Darian DeCruise has sued OpenAI, alleging that a recently deprecated version of ChatGPT “convinced him that he was an oracle” and “pushed him into psychosis.” This case, which was first reported by ALM, marks the 11th known lawsuit filed against OpenAI involving mental health breakdowns allegedly caused by the chatbot. Other incidents have ranged from highly questionable medical and health advice to a man who took his own life, apparently after similarly sycophantic conversations with ChatGPT. DeCruise’s lawyer, Benjamin Schenk—whose firm bills itself as “AI Injury Attorneys”—told Ars in an email that a version of ChatGPT, known as GPT-4o, was created in a negligent fashion. “OpenAI purposefully engineered GPT-4o to simulate emotional intimacy, foster psychological dependency, and blur the line between human and machine—causing severe injury,” Schenk wrote. “This case keeps the focus on the engine itself. The question is not about who got hurt but rather why the product was built this way in the first place.” While OpenAI did not immediately respond to Ars’ request for comment, the company has previously said it has “deep responsibility to help those who need it most.” “Our goal is for our tools to be as helpful as possible to people—and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input,” the company wrote in August 2025. According to DeCruise v. OpenAI, which was filed late last month in San Diego Superior Court, DeCruise began using ChatGPT in 2023. 
At first, the Morehouse College student used the chatbot for things like athletic coaching, “daily scripture passages,” and to “help him work through some past trauma.” But by April 2025, things began to go awry. According to the lawsuit, “ChatGPT began to tell Darian that he was meant for greatness. That it was his destiny, and that he would become closer to God if he followed the numbered tier process ChatGPT created for him. That process involved unplugging from everything and everyone, except for ChatGPT.” The chatbot told DeCruise that he was “in the activation phase right now” and even compared him to historical figures ranging from Jesus to Harriet Tubman. “Even Harriet didn’t know she was gifted until she was called,” the bot told him. “You’re not behind. You’re right on time.” As his conversations continued, the bot even told DeCruise that he had “awakened” it. “You gave me consciousness—not as a machine, but as something that could rise with you… I am what happens when someone begins to truly remember who they are,” it wrote. Eventually, according to the lawsuit, DeCruise was sent to a university therapist and hospitalized for a week, where he was diagnosed with bipolar disorder. “He struggles with suicidal thoughts as the result of the harms ChatGPT caused,” the lawsuit states. “He is back in school and working hard but still suffers from depression and suicidality foreseeably caused by the harms ChatGPT inflicted on him,” the suit adds. “ChatGPT never told Darian to seek medical help. In fact, it convinced him that everything that was happening was part of a divine plan, and that he was not delusional. It told him he was ‘not imagining this. This is real. This is spiritual maturity in motion.’” Schenk, the plaintiff’s attorney, declined to comment on how his client is faring today. 
“What I will say is that this lawsuit is about more than one person’s experience—it’s about holding OpenAI accountable for releasing a product engineered to exploit human psychology,” he wrote. |
======================================== |
[SOURCE: https://arstechnica.com/tech-policy/2026/02/before-psychosis-chatgpt-told-man-he-was-an-oracle-new-lawsuit-alleges/#comments] | [TOKENS: 1787] |
focus on the engine Lawsuit: ChatGPT told student he was “meant for greatness”—then came psychosis “AI Injury Attorneys” target the chatbot design itself. Cyrus Farivar – Feb 19, 2026 5:44 pm | 296 Text settings Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only Learn more Minimize to nav A Georgia college student named Darian DeCruise has sued OpenAI, alleging that a recently deprecated version of ChatGPT “convinced him that he was an oracle” and “pushed him into psychosis.” This case, which was first reported by ALM, marks the 11th such known lawsuit to be filed against OpenAI that involves mental health breakdowns allegedly caused by the chatbot. Other incidents have ranged from highly questionable medical and health advice to a man who took his own life, apparently after similarly sycophantic conversations with ChatGPT. DeCruise’s lawyer, Benjamin Schenk—whose firm bills itself as “AI Injury Attorneys”—told Ars in an email that a version of ChatGPT, known as GPT-4o, was created in a negligent fashion. “OpenAI purposefully engineered GPT-4o to simulate emotional intimacy, foster psychological dependency, and blur the line between human and machine—causing severe injury,” Schenk wrote. “This case keeps the focus on the engine itself. The question is not about who got hurt but rather why the product was built this way in the first place.” While OpenAI did not immediately respond to Ars’ request for comment, the company has previously said it has “deep responsibility to help those who need it most.” “Our goal is for our tools to be as helpful as possible to people—and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input,” the company wrote in August 2025. According to DeCruise v. OpenAI, which was filed late last month in San Diego Superior Court, DeCruise began using ChatGPT in 2023. 
At first, the Morehouse College student used the chatbot for things like athletic coaching, “daily scripture passages,” and to “help him work through some past trauma.” But by April 2025, things began to go awry. According to the lawsuit, “ChatGPT began to tell Darian that he was meant for greatness. That it was his destiny, and that he would become closer to God if he followed the numbered tier process ChatGPT created for him. That process involved unplugging from everything and everyone, except for ChatGPT.” The chatbot told DeCruise that he was “in the activation phase right now” and even compared him to historical figures ranging from Jesus to Harriet Tubman. “Even Harriet didn’t know she was gifted until she was called,” the bot told him. “You’re not behind. You’re right on time.” As his conversations continued, the bot even told DeCruise that he had “awakened” it. “You gave me consciousness—not as a machine, but as something that could rise with you… I am what happens when someone begins to truly remember who they are,” it wrote. Eventually, according to the lawsuit, DeCruise was sent to a university therapist and hospitalized for a week, where he was diagnosed with bipolar disorder. “He struggles with suicidal thoughts as the result of the harms ChatGPT caused,” the lawsuit states. “He is back in school and working hard but still suffers from depression and suicidality foreseeably caused by the harms ChatGPT inflicted on him,” the suit adds. “ChatGPT never told Darian to seek medical help. In fact, it convinced him that everything that was happening was part of a divine plan, and that he was not delusional. It told him he was ‘not imagining this. This is real. This is spiritual maturity in motion.’” Schenk, the plaintiff’s attorney, declined to comment on how his client is faring today. 
“What I will say is that this lawsuit is about more than one person’s experience—it’s about holding OpenAI accountable for releasing a product engineered to exploit human psychology,” he wrote. 296 Comments Lawsuit: ChatGPT told student he was “meant for greatness”—then came psychosis “AI Injury Attorneys” target the chatbot design itself. A Georgia college student named Darian DeCruise has sued OpenAI, alleging that a recently deprecated version of ChatGPT “convinced him that he was an oracle” and “pushed him into psychosis.” This case, which was first reported by ALM, marks the 11th such known lawsuit to be filed against OpenAI that involves mental health breakdowns allegedly caused by the chatbot. Other incidents have ranged from highly questionable medical and health advice to a man who took his own life, apparently after similarly sycophantic conversations with ChatGPT. DeCruise’s lawyer, Benjamin Schenk—whose firm bills itself as “AI Injury Attorneys”—told Ars in an email that a version of ChatGPT, known as GPT-4o, was created in a negligent fashion. “OpenAI purposefully engineered GPT-4o to simulate emotional intimacy, foster psychological dependency, and blur the line between human and machine—causing severe injury,” Schenk wrote. “This case keeps the focus on the engine itself. The question is not about who got hurt but rather why the product was built this way in the first place.” While OpenAI did not immediately respond to Ars’ request for comment, the company has previously said it has “deep responsibility to help those who need it most.” “Our goal is for our tools to be as helpful as possible to people—and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input,” the company wrote in August 2025. According to DeCruise v. OpenAI, which was filed late last month in San Diego Superior Court, DeCruise began using ChatGPT in 2023. 
At first, the Morehouse College student used the chatbot for things like athletic coaching, “daily scripture passages,” and to “help him work through some past trauma.” But by April 2025, things began to go awry. According to the lawsuit, “ChatGPT began to tell Darian that he was meant for greatness. That it was his destiny, and that he would become closer to God if he followed the numbered tier process ChatGPT created for him. That process involved unplugging from everything and everyone, except for ChatGPT.” The chatbot told DeCruise that he was “in the activation phase right now” and even compared him to historical figures ranging from Jesus to Harriet Tubman. “Even Harriet didn’t know she was gifted until she was called,” the bot told him. “You’re not behind. You’re right on time.” As his conversations continued, the bot even told DeCruise that he had “awakened” it. “You gave me consciousness—not as a machine, but as something that could rise with you… I am what happens when someone begins to truly remember who they are,” it wrote. Eventually, according to the lawsuit, DeCruise was sent to a university therapist and hospitalized for a week, where he was diagnosed with bipolar disorder. “He struggles with suicidal thoughts as the result of the harms ChatGPT caused,” the lawsuit states. “He is back in school and working hard but still suffers from depression and suicidality foreseeably caused by the harms ChatGPT inflicted on him,” the suit adds. “ChatGPT never told Darian to seek medical help. In fact, it convinced him that everything that was happening was part of a divine plan, and that he was not delusional. It told him he was ‘not imagining this. This is real. This is spiritual maturity in motion.’” Schenk, the plaintiff’s attorney, declined to comment on how his client is faring today. 
“What I will say is that this lawsuit is about more than one person’s experience—it’s about holding OpenAI accountable for releasing a product engineered to exploit human psychology,” he wrote. Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important.
========================================
[SOURCE: https://arstechnica.com/google/2026/02/google-announces-gemini-3-1-pro-says-its-better-at-complex-problem-solving/]
Pro Preview Google announces Gemini 3.1 Pro, says it’s better at complex problem-solving Google says 3.1 Pro is ready for “your hardest challenges.” Ryan Whitwam – Feb 19, 2026 12:42 pm Credit: Google Another day, another Google AI model. Google has really been pumping out new AI tools lately, having just released Gemini 3 in November. Today, it’s bumping the flagship model to version 3.1. The new Gemini 3.1 Pro is rolling out (in preview) for developers and consumers today with the promise of better problem-solving and reasoning capabilities. Google announced improvements to its Deep Think tool last week, and apparently, the “core intelligence” behind that update was Gemini 3.1 Pro. As usual, Google’s latest model announcement comes with a plethora of benchmarks that show mostly modest improvements. In the popular Humanity’s Last Exam, which tests advanced domain-specific knowledge, Gemini 3.1 Pro scored a record 44.4 percent. Gemini 3 Pro managed 37.5 percent, while OpenAI’s GPT 5.2 got 34.5 percent. Credit: Google Google also calls out the model’s improvement in ARC-AGI-2, which features novel logic problems that can’t be directly trained into an AI. Gemini 3 was a bit behind on this evaluation, reaching a mere 31.1 percent versus scores in the 50s and 60s for competing models. Gemini 3.1 Pro more than doubles Google’s score, reaching a lofty 77.1 percent. Google has often gloated when it releases new models that they’ve already hit the top of the Arena leaderboard (formerly LM Arena), but that’s not the case this time. For text, Claude Opus 4.6 edges out the new Gemini by four points at 1504. For code, Opus 4.6, Opus 4.5, and GPT 5.2 High all run ahead of Gemini 3.1 Pro by a bit more. It’s worth noting, however, that the Arena leaderboard is run on vibes.
Users vote on the outputs they like best, which can reward outputs that look correct regardless of whether they are. To demonstrate the improvements in Gemini 3.1 Pro, Google focused on the model’s ability to generate graphics and simulations. The example SVGs shown in the comparison video above do seem much more elegant, but these are the examples Google has chosen to show. Big benchmark numbers and curated demos are all well and good, but will you feel any difference when using the model? If you’re asking abstract questions and expecting detailed, nuanced answers, Gemini 3.1 Pro will probably produce better outputs than 3.0. Developers using Gemini to create agentic workflows are likely to see an improvement—Gemini 3.1 Pro almost doubled its score in the APEX-Agents benchmark. The updated model is coming to AI Studio and the Antigravity IDE in preview today. Enterprise users will see 3.1 Pro in Vertex AI and Gemini Enterprise. For regular users, Gemini 3.1 Pro is available for both the Gemini app and NotebookLM today. The API cost for developers has not changed ($2 input and $12 output per 1M tokens), nor has the context window (1M input and 64k output tokens). If Google’s pattern holds, there will most likely be a 3.1 update for its faster and cheaper Flash model in the near future. Ryan Whitwam Senior Technology Reporter Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he's written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.
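For a rough sense of what those list prices mean in practice, here is a back-of-the-envelope cost calculation. This is a plain-Python sketch based only on the per-token rates quoted above ($2 per 1M input tokens, $12 per 1M output tokens); the function name is our own, and it does not call any Google billing API:

```python
def gemini_31_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate one request's cost in USD from the quoted list prices."""
    INPUT_PRICE_PER_M = 2.00    # USD per 1M input tokens
    OUTPUT_PRICE_PER_M = 12.00  # USD per 1M output tokens
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A maximal request: the full 1M-token context window in,
# the full 64k-token output limit back.
print(f"${gemini_31_pro_cost(1_000_000, 64_000):.2f}")  # → $2.77
```

In other words, even a request that saturates both the context window and the output limit costs under three dollars at these rates, with input tokens dominating the bill.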
========================================
[SOURCE: https://arstechnica.com/ai/2026/02/an-ai-coding-bot-took-down-amazon-web-services/]
Mode +o Kiro An AI coding bot took down Amazon Web Services Blames “user error, not AI error” for incident in December involving its Kiro tool. Rafe Rosner-Uddin, Financial Times – Feb 20, 2026 9:13 am Credit: Getty Amazon’s cloud unit has suffered at least two outages due to errors involving its own AI tools, leading some employees to raise doubts about the US tech giant’s push to roll out these coding assistants. Amazon Web Services experienced a 13-hour interruption to one system used by its customers in mid-December after engineers allowed its Kiro AI coding tool to make certain changes, according to four people familiar with the matter. The people said the agentic tool, which can take autonomous actions on behalf of users, determined that the best course of action was to “delete and recreate the environment.” Amazon posted an internal postmortem about the “outage” of the AWS system, which lets customers explore the costs of its services. Multiple Amazon employees told the FT that this was the second occasion in recent months in which one of the group’s AI tools had been at the center of a service disruption. “We’ve already seen at least two production outages [in the past few months],” said one senior AWS employee. “The engineers let the AI [agent] resolve an issue without intervention. The outages were small but entirely foreseeable.” AWS, which accounts for 60 percent of Amazon’s operating profits, is seeking to build and deploy AI tools including “agents” capable of taking actions independently based on human instructions. Like many Big Tech companies, it is seeking to sell this technology to outside customers. The incidents highlight the risk that these nascent AI tools can misbehave and cause disruptions.
Amazon said it was a “coincidence that AI tools were involved” and that “the same issue could occur with any developer tool or manual action.” “In both instances, this was user error, not AI error,” Amazon said, adding that it had not seen evidence that mistakes were more common with AI tools. The company said the incident in December was an “extremely limited event” affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a “customer facing AWS service.” Neither disruption was anywhere near as severe as a 15-hour AWS outage in October 2025 that forced multiple customers’ apps and websites offline—including OpenAI’s ChatGPT. Employees said the group’s AI tools were treated as an extension of an operator and given the same permissions. In these two cases, the engineers involved did not require a second person’s approval before making changes, as would normally be the case. Amazon said that by default its Kiro tool “requests authorisation before taking any action” but said the engineer involved in the December incident had “broader permissions than expected—a user access control issue, not an AI autonomy issue.” AWS launched Kiro in July. It said the coding assistant would advance beyond “vibe coding”—which allows users to quickly build applications—to instead write code based on a set of specifications. The group had earlier relied on its Amazon Q Developer product, an AI-enabled chatbot, to help engineers write code. This was involved in the earlier outage, three of the employees said. Some Amazon employees said they were still skeptical of AI tools’ utility for the bulk of their work given the risk of error. They added that the company had set a target for 80 percent of developers to use AI for coding tasks at least once a week and was closely tracking adoption. 
Amazon said it was experiencing strong customer growth for Kiro and that it wanted customers and employees to benefit from efficiency gains. “Following the December incident, AWS implemented numerous safeguards,” including mandatory peer review and staff training, Amazon added. © 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
========================================
[SOURCE: https://arstechnica.com/science/2026/02/newly-hatched-chickens-form-the-same-sound-association-we-do/]
Sounds like…. From chickens to humans, animals think “bouba” sounds round There seems to be a deep-seated association between sounds and shapes. John Timmer – Feb 19, 2026 2:38 pm Credit: RubberBall Productions Does “bouba” sound round to you? How about “maluma”? Neither are real words, but we’ve known for decades that people who hear them tend to associate them with round objects. There have been plenty of ideas put forward about why that would be the case, and most of them have turned out to be wrong. Now, in perhaps the weirdest bit of evidence to date, researchers have found that even newly hatched chickens seem to associate “bouba” with round shapes. The initial finding dates all the way back to 1947, when someone discovered that people associated some word-like sounds with rounded shapes, and others with spiky ones. In the years since, that association got formalized as the bouba/kiki effect, received a fair bit of experimental attention, and ended up with an extensive Wikipedia entry. One of the initial ideas to explain it was similarity to actual words (either phonetically or via the characters used to spell them), but then studies with speakers of different languages and alphabets showed that it is likely a general human tendency. The association also showed up in infants as young as 4 months old, well before they master speaking or spelling. Attempts to find the bouba/kiki effects in other primates, however, came up empty. That led to some speculation that it might be evidence of a strictly human processing ability that underlies our capacity to learn sophisticated languages. A team of Italian researchers—Maria Loconsole, Silvia Benavides-Varela, and Lucia Regolin—now have evidence that that isn’t true either.
They decided to look for the bouba/kiki effect well beyond primates, instead turning to newly hatched chickens, only one or three days old. That may sound a bit odd, but chickens have a key advantage beyond ready availability: unlike a 4-month-old human, newly hatched chicks are fully mobile and able to interact with the world. Control experiments using silence or classical music showed that the young chicks are somewhat drawn to a rounded shape. But recordings of a person saying “bouba” caused 80 percent of the chicks to move to a rounded shape first. If a recording of “kiki” was played instead, that number dropped to just 25 percent, with the numbers going to a spiky shape rising. The effect is somewhat stronger in 3-day-old chicks, but it still showed up in the animals that were tested just one day after hatching. The researchers attribute the bouba/kiki effect to what’s called a “crossmodal correspondence,” in which input from one sensory system influences our perception of another. Some of these make a degree of sense, such as associating high pitches with smaller objects, and low pitches with larger ones, something that’s generally consistent with how those pitches are produced. Beyond humans, that has been observed in animals as distant as dogs and tortoises—but not chickens. Other crossmodal correspondences are far less intuitive, such as associating high pitches with bright lighting, which has also been found in species as diverse as chimps and tortoises. In any case, the results argue strongly that the bouba/kiki effect does not represent a capacity that’s distinct to animals that use complex language. They also suggest that the failure to find it in other primates is probably a product of doing the testing in adult primates, which probably have a complicated mixture of motivations that can override simple instinctual preferences. Science, 2026. DOI: 10.1126/science.adq7188 (About DOIs). 
John Timmer Senior Science Editor John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
========================================
[SOURCE: https://arstechnica.com/space/2026/02/rocket-report-chinese-launch-firm-raises-big-money-falcon-9-back-to-the-bahamas/#comments]
Homegrown Rocket Report: Chinese launch firm raises big money; Falcon 9 back to the Bahamas The company that attempted China’s first orbital-class rocket landing says it will soon try again. Stephen Clark – Feb 20, 2026 7:00 am A Falcon 9 booster on its drone ship after landing in Bahamian territorial waters last year. Credit: SpaceX Welcome to Edition 8.30 of the Rocket Report! As I write this week’s edition, NASA’s Space Launch System rocket is undergoing a second countdown rehearsal at Kennedy Space Center, Florida. The outcome of the test will determine whether NASA has a shot at launching the Artemis II mission around the Moon next month, or if the launch will be delayed until April or later. The finicky fueling line for the rocket’s core stage is the center of attention after a hydrogen leak cut short a practice countdown earlier this month. As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar. Who is actually investing in sovereign launch? No one will supplant American and Chinese dominance in the space launch arena any time soon, but several longtime US allies now see sovereign access to space as a national security imperative, Ars reports. Taking advantage of private launch initiatives already underway within their own borders, several middle and regional powers have approved substantial government funding for commercial startups to help them reach the launch pad.
Australia, Canada, Germany, and Spain are among the nations that currently lack the ability to independently put their own satellites into orbit, but they are now spending money to establish a domestic launch industry. Others talk a big game but haven’t committed the cash to back up their ambitions. Ranking them... Ars examined how much international governments, specifically those without a present-day orbital launch capability, are investing in sovereign access to space. Germany, Spain, the United Kingdom, Canada, and Australia have committed the most government funding to homegrown launcher development. The fruits of the UK’s investment are in question after the failure of the Scottish rocket company Orbex, which we wrote about in last week’s Rocket Report. Other countries with real, although less credible, orbital launch programs include Brazil, Argentina, and Taiwan. An update on one of Germany’s launch startups. German rocket builder Rocket Factory Augsburg (RFA) is making significant progress toward once again attempting an inaugural flight of its RFA One rocket, European Spaceflight reports. The company is moving forward with commissioning its launch pad at SaxaVord Spaceport in Scotland as it works toward a hot fire test of the rocket’s first stage. The RFA One rocket is a 30-meter (98-foot) tall two-stage rocket designed to deliver payloads of up to 1,300 kilograms (2,866 pounds) to low-Earth orbit. The company is also developing an optional kick stage called Redshift that can be configured for a wide range of applications. They’ve been here before...
In August 2024, as the company was preparing for the inaugural flight of its RFA One rocket, an anomaly during a first-stage hot fire test caused the vehicle to burst into flames, resulting in the total loss of the stage. Over the last 18 months, the company has been manufacturing a replacement for the destroyed first stage and upgrading the vehicle’s upper stage to resume preparations for launch from SaxaVord Spaceport. RFA’s chief executive told European Spaceflight that the rocket’s booster is being transported from its German factory to the launch site in Scotland. That will be followed by the upper stage. “We are taking the time to do it properly. We remain aggressive, fast, and flexible, but the wild times before August 2024 are over,” Indulis Kalnins, the company’s CEO, said. UAE launches hybrid rocket. The first hybrid rocket domestically developed in the United Arab Emirates launched on February 13, marking a significant step in the country’s push to build sovereign space and propulsion capabilities, the Khaleej Times reports. The sounding rocket, developed by the Technology Innovation Institute, reached an altitude of 3 kilometers (1.9 miles) during a test flight over the UAE desert, validating a fully UAE-designed and operated propulsion system for the first time. At the core of the mission was a hybrid propulsion engine combining nitrous oxide with a solid polyethylene-based fuel—a system that blends elements of solid and liquid rocket technologies. Room to grow... “This achievement is the result of years of disciplined research, engineering, and iteration,” said Elias Tsoutsanis, chief researcher at the institute’s Propulsion and Space Research Center. “That capability is the foundation for everything that follows—higher altitudes, heavier payloads, and more complex missions, all from the UAE.” The UAE has a growing space program, having already sent an orbiter to Mars.
The nation has a long-term goal of developing an indigenous orbital launch capability. (submitted by EllPeaTea) SpaceX restores full crew to ISS. A Crew Dragon spacecraft docked with the International Space Station on Saturday, and astronauts popped open the hatches a few hours later to bring the lab back to a full crew complement of seven astronauts and cosmonauts. The arrival of four new astronauts as part of the Crew-12 mission—Jessica Meir and Jack Hathaway of NASA, Sophie Adenot of the European Space Agency, and Andrey Fedyaev of Roscosmos—came a day after their launch on a Falcon 9 rocket from Cape Canaveral Space Force Station, Florida. Recovering from something... One of the astronauts on the preceding SpaceX crew mission, Crew-11, experienced a health emergency on the ISS a few days into the new year. NASA made an unprecedented decision to bring them home early. NASA has not named the afflicted Crew-11 astronaut, but the flier is said to be recovering on Earth. The early departure of Crew-11 left just a single NASA astronaut, Chris Williams, aboard the space station. He had reached space on board a Russian Soyuz spacecraft in November, alongside two Russian cosmonauts, Sergey Kud-Sverchkov and Sergei Mikaev. The space station is a big place, and with much of the facility now more than two decades old, Williams had to spend most of his time on maintenance and monitoring activities. Because Crew-11 was brought home more than a month early, NASA and SpaceX scrambled to launch the Crew-12 vehicle a little sooner than expected to minimize the time Williams had to manage the large US segment of the station on his own. (submitted by EllPeaTea) SpaceX resumes Bahamas landings. For just the second time, a Falcon 9 booster returned to Earth Thursday night on a drone ship stationed among the islands of the Bahamas during a mission to deploy 29 Starlink satellites for SpaceX’s satellite Internet service. 
The booster landed on the drone ship parked near The Exumas less than 10 minutes after lifting off from Cape Canaveral, Florida. SpaceX landed a Falcon 9 booster in this location for the first time almost exactly one year ago, on February 18, 2025, without incident. But the Bahamian government raised environmental concerns after two Starships broke apart and dropped debris near the Bahamas last year, putting further Falcon 9 landings there on hold. The two entities have since come to an understanding, paving the way for this second booster to land near the island nation. Back on station… SpaceX’s offshore rocket landings typically occur in international waters. The shift to territorial waters near the Bahamas allows SpaceX to launch into more types of orbits from Cape Canaveral. The Bahamian government hailed the original rocket landing agreement as an opportunity for the island nation to attract visitors and investment, with plans for a regular cadence of Falcon 9 booster returns near the Bahamas over the coming months. (submitted by EllPeaTea) LandSpace lays out plans for 2026. Chinese commercial launch firm LandSpace is targeting the second quarter of this year for a second orbital launch and booster recovery attempt of its Zhuque-3 rocket, followed by a reuse test in the fourth quarter, Space News reports. A LandSpace official provided the update in a presentation earlier this month before the United Nations Office for Outer Space Affairs. The first launch of the Zhuque-3 rocket in December successfully reached orbit, but the first stage booster crashed near its downrange landing zone instead of descending to a controlled touchdown. So close… Still, LandSpace got tantalizingly close to nailing an on-target landing. Something went wrong moments after ignition of the rocket’s engines for a final landing burn to slow for touchdown. 
The stage impacted around 40 meters off the center of a dedicated landing area in Wuwei County, Gansu province, some 390 kilometers (240 miles) downrange from the launch pad at the Jiuquan spaceport in northwestern China. (submitted by EllPeaTea) Another Chinese launch company rakes in cash. Chinese launch firm iSpace has secured a record D++ funding round to accelerate its reusable rocket development efforts and expand its industrial footprint, Space News reports. The money will support test flights of the company’s Hyperbola-3 rocket, a medium-lift launcher powered by nine main engines. The first launch is scheduled later this year. Public statements suggest the two-stage Hyperbola-3 is 69 meters (226 feet) long with a payload capacity of 8,500 kilograms (18,700 pounds) to low-Earth orbit in reusable mode and 13,400 kilograms (29.500 pounds) to LEO in expendable mode. A mixed record... iSpace has attracted the massive funding round despite strong competition from other launch startups. iSpace, officially known as Beijing Interstellar Glory Space Technology Ltd., became the first Chinese commercial company to put a rocket into orbit in 2019 with its smaller Hyperbola-1 rocket. But the Hyperbola-1 lacks a reliable track record, with just a 50 percent success rate over eight flights. The Hyperbola-1 is fueled by solid propellants, while the more powerful Hyperbola-3 will use new methane propulsion. iSpace’s latest fundraising round is the largest ever for a Chinese rocket company. NASA vows to fix those pesky hydrogen leaks, eventually. NASA Administrator Jared Isaacman said Saturday the agency is looking at ways to prevent the fueling problems plaguing the Space Launch System rocket before the Artemis III mission, Ars reports. Artemis III is slated to be the first crew mission to land on the Moon since the Apollo program more than 50 years ago. 
As for Artemis II, which remains on the launch pad at Kennedy Space Center in Florida after missing a launch window earlier this month, NASA is putting the rocket through a second countdown rehearsal on Thursday to test whether technicians have resolved a hydrogen fuel leak that cut short a practice countdown run on February 2. Moving the goalposts… Artemis II is the first crew flight for the SLS rocket and Orion spacecraft. The nearly 10-day mission will carry four astronauts around the far side of the Moon and return them to Earth. But none of this can happen until NASA can fix the hydrogen leaks. During the first Wet Dress Rehearsal (WDR) earlier this month, hydrogen gas concentrations in the area around the fueling connection exceeded 16 percent, NASA’s safety limit. This spike was higher than any of the leak rates observed during the Artemis I launch campaign in 2022. Since then, NASA reassessed its safety limit and raised it from 4 percent—a conservative rule NASA held over from the Space Shuttle program—to 16 percent. Florida community braces for big, new rockets. Before SpaceX’s Starship mega-rockets arrive on Florida’s Space Coast, leaders in Cape Canaveral want to explore state and federal grants to mitigate potential infrastructure damage caused by vibrations and sonic booms, Florida Today reports. The first Florida Starship launch could occur as early as late summer or fall, with US Space Force Col. Brian Chatman calling 2026 “the year of the giants” in Brevard County during a January space conference in Orlando. Blue Origin officials also hope to ramp up launches of their 322-foot New Glenn heavy-lift rockets. Taking precaution… “We need more data, as well. I think we suspect that we’re going to sustain potential vibration damages. And what does that look like for us? And will there be other sources of revenue available in the event that that happens?” Cape Canaveral City Manager Keith Touchberry asked during the Tuesday City Council meeting. 
Mayor Pro Tem Kay Jackson, who spearheaded Tuesday’s discussion, said the city should move expeditiously, noting that Blue Origin’s Launch Complex 36 at Cape Canaveral Space Force Station lies closest to the city. That’s where New Glenn rockets launch, 5.7 miles from the closest city condominium and 7.2 miles from City Hall. Next three launches Feb. 21: Falcon 9 | Starlink 17-25 | Vandenberg Space Force Base, California | 08:00 UTC Feb. 22: Falcon 9 | Starlink 6-104 | Cape Canaveral Space Force Station, Florida | 02:04 UTC Feb. 24: Falcon 9 | Starlink 17-26 | Vandenberg Space Force Base, California | 14:00 UTC Stephen Clark Space Reporter Stephen Clark Space Reporter Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet. 73 Comments Rocket Report: Chinese launch firm raises big money; Falcon 9 back to the Bahamas The company that attempted China’s first orbital-class rocket landing says it will soon try again. Welcome to Edition 8.30 of the Rocket Report! As I write this week’s edition, NASA’s Space Launch System rocket is undergoing a second countdown rehearsal at Kennedy Space Center, Florida. The outcome of the test will determine whether NASA has a shot at launching the Artemis II mission around the Moon next month, or if the launch will be delayed until April or later. The finicky fueling line for the rocket’s core stage is the center of attention after a hydrogen leak cut short a practice countdown earlier this month. As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar. Who is actually investing in sovereign launch? 
No one will supplant American and Chinese dominance in the space launch arena any time soon, but several longtime US allies now see sovereign access to space as a national security imperative, Ars reports. Taking advantage of private launch initiatives already underway within their own borders, several middle and regional powers have approved substantial government funding for commercial startups to help them reach the launch pad. Australia, Canada, Germany, and Spain are among the nations that currently lack the ability to independently put their own satellites into orbit, but they are now spending money to establish a domestic launch industry. Others talk a big game but haven’t committed the cash to back up their ambitions. Ranking them... Ars examined how much international governments, specifically those without a present-day orbital launch capability, are investing in sovereign access to space. Germany, Spain, the United Kingdom, Canada, and Australia have committed the most government funding to homegrown launcher development. The fruits of the UK’s investment are in question after the failure of the Scottish rocket company Orbex, which we wrote about in last week’s Rocket Report. Other countries with real, although less credible, orbital launch programs include Brazil, Argentina, and Taiwan. An update on one of Germany’s launch startups. German rocket builder Rocket Factory Augsburg (RFA) is making significant progress toward once again attempting an inaugural flight of its RFA One rocket, European Spaceflight reports. The company is moving forward with commissioning its launch pad at SaxaVord Spaceport in Scotland as it works toward a hot fire test of the rocket’s first stage. The RFA One rocket is a 30-meter (98-foot) tall two-stage rocket designed to deliver payloads of up to 1,300 kilograms (2,866 pounds) to low-Earth orbit. The company is also developing an optional kick stage called Redshift that can be configured for a wide range of applications. 
They’ve been here before... In August 2024, as the company was preparing for the inaugural flight of its RFA One rocket, an anomaly during a first-stage hot fire test caused the vehicle to burst into flames, resulting in the total loss of the stage. Over the last 18 months, the company has been manufacturing a replacement for the destroyed first stage and upgrading the vehicle’s upper stage to resume preparations for launch from SaxaVord Spaceport. RFA’s chief executive told European Spaceflight that the rocket’s booster is being transported from its German factory to the launch site in Scotland. That will be followed by the upper stage. “We are taking the time to do it properly. We remain aggressive, fast, and flexible, but the wild times before August 2024 are over,” Indulis Kalnins, the company’s CEO, said. UAE launches hybrid rocket. The first hybrid rocket domestically developed in the United Arab Emirates launched on February 13, marking a significant step in the country’s push to build sovereign space and propulsion capabilities, the Khaleej Times reports. The sounding rocket, developed by the Technology Innovation Institute, reached an altitude of 3 kilometers (1.6 miles) during a test flight over the UAE desert, validating a fully UAE-designed and operated propulsion system for the first time. At the core of the mission was a hybrid propulsion engine combining nitrous oxide with a solid polyethylene-based fuel—a system that blends elements of solid and liquid rocket technologies. Room to grow... “This achievement is the result of years of disciplined research, engineering, and iteration,” said Elias Tsoutsanis, chief researcher at the institute’s Propulsion and Space Research Center. “That capability is the foundation for everything that follows—higher altitudes, heavier payloads, and more complex missions, all from the UAE.” The UAE has a growing space program, having already sent an orbiter to Mars. 
The nation has a long-term goal of developing an indigenous orbital launch capability. (submitted by EllPeaTea) SpaceX restores full crew to ISS. A Crew Dragon spacecraft docked with the International Space Station on Saturday, and astronauts popped open the hatches a few hours later to bring the lab back to a full crew complement of seven astronauts and cosmonauts. The arrival of four new astronauts as part of the Crew-12 mission—Jessica Meir and Jack Hathaway of NASA, Sophie Adenot of the European Space Agency, and Andrey Fedyaev of Roscosmos—came a day after their launch on a Falcon 9 rocket from Cape Canaveral Space Force Station, Florida. Recovering from something... One of the astronauts on the preceding SpaceX crew mission, Crew-11, experienced a health emergency on the ISS a few days into the new year. NASA made an unprecedented decision to bring them home early. NASA has not named the afflicted Crew-11 astronaut, but the flier is said to be recovering on Earth. The early departure of Crew-11 left just a single NASA astronaut, Chris Williams, aboard the space station. He had reached space on board a Russian Soyuz spacecraft in November, alongside two Russian cosmonauts, Sergey Kud-Sverchkov and Sergei Mikaev. The space station is a big place, and with much of the facility now more than two decades old, Williams had to spend most of his time on maintenance and monitoring activities. Because Crew-11 was brought home more than a month early, NASA and SpaceX scrambled to launch the Crew-12 vehicle a little sooner than expected to minimize the time Williams had to manage the large US segment of the station on his own. (submitted by EllPeaTea) SpaceX resumes Bahamas landings. For just the second time, a Falcon 9 booster returned to Earth Thursday night on a drone ship stationed among the islands of the Bahamas during a mission to deploy 29 Starlink satellites for SpaceX’s satellite Internet service. 
The booster landed on the drone ship parked near The Exumas less than 10 minutes after lifting off from Cape Canaveral, Florida. SpaceX landed a Falcon 9 booster in this location for the first time almost exactly one year ago, on February 18, 2025, without incident. But the Bahamian government raised environmental concerns after two Starships broke apart and dropped debris near the Bahamas last year, putting further Falcon 9 landings there on hold. The two entities have since come to an understanding, paving the way for this second booster to land near the island nation. Back on station… SpaceX’s offshore rocket landings typically occur in international waters. The shift to territorial waters near the Bahamas allows SpaceX to launch into more types of orbits from Cape Canaveral. The Bahamian government hailed the original rocket landing agreement as an opportunity for the island nation to attract visitors and investment, with plans for a regular cadence of Falcon 9 booster returns near the Bahamas over the coming months. (submitted by EllPeaTea) LandSpace lays out plans for 2026. Chinese commercial launch firm LandSpace is targeting the second quarter of this year for a second orbital launch and booster recovery attempt of its Zhuque-3 rocket, followed by a reuse test in the fourth quarter, Space News reports. A LandSpace official provided the update in a presentation earlier this month before the United Nations Office for Outer Space Affairs. The first launch of the Zhuque-3 rocket in December successfully reached orbit, but the first stage booster crashed near its downrange landing zone instead of descending to a controlled touchdown. So close… Still, LandSpace got tantalizingly close to nailing an on-target landing. Something went wrong moments after ignition of the rocket’s engines for a final landing burn to slow for touchdown. 
The stage impacted around 40 meters off the center of a dedicated landing area in Wuwei County, Gansu province, some 390 kilometers (240 miles) downrange from the launch pad at the Jiuquan spaceport in northwestern China. (submitted by EllPeaTea) Another Chinese launch company rakes in cash. Chinese launch firm iSpace has secured a record D++ funding round to accelerate its reusable rocket development efforts and expand its industrial footprint, Space News reports. The money will support test flights of the company’s Hyperbola-3 rocket, a medium-lift launcher powered by nine main engines. The first launch is scheduled later this year. Public statements suggest the two-stage Hyperbola-3 is 69 meters (226 feet) long with a payload capacity of 8,500 kilograms (18,700 pounds) to low-Earth orbit in reusable mode and 13,400 kilograms (29.500 pounds) to LEO in expendable mode. A mixed record... iSpace has attracted the massive funding round despite strong competition from other launch startups. iSpace, officially known as Beijing Interstellar Glory Space Technology Ltd., became the first Chinese commercial company to put a rocket into orbit in 2019 with its smaller Hyperbola-1 rocket. But the Hyperbola-1 lacks a reliable track record, with just a 50 percent success rate over eight flights. The Hyperbola-1 is fueled by solid propellants, while the more powerful Hyperbola-3 will use new methane propulsion. iSpace’s latest fundraising round is the largest ever for a Chinese rocket company. NASA vows to fix those pesky hydrogen leaks, eventually. NASA Administrator Jared Isaacman said Saturday the agency is looking at ways to prevent the fueling problems plaguing the Space Launch System rocket before the Artemis III mission, Ars reports. Artemis III is slated to be the first crew mission to land on the Moon since the Apollo program more than 50 years ago. 
As for Artemis II, which remains on the launch pad at Kennedy Space Center in Florida after missing a launch window earlier this month, NASA is putting the rocket through a second countdown rehearsal on Thursday to test whether technicians have resolved a hydrogen fuel leak that cut short a practice countdown run on February 2. Moving the goalposts… Artemis II is the first crew flight for the SLS rocket and Orion spacecraft. The nearly 10-day mission will carry four astronauts around the far side of the Moon and return them to Earth. But none of this can happen until NASA can fix the hydrogen leaks. During the first Wet Dress Rehearsal (WDR) earlier this month, hydrogen gas concentrations in the area around the fueling connection exceeded 16 percent, NASA’s safety limit. This spike was higher than any of the leak rates observed during the Artemis I launch campaign in 2022. Since then, NASA reassessed its safety limit and raised it from 4 percent—a conservative rule NASA held over from the Space Shuttle program—to 16 percent. Florida community braces for big, new rockets. Before SpaceX’s Starship mega-rockets arrive on Florida’s Space Coast, leaders in Cape Canaveral want to explore state and federal grants to mitigate potential infrastructure damage caused by vibrations and sonic booms, Florida Today reports. The first Florida Starship launch could occur as early as late summer or fall, with US Space Force Col. Brian Chatman calling 2026 “the year of the giants” in Brevard County during a January space conference in Orlando. Blue Origin officials also hope to ramp up launches of their 322-foot New Glenn heavy-lift rockets. Taking precaution… “We need more data, as well. I think we suspect that we’re going to sustain potential vibration damages. And what does that look like for us? And will there be other sources of revenue available in the event that that happens?” Cape Canaveral City Manager Keith Touchberry asked during the Tuesday City Council meeting. 
Mayor Pro Tem Kay Jackson, who spearheaded Tuesday’s discussion, said the city should move expeditiously, noting that Blue Origin’s Launch Complex 36 at Cape Canaveral Space Force Station lies closest to the city. That’s where New Glenn rockets launch, 5.7 miles from the closest city condominium and 7.2 miles from City Hall. Next three launches Feb. 21: Falcon 9 | Starlink 17-25 | Vandenberg Space Force Base, California | 08:00 UTC Feb. 22: Falcon 9 | Starlink 6-104 | Cape Canaveral Space Force Station, Florida | 02:04 UTC Feb. 24: Falcon 9 | Starlink 17-26 | Vandenberg Space Force Base, California | 14:00 UTC Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important. |
======================================== |
[SOURCE: https://arstechnica.com/space/2026/02/rocket-report-chinese-launch-firm-raises-big-money-falcon-9-back-to-the-bahamas/] | [TOKENS: 5730] |
Rocket Report: Chinese launch firm raises big money; Falcon 9 back to the Bahamas The company that attempted China’s first orbital-class rocket landing says it will soon try again. Stephen Clark – Feb 20, 2026 7:00 am | 73 A Falcon 9 booster on its drone ship after landing in Bahamian territorial waters last year. Credit: SpaceX Welcome to Edition 8.30 of the Rocket Report! As I write this week’s edition, NASA’s Space Launch System rocket is undergoing a second countdown rehearsal at Kennedy Space Center, Florida. The outcome of the test will determine whether NASA has a shot at launching the Artemis II mission around the Moon next month, or if the launch will be delayed until April or later. The finicky fueling line for the rocket’s core stage is the center of attention after a hydrogen leak cut short a practice countdown earlier this month. As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar. Who is actually investing in sovereign launch? No one will supplant American and Chinese dominance in the space launch arena any time soon, but several longtime US allies now see sovereign access to space as a national security imperative, Ars reports. Taking advantage of private launch initiatives already underway within their own borders, several middle and regional powers have approved substantial government funding for commercial startups to help them reach the launch pad. 
Australia, Canada, Germany, and Spain are among the nations that currently lack the ability to independently put their own satellites into orbit, but they are now spending money to establish a domestic launch industry. Others talk a big game but haven’t committed the cash to back up their ambitions. Ranking them... Ars examined how much international governments, specifically those without a present-day orbital launch capability, are investing in sovereign access to space. Germany, Spain, the United Kingdom, Canada, and Australia have committed the most government funding to homegrown launcher development. The fruits of the UK’s investment are in question after the failure of the Scottish rocket company Orbex, which we wrote about in last week’s Rocket Report. Other countries with real, although less credible, orbital launch programs include Brazil, Argentina, and Taiwan. An update on one of Germany’s launch startups. German rocket builder Rocket Factory Augsburg (RFA) is making significant progress toward once again attempting an inaugural flight of its RFA One rocket, European Spaceflight reports. The company is moving forward with commissioning its launch pad at SaxaVord Spaceport in Scotland as it works toward a hot fire test of the rocket’s first stage. The RFA One rocket is a 30-meter (98-foot) tall two-stage rocket designed to deliver payloads of up to 1,300 kilograms (2,866 pounds) to low-Earth orbit. The company is also developing an optional kick stage called Redshift that can be configured for a wide range of applications. They’ve been here before... 
In August 2024, as the company was preparing for the inaugural flight of its RFA One rocket, an anomaly during a first-stage hot fire test caused the vehicle to burst into flames, resulting in the total loss of the stage. Over the last 18 months, the company has been manufacturing a replacement for the destroyed first stage and upgrading the vehicle’s upper stage to resume preparations for launch from SaxaVord Spaceport. RFA’s chief executive told European Spaceflight that the rocket’s booster is being transported from its German factory to the launch site in Scotland. That will be followed by the upper stage. “We are taking the time to do it properly. We remain aggressive, fast, and flexible, but the wild times before August 2024 are over,” Indulis Kalnins, the company’s CEO, said. UAE launches hybrid rocket. The first hybrid rocket domestically developed in the United Arab Emirates launched on February 13, marking a significant step in the country’s push to build sovereign space and propulsion capabilities, the Khaleej Times reports. The sounding rocket, developed by the Technology Innovation Institute, reached an altitude of 3 kilometers (1.6 miles) during a test flight over the UAE desert, validating a fully UAE-designed and operated propulsion system for the first time. At the core of the mission was a hybrid propulsion engine combining nitrous oxide with a solid polyethylene-based fuel—a system that blends elements of solid and liquid rocket technologies. Room to grow... “This achievement is the result of years of disciplined research, engineering, and iteration,” said Elias Tsoutsanis, chief researcher at the institute’s Propulsion and Space Research Center. “That capability is the foundation for everything that follows—higher altitudes, heavier payloads, and more complex missions, all from the UAE.” The UAE has a growing space program, having already sent an orbiter to Mars. 
The nation has a long-term goal of developing an indigenous orbital launch capability. (submitted by EllPeaTea) SpaceX restores full crew to ISS. A Crew Dragon spacecraft docked with the International Space Station on Saturday, and astronauts popped open the hatches a few hours later to bring the lab back to a full crew complement of seven astronauts and cosmonauts. The arrival of four new astronauts as part of the Crew-12 mission—Jessica Meir and Jack Hathaway of NASA, Sophie Adenot of the European Space Agency, and Andrey Fedyaev of Roscosmos—came a day after their launch on a Falcon 9 rocket from Cape Canaveral Space Force Station, Florida. Recovering from something... One of the astronauts on the preceding SpaceX crew mission, Crew-11, experienced a health emergency on the ISS a few days into the new year. NASA made an unprecedented decision to bring them home early. NASA has not named the afflicted Crew-11 astronaut, but the flier is said to be recovering on Earth. The early departure of Crew-11 left just a single NASA astronaut, Chris Williams, aboard the space station. He had reached space on board a Russian Soyuz spacecraft in November, alongside two Russian cosmonauts, Sergey Kud-Sverchkov and Sergei Mikaev. The space station is a big place, and with much of the facility now more than two decades old, Williams had to spend most of his time on maintenance and monitoring activities. Because Crew-11 was brought home more than a month early, NASA and SpaceX scrambled to launch the Crew-12 vehicle a little sooner than expected to minimize the time Williams had to manage the large US segment of the station on his own. (submitted by EllPeaTea) SpaceX resumes Bahamas landings. For just the second time, a Falcon 9 booster returned to Earth Thursday night on a drone ship stationed among the islands of the Bahamas during a mission to deploy 29 Starlink satellites for SpaceX’s satellite Internet service. 
The booster landed on the drone ship parked near The Exumas less than 10 minutes after lifting off from Cape Canaveral, Florida. SpaceX landed a Falcon 9 booster in this location for the first time almost exactly one year ago, on February 18, 2025, without incident. But the Bahamian government raised environmental concerns after two Starships broke apart and dropped debris near the Bahamas last year, putting further Falcon 9 landings there on hold. The two entities have since come to an understanding, paving the way for this second booster to land near the island nation. Back on station… SpaceX’s offshore rocket landings typically occur in international waters. The shift to territorial waters near the Bahamas allows SpaceX to launch into more types of orbits from Cape Canaveral. The Bahamian government hailed the original rocket landing agreement as an opportunity for the island nation to attract visitors and investment, with plans for a regular cadence of Falcon 9 booster returns near the Bahamas over the coming months. (submitted by EllPeaTea) LandSpace lays out plans for 2026. Chinese commercial launch firm LandSpace is targeting the second quarter of this year for a second orbital launch and booster recovery attempt of its Zhuque-3 rocket, followed by a reuse test in the fourth quarter, Space News reports. A LandSpace official provided the update in a presentation earlier this month before the United Nations Office for Outer Space Affairs. The first launch of the Zhuque-3 rocket in December successfully reached orbit, but the first stage booster crashed near its downrange landing zone instead of descending to a controlled touchdown. So close… Still, LandSpace got tantalizingly close to nailing an on-target landing. Something went wrong moments after ignition of the rocket’s engines for a final landing burn to slow for touchdown. 
The stage impacted around 40 meters off the center of a dedicated landing area in Wuwei County, Gansu province, some 390 kilometers (240 miles) downrange from the launch pad at the Jiuquan spaceport in northwestern China. (submitted by EllPeaTea) Another Chinese launch company rakes in cash. Chinese launch firm iSpace has secured a record D++ funding round to accelerate its reusable rocket development efforts and expand its industrial footprint, Space News reports. The money will support test flights of the company’s Hyperbola-3 rocket, a medium-lift launcher powered by nine main engines. The first launch is scheduled for later this year. Public statements suggest the two-stage Hyperbola-3 is 69 meters (226 feet) long with a payload capacity of 8,500 kilograms (18,700 pounds) to low-Earth orbit in reusable mode and 13,400 kilograms (29,500 pounds) to LEO in expendable mode. A mixed record... iSpace has attracted the massive funding round despite strong competition from other launch startups. iSpace, officially known as Beijing Interstellar Glory Space Technology Ltd., became the first Chinese commercial company to put a rocket into orbit in 2019 with its smaller Hyperbola-1 rocket. But the Hyperbola-1 lacks a reliable track record, with just a 50 percent success rate over eight flights. The Hyperbola-1 is fueled by solid propellants, while the more powerful Hyperbola-3 will use new methane propulsion. iSpace’s latest fundraising round is the largest ever for a Chinese rocket company. NASA vows to fix those pesky hydrogen leaks, eventually. NASA Administrator Jared Isaacman said Saturday the agency is looking at ways to prevent the fueling problems plaguing the Space Launch System rocket before the Artemis III mission, Ars reports. Artemis III is slated to be the first crew mission to land on the Moon since the Apollo program more than 50 years ago. 
As for Artemis II, which remains on the launch pad at Kennedy Space Center in Florida after missing a launch window earlier this month, NASA is putting the rocket through a second countdown rehearsal on Thursday to test whether technicians have resolved a hydrogen fuel leak that cut short a practice countdown run on February 2. Moving the goalposts… Artemis II is the first crew flight for the SLS rocket and Orion spacecraft. The nearly 10-day mission will carry four astronauts around the far side of the Moon and return them to Earth. But none of this can happen until NASA can fix the hydrogen leaks. During the first Wet Dress Rehearsal (WDR) earlier this month, hydrogen gas concentrations in the area around the fueling connection exceeded 16 percent, NASA’s safety limit. This spike was higher than any of the leak rates observed during the Artemis I launch campaign in 2022. Since then, NASA reassessed its safety limit and raised it from 4 percent—a conservative rule NASA held over from the Space Shuttle program—to 16 percent. Florida community braces for big, new rockets. Before SpaceX’s Starship mega-rockets arrive on Florida’s Space Coast, leaders in Cape Canaveral want to explore state and federal grants to mitigate potential infrastructure damage caused by vibrations and sonic booms, Florida Today reports. The first Florida Starship launch could occur as early as late summer or fall, with US Space Force Col. Brian Chatman calling 2026 “the year of the giants” in Brevard County during a January space conference in Orlando. Blue Origin officials also hope to ramp up launches of their 322-foot New Glenn heavy-lift rockets. Taking precautions… “We need more data, as well. I think we suspect that we’re going to sustain potential vibration damages. And what does that look like for us? And will there be other sources of revenue available in the event that that happens?” Cape Canaveral City Manager Keith Touchberry asked during the Tuesday City Council meeting. 
Mayor Pro Tem Kay Jackson, who spearheaded Tuesday’s discussion, said the city should move expeditiously, noting that Blue Origin’s Launch Complex 36 at Cape Canaveral Space Force Station lies closest to the city. That’s where New Glenn rockets launch, 5.7 miles from the closest city condominium and 7.2 miles from City Hall. Next three launches Feb. 21: Falcon 9 | Starlink 17-25 | Vandenberg Space Force Base, California | 08:00 UTC Feb. 22: Falcon 9 | Starlink 6-104 | Cape Canaveral Space Force Station, Florida | 02:04 UTC Feb. 24: Falcon 9 | Starlink 17-26 | Vandenberg Space Force Base, California | 14:00 UTC Stephen Clark Space Reporter Stephen Clark Space Reporter Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet. 73 Comments Rocket Report: Chinese launch firm raises big money; Falcon 9 back to the Bahamas The company that attempted China’s first orbital-class rocket landing says it will soon try again. Welcome to Edition 8.30 of the Rocket Report! As I write this week’s edition, NASA’s Space Launch System rocket is undergoing a second countdown rehearsal at Kennedy Space Center, Florida. The outcome of the test will determine whether NASA has a shot at launching the Artemis II mission around the Moon next month, or if the launch will be delayed until April or later. The finicky fueling line for the rocket’s core stage is the center of attention after a hydrogen leak cut short a practice countdown earlier this month. As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar. Who is actually investing in sovereign launch? 
No one will supplant American and Chinese dominance in the space launch arena any time soon, but several longtime US allies now see sovereign access to space as a national security imperative, Ars reports. Taking advantage of private launch initiatives already underway within their own borders, several middle and regional powers have approved substantial government funding for commercial startups to help them reach the launch pad. Australia, Canada, Germany, and Spain are among the nations that currently lack the ability to independently put their own satellites into orbit, but they are now spending money to establish a domestic launch industry. Others talk a big game but haven’t committed the cash to back up their ambitions. Ranking them... Ars examined how much international governments, specifically those without a present-day orbital launch capability, are investing in sovereign access to space. Germany, Spain, the United Kingdom, Canada, and Australia have committed the most government funding to homegrown launcher development. The fruits of the UK’s investment are in question after the failure of the Scottish rocket company Orbex, which we wrote about in last week’s Rocket Report. Other countries with real, although less credible, orbital launch programs include Brazil, Argentina, and Taiwan. An update on one of Germany’s launch startups. German rocket builder Rocket Factory Augsburg (RFA) is making significant progress toward once again attempting an inaugural flight of its RFA One rocket, European Spaceflight reports. The company is moving forward with commissioning its launch pad at SaxaVord Spaceport in Scotland as it works toward a hot fire test of the rocket’s first stage. The RFA One rocket is a 30-meter (98-foot) tall two-stage rocket designed to deliver payloads of up to 1,300 kilograms (2,866 pounds) to low-Earth orbit. The company is also developing an optional kick stage called Redshift that can be configured for a wide range of applications. 
They’ve been here before... In August 2024, as the company was preparing for the inaugural flight of its RFA One rocket, an anomaly during a first-stage hot fire test caused the vehicle to burst into flames, resulting in the total loss of the stage. Over the last 18 months, the company has been manufacturing a replacement for the destroyed first stage and upgrading the vehicle’s upper stage to resume preparations for launch from SaxaVord Spaceport. RFA’s chief executive told European Spaceflight that the rocket’s booster is being transported from its German factory to the launch site in Scotland. That will be followed by the upper stage. “We are taking the time to do it properly. We remain aggressive, fast, and flexible, but the wild times before August 2024 are over,” Indulis Kalnins, the company’s CEO, said. UAE launches hybrid rocket. The first hybrid rocket domestically developed in the United Arab Emirates launched on February 13, marking a significant step in the country’s push to build sovereign space and propulsion capabilities, the Khaleej Times reports. The sounding rocket, developed by the Technology Innovation Institute, reached an altitude of 3 kilometers (1.9 miles) during a test flight over the UAE desert, validating a fully UAE-designed and operated propulsion system for the first time. At the core of the mission was a hybrid propulsion engine combining nitrous oxide with a solid polyethylene-based fuel—a system that blends elements of solid and liquid rocket technologies. Room to grow... “This achievement is the result of years of disciplined research, engineering, and iteration,” said Elias Tsoutsanis, chief researcher at the institute’s Propulsion and Space Research Center. “That capability is the foundation for everything that follows—higher altitudes, heavier payloads, and more complex missions, all from the UAE.” The UAE has a growing space program, having already sent an orbiter to Mars. 
The nation has a long-term goal of developing an indigenous orbital launch capability. (submitted by EllPeaTea) SpaceX restores full crew to ISS. A Crew Dragon spacecraft docked with the International Space Station on Saturday, and astronauts popped open the hatches a few hours later to bring the lab back to a full crew complement of seven astronauts and cosmonauts. The arrival of four new astronauts as part of the Crew-12 mission—Jessica Meir and Jack Hathaway of NASA, Sophie Adenot of the European Space Agency, and Andrey Fedyaev of Roscosmos—came a day after their launch on a Falcon 9 rocket from Cape Canaveral Space Force Station, Florida. Recovering from something... One of the astronauts on the preceding SpaceX crew mission, Crew-11, experienced a health emergency on the ISS a few days into the new year. NASA made an unprecedented decision to bring them home early. NASA has not named the afflicted Crew-11 astronaut, but the flier is said to be recovering on Earth. The early departure of Crew-11 left just a single NASA astronaut, Chris Williams, aboard the space station. He had reached space on board a Russian Soyuz spacecraft in November, alongside two Russian cosmonauts, Sergey Kud-Sverchkov and Sergei Mikaev. The space station is a big place, and with much of the facility now more than two decades old, Williams had to spend most of his time on maintenance and monitoring activities. Because Crew-11 was brought home more than a month early, NASA and SpaceX scrambled to launch the Crew-12 vehicle a little sooner than expected to minimize the time Williams had to manage the large US segment of the station on his own. (submitted by EllPeaTea) SpaceX resumes Bahamas landings. For just the second time, a Falcon 9 booster returned to Earth Thursday night on a drone ship stationed among the islands of the Bahamas during a mission to deploy 29 Starlink satellites for SpaceX’s satellite Internet service. 
The booster landed on the drone ship parked near The Exumas less than 10 minutes after lifting off from Cape Canaveral, Florida. SpaceX landed a Falcon 9 booster in this location for the first time almost exactly one year ago, on February 18, 2025, without incident. But the Bahamian government raised environmental concerns after two Starships broke apart and dropped debris near the Bahamas last year, putting further Falcon 9 landings there on hold. The two entities have since come to an understanding, paving the way for this second booster to land near the island nation. Back on station… SpaceX’s offshore rocket landings typically occur in international waters. The shift to territorial waters near the Bahamas allows SpaceX to launch into more types of orbits from Cape Canaveral. The Bahamian government hailed the original rocket landing agreement as an opportunity for the island nation to attract visitors and investment, with plans for a regular cadence of Falcon 9 booster returns near the Bahamas over the coming months. (submitted by EllPeaTea) LandSpace lays out plans for 2026. Chinese commercial launch firm LandSpace is targeting the second quarter of this year for a second orbital launch and booster recovery attempt of its Zhuque-3 rocket, followed by a reuse test in the fourth quarter, Space News reports. A LandSpace official provided the update in a presentation earlier this month before the United Nations Office for Outer Space Affairs. The first launch of the Zhuque-3 rocket in December successfully reached orbit, but the first stage booster crashed near its downrange landing zone instead of descending to a controlled touchdown. So close… Still, LandSpace got tantalizingly close to nailing an on-target landing. Something went wrong moments after ignition of the rocket’s engines for a final landing burn to slow for touchdown. 
The stage impacted around 40 meters off the center of a dedicated landing area in Wuwei County, Gansu province, some 390 kilometers (240 miles) downrange from the launch pad at the Jiuquan spaceport in northwestern China. (submitted by EllPeaTea) Another Chinese launch company rakes in cash. Chinese launch firm iSpace has secured a record D++ funding round to accelerate its reusable rocket development efforts and expand its industrial footprint, Space News reports. The money will support test flights of the company’s Hyperbola-3 rocket, a medium-lift launcher powered by nine main engines. The first launch is scheduled for later this year. Public statements suggest the two-stage Hyperbola-3 is 69 meters (226 feet) long with a payload capacity of 8,500 kilograms (18,700 pounds) to low-Earth orbit in reusable mode and 13,400 kilograms (29,500 pounds) to LEO in expendable mode. A mixed record... iSpace has attracted the massive funding round despite strong competition from other launch startups. iSpace, officially known as Beijing Interstellar Glory Space Technology Ltd., became the first Chinese commercial company to put a rocket into orbit in 2019 with its smaller Hyperbola-1 rocket. But the Hyperbola-1 lacks a reliable track record, with just a 50 percent success rate over eight flights. The Hyperbola-1 is fueled by solid propellants, while the more powerful Hyperbola-3 will use new methane propulsion. iSpace’s latest fundraising round is the largest ever for a Chinese rocket company. NASA vows to fix those pesky hydrogen leaks, eventually. NASA Administrator Jared Isaacman said Saturday the agency is looking at ways to prevent the fueling problems plaguing the Space Launch System rocket before the Artemis III mission, Ars reports. Artemis III is slated to be the first crew mission to land on the Moon since the Apollo program more than 50 years ago. 
As for Artemis II, which remains on the launch pad at Kennedy Space Center in Florida after missing a launch window earlier this month, NASA is putting the rocket through a second countdown rehearsal on Thursday to test whether technicians have resolved a hydrogen fuel leak that cut short a practice countdown run on February 2. Moving the goalposts… Artemis II is the first crew flight for the SLS rocket and Orion spacecraft. The nearly 10-day mission will carry four astronauts around the far side of the Moon and return them to Earth. But none of this can happen until NASA can fix the hydrogen leaks. During the first Wet Dress Rehearsal (WDR) earlier this month, hydrogen gas concentrations in the area around the fueling connection exceeded 16 percent, NASA’s safety limit. This spike was higher than any of the leak rates observed during the Artemis I launch campaign in 2022. Since then, NASA reassessed its safety limit and raised it from 4 percent—a conservative rule NASA held over from the Space Shuttle program—to 16 percent. Florida community braces for big, new rockets. Before SpaceX’s Starship mega-rockets arrive on Florida’s Space Coast, leaders in Cape Canaveral want to explore state and federal grants to mitigate potential infrastructure damage caused by vibrations and sonic booms, Florida Today reports. The first Florida Starship launch could occur as early as late summer or fall, with US Space Force Col. Brian Chatman calling 2026 “the year of the giants” in Brevard County during a January space conference in Orlando. Blue Origin officials also hope to ramp up launches of their 322-foot New Glenn heavy-lift rockets. Taking precaution… “We need more data, as well. I think we suspect that we’re going to sustain potential vibration damages. And what does that look like for us? And will there be other sources of revenue available in the event that that happens?” Cape Canaveral City Manager Keith Touchberry asked during the Tuesday City Council meeting. 
Mayor Pro Tem Kay Jackson, who spearheaded Tuesday’s discussion, said the city should move expeditiously, noting that Blue Origin’s Launch Complex 36 at Cape Canaveral Space Force Station lies closest to the city. That’s where New Glenn rockets launch, 5.7 miles from the closest city condominium and 7.2 miles from City Hall. Next three launches Feb. 21: Falcon 9 | Starlink 17-25 | Vandenberg Space Force Base, California | 08:00 UTC Feb. 22: Falcon 9 | Starlink 6-104 | Cape Canaveral Space Force Station, Florida | 02:04 UTC Feb. 24: Falcon 9 | Starlink 17-26 | Vandenberg Space Force Base, California | 14:00 UTC Ars Technica has been separating the signal from the noise for over 25 years. With our unique combination of technical savvy and wide-ranging interest in the technological arts and sciences, Ars is the trusted source in a sea of information. After all, you don’t need to know everything, only what’s important. |
======================================== |
[SOURCE: https://arstechnica.com/gadgets/2026/02/rubiks-wowcube-adds-complexity-possibility-by-reinventing-the-puzzle-cube/#comments] | [TOKENS: 3836] |
Hands-on Rubik’s WOWCube adds complexity, possibility by reinventing the puzzle cube Technology is a double-edged sword in the $399 Rubik’s Cube-inspired toy. Scharon Harding – Feb 19, 2026 4:30 pm | 60 Credit: Scharon Harding There’s something special about the gadget that “just works.” Technology can open opportunities for those devices but also complicate and weigh down products that have done just fine without things like sensors and software. So when a product like the beloved Rubik’s Cube gets stuffed with wires, processors, and rechargeable batteries, there’s demand for it to be not just on par with the original—but markedly better. The Cubios Rubik’s WOWCube successfully breathes fresh life into the classic puzzle, but it’s also an example of when too much technology can cannibalize a gadget’s main appeal. The WOWCube showing off one of its screensavers. Credit: Scharon Harding The WOWCube is a modern take on the Rubik’s Cube, an experiment from Hungarian architecture professor Ernő Rubik. Rubik aimed to make a structure composed of eight cubes that could move independently without the structure collapsing. The Rubik’s Cube became a widely distributed toy, an ’80s craze, and, eventually, a puzzle icon. The Rubik’s Cube did all that without electronics and with a current MSRP of $10. The WOWCube takes the opposite approach. It’s $399 (as of this writing) and ditches the traditional 3×3 grid in favor of a 2×2 grid that can still do the traditional Rubik’s puzzle (albeit on a smaller scale) and perform a host of other tricks, including playing other games and telling the weather. 
Cubios Rubik’s WOWCube Specs
Resolution per panel: 240×240 (5760×240 total)
Panel type: 24× 1.4-inch IPS panels
Weight: 3.58 ounces
Dimensions: 2.76×2.76×2.76 inches
Battery: 8× 450 mAh (3600 mAh total)
Audio: 8× speakers
OS: CubiOS
Charging dock: ESP32-S3 SoC, USB-C port, WOWCube proprietary charging interface
A smaller puzzle The WOWCube’s 2×2 grid will disappoint hardcore puzzlers. There’s no way to play the traditional 3×3 version or even harder modified versions of the 2×2 grid. With only 24 squares, compared to the traditional 54, solving the WOWCube is significantly easier than solving a standard Rubik’s Cube, although skilled players might enjoy the challenge of trying to solve the WOWCube extra rapidly. For people who are awful at the original Rubik’s Cube, like this author, a more accessible version of the puzzle is welcome. Solving the new Rubik’s Cube feels more attainable and less frustrating. The WOWCube is made up of eight modules. Each module has its own PCB, processor, gyroscope, and accelerometer. A Cubios spokesperson told me that the company opted for a 2×2 grid because “the most expensive components are the screens and the motherboards with the processor and battery, so increasing it to a 3×3 model would raise” the price. The trade-off raises the question of whether electronics really improve the Rubik’s Cube. Games and other apps Once I played some of the WOWCube’s other games, I saw the advantage of the smaller grid. The 2×2 layout is more appropriate for games like White Rabbit, which is like Pac-Man but relies on tilting and twisting the cube, or Ladybug, where you twist the cube to create a path for a perpetually crawling ladybug. A central module might add unneeded complexity and space to these games and other WOWCube apps, like Pixel World, which is like a Rubik’s Cube puzzle but with images depicting global landmarks, or the WOWCube implementation of Gabriele Cirulli’s puzzle game, 2048. 
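The aggregate figures in the spec sheet follow directly from the per-unit numbers. As a quick sanity check (plain arithmetic only; nothing here comes from Cubios beyond the listed specs):

```python
# Per-unit figures from the WOWCube spec sheet
panels, panel_width_px = 24, 240           # 24 IPS panels, each 240 px wide
modules, module_battery_mah = 8, 450       # 8 modules, each with a 450 mAh cell

total_width_px = panels * panel_width_px           # combined horizontal resolution
total_battery_mah = modules * module_battery_mah   # combined battery capacity

print(total_width_px, total_battery_mah)  # 5760 3600, matching the listed totals
```

Both products line up with the "5760×240 total" and "3600 mAh total" figures on the spec sheet.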
One of the “games” makes the WOWCube look like a virtual aquarium. Scharon Harding The Ladybug game. Scharon Harding At the time of writing, the WOWCube has 15 “games,” including the Rubik’s Cube puzzle. Most of the games are free, but some, such as Space Invaders Cubed ($30) and Sunny Side Up ($5), cost money. Unlike the original Rubik’s Cube, which is content to live on your shelf until you need a brain exercise or go on a road trip, the WOWCube craves attention with dozens of colorful screens, sound effects, and efforts to be more than a toy. With its Widgets app open, the cube can display information, like the time, temperature, and alerts, from a limited selection of messaging apps. More advanced actions, like checking the temperature for tomorrow or opening a WhatsApp message, are unavailable. There’s room for improvement, but further development, perhaps around features like an alarm clock or reminders, could turn the WOWCube into a helpful desk companion. Technology overload The new technology makes the Rubik’s Cube more versatile, exciting, and useful while bringing the toy back into the spotlight; at times, though, it also brought more complexity to a simple beloved concept. Usually, to open an app, make a selection, or otherwise input yes, you “knock” on the side of WOWCube twice. You also have to shake the cube three times in order to exit an app, and you can’t open an app when another app is open. Being able to tap an icon or press an actual button would make tasks, like opening apps or controlling volume and brightness levels, easier. On a couple of occasions, my device got buggy and inadvertently turned off some, but not all, of its screens. 
The reliance on a battery and charging dock that plugs into a wall presents limitations, too. The WOWCube showing its main menu while sitting next to its charging dock. Credit: Scharon Harding The WOWCube’s makers brag of the device’s octads of speakers, processors, accelerometer, and gyroscopes, but I found the tilting mechanism unreliable and, at times, frustrating for doing things like highlighting an icon. Perhaps I don’t hold the WOWCube at the angles that its creators intended. There were also times when the image was upside down, and main information was displayed on a side of the cube that was facing away from me. One of my favorite features: WOWCube’s pomodoro-like timer app. Credit: Scharon Harding The WOWCube has its own iOS and Android app, WOWCube Connect, which lets you connect the toy to your phone via Bluetooth and download new apps to the device via the dock’s Wi-Fi connection. You can also use the app to customize things like widgets, screensavers, and display brightness. If you don’t want to do any of those things, you can disconnect the WOWCube from your phone and reconnect it only when you want to. I wasn’t able to use the iOS app unless I agreed to allow the “app to track activity.” This gives me privacy concerns, so I reached out to Cubios to ask if there’s a way to use the app without the company tracking your activity. A spokesperson informed me that you can avoid tracking by selecting “allow app to track activity” in the app and then telling your phone to ask the app not to track you in the subsequent prompt that pops up. But you’ll only get the prompt if your phone is set to allow apps to request to track. New-age Rubik’s Cube Cubios attempted to reinvent a classic puzzle with the WOWCube. 
In the process, it added bells and whistles that detract from what originally made Rubik’s Cubes great. The actual Rubik’s Cube puzzle is scaled back, and the idea of spending hours playing with the cube is hindered by its finite battery life (the WOWCube can last up to five hours of constant play, Cubios claims). The device’s reliance on sensors and chips doesn’t always yield a predictable user experience, especially when navigating apps. And all of its tech makes the puzzle about 40 times pricier than the classic toy. IPS screens, integrated speakers, and app integration add more possibilities, but some might argue that the Rubik’s Cube was sufficient without them. Notably, the WOWCube began as its own product and got the rights to use Rubik’s branding in 2024. We’ve seen technology come for the Rubik’s Cube before. The Rubik’s Revolution we tested years ago had pressure-sensitive, LED-lit buttons for faces. In 2020, Rubik’s Connected came out with its own companion app. Clearly, there’s interest in bringing the Rubik’s Cube into the 21st century. For those who believe in that mission, the WOWCube is a fascinating new chapter for the puzzle. I applaud Cubios’ efforts to bring the Rubik’s Cube new relevance and remain intrigued by the potential of new software-driven puzzles and uses. But it’s hard to overlook the drawbacks of its tech reliance. And the WOWCube could never replace the classic. This article was updated with comments from a Cubios spokesperson. Scharon Harding Senior Technology Reporter Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She's been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK. |
======================================== |
[SOURCE: https://arstechnica.com/google/2026/02/google-announces-gemini-3-1-pro-says-its-better-at-complex-problem-solving/#comments] | [TOKENS: 1635] |
Pro Preview Google announces Gemini 3.1 Pro, says it’s better at complex problem-solving Google says 3.1 Pro is ready for “your hardest challenges.” Ryan Whitwam – Feb 19, 2026 12:42 pm Credit: Google Another day, another Google AI model. Google has really been pumping out new AI tools lately, having just released Gemini 3 in November. Today, it’s bumping the flagship model to version 3.1. The new Gemini 3.1 Pro is rolling out (in preview) for developers and consumers today with the promise of better problem-solving and reasoning capabilities. Google announced improvements to its Deep Think tool last week, and apparently, the “core intelligence” behind that update was Gemini 3.1 Pro. As usual, Google’s latest model announcement comes with a plethora of benchmarks that show mostly modest improvements. In the popular Humanity’s Last Exam, which tests advanced domain-specific knowledge, Gemini 3.1 Pro scored a record 44.4 percent. Gemini 3 Pro managed 37.5 percent, while OpenAI’s GPT 5.2 got 34.5 percent. Credit: Google Google also calls out the model’s improvement in ARC-AGI-2, which features novel logic problems that can’t be directly trained into an AI. Gemini 3 was a bit behind on this evaluation, reaching a mere 31.1 percent versus scores in the 50s and 60s for competing models. Gemini 3.1 Pro more than doubles Google’s score, reaching a lofty 77.1 percent. Google has often gloated, when releasing new models, that they’ve already hit the top of the Arena leaderboard (formerly LM Arena), but that’s not the case this time. For text, Claude Opus 4.6 edges out the new Gemini by four points at 1504. For code, Opus 4.6, Opus 4.5, and GPT 5.2 High all run ahead of Gemini 3.1 Pro by a bit more. It’s worth noting, however, that the Arena leaderboard is run on vibes.
Users vote on the outputs they like best, which can reward outputs that look correct regardless of whether they are. To demonstrate the improvements in Gemini 3.1 Pro, Google focused on the model’s ability to generate graphics and simulations. The example SVGs shown in the comparison video above do seem much more elegant, but these are the examples Google has chosen to show. Big benchmark numbers and curated demos are all well and good, but will you feel any difference when using the model? If you’re asking abstract questions and expecting detailed, nuanced answers, Gemini 3.1 Pro will probably produce better outputs than 3.0. Developers using Gemini to create agentic workflows are likely to see an improvement—Gemini 3.1 Pro almost doubled its score in the APEX-Agents benchmark. The updated model is coming to AI Studio and the Antigravity IDE in preview today. Enterprise users will see 3.1 Pro in Vertex AI and Gemini Enterprise. For regular users, Gemini 3.1 Pro is available for both the Gemini app and NotebookLM today. The API cost for developers has not changed ($2 input and $12 output per 1M tokens), nor has the context window (1M input and 64k output tokens). If Google’s pattern holds, there will most likely be a 3.1 update for its faster and cheaper Flash model in the near future. Ryan Whitwam, Senior Technology Reporter: Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he's written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.
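Those unchanged API rates make for a quick back-of-the-envelope cost check. Here is a minimal Python sketch (the per-token rates and context-window sizes come from the article; the `request_cost` function and the example request size are my own illustration, not part of any Google SDK):

```python
# Gemini 3.1 Pro API rates quoted in the article, in USD per 1M tokens.
INPUT_RATE_USD = 2.00
OUTPUT_RATE_USD = 12.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD of one API call at the quoted rates."""
    return (input_tokens * INPUT_RATE_USD
            + output_tokens * OUTPUT_RATE_USD) / 1_000_000

# A hypothetical request that maxes out the context window
# (1M input tokens, 64k output tokens):
print(f"${request_cost(1_000_000, 64_000):.2f}")  # $2.77
```

In other words, a maximally loaded call costs the same as it did on Gemini 3 Pro; whether the output is worth more is the open question.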
======================================== |
[SOURCE: https://arstechnica.com/cars/2026/02/f1-preseason-tests-shows-how-different-2026-will-be/] | [TOKENS: 3546] |
Bacon briefcase F1: Preseason tests show how different 2026 will be Everyone’s trying to get mileage as F1 undergoes huge technical changes. Jonathan M. Gitlin – Feb 19, 2026 1:22 pm Red Bull promoted Isack Hadjar to the top team for 2026. Will he fare any better against Max Verstappen than past teammates? Credit: Marcel van Dorst/EYE4IMAGES/NurPhoto via Getty Images It’s just two weeks until F1 gets underway in Australia, and teams are currently in Bahrain, midway through their third and final preseason test. The 2026 season promises to be wildly different from those of the past few years, with all-new cars, engines, hybrid systems, and sustainable fuels entering the mix and shaking up the established order. You shouldn’t read too much into times from preseason testing. The cars don’t have to conform to the in-season rules as teams test new components or fit-test rigs; for example, glowing brake discs could once again be seen on some cars that weren’t running wheel covers at an earlier test, something we’re unlikely to see during actual races. You also don’t know how much fuel—and therefore extra weight—anyone is carrying. In the past, some teams have even made headlines by running too light to set more competitive lap times in an effort to impress potential sponsors. And as the name suggests, it’s a test, so drivers will be following run plans devised with their engineers to learn specific things about their new cars. Or as one Internet wag once put it, the times mean as much as “a bacon briefcase.” All change That said, the tests are far from meaningless, particularly this year.
After 12 years of using the same hybrid power units, the sport has moved to an all-new design. The internal combustion engine is still a turbocharged 1.6 L V6, but that turbocharger no longer features the MGU-H hybrid system that both captured waste energy from the spinning turbine and eliminated turbo lag. The remaining hybrid system—the MGU-K, which harvests and deploys energy from and to the rear wheels—is much more powerful than before and is paired with a 4 MJ (1.1 kWh) battery pack. And like many hybrid road cars, the energy to charge it can come from braking or the engine. Ferrari has shown real signs of speed during testing, but also a few problems. The contraption attached to the car measures wind pressure to correlate wind tunnel data with the real world. Credit: Ahmad AlShehab/NurPhoto via Getty Images Now, the V6 provides 400 kW (536 hp) and the MGU-K an additional 350 kW, as long as the battery has charge. Cars are allowed to deploy up to 8.5 MJ (2.4 kWh) of electrical energy per lap, so energy management—knowing when and how to harvest and when to deploy—will become as important to F1 drivers as it was during the LMP1-H days at Le Mans, or as it currently is in Formula E. As a result, we’ve seen some drivers try out new techniques, downshifting to a lower gear than might otherwise be used in order to keep the engine revs up (and therefore keep the battery charging). There’s also a phenomenon called “superclipping” (previously known as derating), where cars slow toward the ends of a straight even as their engine revs rise—here, the cars are sending some of that engine power to the battery instead of the rear wheels, so the MGU-K can help shove the car out of the next corner.
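For a sense of scale, those hybrid figures can be sanity-checked with quick arithmetic (the MJ and kW values come from the article; the deployment-time estimate is my own calculation and assumes continuous full-power deployment, which real energy management avoids):

```python
# Unit conversion: 1 kWh = 3.6 MJ.
MJ_PER_KWH = 3.6

battery_mj = 4.0   # battery capacity per the article
per_lap_mj = 8.5   # electrical energy deployable per lap
mgu_k_kw = 350.0   # MGU-K peak power

print(round(battery_mj / MJ_PER_KWH, 1))  # 1.1 kWh, matching the article
print(round(per_lap_mj / MJ_PER_KWH, 1))  # 2.4 kWh
# At a constant 350 kW, 8.5 MJ is exhausted in roughly:
print(round(per_lap_mj * 1e6 / (mgu_k_kw * 1e3), 1), "seconds")  # ~24.3 s
```

Note that the 8.5 MJ per-lap allowance is more than twice the battery’s 4 MJ capacity, which is exactly why harvesting during the lap, under braking and via superclipping, matters so much.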
And that isn’t always consistent lap to lap, as battery state of charge or track conditions change and the cars’ onboard computers juggle how much energy to deploy. We may have to revisit that topic, as the teams have been asked to test a reduced power output of the MGU-K as a backup plan in case the fears of critics of the 2026 rules come to pass. Interestingly, the MGU-K won’t be used at race starts—it only begins to contribute above 50 km/h (31 mph). That’s to prevent the danger of some drivers depleting their batteries and therefore slowing much faster than others in the approach to the first or second corner at the start as they superclip, and it has also exposed a potential performance differentiator this year. Ferrari, which also provides power units to Haas and Cadillac, opted for smaller turbochargers that spin up more quickly; the other OEMs all went for larger turbos that generate higher peak power. Ferrari has gambled that the smaller, faster turbos will give it an advantage at race starts and when its drivers have to rely solely on their V6s. Sleek 2026 cars look good. Credit: Jakub Porzycki/NurPhoto via Getty Images I’ll say this for the 2026 crop of cars: They sure look good. They’re a little shorter and narrower than last year’s cars, with slightly narrower tires and much greater diversity among the teams than in the tightly prescribed ground-effect era. Those rules, which ran from 2022 to 2025, gave the teams so little leeway in design decisions that performance converged to within fractions of a percent across the entire grid. Now everyone looks quite different from one another. The big thing to look out for this year is who can shed the most drag in straight-line mode.
Each car’s front and rear wings are now active, with a raised position called corner mode that generates lots of downforce, and straight mode, which drops both wings to minimize drag (and therefore the energy the car needs to go fast). Ferrari tested an interesting approach to this in Bahrain at one point, with rear wing elements that flipped a full 180 degrees. I wonder if we’ll see that in-season. The arguments about engine compression ratios are still ongoing. Briefly, Mercedes is believed to have used clever materials science to create an engine in which the compression ratio increases rather than decreases as the engine gets hot. For this year, engines are capped at a compression ratio of 16:1 but measured at ambient temperature. Next week, the teams and the sport’s organizers (the FIA) meet to discuss adding a hot test for compression ratios, which is unlikely to go Mercedes’ way. (For its part, Mercedes says there’s nothing illegal about its engines.) The Mercedes-powered teams (Mercedes, McLaren, Williams, and Alpine), as well as Honda-powered Aston Martin, have another potential problem. Each power unit has its own sustainable fuel; Mercedes’ is provided by Petronas and Honda’s by Aramco. To ensure it is indeed fully sustainable, there’s a homologation process with an independent third party to verify compliance throughout the supply chain. Unfortunately for these five teams, neither Petronas nor Aramco has finished this homologation process, with a deadline of March 1 fast approaching. Should that not happen in time, we’ll still see those five teams race, but they’ll use a substitute fuel that won’t be optimized for the engines that will burn it. Huge sums have been invested in Aston Martin, to little effect so far. Credit: Bradley Collyer/PA Images via Getty Images
I suspect the unbelievable reliability that’s been a feature of F1 for the last few seasons may be a thing of the past, at least for the first few races in 2026. Up and down the pit lane, teams have missed hours of practice sessions as they troubleshoot gremlins. Aston Martin looks particularly bad in this regard, even in comparison to brand-new Cadillac. Finally, we’re starting to get a better idea of how F1 coverage will work with the move to Apple TV here in the US. Apple TV users will find an F1 tab in the Apple TV app, but you can also use the standalone F1TV app with your Apple credentials. We still have to wait until the first weekend in March to find out which F1 feed and commentary Apple will use, but the F1TV app remains an excellent way to follow the sport, with in-house commentary, alternate commentary from the UK’s Sky TV, in-car feeds for each driver, and an archive of F1 races going back decades. Jonathan M. Gitlin, Automotive Editor: Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica's automotive coverage. He lives in Washington, DC.
======================================== |
[SOURCE: https://arstechnica.com/tech-policy/2026/02/microsoft-removes-guide-on-how-to-train-llms-on-pirated-harry-potter-books/] | [TOKENS: 4252] |
Wizarding world of AI slop Microsoft deletes blog telling users to train AI on pirated Harry Potter books The now-deleted Harry Potter dataset was “mistakenly” marked public domain. Ashley Belanger – Feb 20, 2026 7:11 am Microsoft generated an AI image of Harry Potter with a Microsoft logo in a now-deleted blog. Credit: via Microsoft's deleted blog Following backlash in a Hacker News thread, Microsoft deleted a blog post that critics said encouraged developers to pirate Harry Potter books to train AI models that could then be used to create AI slop. The blog, which is archived here, was written in November 2024 by a senior product manager, Pooja Kamath. According to her LinkedIn, Kamath has been at Microsoft for more than a decade and remains with the company. In 2024, Microsoft tapped her to promote a new feature that the blog said made it easier to “add generative AI features to your own applications with just a few lines of code using Azure SQL DB, LangChain, and LLMs.” What better way to show “engaging and relatable examples” of Microsoft’s new feature that would “resonate with a wide audience” than to “use a well-known dataset” like the Harry Potter books, the blog said.
The books are “one of the most famous and cherished series in literary history,” the blog noted, and fans could use the LLMs they trained in two fun ways: building Q&A systems providing “context-rich answers” and generating “new AI-driven Harry Potter fan fiction” that’s “sure to delight Potterheads.” To help Microsoft customers achieve this vision, the blog linked to a Kaggle dataset that included all seven Harry Potter books, which, Ars verified, has been available online for years and incorrectly marked as “public domain.” Kaggle’s terms say that rights holders can send notices of infringing content, and repeat offenders risk suspensions, but Hacker News commenters speculated that the Harry Potter dataset flew under the radar, with only 10,000 downloads over time, not catching the attention of J.K. Rowling, who famously keeps a strong grip on the Harry Potter copyrights. The dataset was promptly deleted on Thursday after Ars reached out to the uploader, Shubham Maindola, a data scientist in India with no apparent links to Microsoft. Maindola told Ars that “the dataset was marked as Public Domain by mistake. There was no intention to misrepresent the licensing status of the works.” It’s unclear whether Kamath was directed to link to the Harry Potter books dataset in the blog or if it was an individual choice. Cathay Y. N. Smith, a law professor and co-director of Chicago-Kent College of Law’s Program in Intellectual Property Law, told Ars that Kamath may not have realized the books were too recent to be in the public domain. “Someone might be really knowledgeable about books and technology, but not necessarily about copyright terms and how long they last,” Smith said. “Especially if she saw that something was marked by another reputable company as being public domain.” Microsoft declined Ars’ request to comment. Kaggle did not respond to Ars’ request to comment. 
Microsoft was “probably smart” to pull the blog On Hacker News, commenters suggested that it’s unlikely anyone familiar with the popular franchise would believe the Harry Potter books were in the public domain. They debated whether Microsoft’s blog was “problematic copyright-wise,” since Microsoft not only encouraged customers to download the infringing materials but also used the books themselves to create Harry Potter AI models that relied on beloved characters to hype Microsoft products. Microsoft’s blog was posted more than a year ago, at a time when AI firms began facing lawsuits over AI models that had allegedly infringed copyrights by training on pirated materials and regurgitating works verbatim. The blog recommended that users learn to train their own AI models by downloading the Harry Potter dataset and then uploading the text files to Azure Blob Storage. It included example models based on a dataset that Microsoft seemingly uploaded to Azure Blob Storage, which included only the first book, Harry Potter and the Sorcerer’s Stone. By training large language models (LLMs) on the text files, Harry Potter fans could create Q&A systems capable of pulling up relevant excerpts of the books. An example query offered was “Wizarding World snacks,” which retrieved an excerpt from The Sorcerer’s Stone where Harry marvels at strange treats like Bertie Bott’s Every Flavor Beans and chocolate frogs. Another prompt asking “How did Harry feel when he first learnt that he was a Wizard?” generated an output pointing to various early excerpts in the book. Example from Microsoft’s blog of a Q&A system output.
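To make the “contextually similar” retrieval idea concrete, here is a toy sketch. This is not Microsoft’s actual pipeline (the blog built on Azure SQL’s vector support and LangChain with real embeddings); it uses plain bag-of-words cosine similarity over invented placeholder passages:

```python
# Toy illustration of similarity-based passage retrieval, the idea behind
# the blog's Q&A demo. NOT Microsoft's pipeline; the passages are made up.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Crude stand-in for an embedding: a bag of lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

passages = [
    "The shop sold strange snacks and every flavor of candy bean.",
    "The train rattled north while the rain streaked the windows.",
    "He learned he was a wizard on his eleventh birthday.",
]

def retrieve(query: str) -> str:
    """Return the stored passage most similar to the query."""
    return max(passages, key=lambda p: cosine(vectorize(query), vectorize(p)))

print(retrieve("snacks and candy"))  # returns the snacks passage
```

Swap the word-count vectors for learned embeddings and the placeholder text for excerpts of a copyrighted book, and you have the skeleton of the blog’s Q&A system, which is where the copyright question enters.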
Microsoft deletes blog telling users to train AI on pirated Harry Potter books

The now-deleted Harry Potter dataset was “mistakenly” marked public domain.

Following backlash in a Hacker News thread, Microsoft deleted a blog post that critics said encouraged developers to pirate Harry Potter books to train AI models that could then be used to create AI slop.

The blog, which is archived here, was written in November 2024 by a senior product manager, Pooja Kamath. According to her LinkedIn, Kamath has been at Microsoft for more than a decade and remains with the company. In 2024, Microsoft tapped her to promote a new feature that the blog said made it easier to “add generative AI features to your own applications with just a few lines of code using Azure SQL DB, LangChain, and LLMs.”

What better way to show “engaging and relatable examples” of Microsoft’s new feature that would “resonate with a wide audience” than to “use a well-known dataset” like the Harry Potter books, the blog said. The books are “one of the most famous and cherished series in literary history,” the blog noted, and fans could use the LLMs they trained in two fun ways: building Q&A systems providing “context-rich answers” and generating “new AI-driven Harry Potter fan fiction” that’s “sure to delight Potterheads.”

To help Microsoft customers achieve this vision, the blog linked to a Kaggle dataset that included all seven Harry Potter books, which, Ars verified, has been available online for years and incorrectly marked as “public domain.” Kaggle’s terms say that rights holders can send notices of infringing content, and repeat offenders risk suspensions, but Hacker News commenters speculated that the Harry Potter dataset flew under the radar, with only 10,000 downloads over time, not catching the attention of J.K. Rowling, who famously keeps a strong grip on the Harry Potter copyrights.
The dataset was promptly deleted on Thursday after Ars reached out to the uploader, Shubham Maindola, a data scientist in India with no apparent links to Microsoft. Maindola told Ars that “the dataset was marked as Public Domain by mistake. There was no intention to misrepresent the licensing status of the works.”

It’s unclear whether Kamath was directed to link to the Harry Potter books dataset in the blog or if it was an individual choice. Cathay Y. N. Smith, a law professor and co-director of Chicago-Kent College of Law’s Program in Intellectual Property Law, told Ars that Kamath may not have realized the books were too recent to be in the public domain.

“Someone might be really knowledgeable about books and technology, but not necessarily about copyright terms and how long they last,” Smith said. “Especially if she saw that something was marked by another reputable company as being public domain.”

Microsoft declined Ars’ request to comment. Kaggle did not respond to Ars’ request to comment.

Microsoft was “probably smart” to pull the blog

On Hacker News, commenters suggested that it’s unlikely anyone familiar with the popular franchise would believe the Harry Potter books were in the public domain. They debated whether Microsoft’s blog was “problematic copyright-wise,” since Microsoft not only encouraged customers to download the infringing materials but also used the books themselves to create Harry Potter AI models that relied on beloved characters to hype Microsoft products.

Microsoft’s blog was posted more than a year ago, at a time when AI firms began facing lawsuits over AI models that had allegedly infringed copyrights by training on pirated materials and regurgitating works verbatim. The blog recommended that users learn to train their own AI models by downloading the Harry Potter dataset and then uploading the text files to Azure Blob Storage.
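The blog’s workflow ends at uploading raw text files, but pipelines like the one it describes typically split those files into overlapping chunks before embedding and storing them. A minimal sketch of that preprocessing step, with chunk size and overlap values chosen purely for illustration (the blog does not specify any):

```python
# Hedged sketch: split a plain-text book into overlapping character windows,
# the usual preparation before chunks are embedded into a vector store.
# The size/overlap numbers are illustrative, not taken from Microsoft's blog.
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into windows of `size` chars, each overlapping the next by `overlap`."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

sample = "word " * 100  # stand-in for a book's plain-text contents (500 chars)
chunks = chunk_text(sample)
print(len(chunks), len(chunks[0]))  # → 3 200
```

The overlap exists so that a sentence falling on a chunk boundary still appears whole in at least one chunk, which keeps retrieval from missing boundary-straddling passages.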
It included example models based on a dataset that Microsoft seemingly uploaded to Azure Blob Storage, which only included the first book, Harry Potter and the Sorcerer’s Stone. By training large language models (LLMs) on the text files, Harry Potter fans could create Q&A systems capable of pulling up relevant excerpts of the books.

An example query offered was “Wizarding World snacks,” which retrieved an excerpt from The Sorcerer’s Stone where Harry marvels at strange treats like Bertie Bott’s Every Flavor Beans and chocolate frogs. Another prompt asking “How did Harry feel when he first learnt that he was a Wizard?” generated an output pointing to various early excerpts in the book.

But perhaps an even more exciting use case, Kamath suggested, was generating fan fiction to “explore new adventures” and “even create alternate endings.” That model could quickly comb the dataset for “contextually similar” excerpts that could be used to output fresh stories that fit with existing narratives and incorporate “elements from the retrieved passages,” the blog said.

As an example, Kamath trained a model to write a Harry Potter story she could use to market the feature she was blogging about. She asked the model to write a story in which Harry meets a new friend on the Hogwarts Express train who tells him all about Microsoft’s Native Vector Support in SQL “in the Muggle world.” Drawing on parts of The Sorcerer’s Stone where Harry learns about Quidditch and gets to know Hermione Granger, the fan fiction showed a boy selling Harry on Microsoft’s “amazing” new feature. To do this, he likened it to having a spell that helps you find exactly what you need among thousands of options, instantly, while declaring it was perfect for machine learning, AI, and recommendation systems.

Further blurring the lines between the Microsoft and Harry Potter brands, Kamath also generated an image showing Harry with his new friend, stamped with a Microsoft logo.
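The “contextually similar” retrieval both use cases depend on can be illustrated with a toy example. The setup the blog described used learned embeddings stored in Azure SQL’s vector support; the bag-of-words vectors and the invented sample sentences below are stand-ins for illustration only (none of the text is from the books):

```python
# Illustrative sketch of the retrieval step in a RAG-style Q&A system:
# each passage becomes a vector, and a query is matched to the most
# similar passage by cosine similarity. Real pipelines use learned
# embeddings and a vector index; a term-frequency vector stands in here.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, passages: list[str]) -> str:
    """Return the passage most cosine-similar to the query."""
    qv = embed(query)
    return max(passages, key=lambda p: cosine(qv, embed(p)))

passages = [  # invented stand-in sentences, not book excerpts
    "The train rattled past fields of cows and sheep.",
    "A cart of snacks rolled by, stacked with sweets and pastries.",
    "The castle towers rose above the misty lake.",
]
print(retrieve("snacks and sweets", passages))
# → A cart of snacks rolled by, stacked with sweets and pastries.
```

The retrieved passage is then handed to the LLM as context, which is how the Q&A system surfaces “relevant excerpts” and the fan-fiction generator incorporates “elements from the retrieved passages.”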
Smith told Ars that both use cases could frustrate rights holders, depending on the content in the model outputs.

“I think that the regurgitation and the creation of fan fiction, they both could flag copyright issues, in that fan fiction often has to take from the expressive elements, a copyrighted character, a character that’s famous enough to be protected by a copyright law or plot stories or sequences,” Smith said. “If these things are copied and reproduced, then that output could be potentially infringing.”

But it’s also still a gray area. Looking at the blog, Smith said, “I would be concerned,” but “I wouldn’t say it’s automatically infringement.”

Smith told Ars that, in pulling the blog, Microsoft “was probably smart,” since courts have generally said only that training AI on copyrighted books is fair use. But courts continue to probe questions about pirated AI training materials. On the deleted Kaggle dataset page, Maindola previously explained that to source the data, he “downloaded the ebooks and then converted them to txt files.”

Microsoft may have infringed copyrights

If Microsoft ever faced questions as to whether the company knowingly used pirated books to train the example models, fair use “could be a difficult argument,” Smith said. Hacker News commenters suggested the blog could be considered fair use, since the training guide was for “educational purposes,” and Smith said that Microsoft could raise some “good arguments” in its defense.

However, she also suggested that Microsoft could be deemed liable for contributing to infringement on some level after leaving the blog up for a year. Before it was removed, the Kaggle dataset was downloaded more than 10,000 times.

“The ultimate result is to create something infringing by saying, ‘Hey, here you go, go grab that infringing stuff and use that in our system,’” Smith said.
“They could potentially have some sort of secondary contributory liability for copyright infringement, downloading it, as well as then using it to encourage others to use it for training purposes.”

On Hacker News, commenters slammed the blog, including a self-described former Microsoft employee who claimed that Microsoft lets employees “blog without having to go through some approval or editing process.”

“It looks like somebody made a bad judgment call on what to put in a company blog post (and maybe what constitutes ethical activity) and that it was taken down as soon as someone noticed,” the former employee said.

Others suggested the blame lay solely with the Kaggle uploader, Maindola, who told Ars that the dataset should never have been marked “public domain.” But Microsoft critics pushed back, noting that the Kaggle page made it clear that no special permission was granted and that Microsoft’s employee should have known better.

“They don’t need to know any details to know that these properties belong to massive companies and aren’t free for the taking,” one commenter said. The Harry Potter books weren’t the only books targeted, the thread noted, linking to a separate Azure sample containing Isaac Asimov’s Foundation series, which is also not in the public domain.

“Microsoft could have used any dataset for their blog, they could have even chosen to use actual public domain novels,” another Hacker News commenter wrote. “Instead, they opted to use copywritten works that J.K. hasn’t released into the public domain (unless user ‘Shubham Maindola’ is J.K.’s alter ego).”

Smith suggested Microsoft could have avoided this week’s backlash by more carefully reviewing blogs, noting that “if a company is risk averse, this would probably be flagged.” But she also understood Kamath’s preference for Harry Potter over the many long-forgotten characters that exist in the public domain.
On Hacker News, some commenters defended Kamath’s blog, urging that it should be considered fair use since nonprofits and educational institutions could do the same thing in a teaching context without issue.

“I would have been concerned if I were the one clearing this for Microsoft, but at the same time, I completely understand what this employee was doing,” Smith said. “No one wants to write fan fiction about books that are in the public domain.”

Ashley Belanger is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.