Rufo wants to be certain that he gets credit. That is a lot of ego talking, but he may have a point. Despite his broadcasting his plans on X, laying out his strategy like a cartoon villain and claiming victory to anyone who will listen, some people still want to find more genteel explanations. Conservative commentators blame diversity itself for the Harvard debacle, arguing that a “social justice model” of higher education has supplanted a merit model at our nation’s colleges and universities. It is most galling at the most prestigious institutions, where status granted without concern for merit breeds resentment. Consequently, academic rigor and culture have receded from Western civilization’s high-water mark.
It is a popular idea. Some scholars believe it. A lot of the alt-right believes it. Regular people complaining about someone getting into college when they did not “deserve” to, they believe it. The underlying belief is noxious. It presumes diversity and merit are mutually exclusive. Beyond that, whether higher education is less meritorious now than it was in some unspecified past cannot be measured.
That is because merit, itself, cannot be defined. That is why the concept is so useful for slippery slopes. It cannot be proved or disproved. It can only be argued.
Academicians and practitioners know that you cannot operationalize merit. But historians know that there is powerful evidence about merit in the archives of our nation’s elite institutions. Whenever politicians, activists and investors agree that there is a merit crisis at Harvard, it signals that a battle rages, not over rigor, but over power.
In the 1880s, Harvard was willing to train a small group of women in art, literature and philosophy. But there were limits. Class and race, obviously. But also a limit on just how legitimate this training was when pursued by the female sex. Some worried that educating women could corrupt their natural talents and that coeducational learning could compromise character development for men and women.
In the 1920s, Harvard (along with Yale and Princeton) was dismayed that so many Jewish students were passing its carefully designed admissions tests. The institutions set out to revise those tests to account for all manner of cultural and physical attributes to filter out those Jewish students. The tests included questions about “character” that amounted to a litmus test for race and ethnicity. Jerome Karabel, in his book “The Chosen,” shows this redefinition of merit at Harvard, Yale and Princeton over and over again as elite institutions fight not to defend rigor but to maintain their hold on status amid social changes in America.
Wave after wave of immigrants, minorities and other socially mobile groups of people in the United States have experienced a similar story with Harvard. Each successive fight for the university’s soul was cloaked in language about merit. Well-meaning scolds worried about immigrants’ test scores or poor students’ cultural fit or whether women could do math. Every time, the moral case for “diversity” must contend with the supposedly rational case for “merit” or achievement. There are often religious overtones, harking back to America’s manifest destiny; it’s as if the country will dissolve into a failed state if merit’s clerics do not defend its virtues.
All of this dichotomous thinking forgets one thing: Academics are not born; they’re made. More broadly, administrators of Harvard, or anywhere else for that matter, are not born; they are made. They are promoted and trained. Surely, Harvard can train a bureaucrat.
Of course Harvard can train a bureaucrat. It trains the world’s leaders. It also runs the Harvard Seminar for New Presidents, which trains university presidents. It is part of academic leadership culture and the administrative industry that has grown around higher education.
What I have found particularly interesting (if not a little shocking) about this whole affair is that Dr. Gay’s promotion to president is so utterly normal. Rufo has described her scholarship record as “thin,” but university leadership has been professionalized for at least two decades. Competitive programs recruit and train cohorts of early-career scholars to prepare them to become provosts and presidents. Beyond that, “nontraditional” university presidents are highly prized in the modern university. University boards view them as market-friendly and business-savvy.
As has happened historically, Dr. Gay’s detractors redefined merit to mean whatever they wanted it to mean, in practice turning bureaucratic minutiae into a political bomb. Joseph McCarthy could only have wished for the networked media power that today’s reactionary power-seekers possess. The speed, scale and amplification of the power to capture an aspect of routine workaday life and cast it as nefarious activity is staggering. What has happened at Harvard is not just a blueprint for taking over higher education; it is a strategy for taking over our information environment.
I don’t like to argue about the human resources problems of rich private colleges. But love them or hate them, the Ivies set the Overton window for a lot of higher education. Colleges without Harvard’s media spotlight and billions of dollars are more vulnerable. Countless mobilized reactionary groups have more media attention than they have organic audience. They know how to capture media, court financial donors and form political alliances. They do not need a wide community of adherents to make themselves look like a movement.
If you like Rufo’s vision of a status hierarchy, in which merit is whatever the winner says it is, then he’s your man. In his vision for the New College of Florida, liberal arts has been diminished, gender studies has been marginalized and merit — whatever that means — trumps social justice. Harvard can buffer the consequences with prestige and money. The rest of higher education will find it harder. Networked, nationalized and emboldened, Rufo has nothing standing in his way. If Florida feels like the future you desire, you are in luck. Florida’s architects are winning.
Under the handle Tanner Leatherstein, Volkan Yilmaz rips, burns and slices apart luxury goods to show how much he thinks they are really worth.
One video opens with a large white leather handbag covered in the signature LV logo of Louis Vuitton. Within milliseconds, a hand with a switchblade swoops in and slashes a huge gash in the side of the bag before tearing it apart at its seams. In another, the distinctive red sole of Christian Louboutin is loudly ripped from a black stiletto using a wrench; in still another, scissors snip through a $2,200 Prada clutch before a man sets fire to a piece of the leather and turns it to ash.
You’ve entered the TikTok world of Tanner Leatherstein, which has more than 950,000 followers. Leatherstein, whose real name is Volkan Yilmaz, has attracted a cult following on the social media platform — as well as on YouTube and Instagram — for his butchering of exorbitantly expensive items. The reason, he says, is to show his viewers the true quality of the materials and craftsmanship and then break down how much the item may have cost to make.
“In many cases,” Mr. Yilmaz said from his Dallas workshop in December, “my estimates come in at about a tenth of what the price tag says. The markups that underpin the luxury business still shock a lot of people.”
In the edited interview below, Mr. Yilmaz, 37, discussed his lifelong obsession with leather, how much he spends on luxury products for his platform and what people should look for when buying new leather items.
When did your love of leather goods begin?
My family owned a tannery in Turkey, so I was born into the business. Around 11, I tanned five sheepskins to make my first leather jacket. While at college in Istanbul, I worked at the tannery, then went to China to learn about leather imports and exports and then to Turkmenistan.
In 2009, I won the U.S. green card lottery and moved to Chicago. I drove a cab while I got an MBA from the University of Illinois, then worked as a management consultant, which made me feel like I was dying inside. I was still obsessed with leather, so I started my own leather brand called Pegai, teaching myself about the design side from YouTube and driving for Uber to make ends meet. In 2019, once the business was underway, I moved to Dallas.
Why did you start creating social media content?
Friends and even friends of friends have always asked me to check their leather purchases. What do I think of the quality? Have they paid too much?
It made me realize that people don’t actually know that much about how leather is sourced or used and are suspicious about the markups on luxury leather products. So I started making some videos to answer their questions. I didn’t expect them to blow up the way they have.
More than anything else, you are known for slicing up bags. Why do you do that?
When I started dissecting bags, I wanted to show that price really wasn’t about the leather or the materials used — that it was mostly about the status associated with a label. So many people automatically assume that if it’s expensive, it must be good.
What was the first bag you ever cut up?
It was a Louis Vuitton briefcase. Louis Vuitton is one of the most famous leather brands in the world, but many people don’t know that the iconic LV monogram material is actually canvas. The first video that went viral was a little $1,200 wallet from Chanel. From then on, requests to feature different brands have been rolling in nonstop.
What are you looking for when you slash a bag?
The leather quality, of course. How it has been tanned. I use acetone to remove the finish, and I can see how much plastic makeup has been applied to the leather. I burn the leather to assess what tanning process has been used. Then I look at the craftsmanship, which is reflected in the stitching, hardware and construction.
A big part of what I do is assessing the brand’s claims. A bag might look good from the outside, but when you rip it open and look inside, it tells another story.
Who are the almost two million followers you have on social media?
There is definitely a demographic who hate luxury brands, full stop, who think the pricing is a scam and that people who pay for them are stupid. Then there are people who just love the entertainment value of chopping up expensive products. But many people are watching the videos because they love luxury and want a better understanding of quality products. They want to assess their luxury or vintage purchases with their eyes open.
Which brands are worth the money?
Bottega Veneta uses incredible leathers, and I’ve done three or four videos on their beautiful products. Though in one video — on a $650 wallet — I cut it up, and the lining was made from a lower quality leather than the one described on the label. (Bottega did not respond to a request for comment).
I really like a Scottish label called Strathberry. They make their products in Ubrique, which is this small town in Spain where brands like Loewe and Dior make their goods. But Strathberry is a fraction of the cost — more like $500 instead of $3,000. Polene is another great label made by people who really know what they are doing. Coach is pretty good at a mid-price point.
Are you ever shocked by what you find?
I don’t get positively shocked — I’m paying a lot of money. It’s great if we can show that a bag is great in material and design, but that should be the standard.
Do brands reach out to you now?
Not really, and especially not from the luxury space. I don’t accept free items or advertising opportunities. People will trust me only if I stay totally independent.
Lots of people will have given or received leather goods during the holidays. Any tips for them?
Trust your senses. Feel it. If it feels plasticky, that’s not a good sign. Smell it. There isn’t only one leather smell, but there is a pleasant, slightly earthy aroma to quality leather. It should not smell like chemicals.
Look at it. Leather is an animal-sourced product. It has variations to its grain and fiber structure. The more variations you see in the fabric, the more natural and untreated it is. If it’s overdone with a heavy finish, leather becomes very standardized, and lower-quality hides can be hidden.

Treatment wasn’t helping her anorexia, so doctors allowed her to stop — no matter the consequences. But is a “palliative” approach to mental illness really ethical?
The doctors told Naomi that she could not leave the hospital. She was lying in a narrow bed at Denver Health Medical Center. Someone said something about a judge and a court order. Someone used the phrase “gravely disabled.” Naomi did not think she was gravely disabled. Still, she decided not to fight it. She could deny that she was mentally incompetent — but this would probably just be taken as proof of her mental incompetence. Of her lack of insight. She would, instead, “succumb to it.”
It was early 2018. She had come to the hospital voluntarily, because she was getting so thin. In the days before, she had felt her electrolyte levels dip toward the danger zone — and she had decided that, even after everything, she did not want to be dead. By then, Naomi was 37 and had been starving herself for 26 years, and she was exquisitely attuned to her body’s corrupted chemistry. At the hospital, she was admitted to the ACUTE Center for Eating Disorders & Severe Malnutrition for medical stabilization. There, doctors began what was once called refeeding but is now more commonly called nutritional rehabilitation, using an intravenous line that fed into her neck. Reintroducing food to an emaciated body can be dangerous and even lethal if done too quickly. Physicians identified this phenomenon in the aftermath of World War II, when they observed skeletal concentration-camp survivors and longtime prisoners of war eat high-calorie foods and then drop dead of cardiac failure.
“Well, here I am,” Naomi said in a video message that she recorded for her parents. “I am alive, but am I happy? I don’t know. … It’s pretty pathetic. I don’t know how I feel about the fact that I would have died had I not come.” In the video, she was wearing a hot pink tank top, even though it was cool in the hospital room, because she wanted to shiver, because shivering burned calories.
A few days later, when she was not imminently dying anymore, Naomi announced that she was going home — and the hospital responded by placing her on a 72-hour mental-health hold. Clinicians then obtained what Colorado calls a short-term certification, which required, by judicial order, that Naomi be detained and treated, in her case until she reached what physicians determined to be 80 percent of her “ideal body weight.” In Colorado, as in most states, a patient can be treated against her will if she is mentally ill and found incapable of making informed decisions. That day, Naomi was transferred to a residential program at Denver’s Eating Recovery Center (E.R.C.).
“I’m so mad, I’m so mad,” Naomi said in another video message, her voice dull and impassive. “I was completely disrespected. I was tricked.” Naomi could feel that her mind was diminished — it was too slow, too slack — but she found that she could think in a straight line. She could reason. So why did the doctors claim otherwise? By then, she had been in and out of hospitals and psychiatric wards and eating-disorder programs, including the E.R.C., more times than she could recall. Was it really so irrational for her to assume that trying the same treatment for the hundredth time would be futile?
When she was a teenager, Naomi believed that treatment programs might save her. She ate supervised meals and attended group-therapy sessions where, among other things, patients discussed the origins and possible psychic functions of their eating disorders. Sometimes Naomi told the story of how she stopped eating because she thought it would make her a faster swimmer. Or the one about how she just wanted to be special, like her eldest brother was special because he was so smart. Other times, she told the story about the day her grandfather died and the whole family went to eat at a restaurant. Naomi was revolted watching everyone nourish their bodies with something as carnal as food when they should have been awash in grief. Years later, it was hard to tell if any of these origin stories mattered. With each inpatient admission, Naomi gained weight. Each time, the extra weight felt unbearable, and she lost it soon after discharge.
As the years passed, Naomi found it harder to be “compliant” with standard treatment. She refused to participate in group sessions. Or she disengaged during therapy, which she found infantile and pointless. She sometimes tampered with her intravenous lines, because it was too awful to watch those plastic bags of liquid calories empty into her body. During some admissions, Naomi forced herself to gain weight so that she could be discharged. Other times, she signed herself out against medical advice. Later, Naomi started bingeing and purging. She would excuse herself after meals and step into the backyard to vomit into plastic bags that she would throw into the neighbor’s yard, so that nobody would see. She vomited and vomited until stomach acid burned through the enamel of her teeth and she had to spend $22,000 to replace them.
In between treatment programs and emergency hospitalizations, Naomi, at 18, went to college. She wanted to study psychology, but all she could really do was exercise for hours a day after eating almost nothing, maybe an apple. In her final year, she dropped out. Later she found jobs that she cared about — a certified nursing assistant who did home health assessments, a patient coordinator at a hospital — but they were often interrupted by yet another medical admission.
As she moved through adulthood, Naomi acquired new diagnoses: anorexia binge-purge type, osteoporosis, hypotension, gastroparesis, superior mesenteric artery syndrome, obsessive-compulsive disorder, post-traumatic stress disorder, bipolar disorder. She took mood stabilizers and antidepressants and antipsychotics. Her bipolar manic periods felt like an ecstatic embrace of the world. The depressed periods made her want to kill herself, and sometimes try to.
She collapsed into her 30s. She had no hobbies and no friends. She had become a kind of professional patient: her whole life whittled down to the airless world of her diseases, the logistical management of her self-denial. Everything was epic drama, but also staggeringly boring. To Naomi, her doomed attempts to get well had started to feel less tragic and more ridiculous. It wasn’t so much that she wanted to be dead, at least most of the time. It was that she could no longer stand anyone trying to cure her — especially because the “cures” were always the same and never worked. “I’ll either die of anorexia or I’ll die of suicide,” Naomi told me when we first spoke. “I’ve accepted that.”
After her admission to the Eating Recovery Center, Naomi spent a few days lying in bed, being fed by a nasogastric tube, which pushed fluids and nutrients down her throat and into her stomach. Some days, she put plastic flowers in her hair and took selfies, just frowning at the camera. She made conversation with her roommate, who was very nice but sometimes threw up on the floor between their beds. After a few weeks, Naomi gained enough weight that she could be discharged into an outpatient program. It was there, she says, that a therapist asked her if she had ever heard of palliative care.
The field of palliative care was developed in the 1960s and ’70s, as a way to minister to dying cancer patients. Palliative care offered “comfort measures,” like symptom management and spiritual guidance, as opposed to curative treatment, for people who were in pain and would never get better. Later, the field expanded beyond oncology and end-of-life care — to reach patients with serious medical illnesses like heart disease, H.I.V. and AIDS, kidney failure, A.L.S. and dementia. Some people who receive palliative care are still fighting their diseases; in these cases, the treatment works to mitigate their suffering. Other patients are actively dying or in hospice care. These patients are made “comfortable,” or as comfortable as possible, until the end.
Naomi’s therapist had printed out an article for her to read. It was called “Medical Futility and Psychiatry: Palliative Care and Hospice Care as a Last Resort in the Treatment of Refractory Anorexia Nervosa,” published in 2010 in The International Journal of Eating Disorders. The paper’s authors argued that psychiatry needed its own subfield of palliative care: specifically for the 15 to 20 percent of patients whose anorexia developed a “chronic course” and did not respond to standard treatment — and for the fraction of those patients who did not want to keep trying to get better.
These patients, the paper proposed, should not be coerced into treatment but offered an approach that aimed to palliate their psychological pain — until, maybe, they died of their eating disorders. The authors acknowledged that the idea of letting a mentally ill person withdraw from treatment was uncomfortable, even radical — even though the rest of medicine already recognized a patient’s right to stop fighting her disease and risk dying. A patient with advanced kidney failure, for instance, might become exhausted and decide to quit dialysis treatments. “It has been argued that patients with anorexia nervosa should have similar rights to discontinue treatment, despite the fact that in their case food refusal might seem irrational,” they wrote. “Although patients with anorexia nervosa may irrationally choose not to eat, they are often competent to make decisions in all other areas of their lives.”
When Naomi looked up the paper’s authors, she was surprised to find that one of them, Dr. Joel Yager, was based in Denver. He was a psychiatrist at UCHealth University of Colorado Hospital and had been working with anorexia nervosa patients since the 1970s. Back then, psychiatrists were just beginning to understand anorexia as a mental illness, one with neurological and metabolic components. Nevertheless, there was reason to be optimistic; with early and aggressive treatment, a vast majority of the starving patients got better.
Of course, there were the ones who didn’t. Within the treatment community, anorexia had always been described as an acute condition, something with an adolescent onset and relatively short duration. It was only in the mid-1980s that a small number of academic articles began to refer to a “protracted” or “long-term course” of the disorder, and then eventually to “severe and enduring” anorexia. It was this kind of patient, typically a woman with a decade of failed treatments behind her — “kind of hobbling along in life,” Yager said — who found her way to him.
Yet when Yager, who was then working at the University of California, Los Angeles, looked for guidance on what to do for such a person, he found almost nothing. All he could see were articles instructing him on how to exert his will over recalcitrant patients, how to give them more standard treatment aimed at full weight restoration. And sometimes, because that was all he had to offer, his patients would simply stop coming to appointments. Yager would discover, later, that they had gone home and died alone on their sofas. Maybe by starvation, maybe by suicide. Maybe in pain. “I felt like a failure,” Yager told me. “They fired me, basically, at the end, knowing that I wasn’t able to help them anymore and wasn’t eager to just see them through the end.” In a desperate attempt to not abandon them, he had abandoned them. Bludgeoned them with care. Rescued them to death.
He came to think that he had been impelled by a kind of professional hubris — a hubris particular to psychiatrists, who never seemed to acknowledge that some patients just could not get better. That psychiatry had actual therapeutic limits. Yager wanted to find a different path. In academic journals, he came across a small body of literature, mostly theoretical, on the idea of palliative psychiatry. The approach offered a way for him to be with patients without trying to make them better: to not abandon the people who couldn’t seem to be fixed. “I developed this phrase of ‘compassionate witnessing,’” he told me. “That’s what priests did. That’s what physicians did 150 years ago when they didn’t have any tools. They would just sit at the bedside and be with somebody.”
Yager believed that a certain kind of patient — maybe 1 or 2 percent of them — would benefit from entirely letting go of standard recovery-oriented care. Yager would want to know that such a patient had insight into her condition and her options. He would want to know that she had been in treatment in the past, not just once but several times. Still, he would not require her to have tried anything and everything before he brought her into palliative care. Even a very mentally ill person, he thought, was allowed to have ideas about what she could and could not tolerate.
If the patient had a comorbidity, like depression, Yager would want to know that it was being treated. Maybe, for some patients, treating their depression would be enough to let them keep fighting. But he wouldn’t insist that a person be depression-free before she left standard treatment. Not all depression can be cured, and many people are depressed and make decisions for themselves every day. It would be Yager’s job to tease out whether what the patient said she wanted was what she authentically desired, or was instead an expression of pathological despair. Or more: a suicidal yearning. Or something different: a cry for help. That was always part of the job: to root around for authenticity in the morass of a disease.
Most of the patients who asked for palliative care, Yager thought, probably wouldn’t want to die but would be open to dying if it meant that they could stop trying to get better in the same old ways. Yager imagined that his practice would, in large part, be defined by absence. No coercive care. No obligatory weekly weigh-ins. No heroic measures. A palliative approach might even mean de-prescribing drugs that helped keep a mental illness at bay but made the patient feel bad in other ways: prioritizing comfort over life extension or symptom reduction. The care would be shaped by what the patient wanted, in the moment.
From Denver, Yager started publishing papers about his ideas, and other doctors started contacting him, clinicians who had, in the quiet context of their own practices, invented a kind of palliative psychiatry of their own. Once in a while, Yager heard directly from a patient.
“Dear Dr. Yager,” Naomi wrote in an email in February 2018. “After 20 years of trying the same thing over and over again and expecting different results, I am tired of fighting the system.”
After he read Naomi’s email, Yager called her. “Come in,” he said. “Let’s see.” With her tangle of disorders, Naomi presented as a complex patient — but only in the way that many other patients were complex. She was depressed and bipolar, but both conditions were being managed with drugs. Naomi told Yager that her current outpatient providers would continue treating her only if she strove for and ultimately maintained 80 percent of her ideal body weight — but that she couldn’t meet their condition because she couldn’t bear to be so heavy. “I’ve been there, I’ve done that,” Yager remembers her saying. “I have these obsessions. They won’t let go of me. Nothing they have ever given me in therapy has ever changed those internal, infernal thoughts.”
Yager agreed to help Naomi put together a palliative-care team at UCHealth and to oversee her psychiatric care. It was obvious that, in many ways, Naomi’s thinking was deeply distorted — but when she expressed her desire to stop fighting, Yager thought she seemed “as clear as a bell.”
Contrary to what medicine had recognized for most of its history, Yager knew that a substantial number of patients with psychiatric disorders were, in fact, medically and legally capable of making decisions on their own. When given a standard “capacity test” — which measures a patient’s ability to understand information related to a specific decision, appreciate benefits and harms, reason and express a choice — many passed. In one study of 70 adult women with severe anorexia, 46 were found to have “full mental capacity.”
If a patient is found capable, her physician is meant to respect her choice, whether or not it seems rational or circumspect. The test is always whether a person is able to reason, not whether she seems reasonable to her doctors.
After their initial meeting, Naomi was told that she could set the rules. Point 1: no more residential programs, ever. “It only accelerates the suffering,” she said. “And I refuse to encounter it ever again.” Point 2: no involuntary heroic measures from her doctors, no mandatory weigh-ins, no behavioral therapy. Naomi was willing to play around with new psychiatric medications — because, she said, a better drug might make her remaining days more tolerable — but she no longer wanted to analyze the root causes of anything. She was tired of telling her life story, tired of trying to interpret things.
Naomi told her new palliative-care physician, Jonathan Treem, that she could not increase her weight, at least not without something bad happening. She believed that whenever she relaxed a bit on the anorexia front, her bipolar disorder got worse; whenever she gained a few pounds, it threw her mood way off kilter — and that was worse than starving. She needed to appease both demons.
Naomi was willing to accept the odd temporary measure, like an infusion of electrolytes to lift her energy, but she wouldn’t treat her underlying physical disorders: her osteoporosis or her gastrointestinal issues or whatever else set in. Fixing those things would do nothing for her mood. Besides, at some point her body would inevitably fail, and she would let it happen. “If my heart decides that it’s done beating,” she said, “then I will not stop it.”
When Treem sat with Naomi, he could feel “an incredible agony that was internalized and unremitting and, to a certain degree, barely endurable” — a depression that was “likely perennial and unlikely to be subject to change.” In Treem’s view, Naomi’s anorexia was both a cause of pain and a symptom of a larger hurt. “She’s actually used her body as a communication tool for a long time. ‘I want to look so grotesque that people cannot look away.’”
Treem was an internal-medicine doctor by training, and most of his work involved palliating patients who were dying of typical somatic ailments: cancer, heart failure. Working with Naomi, he found, required him to undertake some “philosophical groundwork.” He thought about how he might protect his patient from her most self-destructive impulses, but also refrain from bulldozing over what she wanted. Treem talked with Naomi about how choosing to die from the natural progression of a disease was not the same thing as suicide.
To Treem, it felt as if Naomi was asking for something more than his nonintervention; she wanted his mercy. His permission to let go, his compassion. It made him think about the other doctors who had treated her. “This is where it gets into a passionate discussion,” he told me. “If you are going to accept responsibility for the people you save, and you’re going to elevate them as examples of why everyone should undergo compulsory treatment, you had better recognize the blood on your hands. That, on some level, in order to ‘save everyone,’ you are perpetuating suffering in others.”
Yet Treem had his limits. He told Naomi that he could not look away if she was actively suicidal. Several times, after an especially unsettling appointment, Treem walked her down to the emergency room, where she was put on a 72-hour mental-health hold.
Naomi also met regularly with Yager, who sometimes wondered whether, paradoxically, giving up recovery-focused treatment could steer his patient back to health. Palliative care, Yager reasoned, might give Naomi the cognitive space to reset. It would eliminate the classic power struggle between flailing eating-disorder patient and exacting psychiatrist and, perhaps, let her sense of fight turn inward. But Yager knew he had to be restrained in this thinking. If he approached Naomi’s palliative care as a means to a cure, then it wasn’t really palliative care at all — just a stealthy treatment program. This required a sort of intellectual sleight of hand. Yager had to be equally accepting of either outcome: that Naomi lived or that she didn’t.
Besides, what did the alternative look like? Would he be better off to declare Naomi incompetent? Sedate her? Restrain her physically or chemically? Get court orders for involuntary medications and involuntary tube feeds — which wouldn’t “cure” her anyway but would keep her alive for more treatment? Lock her on a ward? Try to keep her there? Hope she comes around? “Are you going to do it forever?” Yager asked.
Yager had always been suspicious of psychiatry’s affinity for hope, of the hopefulness that many doctors deliberately exhibited for their patients. “I’m full of hope,” he told me. “I’m one of the most hopeful guys you’re going to find. But I’m also a realist.”
Many psychiatrists, Yager knew, believed that they must hold hope for their hopeless patients, that a projection of hope, by a clinician, mattered — that it was even essential — because the hope could be absorbed by a patient and, in turn, change the course or constitution of her disease. In this way, psychiatry was fundamentally different from other kinds of medicine. In oncology, for instance, a doctor’s professed hope for a patient could not shrink a tumor or lower a blood-cell count. But maybe, in psychiatry, there was a more porous boundary between physician and patient, between an illness and a patient’s ideas about it. Maryrose Bauschka, a psychiatrist at the Eating Recovery Center, told me, “I think there’s often a lot of fear that if we’re transmitting anything less than a message of hope — or anything less than, like, a full-court press — that we’re not going to help them get better.”
But couldn’t a doctor’s hope also be a kind of harm? Yager could see that some of his patients benefited from his cheerleading. Others, though, were propelled into unwanted treatment by somebody else’s hope for them — and then left to feel defeated when it didn’t work. So couldn’t it also be argued that a doctor had a moral obligation not to provide hope that was unjustified, and maybe even to expose false hope where it lay? “We thus find ourselves in a paradox,” wrote Justine Dembo, a psychiatrist and assistant professor at the University of Toronto, “in which hope is vital for recovery but may also lengthen lives of unbearable mental anguish. What is an ethical therapist to do?”
Yager knew that the evidence base for many recovery-oriented therapies — some of which had been in existence for decades — was weak. For instance, he had never found a single randomized controlled study proving, with any certainty, that the by-then-ubiquitous residential eating-disorder program worked better than other kinds of care. Many of the country’s largest treatment facilities were owned by private companies that did not, as a practice, invite third-party researchers to study their approaches or track their long-term patient outcomes. Yager worried that the many doctors pushing residential programs were compromised, if not financially then at least intellectually. They had become, as he put it, “zealots for the model.”
And there was certainly no evidence at all that a fourth, or fifth or 10th attempt at the same kind of program was likely to be helpful, especially if the patient didn’t want it. The same was true of involuntary care. There was some evidence that forced treatment could be life-sustaining in the short term, but its long-term effects were more uncertain. In his own academic articles, Yager wrote about the “willfully blind Pollyannish therapeutic attitudes” of psychiatrists throughout history, and of their “excessive hyperinterventionism.”
Within the rest of medicine, “medical futility” had become a subject of contention in the 1980s, after relatively new interventions like cardiac life support and mechanical ventilation allowed the nearly dead to be resuscitated and sustained. Sometimes, patients’ families demanded that their loved ones be treated aggressively and kept alive, hearts beating and lungs pumping, when there was no realistic prospect for recovery. Or alternatively, families pushed back against a physician’s aggressive, almost knee-jerk use of technology to sustain a flailing life. Eventually, those doctors grew accustomed to admitting defeat, to acknowledging that yet another week of life support or another round of chemotherapy or another aggressive surgery would serve no therapeutic purpose.
But the idea of futility remained “relatively unknown in the world of psychiatry,” according to a 2023 paper in Frontiers in Psychiatry. When I asked a psychiatrist with expertise in severe and persistent mental illness how much time had been devoted, during her more than a decade of medical training and residency, to learning about futility, she laughed. “Zerooooo.”
After all, in psychiatry, there were always more drugs and drug combinations to try. More behavioral interventions and therapeutic modalities to employ. More clinicians who believed that they alone had the special therapeutic touch. It seemed to Yager that despite what every honest psychiatrist should know, psychiatrists were never really allowed to acknowledge futility — and so never allowed to stop treating. In turn, their patients were never “allowed” to say no. Never allowed to decline care. Certainly never allowed to die.
In one 2023 study, published in The American Journal of Bioethics Neuroscience, 174 U.S. psychiatrists completed a survey on “their attitudes about the management of suicidal ideation in patients with severely treatment-refractory illness.” The doctors were given one of two case studies: the first, about a patient with borderline personality disorder; the other, about a patient with major depressive disorder. They were told that the patients had already received every treatment that might reasonably be expected to work and that, despite this, they remained sick. The psychiatrists were then asked to rate the expected helpfulness of further treatments — and the likelihood that they, personally, would prescribe them.
The conclusion of the study was stunning: “Sizable minorities of participants said they were likely to recommend interventions they thought were unhelpful.” The authors identified several potential reasons. Perhaps the doctors were trying to meet expectations: the patient’s, her family’s, their colleagues’, the system’s. Perhaps they worried about legal liability.
But maybe there was another explanation. Maybe this was just the logic of a profession that saw death as the absolute worst outcome, regardless of what living might look like.
Some physicians in the field had heard the emerging calls for palliative psychiatry with alarm. The idea that certain patients would be better off if they gave up on cure-focused treatment was, as Dr. Agnes Ayton of Britain’s Royal College of Psychiatrists told me, “dangerous nonsense.” For many of these doctors, Yager’s writings about palliative psychiatry were not just ill defined but threatening to the profession, particularly because they were so underdeveloped and so contentious and because, nevertheless, Yager and others were already deploying them.
Some physicians had doubts about the premise — core to Yager’s thinking — that patients who were very sick could still have the mental capacity to make decisions as grave as the one to stop recovery-oriented care. A typical anorexic patient had cognitive distortions and pathological values. She was intransigent, fearful, cognitively inflexible. She could be emotionally anesthetized too, so apathetic that she didn’t care very much what happened to her. Her brain was literally starving. How could such a patient be taken at her word when she said she was prepared to die — that it was what she “wanted”? Any experienced physician should know that what the anorexic patient “wanted” was perverted by her disease. He should see through the ruse — even if, like many people with anorexia, his patient spoke well and dressed well, was not in the depths of psychosis and could clearly articulate the potential medical benefits and drawbacks of various treatments. This was not mental lucidity, but instead a pantomime of reasoned thought.
Other psychiatrists took issue with the way Yager conceptualized futility. With anorexia nervosa, it was almost always impossible to say that a given treatment would be physiologically futile, because there was virtually no point at which an eating disorder became physically resistant to healing. If a patient ate, nearly all of her medical conditions could be reversed. It was even hard to make educated guesses based on how other patients had fared in similar situations, because there was so much variability between treatment programs and because nobody was collecting large databases of patient outcomes.
For the anorexic patient, any conclusions about “futility” would have to be based on fuzzier judgments about how a treatment might affect her quality of life. To critics, this was insufficiently rigorous. “Medical futility,” the psychiatrist Cynthia Geppert warned in a 2019 handbook, “can only be tentatively and tenuously translated into psychological constructs.”
In Yager’s model, decisions about futility seemed to rest a lot on what the patient believed the effect of a treatment would be. But many people with chronic mental illness are ambivalent about recovery and resistant to treatment. They “know” that they will never get better. They “know” that a treatment will fail. These feelings are literally products of a pathology. This pathological despair must be challenged, not interpreted as an expression of enlightened thought and then honored in the name of patient rights.
“What many in the profession would say,” Thomas Strouse, a psychiatrist and palliative-care physician at U.C.L.A., explained, “is that anorexia leading to death is a form of protracted suicide.” In this view (which Strouse does not endorse), accepting a patient’s slow death by starvation and choosing not to medically intervene, with force if necessary, was akin to collaborating in a suicidal act. At the least, it was colluding with a person’s mental illness.
Already, research showed that some patients with eating disorders who were involuntarily treated did well. In the short term, their rate of weight restoration was the same as that of voluntarily treated patients. One paper noted that among those admitted to hospital, “nearly half of patients with eating disorders who denied a need for treatment on admission converted to acknowledging that they needed to be admitted within two weeks of hospitalization.” The food, in other words, brought the insight.
Other physicians emphasized the current inadequacies in American mental-health care as a reason any futility judgment would be ethically tenuous. A decision that further treatment was “futile,” they argued, would be meaningless if the patient had never received high-quality care in the past. In the case of eating disorders, many people can’t access evidence-based treatment or experienced providers, because they don’t have private insurance to cover it. Others do have insurance but discover that their providers’ patience is limited. Patients are discharged from programs because their insurance companies do not believe that they are progressing quickly enough. Or because they seem to progress too quickly. These patients are released as soon as they have gained sufficient weight (as defined by the insurance company) but before their weight is fully restored. They then go home and get sick again. Can a person’s decision to decline treatment, made in the context of resource scarcity, really be described as a free choice?
And the sickest of patients can still get better — even after decades of failed treatment. One study of adult patients with anorexia, published in The Journal of Clinical Psychiatry in 2017, found that nine years after the start of their illness, only 31.4 percent had recovered — but that by 22 years, the recovery rate had doubled to 62.8 percent. “These findings,” the study’s authors wrote, “should give patients and clinicians hope that recovery is possible, even after long-term illness, suggesting that even brief periods of weight restoration and symptom remission from anorexia nervosa are meaningful and may be the harbingers of more durable gains to be made ahead.”
Angela Guarda, a professor of psychiatry and behavioral sciences at the Johns Hopkins School of Medicine, told me that palliative measures can sometimes be useful — but only alongside curative care and never instead of it. Guarda said she has treated several thousand patients with anorexia and still “cannot predict who will get better and who will not.” Patients sometimes surprised her. So “how do I decide which patients of mine I should instill hope in, and which patients of mine I should decide to help die?”