Space exploration
When self-replicating craft bring life to the far Universe, a religious cult, not science, is likely to be the driving force
Some time late this century, someone will push a button, unleashing a life force on the cosmos. Within 1,000 years, every star you can see at night will host intelligent life. In less than a million years, that life will saturate the entire Milky Way; in 20 million years – the local group of galaxies. In the fullness of cosmic time, thousands of superclusters of galaxies will be saturated in a forever-expanding sphere of influence, centred on Earth. This won’t require exotic physics. The basic ingredients have been understood since the 1960s. What’s needed is an automated spacecraft that can locate worlds on which to land, build infrastructure, and eventually make copies of itself. The copies are then sent forth to do likewise – in other words, they are von Neumann probes (VNPs). We’ll stipulate a very fast one, travelling at a respectable fraction of the speed of light, with an extremely long range (able to coast between galaxies) and carrying an enormous trove of information. Ambitious, yes, but there’s nothing deal-breaking there. Granted, I’m glossing over major problems and breakthroughs that will have to occur. But the engineering problems should be solvable. Super-sophisticated flying machines that locate resources to reproduce are not an abstract notion. I know the basic concept is practical, because fragments of such machines – each one a miracle of nanotechnology – have to be scraped from the windshield of my car, periodically. Meanwhile, the tech to boost tiny spacecraft to a good fraction of the speed of light is in active development right now, with Breakthrough Starshot and NASA’s Project Starlight. The hazards of high-speed intergalactic flight (gas, dust and cosmic rays) are actually far less intense than the hazards of interstellar flight (also gas, dust and cosmic rays), but an intergalactic spacecraft is exposed to them for a lot more time – millions of years in a dormant ‘coasting’ stage of flight. It may be that more shielding will be required, and perhaps some periodic data scrubbing of the information payload. But there’s nothing too exotic about that. The biggest breakthroughs will come with the development of self-replicating machines, and artificial life. But those aren’t exactly new ideas either, and we’re surrounded by an endless supply of proof of concept. These VNPs needn’t be massive, expensive things, or perfectly reliable machines. Small, cheap and fallible is OK. Perhaps a small fraction of them will be lucky enough to survive an intergalactic journey and happen upon the right kind of world to land and reproduce. That’s enough to enable exponential reproduction, which will, in time, take control of worlds, numerous as the sand. Once the process really gets going, the geometry becomes simple – the net effect is an expanding sphere that overtakes and saturates millions of galaxies, over the course of cosmic time. Since the geometry is simplest at the largest scale (owing to a Universe that is basically the same in every direction), the easiest part of the story is the extremely long-term behaviour. If you launch today, the rate at which galaxies are consumed by life steadily increases (as the sphere of influence continues to grow) until about 19 billion years from now, when the Universe is a little over twice its current age. After that, galaxies are overtaken more and more slowly. And at some point in the very distant future, the process ends. No matter how fast or how long it continues to expand, our sphere will never overtake another galaxy. 
If the probes can move truly fast – close to the speed of light – that last galaxy is about 16 billion light-years away, as of today (it will be much further away, by the time we reach it). Our telescopes can see galaxies further still, but they’re not for us. A ‘causal horizon’ sets the limit of our ambition. In the end, the Universe itself will push galaxies apart faster than any VNP can move, and the ravenous spread of life will stop. Communication becomes increasingly difficult. Assuming you invent a practical way to send and receive intergalactic signals, you’ll be able to communicate with the nearby galaxies pretty much forever (though, with an enormous time lag). But the really distant galaxies are another matter. If we assume fast probes, then seven out of eight galaxies we eventually reach will be unable to send a single message back to the Milky Way, due to another horizon. The late Universe becomes increasingly isolated, with communication only within small groups of galaxies that are close enough to remain gravitationally bound to each other. Our VNP project might encounter another kind of limitation, too. What if another intelligent civilisation had the very same idea, initiating their own expansion from their own home in a distant galaxy? Our expanding spheres would collide, putting a stop to further expansion for each of us. We don’t know if that will happen, because no one has observed a telltale cluster of engineered galaxies in the distance, but we should be open to the possibility. If we can do it, another civilisation can too – it’s just a question of how often that occurs, in the Universe. Taken as a whole, this entire process bears an uncanny resemblance to a cosmological phase transition, with ‘nucleation events’ and ‘bubble growth’ that come to fill most of the Universe. There is even ‘latent heat’ given off in the process, depending on how quickly these massive civilisations consume energy. Despite the limitations imposed by nature, suffice it to say that a single VNP launch would offer an unimaginable wealth of the Universe’s resources to dispose of as you wish. OK, maybe not you, but whoever programs that VNP. Which raises a rather sticky point – what exactly should they do? It’s easy to imagine VNPs pillaging the resources of the Universe for no good reason, but what’s the actual benefit? What would motivate anyone to do anything like this? The power it would manifest – millions of years in the future, of course – is so beyond the scale of human experience that we’re still in the earliest stages of imagining what to do with it. It hasn’t even begun to be digested by popular culture and entertainment. But, as a first hint, imagine that, 50 years from now, you were approached to fund a cosmic-scale VNP project. In addition to instructions to ‘reproduce and expand’, each probe will carry a vast library of genetic data and information to reconstruct human bodies and minds on each world, along with an array of plants, animals and cultural information. If you’re still reluctant to fund the project, suppose I throw in a perk: a copy of you, reconstructed with your current memories intact, installed as absolute ruler on countless worlds. Promise of an eternal reign in a heavenly realm has, after all, been known to motivate real people. But no matter how great your god complex, all the returns-on-investment occur ‘out there’ in space and time, and won’t make anyone rich in the here and now, in the direct manner of, say, asteroid mining. 
After 1,000 human lifespans, cosmic expansion will still be in its infancy. Don’t expect so much as a snapshot from the nearest large galaxy for at least 5 million years. This pulls us back to the central question. If every direct, tangible benefit is deferred to a weird kind of technological afterlife, why would anyone do it? At least one answer has been considered by people who think about artificial superintelligence. Maybe we won’t do it – maybe a super-AI will do it for some arcane instrumental reason that doesn’t pay off for billions of years (aggressive resource-acquisition benefits almost any sufficiently long-term goal). I don’t find this answer too satisfying. It’s basically saying that humans will launch VNPs indirectly, by failing to put any limits on an AI’s behaviour. Yes, it could happen, but it doesn’t seem too likely. No doubt the superintelligence control problem is a serious challenge. But writing instructions that constrain an AI to a small region of spacetime should not be the slippery sort of problem that is infinitely easy to get wrong (unlike instructions to ‘make everyone happy’). Generally, I sense that invoking super-AI makes little difference to the question. ‘Why would anyone do it?’ just becomes ‘Why would anyone use super-AI to do it?’ A real answer has to lie with human incentives in the present, on Earth. So, if there is no direct product in the present, what about the indirect products that do occur in the here and now? This is where the answer must lie. Space programmes have known about these since Apollo. The early space programme did generate some tech spin-offs, but the real product was something different – it was a taste of a new kind of purpose and meaning, as we constructed the story of humanity’s first tenuous steps into a new realm. In the kind of VNP project we’re imagining here, human meaning will be embedded in a cosmic story spanning billions of years, superclusters of galaxies, and a narrative that grants special status to those who participate. The story will contain a moral dimension too, since you’ll need an overpowering moral imperative to justify appropriating galaxies. Regardless of whether a moral imperative exists at present, if a demand for one exists, a supply will emerge to fill it. Let’s be sceptical of that last sentence. Perhaps we’re offended by this entire discussion, and conclude that humanity must not despoil the cosmos with VNPs. Further, suppose we have total faith in our ability to convince the world that a ‘no cosmic expansion’ philosophy is the best vision. Well, that’s not good enough, because this philosophy must also compete for all future opinions. For the sake of argument, let’s say that our ‘no cosmic expansion’ philosophy is dominant for 1,000 years before briefly falling out of favour, allowing a single VNP to be released. The net outcome for the cosmos is identical to a world in which our philosophy never existed at all. No, reliance on human persuasion is insufficient, if we’re really committed to the cause. A more practical, long-term way to safeguard the Universe from life would be to launch a competing project of cosmic expansion, using our own VNPs. One whose goal is to spread everywhere and, with minimal use of resources, do nothing but prevent others from gaining a foothold on the trillions of worlds we come to occupy. Only then can we smugly sit back and let it all go to waste in sterility. 
The point is that any competing philosophy with a sufficiently strong opinion must adopt some form of cosmic expansion, even if it opposes the entire concept. Those efforts will unavoidably create their own Cosmic Story with Moral Dimension, enshrining the progenitors and offering Purpose and Meaning. There doesn’t seem to be any way around it, short of snuffing out humanity before any of this can happen. What about this ‘Cosmic Story with Moral Dimension that delivers Purpose and Meaning’? That description may seem familiar. That’s because it’s religion, by another name. It could be a secular religion (that will inevitably take offence at religious comparisons), or it could be one that imports spiritual beliefs from pre-existing religions. Either way, religion it will be. Cosmic Story. Moral Dimension. Transcendent Purpose and Meaning for practitioners. One can go further – based on what we’ve seen before, it’s likely to be a cult. That may sound like a stretch, so let’s unpack it. If your goal is to conquer and utilise the accessible Universe, you’ll need absolute certainty in your philosophy. At least, you’ll need to approach certainty before launching your VNPs (it’s no good changing your mind after the launch!). So, you’ll need to identify and recruit participants inclined to fully commit to your cause. And you’ll need to relentlessly purge dissenters who occasionally arise inside your organisation – they threaten to mutate the ‘absolutely certain’ goal. You’ll also have a strong incentive to adopt secrecy as a tool to prevent infiltration, spying and sabotage from competing groups, or government interference. So, then, what do you call an insular, highly dogmatic religion that ruthlessly enforces conformity? Exactly. The underlying philosophy will need supreme self-confidence to justify asserting itself on the cosmos, and it must strenuously avoid meddling from outsiders before the launch date. These projects won’t necessarily start out as cults – they may even work against cultish behaviour – but as the decades pass and objectives become less abstract and goals get nearer, they’ll find strong incentives to move in a cult-like direction, and very little incentive to move back. Another obvious observation is that competing religions tend not to get along with each other. When they do get along, it’s usually because one or more has given up on certain ambitions, and/or stopped taking their doctrine too seriously. They become more agreeable as they become more about ‘personal faith’, and less outward-focused. That condition will not be present in a race to deploy VNPs to capture the cosmos. The next 100 billion years of the Universe will be at stake, depending crucially on events happening today. The future of millions of galaxies. Someone will surely point out that direct physical conflict in the here-and-now on Earth is preferable to cosmic-scale conflict later on. In other words, there will be an incentive to violence, before launch-day. I’m hardly unique in predicting conflict over future technology. Science fiction loves to do that. Others, like Hugo de Garis, have predicted an eventual world war over the question of ‘whether humanity should build godlike massively intelligent machines’. But this is different. I’m talking about the few. Conflict between small, secretive groups of highly technical zealots. People who could tell you the distance to the Andromeda Galaxy but hope you don’t want to know. 
While the rest of humanity is fretting over issues like AI safety on Earth and shouting about impacts to their personal way of life, these people will be thinking about something else entirely, and watching with a jealous eye for others like themselves. Because the most successful cult – by hook or by crook – is going to inherit the cosmos. There’s an important point we touched on before. Each religion is in competition with the others of the present, but also with the others of the future. Being the first to launch VNPs isn’t enough to guarantee victory over the competition. The reason is that intergalactic travel takes millions of years. Suppose you launch VNPs with a travel speed that’s 50 per cent of the speed of light, and your competitor launches VNPs with a speed 1 percentage point faster. Your competitor then arrives at the nearest large galaxy with a 100,000-year lead. That’s enough lead to capture the entire thing, depending on the dispersal pattern of the probes. The effect is magnified the further out you go; you’ll quickly be cut out of all future expansion, finding every galaxy fully colonised by your competitor by the time your probes arrive. It’s irrelevant if you were the first to launch by a decade, a century, or a millennium. Thus, if your moral imperative dictates that you capture the cosmos, you want to launch and want to see no future launches by anyone else. This creates an incentive that is truly perverse. If you want certainty that your probes are successful, you’ll have to act to prevent all future competition. It’s hard to imagine many ‘nice’ ways to do that. Even the most heavy-handed political schemes tend to become uncertain in less than a century. A group that successfully launches first will be placed in an awkward position, weighing the wellbeing of one planet – Earth – against the future of millions of galaxies. In a nightmare scenario, a truly committed cult could become the most extreme kind of death cult, determined to leave a poison pill for the rest of us, to ensure the ‘correct’ cosmic outcome. No one knows the probability to assign to any of this, but it’s unwise to ignore incentives just because they’re horrific. The strength of the incentive is magnified by the scale of the future. If the future promises to be big and glorious enough, almost anything is justified in the present to ensure a righteous outcome. We’ve seen a similar moral calculus at work in 20th-century political movements, and real-world implications of this kind of futurist reasoning are already appearing in the present day, as with the case against Sam Bankman-Fried. What happens when those incentives reach their maximum possible strength, with the cosmic future in the balance? I’ll advance a picture that seems plausible to me. The humans recruited would be technical types, and those with connections, money or other useful resources. They would have to be attracted to (or tolerant of) cult-like behaviour, with personalities that accept the demand for extreme control, and for whom personal meaning, ‘secret knowledge’, and a new/special identity are a big draw. They would, of course, also be selected for a proven capacity to keep their mouths shut in the face of any number of red flags. The overlap of those requirements narrows the pool, yet large numbers are not essential. Just enough to have their fingers in the relevant technologies, and the ability to take them a few steps in their own direction. 
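To make the arithmetic behind that 100,000-year lead concrete, here is a minimal back-of-the-envelope sketch. It assumes, as the essay does not specify, that the nearest large galaxy is Andromeda at roughly 2.5 million light-years, and it treats both probe fleets as coasting at constant speed, ignoring acceleration, deceleration and the expansion of space over so short a hop.

```python
# Back-of-the-envelope check of the claim that a probe fleet travelling one
# percentage point faster (0.51c vs 0.50c) arrives at the nearest large galaxy
# with a lead of roughly 100,000 years.
#
# Assumptions (mine, not the essay's): the target is the Andromeda Galaxy at
# roughly 2.5 million light-years, both fleets coast at constant speed, and
# the expansion of space is negligible over so short a distance.

ANDROMEDA_DISTANCE_LY = 2.5e6  # assumed distance, in light-years

def travel_time_years(speed_fraction_of_c: float) -> float:
    """Years needed to cross the assumed distance at a constant fraction of c."""
    return ANDROMEDA_DISTANCE_LY / speed_fraction_of_c

t_slow = travel_time_years(0.50)  # your probes
t_fast = travel_time_years(0.51)  # your competitor's probes

print(f"At 0.50c: {t_slow:,.0f} years")                    # 5,000,000 years
print(f"At 0.51c: {t_fast:,.0f} years")                    # ~4,901,961 years
print(f"Competitor's lead: {t_slow - t_fast:,.0f} years")  # ~98,039 years
```

The lead scales linearly with distance, so at ten times the range the same one-point advantage opens a gap of roughly a million years, which is why the effect is magnified the further out you go.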
Imagine something like a secret network within a few powerful companies – one with a charismatic leader (not necessarily a CEO) and a critical mass of followers in key positions, willing to do almost anything to advance the leader’s grandiose cosmic scheme. I’m favouring small, secretive groups over large, overt players such as governments or big organisations, publicly dedicated to their own vision. The reason is that, for any specific Moral Imperative you might propose, there will be many more people who oppose it than who agree – just as no single, coherent religious sect commands a human majority. Large, overt organisations are also easy to infiltrate and sabotage. Imagine any active politician – even one you think is particularly good. How comfortable would you be in handing over all cosmic resources and the next 100 billion years to a Moral Imperative of their choosing? Can you imagine anyone willing to take extreme measures to prevent it from happening? And what do you think would happen if, let’s say, the UN wanted to select the Imperative by vote? I suspect that getting and maintaining sufficient agreement, secrecy and control implies a small group. Small groups could tap ‘off-the-shelf’ technologies as they become increasingly available. High availability implies that more small groups will compete, when the time is right. What does this imply about the Moral Imperative itself? It will probably incorporate extreme versions of beliefs that are trendy with engineering types at the time (two or three generations hence), with a proven ability to evoke strong emotions and commitment. A lot of history will occur between now and then, so I hesitate to even speculate on the theme it will take. I seriously doubt it will be an idea that is fashionable today. Where are we in this timeline right now? In the very early days. References to our interplanetary future are still largely found in science fiction; yet it’s a great irony that the big-budget sci-fi we all grew up with trained us to think too small about the future, in space and time. Fictional world-building invoked fanciful notions like faster-than-light space travel and ‘aliens everywhere’ so that events could unfold in a short time, and not too far away. It was never a case of invoking implausible tech as part of ‘thinking big’. The real Cosmic Story is yet to be imagined. The most distant and uncertain part of the picture is the Moral Imperative. I haven’t seen one that looks compelling. Eventually, I expect there to be many. For now, though, the heavy lifting is done by the vastness of scale, not by the moral dimension – but eventually, it must become the ultimate driver. Of course, the most dedicated agents may not make their programmes public. Someone with a coherent long-term plan might prefer this state of affairs to persist as long as possible, where no one can imagine a moral imperative connected with ‘outer space’ – simply as a matter of having less competition. Finally, what about Purpose and Meaning? It’s making an appearance already. However one might critique longtermism in detail, it has surely discovered a powerful human response that won’t be going away. Since Copernicus in the 1500s, humanity’s place in the Universe has been continually and relentlessly demoted by astronomy. Unfortunately, human meaning was demoted along with it. 
Wouldn’t it be intoxicating, then, to learn that the entire point of that 500-year enterprise wasn’t to show us our insignificance, after all? The real purpose, I submit, was to comprehend the scale of events that we mere mortals would be setting in motion.
Jay Olson
https://aeon.co//essays/cosmic-expansion-is-a-given-who-inherits-the-cosmos-is-not
History of science
To the detriment of the public, scientists and historians don’t engage with one another. They must begin a new dialogue
Would boycotting Russian scientists be an effective protest against the Russian invasion of Ukraine? Where do terms like ‘altruism’ come from, and what assumptions come with them? How long should research groups be allowed to embargo their data, and why? Why is the normal curve assumed to be normal for so many disparate phenomena, from the distribution of heights to the distribution of observational errors? Who should count as an author of a scientific publication? These are questions that, in the here and now, tax scientists’ judgment and shape their research. Historical perspective and understanding can illuminate these and other problems facing scientists. The problem is that the scientists and the historians have stopped talking with and listening to one another. Scientists found the thickly contextualised, sharply focused histories of now-discarded science irrelevant and indigestible. Historians bridled at the scientists’ demands for a mythologised and anachronistic version of the past. We think it’s time to restart the conversation, for the benefit of both scientists and historians. What would the scientists stand to gain? First, they would learn a lot to help them in making consequential decisions. Take the question of whether to boycott Russian science because of the invasion of Ukraine: history is rich in lessons about how effective a boycott is likely to be, as well as about the potential costs. Specifically, historical precedents suggest that a boycott is likely to significantly damage Russian science as well as serving as a statement of moral disapprobation. Given the indifference of the present Russian regime to the flourishing of domestic science, however, a boycott is unlikely to have any direct impact on the course of the present conflict. History also gives us a better understanding of the question of whose work merits recognition in the author line of a scientific article. Norms of scientific authorship have evolved constantly since the 17th century, when aristocratic anonymity (what could be more vulgar than splashing one’s good name across a work anyone with some spare change could buy?) prevailed. By the 18th century, signed, individual authorship became the norm, but all manner of other forms of authorship were and are being tried. For example, an influential mid-20th-century group of mathematicians chose the collective pseudonym Nicolas Bourbaki in order to modernise and standardise the teaching of mathematics at French universities. Today we see lists of 80-plus names on publications in high-energy physics. Historical studies show how the present norms of authorship originated in past circumstances – and past values (for example, values that elevate theory over practice, or seniority over actual contribution) – that may no longer hold. The same goes for norms concerning the open publication of data: the ongoing debates over how long scientists may embargo the results of clinical trials in medicine, or whether corporate-funded research belongs in the public domain, are only the latest episodes in the long history of who, if anyone, owns scientific data. Knowing something about these developments can be liberating as well as enlightening: not so long ago, scientists made very different decisions about similar matters. What was different in the past can change again; history can show the plasticity of things scientists sometimes presume fixed. Second, history can provide strong insight into flawed premises. 
The best-documented cases are of racial and gender biases malforming all too many studies of human difference, some with tragic consequences, as in the case of eugenics researchers who recommended policies of social exclusion, sterilisation and even genocide. In January 2023, the world’s largest body of human geneticists, the American Society of Human Genetics, apologised for the role of its past members in promoting racism, eugenics and other forms of discrimination. The board’s pledge to do better in the future was grounded in historical knowledge of its past. ‘This time of reckoning with history is overdue,’ announced the society’s president, ‘but it forms the foundation for a brighter future.’ Political prejudice is only the most obvious source of dubious assumptions. More subtle but more pervasive are the metaphors and analogies coined to capture newly discovered phenomena and newly invented ideas that, by their very novelty, stretch existing terminology. Sometimes old words, such as ‘intelligence’, which meant general quickness of understanding, are turned to new purposes, as in the case of how Alfred Binet, Lewis Terman and other 20th-century psychologists defined an ‘intelligence quotient’ (IQ), which measures specific verbal and quantitative skills. But the new, scientific applications still drag along the baggage of the old words: IQ is conflated with all intelligence, and altruism with all unselfishness, and not just by the lay public. Importantly, a metaphor that shines a spotlight on one aspect of a phenomenon – for example, ‘partnership’ to pick out the mutually beneficial aspects of symbiotic relationships among organisms – can plunge other interesting features into darkness. In this case, for example, it obscures the fact that more than two organisms may be involved and that the relationship can be both competitive and cooperative. Similarly, Richard Dawkins’s arresting metaphor of ‘the selfish gene’ helped popularise George C Williams’s gene-centric view of evolutionary change and William Hamilton’s notion of inclusive fitness. Dawkins’s work promoted an understanding of how the persistence of altruism and self-sacrificial behaviours in populations could be consistent with the core evolutionary principles of competition and the struggle for survival. But the power of the metaphor led to an overemphasis on natural selection and diverted attention away from other mechanisms of evolutionary change such as developmental constraints. Proponents of a new ‘extended evolutionary synthesis’ have persuasively argued that evolutionary thinking needs to be expanded beyond simplistic models that focus purely on genes and natural selection. Other metaphors – nature as a machine, the brain as a computer, DNA as a ‘blueprint’, to cite just a few examples – can harden into uncritical assumptions and inhibit innovative thinking. Social scientific studies suggest that scientists are often unaware of the extent to which their activities are shaped by such constrictive metaphors and analogies. A deeper understanding of key terms and metaphors that form some of the currency of scientific communication and thought can sharpen the scientist’s eye for blind spots, inaccuracies or distortions. 
In fact, the repeated and uncritical use, in word and thought, of familiar images and metaphors can easily lead those engaged in the scientific pursuit of knowledge astray, to distortion rather than illumination. The pinpoint specialisation required in scientific research, and the prestissimo tempo of contemporary science, may be efficient but they also produce myopia. In the competitive environment of scientific research, it is natural to acquire an ant’s-eye view of a landscape shaped as much by the availability of funding, professional relationships, institutional resources and serendipity as by some inexorable march toward truth. The history that does survive in research programmes straightens out meandering paths of development: every journal article begins with a review of the relevant literature, most of it recent but some of it stretching back centuries to honoured ancestors such as Johannes Kepler or Charles Darwin. These brief reviews connect the research at hand with a trajectory of past enquiry, as if extending a curve through the latest data point. That is not the story the history of science tells. Instead of the one smooth curve, there are many curves, each with multiple forking branches, some veering off at unexpected angles, and some petering out altogether. We suspect that this history, the result of research every bit as scrupulously empirical as that of the scientists themselves, is in fact closer to the lived experience of most working scientists. What the history of science can provide is a wide-angle orientation that helps scientists see a bigger picture, including why they’re studying what they’re studying – and what the alternatives might be. How would the historians benefit from renewed lines of communication? Historians, too, suffer from specialisation and the pressure to publish. They are swept up in the pursuit of the latest hot research topic to the exclusion of all else. Writing for an audience of scientists would force historians of science to look up and outwards from trends within their discipline. It would also oblige them to forge new kinds of narratives. It is one thing to reject a teleological plot line, in which past science inevitably and triumphantly culminates in present doctrines. It is quite another to invent an alternative narrative that captures the drama of how science actually advances while remaining true to the fundamentally unpredictable path of empirical research. With a few notable exceptions, historians of science have largely ducked this challenge. Writing for scientists – not to mention the broader public – would force them to confront it. And not just in writing. Some of the most compelling recent accounts of key episodes in the history of science have been television series and films – see, for example, Light Fantastic (2004) and Black Holes: The Edge of All We Know (2020), respectively. Historians would also benefit from a deeper and more supple understanding of their most fundamental concept: context. The relevant context for, say, political historians may not be best suited for historians of science. More discussions with scientists might heighten historians’ awareness of the cosmopolitan character of both past and present science. Nationalist geographies and chronologies frame most forms of history, and the history of science is no exception. Since the rise of the nation-state in the 19th century, most historians have tailored their specialties to fit this political framework. 
But these nation-centric frameworks are a bad fit for the scientific ideas and practices of almost any epoch, which traversed many cultures and languages, and in all directions. This is not just the case for jet-age science. Studies of premodern science and scholarship in many parts of the world reveal the remarkable mobility of people and ideas across seas and continents. Framing science within the history of a single nation-state – or, for that matter, within a single culture, language or religion – is as misleading for the 13th century as it is for the 21st. Historians of all stripes stand to benefit from the cosmopolitan perspective that has long been integral to science. Historians insist that what the present takes for granted should not be projected onto the past. But it is also true that hindsight can be powerful, and that present experience does shine a new light onto parts of the past that historians have neglected. Occasionally, lighting up dark corners can transform a familiar historical landscape. The social movements of the latter half of the 20th century inspired new kinds of history: workers’ history, women’s history and the history of marginalised minorities. This extraordinary body of scholarship has, in many cases, brought us more vital and sophisticated understandings of political revolutions, industrialisation and imperialism, to name only a few examples. Similarly, historians of science have a lot to learn from the present experience of scientists – and not because any easy analogy can be drawn from past to present science (such analogies are usually superficial). Rather, what’s happening now can pose new questions to the past. For example, the methods of big data have alerted historians to the historical collection of data troves, from the astrometeorological observations of ancient Babylonia (some still cited in NASA’s Five Millennium Canon of Lunar Eclipses: -1999 to +3000) to the weather diaries of 19th-century ships’ captains (a valuable source for tracking climate change). Conversely, historians investigating the impact of climate episodes such as the Little Ice Age, which lasted from about 1300 to 1850, have greatly benefited from scientific data such as the width of tree rings, which can be reliably correlated with annual average temperature. Consider, too, the Ordered Universe project, a collaborative endeavour involving scientists and historians (along with philosophers, artists, educationalists and others). The project focuses on the remarkable scientific writings of the polymath Robert Grosseteste (c1170-1253). One of its accomplishments has been to show us how modern physics can assist with the interpretation of medieval works on optics and experimental methods. Thinking about past science in terms of present science can sometimes enlarge understanding, not just distort it. Since conversations between scientists and historians of science would benefit both, why are they so rare? History provides some insights. For almost two centuries, the increasing specialisation of the sciences has militated against the flow of information between the sciences and the academic humanities (and, indeed, among the sciences themselves). As early as 1834, the polymath scientist-historian William Whewell bemoaned the ‘division of the soil of science into infinitely small allotments’. Science, he feared, was ‘a great empire falling to pieces’. 
This dismal prospect prompted Whewell to coin the term ‘scientist’ in the hope of bestowing a semblance of collegial unity on fragmenting disciplines. Overall, the new appellation has been an outstanding success. ‘Scientist’ is now an immediately recognisable designation. It conveys the impression of an authoritative community of professionals committed to a proven and reliable ‘scientific method’. The profession of the scientist, moreover, is a relatively prestigious one and is consistently ranked by the general public as among the most trustworthy. But it is significant that the label ‘scientist’ initially struggled to gain acceptance among those whom it was intended to unite. Some thought it a crass Americanism. To others, it was redolent of ‘dentist’ – someone who is paid for the performance of uncongenial labours. For much of the 19th century, the preferred descriptors continued to be ‘philosopher’ or ‘naturalist’. These terms reflected the older disciplines of natural philosophy and natural history, which then constituted much of what we now refer to as the natural sciences. In the Anglophone world, the eventual success of the designation ‘scientist’ led to an effacing of the origins of the older disciplinary configurations in which core scientific activities were conducted as branches of philosophy and history. In a related development, there emerged the accompanying idea of ‘the scientific method’, understood as a singular and systematic approach to knowledge-making that distinguished genuine science from other ways of knowing. In all of this, the word ‘science’, which had once encompassed all legitimate forms of knowledge, came to refer only to the natural sciences (again, in the Anglophone world at least – other languages, such as German, still use their cognate terms for ‘science’ to refer to all forms of systematic knowledge). In the latter half of the 20th century, after English became the lingua franca of science, the narrowed meanings of the words ‘science’ and ‘scientist’, along with the conception of a distinctive and unified ‘scientific method’, helped establish an enduring science/humanities division, and began the reversal of a balance of cultural prestige that had once strongly favoured humanistic disciplines. The division hardened further and became acrimonious in the 1960s and ’70s. The philosophers Thomas Kuhn and Paul Feyerabend challenged the self-perceptions of many scientists, who understood them to be impugning the objectivity and rationality of science. In the 1990s, the ‘science wars’ broke out, in which scientists such as the physicist Steven Weinberg accused historians of science and science studies scholars of ignorantly and maliciously undermining scientific authority (and, with it, public support for big scientific projects). Historians countered that their accusers were too arrogant to learn history and accept legitimate criticism. These episodes left deep scars on both sides. After a decade of bitter exchanges between scientists who felt science was under attack and scholars who felt scientists had no authority to talk about history, an uneasy detente was bought at the price of mutual non-interference, which subsequently became mutual indifference. Like most people, scientists are interested in history. Yet they often prefer to read triumphal works produced by fellow scientists rather than scholarship by professional historians. 
To the chagrin of historians, popular interest in the history of science often resembles the kind preferred by scientists, with celebrity scientists reproducing digestible myths and memes that offer entertaining and engaging stories about the past. These popular histories are replete with heroes and villains and offer a simple narrative arc of truth vanquishing error and ignorance. These one-dimensional accounts often promote public misconceptions about how science really works. Equally importantly, among the scientific community they can also erode the self-critical spirit that is so central to the success of scientific endeavours, replacing it with a self-congratulatory one. Worse, they can suggest that what scientific textbooks currently teach are eternal truths – a position difficult to reconcile with belief in scientific progress and how history works. Such a rigid view will undermine public confidence in scientific authority when new evidence prompts scientists to change their minds. If there is one lesson to be drawn from the history of science, it is caution about smug complacency. Science doesn’t stand still long enough to rest on its laurels. Its triumphs are at once very real and never the last word. To begin again a fruitful conversation, both historians and scientists will have to overcome some misgivings. Historians would have to surmount fears that learning from and writing for scientists would compromise their critical edge and independence as scholars. Scientists would have to overcome their fear that non-triumphalist history is somehow, by definition, hostile to science. Both sides’ fears are exaggerated – and counterproductive. Historical scholars and scientists should recognise the need to make common cause against political forces that would discredit all research-based expertise, whether scientific or historical. Aside from what benefits scientists and historians may derive by restarting the conversation, the public also stands to gain. The COVID-19 pandemic showed us all that neither scientists nor historians have done a particularly good job of explaining how science works: that fierce controversy over how to make sense of empirical results is a feature, not a bug; that while there is no central authority to decide who’s right and who’s wrong, some sources of information are more reliable than others; that views can evolve quickly as research accelerates. A perplexed public expecting scientists to deliver eternal truths that could guide policy and conduct during a dangerous pandemic was understandably confused and disappointed when those truths seemed in flux. Many scientists seemed hard put to reconcile their commitment to both the permanence of scientific truth and the desirability of scientific progress, which brings all such truths under scrutiny and revision. A better dialogue between science and history could have contributed to improved public understanding. Historians and scientists need one another to reconcile and communicate how the practice of science is always fragile, sometimes chaotic, but also astonishingly successful. What scientists and historians share is a deep commitment to empirical enquiry. Scientists may claim that they’re the experts on what science is and how it’s done in the present, and historians may counter that they’re the experts on what science was and how it’s been done in the past. 
Both sides often regard the perspective of the other as irrelevant. Yet science-present and science-past have lessons to teach each other. Past science shows that current arrangements are neither inevitable nor necessarily optimal; present science shows how novel ideas and practices emerge in real time. Might this be the occasion to think about how science is done with the same empirical rigour that both historians and scientists bring to their own specialties? That means including both the past and present practice of science; here, historians and scientists can make common cause. There are some hopeful signs that such conversations are being restarted: historians who spend time in the lab or in the field as well as the archive and the library; scientists who inform themselves about how their field has grappled with past challenges in order to seek guidance in current dilemmas, both intellectual and ethical. Whether the resulting conversations will be worth the effort required of both parties to take part in them is admittedly a wager – especially when both scientists and historians of science have plenty to occupy themselves with in their own thriving disciplines. But the stakes are high for both sides: nothing less than a deeper understanding of how science has changed – and is still changing.
Lorraine Daston & Peter Harrison
https://aeon.co//essays/science-and-history-cannot-afford-to-be-indifferent-to-each-other
Religion
Once a centre of Afghan culture, Sufism seems to have disappeared in the maelstrom of war and upheaval. But still it survives
My introduction to the world of Afghanistan’s Sufism began in 2015, over lunch with my friend Rohullah, the director of a research institute in Kabul. I had been working in Afghanistan in various sectors, from government to nongovernmental jobs, and had returned to explore topics for a PhD that I had embarked on a year earlier. I asked what had happened to Afghanistan’s Sufis. Were they all gone? Afghanistan had, after all, once been the cradle of mystic interpretations of Islam, the place of origin of Mawlana Jalaluddin Balkhi, known in the West as Rumi. Had the Sufis disappeared in the exodus precipitated by successive wars that had engulfed Afghanistan since the late 1970s? Or had they been replaced by more radical and austere forms of Islam, as some analysts speculated? Rohullah laughed. ‘They are still here,’ he said. ‘You foreigners just don’t ask about them. All you care about is gender, counter-insurgency and nation-building.’ Any cursory look through titles in bookstores or newspaper headlines on Afghanistan substantiated Rohullah’s insight: Western policymakers, journalists and most researchers tended to nurture the kinds of knowledge about Afghanistan that informed policy, and for that purpose Sufis were not particularly useful. But even when searching regionally for literature on Afghanistan’s Sufis, all I could find were texts on the historical prevalence and importance of Sufis, though nothing about their present-day lives and struggles. On occasion, Sufism still burst onto the public stage, for instance in 2016 when Iran and Turkey tried to claim the Masnawi Ma’navi, Rumi’s magnum opus, as their joint cultural heritage (the poet died in Konya, in present-day Turkey, in 1273 – and wrote in Persian, a language spoken in both Iran and Afghanistan). Western scholars and pundits barely took notice but, in Afghanistan, public intellectuals such as the poet laureate and Sufi poetry teacher Haidari Wujodi argued that ‘Maulana belongs to present-day Afghanistan and yesterday’s Khorasan. It is the responsibility of the Afghan government to take swift action about it to protect our heritage.’ An online petition decried the attempt to lay claim to Afghanistan’s cultural legacy while the Ministry of Foreign Affairs held talks with UNESCO over the perceived slight. And Atta Mohammad Noor, the then governor of the northern province of Balkh where Mawlana’s family originated, penned a letter to the UN condemning Iran and Turkey’s ‘imperialistic’ attempts to appropriate Rumi and disregard Balkh as the esteemed poet’s ‘motherland’. This ‘diplomatic frenzy’, as Radio Free Europe/Radio Liberty called it, revealed Afghan pride in Sufism, and showed that it still has the power to spark intense debate. Sufis in Afghanistan never really fit into Western narratives about the Taliban or the war and occupation. So, Sufism was ignored. The chaotic US military evacuation in 2021 and the sweeping Taliban takeover, with all the scenes of suffering and human rights abuses that followed, have made it even more difficult to imagine an Afghanistan where Sufi scholars debate the finer points of Islamic ontology and poets ruminate on the infinite ways to lose oneself in the beauty of God’s creation. It requires a real stretch to remember that Sufism, in its multifaceted incarnations, has been a central thread in the tapestry of Afghanistan’s historical, artistic, educational and political life. 
Sufi traditions were once so influential in royal courts that kings extended patronage to poets and Islamic figurative artists who illuminated manuscripts, weaving Sufi literary motifs into exquisite paintings. Some historians, such as Waleed Ziad, even go so far as to say that Sufi orders that were firmly rooted in what later became Afghanistan built their own ‘hidden caliphate’, creating networks throughout the Middle East, Central and South Asia. These chapters remind us that Afghanistan’s history transcends the geopolitical tumult of the present, tracing back to a rich heritage of spiritual and artistic expression. The history of centres of Sufi learning, such as the Pahlawan Sufi lodge in an old part of Kabul, starts in this different time: in the 18th century, the capital shifted from southern Kandahar to the mountain-crested city of Kabul, a migration that ushered in a wave of cultural and spiritual transformation. Among those embarking on this northward journey were a man named Sufi Sher Mohammad and his son Mir Mohammad. Sufi Sher earned the sobriquet of Pahlawan, or ‘wrestler’, a testament to his reputed superhuman fighting prowess. It was also a name that honoured him for fighting on behalf of the powerless. In the heart of Kabul, they built the Khanaqah Pahlawan, or the Lodge of the Wrestler, in a district fittingly named Asheqan-o-Arefan, a place where lovers and mystics, the seekers of gnosis, congregated in their pursuit of divine wisdom. Here, seekers assembled for weekly meditative zikr (literally, ‘remembrance of God’) rituals and spiritual advancement through reading and learning. In the modern era, Sufism continued to play a central role in Islamic thought and practice in Afghanistan until at least the last quarter of the 20th century. Sufi poetry was not a fringe phenomenon but a mainstream approach to teaching Islam in Afghanistan’s madrasas. Alongside the Quran and Hadith, students learned poetic exegeses based on the compilations of Rumi, Saadi and Hafiz. ‘In the past, there was oral knowledge on how to understand, recite and sing poetry,’ an Afghan friend told me. ‘Until the Soviet time, [in addition to the Quran,] the mosques were also teaching poetry, through collections such as Panj Ganj … Now there is only learning by heart, no analysis.’ The Soviet occupation of Afghanistan, my friend pointed out, was a time of radical change and violence on multiple levels. Fighting and destruction sent many Afghans into neighbouring countries, while ideas about what constituted Islamic authority shifted during the jihad, fought against the backdrop of the Cold War. Khanaqah Pahlawan’s spiritual lineage stretches back centuries but the structure of the Kabul lodge itself bears the scars of its journey through Afghanistan’s recent history. During one visit in 2018, Haji Tamim, the custodian, told me: ‘We had to rebuild the roof and upper floor two times,’ explaining how they were hit by rockets that had shaken the lodge’s foundations. ‘Then the mujahidin came,’ he continued. ‘They looted and burned everything that was in here. They took out all the dishes and all the stuff from the mosque [on the first floor] and from the khanaqah [lodge]. They took even the carpet from the mosque!’ In the era of the civil war, when various mujahidin factions fought each other, the khanaqah often found itself on the precipice of violence, its serenity disrupted by war. 
The Sufi community sometimes chose to congregate instead in a mosque in another part of Kabul, where they continued their zikr sessions and spiritual studies. When they were finally able to return in the 1990s, the community came together to repair the damaged khanaqah. An extensive network of students and regular visitors pitched in financially and with labour to reconstruct the building. Sufis in Kabul use the khanaqah for meetings and celebrations, for rituals, and as a community space for studying poetry, hagiographic compendia and philosophy. Without any state support, Sufi religious networks coalesced, repaired and rejuvenated as best they could. As I walked through the principal congregational chamber on the second floor, an elongated rectangular space adorned with richly patterned red carpets, illuminated by a cluster of chandeliers, Haji Tamim led me to a dark-blue metal cabinet tucked away in the room’s corner. He unlocked the cabinet and, with reverence, began retrieving a collection of relics. The first, a wooden walking stick, had once been the steadfast companion of Pahlawan Sahib, the founder of the khanaqah, more than two centuries earlier. As Haji Tamim cradled the staff, he told the history of each item. They included a cap that had belonged to Haji Ahmad Jan, a respected teacher, whose fate was bound to the tumultuous era of Hafizullah Amin, when the Communist coup of 1978 set in motion a harrowing, year-long campaign of ideological cleansing to assert control over religious education. In this brief yet catastrophic period, the estimated tally of the disappeared ranged between 50,000 and 100,000. Intellectuals who dared to critique the government, liberal thinkers, Maoists, religious scholars as well as those arbitrarily swept up in the purges found themselves ensnared in a web of persecutions. Even the devoted disciples and revered teachers of Sufi orders were not spared this repression, Haji Tamim recounted, his voice lowering. ‘Haji Ahmad Jan was the one leading the khanaqah. They came and dragged him outside and arrested him. When they manhandled him, he lost his cap. It fell to the floor. He never came back.’ The persecution of religious teachers by the People’s Democratic Party of Afghanistan (PDPA) ultimately led to the most enduring transformations, including unlikely alliances that would guarantee the safety of the lodge and its members. These cherished relics symbolise not only the foundation of the khanaqah but also a turning point, marked by the Communist regime’s oppression, which forced the family that had been its steadfast guardians into exile. The teacher was arrested, and so were other members of the Pahlawan family, who were detained for several years. At the time, one could never know whether an arrest would lead to an eventual release or disappearance. The Pahlawan family made the decision to leave Afghanistan for good – first to Pakistan, then India, before settling in the United States and Germany. This could have been the end of the Sufi lodge, its leadership starting new lives abroad, students dispersing to other places of learning or giving up their path altogether. But the family struck a deal with a quiet, unassuming mullah from another part of town: he would become the pir of the order, guarding the lodge and leading the community. 
Thus began the leadership of Haji Saiqal, the unlikely leader of Kabul’s Pahlawan Sufi community. When Haji Saiqal went from the threshold of his mosque out into the streets in Kabul’s Microrayon district, dashing first through wide boulevards and turning into winding alleyways on his way to the reverent confines of the Khanaqah Pahlawan, he crossed multiple spaces and boundaries. At the mosque, the plainly dressed old man with his well-groomed white beard, signature flawless pirhan tumban and a modest turban on his balding head was the keeper of the Law, the imam who, five times a day, led prayers for a neighbourhood of believers. On Fridays, he delivered a sermon expounding the message of the Quran and the Hadiths. At the lodge of the Pahlawan Sufi community, Haji Saiqal was the keeper of a place of spiritual knowledge that his followers believed brought them closer to God’s divine presence. Moving from one role to the other, from mullah to Sufi guide (pir) and back again, was as much a spiritual transition as a physical one, perhaps more so. This double role was also almost unheard of in the reporting on Afghanistan. In recent times, mullahs have become, perhaps unfairly, a disreputable class of Islamic leader, in both the East and the West. In its most basic sense, a mullah is an educated Muslim trained in Islamic theology and sacred law, holding an official post in a mosque as an imam. But this term embodies a wide spectrum of attributes, from esteemed community leader to rigid dogmatist to bumbling object of ridicule. Mullahs are believed to hold the potential to rouse fervent crowds or even frenzied mobs, particularly when their Friday sermons delve into politically charged terrain. Since the Taliban takeover of Afghanistan in 2021, and its theocratic precursor in Iran in 1979, the geopolitical influence that ruling mullahs can wield has been a cause for both regional concern and strategic interest. But they can also be the butt of jokes, as with Mullah Nasruddin – a satirical character in the trope of the wise fool, well known in regional folklore from the Balkans to China; at times witty, at other times wise, he dispenses pedagogical humour that criticises the powerful and humbles the listener. Regardless of where they fall on the spectrum – whether respected, reviled or ridiculed – mullahs are often portrayed as the antithesis of Sufis. Yet in Afghanistan, supposedly the embodiment of all that is wrong with ‘mullah Islam’, there was Haji Saiqal, occupying both roles with relative ease. How was it possible that a mullah, putatively antagonistic to Sufi thought and practice, could become a Sufi leader, the head of a revered and storied khanaqah in the heart of Kabul, taking on the mantle of both keeper of esoteric knowledge and protector of the Pahlawan Sufi community? For historians of Islam, Haji Saiqal’s dual position is not so surprising. Many traditional scholars (ulama) throughout history have simultaneously inhabited the roles of legal expert and Sufi thinker, leader and guide, including al-Ghazali, Abdullah Ansari and Rumi himself. However, at the time when Haji Saiqal was chosen as leader, the changes during Afghanistan’s civil war widened a conceptual rift between what many perceive as a Sufi Islam and a starkly contrasting, legalistic ‘mullah Islam’, a rift that remains to the present day. 
The rift has its origins in colonial and Orientalist literature, which divided Islam between a perceived legalistic Islam in contrast to mystic Sufism as an individual, liberal pursuit. One example of this division is the writing of the early colonial envoy Mountstuart Elphinstone (1779-1859), who describes three categories of religious functionaries: the ‘moollahs’, the ‘holy men’ (sayyids, dervishes, faqirs and qalandars) and the ‘Soofees’, whom he considers a minority sect of philosophers. Setting aside the misrepresentation of Sufism as a sect, Elphinstone saw mullahs and Sufis as diametrically opposed enemies in the religious field. Sufism and Islam were separated and located within different roles: the alim who studies the Islamic sciences, in contrast with the Sufi who sees beyond them. Ignoring the reality of a dual orientation of scholar and mystic in a single person, Sufism and Islam were separated and located within different – and antagonistic – personas. Not only was Islam split in two (legalistic vs mystic), but Sufism was also divided: Sufism as philosophy – the high art and literature of mystic poetry – in contrast to living, contemporary Sufi pirs who were often seen as flawed, or even charlatans. As the anthropologist Katherine Ewing sketched out in 2020 in her overview of the politics of representing Sufism, the living ‘holy men’ were studied and carefully managed by colonial administrators. In contrast, Sufi mystic poetry and literature were to be deciphered by Orientalist scholars. Rather than seeing these various forms as belonging to a varied spectrum of belief, they were located in mutually exclusive roles and personas. These conceptual splits also played a part in the allocation of religious authority during the decades of war in Afghanistan. Before the onset of the conflict, traditional claims to religious authority were based on religious knowledge, clerical training or Sufi lineages. The problem for Islamist party leaders who rose to prominence during the anti-Soviet jihad was that they lacked all of these credentials. Islamism developed in Afghanistan’s urban university milieu in the 1950s and ’60s, and most leaders of Afghanistan’s emerging Islamist parties, all based across the border in Peshawar, were university-educated men with no traditional religious training or pedigree. Instead, they legitimised their claims to leadership with the fact that they were the first to initiate jihad against the PDPA government in Kabul and had access to weapons and money through the assistance of Pakistan and other foreign powers, including the US and Saudi Arabia. In an environment of both raw destruction and more fine-grained societal change, in which the external performance of piety was linked either to a position within the war as a mujahidin or as a recognisable authority through title and position, Haji Saiqal proved to be the right man for the moment in two key ways: first, his position and training as a mullah; and, second, his personal pragmatism in dealing with expectations of powerbrokers. His position as a low-level cleric made him recognisable to mujahidin commanders and Taliban officials as a respectable, though nonthreatening, conservative religious scholar, someone whose official position in his mosque they recognised and whose rank would mark him out in a way as ‘one of them’ – a rightful member of religiously legitimated authority. 
He could face officials when they came for visits to check what was going on at the khanaqah, and he could present an image of respectability by asserting that ritual practices were situated within the strictures of Islamic law. The neighbourhood mosque that Haji Saiqal led in the Soviet-built neighbourhood of Microrayon seemed to be a physical manifestation of this adeptness at social camouflage. The simple concrete building, rectangular walls, empty halls and plain red carpets were a far cry from the dazzling tiles, arches and impressively constructed domes of Islamic architecture in Central Asia and the Persianate world. I had somehow expected a more outwardly beautified place as the seat of a Sufi leader. But, here, Haji Saiqal did not wear that mantle, donning instead the garb of a humble neighbourhood mullah. The mosque, it turned out, was a repurposed depot and distribution centre where Afghans once came to redeem their food stamps during the PDPA government in the late 1970s and '80s. Later, it became one of an estimated 94,000 unregistered mosques in Afghanistan. The environment that Haji Saiqal had chosen as his base for teaching and preaching was inconspicuous – one mosque among many, one mullah among hundreds. The choice of Haji Saiqal as leader of the Pahlawan community was a stroke of navigational genius. The powerbrokers who took control of Kabul in the 1990s – whether mujahidin or later Taliban – were focused on the outward compliance of conduct and representative titles that met their expectations for religious credentials; Haji Saiqal checked all of those boxes. For the Sufi family of the Pahlawan lodge and their followers, however, he was chosen for his character and deeds. They had seen him growing up, from the time when he was a young boy who sometimes joined his father on his visits to the Khanaqah Pahlawan for zikr. This knowledge of Haji Saiqal's inner state trumped his outward credentials when the community decided to whom to entrust the future of the khanaqah. For his part, Haji Saiqal demonstrated a canny ability to manage the volatile environment. He could, when needed, appeal to the Taliban's morality police from the Ministry for the Promotion of Virtue and the Prevention of Vice with his deep knowledge of Sharia. He could just as expertly minister to the needs of the Sufi community. When varying ministers made moves to shut down the Sufi lodge, he drew on his network of madrasa students and their connections to various Taliban officials to keep the doors of the khanaqah open. He led the community into the 21st century, caring for the modernisation of the Sufi lodge over the following two decades under the coalition governments, until new changes within the governmental set-up were afoot. When I last visited the Sufi lodge in the winter of 2022, the Taliban had not only taken over Afghanistan, but had also closed all Sufi lodges nationwide after a bomb had struck another Sufi lodge in Kabul in April – in the same place where Haji Saiqal had originally received his ijaza (authorisation for transmitting knowledge). Not only were the lodges closed but so too were religious foundations in which Sufi scholars were teaching weekly Masnawi classes. The official reason was the same in all instances: the danger of attacks (presumably by the Islamic State's Afghanistan affiliate, although none of the attacks on Sufi places had been officially claimed by them).
One of the Sufi alims in Kabul opined that the Taliban had used the attack as a convenient excuse to close the lodges because they were in reality against Sufism, arguing that, if the Taliban had been concerned for the wellbeing of Sufi affiliates, they would have given the lodges additional security personnel rather than completely shutting them down. After all, why would they want to shut down a place that offered support, spiritual edification, a warm meal and tea, all the manifestation of community self-help at a time when Afghanistan was hard hit by an economic depression and many families were sliding into poverty? Haji Saiqal would not see these changes – he passed away from a tumour two years before the Taliban took over. Just like in years past, internal transitions within the lodge took place alongside the more overt political changes within Afghanistan. After many deliberations within the community both in Afghanistan and its diaspora, the calm seller of mobile phone cables Haji Tamim, who had been the guardian of the Sufi lodge for decades together with Haji Saiqal, took on the leadership. The story of how Haji Saiqal and Haji Tamim cared for the Sufi lodge in old Kabul is only one of many. Once we shift our gaze from the capital to other cities, from Kandahar to Herat, Bamiyan to Badakhshan, we find others, maybe not a mullah and a mobile phone-cable seller, maybe this time calligraphers and booksellers, university professors and shopkeepers, who hide books, rebuild community centres and shrines, or who argue with authorities. As the places and persons change, so do their adaptive strategies in dealing with violence and repression. What stays the same is their lives within a centuries-long history of Sufis in Afghanistan, immersed in literature, art, belief, philosophy and worship. Following the Sufi lodge’s trajectory backward in time, through Afghanistan’s recent history of war and instability and the Pahlawan community’s struggles to sustain its traditions, leads us to a place where we begin to see Afghans very differently, not as victims in need of saving but as active agents in preserving Afghanistan’s rich and varied cultural heritage. From this perspective, Haji Saiqal, the mullah and the pir, becomes a symbol of the creative adaptation – an ethos that his successor has taken on as well. Haji Tamim shrugged when I asked him about the lodge’s closure. ‘The khanaqahs have been here before I was born, and they will exist long after we are gone.’ In his view, governments came and went, but Sufi groups endured – sometimes by simply outliving them, sometimes through engagement and clever navigation. Governments or rulers and their laws could change, but Sufis would not stop gathering.
Annika Schmeding
https://aeon.co//essays/sufi-transitions-between-mullahs-and-sufis-in-afghanistan
https://images.aeonmedia…y=75&format=auto
Thinkers and theories
The intrepid logician Kurt Gödel believed in the afterlife. In four heartfelt letters to his mother he explained why
As the foremost logician of the 20th century, Kurt Gödel is well known for his incompleteness theorems and contributions to set theory, the publication of which changed the course of mathematics, logic and computer science. When he was awarded the Albert Einstein Prize to recognise these achievements in 1951, the mathematician John von Neumann gave a speech in which he described Gödel's achievements in logic and mathematics as so momentous that they will 'remain visible far in space and time'. By contrast, his philosophical and religious views remain all but hidden from view. Gödel was private about these, publishing nothing on this subject during his lifetime. And while scholars have grappled with his ontological proof of God's existence, which he circulated among friends towards the end of his life, other tenets of his belief system have received no significant discussion. One of these is Gödel's belief that we survive death. Why did he believe in an afterlife? What argument did he find persuasive? It turns out that a relatively full answer to these questions is buried in four lengthy letters written in 1961 to his mother, Marianne Gödel, in which he makes the case that they are destined to meet again in the hereafter.
Kurt and Marianne Gödel pictured together in 1964. Courtesy the Vienna City Library
Before exploring Gödel's views on the afterlife, I want to recognise his mother as the silent heroine of the story. Although most of Gödel's letters are publicly accessible via the digital archives of the Wienbibliothek im Rathaus (Vienna City Library), none of his mother's letters are known to have survived. We possess only his side of their conversation, left to infer what she said from his replies. This creates a mystique when reading his letters, as if one were provided a Platonic dialogue with all the lines removed, except for those uttered by Socrates. Although we lack her own words, we owe a debt of gratitude to Marianne Gödel. For, without her curiosity and independence of thought, we would have one less resource in understanding her famous son's philosophy. Thanks to Marianne's direct question about Gödel's belief in an afterlife, we get his mature views on the matter. She asked him about it in 1961, a time when he was in top intellectual form and thinking extensively about philosophical topics at the Institute for Advanced Study (IAS) in Princeton, New Jersey, where he had been a full professor since 1953 and a permanent member since 1946. The nature of the exchange compelled Gödel to detail his views in a thorough and accessible manner. As a result, we have (with some supplementation) the equivalent of Gödel's full argument for belief in an afterlife, intentionally aimed at comprehensively satisfying his mother's questions, which appear in the series of letters to Marianne from July through to October 1961. While Gödel's unpublished philosophical notebooks present a space in which he actively worked out views and experimented through often gnomic aphorisms and remarks, Gödel wanted these letters to be understandable and to provide a definitive answer to an earnest enquiry. And because the correspondence was private, he did not feel the need to hide his true views, which he might have done in more formal academic settings and among his colleagues at the IAS.
Albert Einstein and Kurt Gödel photographed at the IAS by the economist Oskar Morgenstern, c1948. Morgenstern recounted how Einstein confided that his 'own work no longer meant much, that he came to the Institute merely … to have the privilege of walking home with Gödel'. Photo courtesy the Shelby White and Leon Levy Archives Center, IAS, Princeton, NJ, USA
In a letter dated 23 July 1961, Gödel writes: 'In your previous letter you pose the challenging question of whether I believe in a Wiedersehen.' Wiedersehen means 'to see again'. Rather than the more philosophically formal terms of 'immortality' or 'afterlife', this term lends the exchange an intimate quality. After emigrating from Austria to the United States in 1940, Gödel never returned to Europe, forcing his mother and brother to take the initiative to visit him, which they first did in 1958. As a result, one can intuit here what must have been a deep longing for lasting reunification on his mother's behalf, wondering if she would ever have a meaningful amount of time with her son again. Gödel's answer to her question is unwaveringly affirmative. His rationale for belief in an afterlife is this: If the world is rationally organised and has meaning, then it must be the case. For what sort of a meaning would it have to bring about a being (the human being) with such a wide field of possibilities for personal development and relationships to others, only then to let him achieve not even 1/1,000th of it? He deepens the rhetorical question at the end with the metaphor of someone who lays the foundation for a house only to walk away from the project and let it waste away. Gödel thinks such waste is impossible since the world, he insists, gives us good reason to consider it to be shot through with order and meaning. Hence, a human being who can achieve only partial fulfilment in a lifetime must seek rational validation for this deficiency in a future world, one in which our potential manifests. Before moving on, it is good to pause and capture Gödel's argument in a nutshell. Assuming that the world is rationally organised, human life – as embedded in the world – ought to possess the same rational structure. We have grounds for assuming that the world is rationally organised. Yet human life is irrationally structured. It is constituted by a great potential but it never fully expresses this potential in a lifetime. Hence, each of us must realise our full potential in a future world. Reason demands it. Let's linger first on a key premise of the argument, namely, the claim that the world and human life, as part of it, display a rational order. While not an uncommon position to hold in the history of philosophy, it can often seem difficult to square with what we observe. Even if we are a rational species, human history often belies this fact. The first half of 1961 – permeating the background of Gödel's awareness – was filled with rising Cold War tensions, violence aimed at nonviolent protestors during the civil rights movement, and random suffering such as the loss of the entire US figure-skating team in a plane crash. Folly and unreason in human events seem the historical rule rather than the exception. As Shakespeare's King Lear tells Gloucester when expounding on 'how this world goes', the conclusion seems to be: 'When we are born, we cry that we are come to this great stage of fools.' It would be a mistake, however, to think that Gödel was naive in his insistence that the world is rational.
At the end of a letter dated 16 January 1956, he asserts that 'This is a strange world.' And his discussions in his correspondence with his mother show that he was up to speed on political topics and world events. Throughout his letters, his opinions are informed and critical, albeit imbued with optimism. What is tantalising, and perhaps unique, about his argument for an afterlife is the fact that it actually depends on the inevitable irrationality of human life in an otherwise reason-imbued world. It is precisely the ubiquity of human suffering and our inevitable failures that gave Gödel his certainty that this world cannot be the end of us. As he neatly summarises in the fourth letter to his mother: What I name a theological Weltanschauung is the view that the world and everything in it has meaning and reason, and indeed a good and indubitable meaning. From this it follows immediately that our earthly existence – since it as such has at most a very doubtful meaning – can be a means to an end for another existence. The very fact that our lives consist of unfulfilled or spoiled potential makes him confident that this lifetime is but a staging ground for things to come. But, again, that is only if the world is rationally structured. If humanity and its history do not display rational order, why believe the world is rational? The reasons that he gives to his mother in the letters display his rationalist proclivities and belief that natural science presupposes that intelligibility is fundamental to reality. As he writes in his letter dated 23 July 1961: Does one have a reason to assume that the world is rationally organised? I think so. For it is absolutely not chaotic and arbitrary, rather – as natural science demonstrates – there reigns in everything the greatest regularity and order. Order is, indeed, a form of rationality. Gödel thinks that rationality is evident in the world through the deep structure of reality. Science as a method demonstrates this through its validated assumption that intelligible order is discoverable in the world, facts are verifiable through repeatable experiments, and theories obtain in their respective domains regardless of where and when one tests them. In the letter from 6 October 1961, Gödel expounds his position: 'The idea that everything in the world has meaning is, by the way, the exact analogue of the principle that everything has a cause on which the whole of science is based.' Gödel – just like Gottfried Wilhelm Leibniz, whom he idolised – believed that everything in the world has a reason for its being so and not otherwise (in philosophical jargon: it accords with the principle of sufficient reason). As Leibniz puts it poetically in his Principles of Nature and Grace, Based on Reason (1714): '[T]he present is pregnant with the future; the future can be read in the past; the distant is expressed in the proximate.' When seeking meaning, we find that the world is legible to us. And when paying attention, we find patterns of regularity that allow us to predict the future. For Gödel, reason was evident in the world because this order is discoverable. Although unmentioned, his belief in an afterlife is also imbricated with the results from his incompleteness theorems and related thoughts on the foundation of mathematics.
Gödel believed the world's deep, rational structure and the soul's postmortem existence depend on the falsity of materialism, the philosophical view that all truth is necessarily determined by physical facts. In an unpublished paper from around 1961, Gödel asserts that 'materialism is inclined to regard the world as an unordered and therefore meaningless heap of atoms.' It follows too from materialism that anything without grounding in physical facts must be without meaning and reality. Hence, an immaterial soul could not count as possessing any real meaning. Gödel continues: 'In addition, death appears to [materialism] to be final and complete annihilation.' So materialism contradicts both the claim that reality is constituted by an overarching system of meaning and the existence of a soul irreducible to physical matter. Despite living in a materialist age, Gödel was convinced that materialism was false, and thought further that his incompleteness theorems showed it to be highly unlikely. The incompleteness theorems proved (in broad strokes) that, for any consistent formal system (mathematical or logical) powerful enough to express basic arithmetic, there will be truths that cannot be demonstrated within the system by its own axioms and rules of inference. Hence any such consistent system will inevitably be incomplete. There will always be certain truths in the system that require, as Gödel put it, 'some methods of proof that transcend the system.' Through his proof, he established by mathematically unquestionable standards that mathematics itself is infinite and new discoveries will always be possible. It is this result that shook the mathematical community to its core. In one fell swoop, it terminated a central goal of many 20th-century mathematicians inspired by David Hilbert, who sought to derive every mathematical truth from a finite system of axioms and to prove that system consistent. Gödel showed that no formal mathematical system could ever do so or prove definitively by its own standards that it was free of contradiction. And insights discovered about these systems – for instance, that certain problems are truly non-demonstrable within them – are evident to us through reasoning. From this, Gödel concluded that the human mind transcends any finite formal system of axioms and rules of inference. Regarding the incompleteness theorems' philosophical implications, Gödel thought the results presented an either/or dilemma (articulated in the Gibbs Lecture of 1951). Either one accepts that the 'human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine', from which it follows that the human mind is irreducible to the brain, which 'to all appearances is a finite machine with a finite number of parts, namely, the neurons and their connections.' Or one assumes that there are certain mathematical problems of the sort employed in his theorems, which are 'absolutely unsolvable'. If this were the case, it would arguably 'disprove the view that mathematics is only our own creation.' Consequently, mathematical objects would possess an objective reality of their own, independent of the world of physical facts 'which we cannot create or change, but only perceive and describe.' This is referred to as Platonism about the reality of mathematical truths. Much to the materialist's chagrin, therefore, both implications of the dilemma are 'very decidedly opposed to materialistic philosophy'. Worse yet for the materialist, Gödel notes that the disjuncts are not exclusive.
It could be that both implications are true simultaneously. How does this connect with Gödel's view that the world is rational and the soul survives death? The incompleteness theorems and their philosophical implications do not in any way prove or show that the soul survives death directly. However, Gödel thought the theorems' results dealt a heavy blow to the materialistic worldview. If the mind is irreducible to the physical parts of the brain, and mathematics reveals a rationally accessible structure beyond physical phenomena, then an alternative worldview should be sought that is more rationalistic and open to truths that cannot be tested by the senses. Such a perspective could endorse a rationally organised world and be open to the possibility of life after death. Suppose we – cynics and all – accept that the world, in this deep sense, is rational. Why presume that human beings deserve anything beyond what they receive in this lifetime? We can guess that something similar troubled his mother. Gödel says in his next letter's theological portion: 'When you write that you pray to creation, you probably mean that the world is beautiful all over where human beings cannot reach, etc.' Here, Marianne might have agreed that much in creation appears ordered, but challenged the assumption that all of reality is so ordered, in particular when it comes to human beings. Must the whole world be rational? Or might it be that human beings are irrational aberrations of an otherwise rational order? Gödel's response reveals extra degrees of nuance to his position. In the first letter, Gödel had only loosely referenced a 'wide field of possibilities' that goes underdeveloped but demands completion. In his subsequent letters, he details what it is about humanity that requires existence to continue – that is, what is essential to humanity. It is first important to explain what Gödel meant by an 'essential' property. We have, of course, many properties. I have the property, for example, of standing in a relationship of self-identity (I am not you), of being a US citizen, and of enjoying the horror genre. Although there is no unanimity on exactly how to understand Gödel's use of 'essential', his ontological proof for the existence of God includes a definition of what he means by an essential property. According to that definition, a property is essential to something if it stands in necessary connection with the rest of its properties such that, if one possesses said property, then one necessarily possesses all its other properties. It follows that every individual has an individuated essence, or as Gödel notes in the handwritten draft of the proof: 'any two essences of x are nec. [sic] equivalent.' Gödel, like Leibniz, believed that each individual possessed a uniquely determinable essence. At the same time, even if essence is defined as individual-specific in the proof, there is evidence that Gödel thought that essences could also be kind-specific. He thought all human beings are destined for an afterlife because they all share a property in virtue of their being human. There are sets of necessary properties that hang together and that are interrelated across individuals such that the possession of this set would entail something being the kind of thing it is. In his ontological proof, for example, he defines a 'God-like' being as one that must possess every positive property.
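For readers who want the formalism behind this prose gloss, the two definitions just mentioned are usually rendered along the following lines in the modal notation of the ontological proof. This is a sketch based on the standard transcription deriving from Dana Scott's notes on the proof, not a quotation of Gödel's own manuscript; read $P(\varphi)$ as '$\varphi$ is a positive property' and $\Box$ as 'necessarily':

$$G(x) \;\equiv\; \forall \varphi\,[\,P(\varphi) \rightarrow \varphi(x)\,]$$

$$\varphi \;\mathrm{Ess}\; x \;\equiv\; \varphi(x) \,\wedge\, \forall \psi\,[\,\psi(x) \rightarrow \Box\,\forall y\,(\varphi(y) \rightarrow \psi(y))\,]$$

The first line says that $x$ is God-like just in case $x$ has every positive property. The second puts in symbols what the letters put in prose: an essence of $x$ is a property $x$ has that necessarily carries every other property of $x$ along with it, which is why any two essences of the same individual come out necessarily equivalent, as Gödel's draft note observes. (The conjunct $\varphi(x)$ is Scott's emendation; Gödel's own handwritten version omits it.)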
As for human beings, I am a human being in virtue of possessing a kind-specific set of properties that all human beings possess necessarily and that at least some of which are completely unique to us (just as only a God-like being can have the property of possessing every positive property). In Gödel’s letter of 12 August 1961, he points out the crucial question, which is too often overlooked: ‘We not only don’t even know whence and why we are here, but also don’t know what we are (namely, in essence and seen from within).’ Gödel then notes that if we were capable of discerning with ‘scientific methods of self-observation’, we would discover that every one of us has ‘completely determined properties’. Gödel playfully in the same letter remarks that most individuals believe the opposite: ‘According to the common conception, the question “what am I” would be answered such that I am something that has absolutely no properties in its own right, something along the lines of a coat rack on which one can hang anything one pleases.’ That is, most people assume that there is nothing essential about the human being and that one can ascribe to humanity any trait arbitrarily. For Gödel, however, such a conception presents a distorted picture of reality – for if we have no kind-specific essential properties, on what grounds can categorisation and determination of something as something begin? So what essentially human property points towards a destiny beyond this world? Gödel’s answer: the human ability to learn, and specifically the ability to learn from our mistakes in a way that gives life more meaning. For Gödel, this property hangs necessarily together with the property of being rational. While he admits that animals and plants can learn through trial and error to discover better means for achieving an end, there is a qualitative difference between animals and human beings for whom learning can elevate one into a higher plane of meaning. This is the heart of Gödel’s rationale for ascribing immortality to human beings. In the 14 August 1961 letter, Gödel writes: Only the human being can come into a better existence through learning, that is, give his life more meaning. One, and often the only, method to learn arises from doing something false the first time. And that occurs of course in this world truly in abundant quantity.The folly of human beings mentioned above is perfectly consistent with the belief in the world’s rationality. In fact, the world’s ostensible senselessness provides an ideal set-up to learn and develop our reason through the contemplation of our shortcomings, our moments of suffering, and our all-too-human proclivities to succumb to baser inclinations. To learn in Gödel’s sense is not about our ability to improve the technical means for achieving certain ends. Rather, this distinctive notion of learning is humanity’s capacity to become wiser. I might, for example, learn to be a better friend after losing one because of selfish behaviour, and I might learn techniques for thinking creatively about a theoretical approach after multiple experimental setbacks. An essential property of being human is, in other words, being prone to develop our reason through learning of the relevant sort. We are not just learning new ways of doing things, but rather acquiring more meaning in our lives at the same time through reflection on deeper lessons discovered through making mistakes. All this might lead one to infer that Gödel believed in reincarnation. 
But that would be overhasty, at least according to certain standard conceptions of it. An intriguing feature of Gödel’s theological worldview is his belief that our growth into fully rational beings occurs not as new incarnations in this world, but rather in a distinct future world: In particular, one must imagine that the ‘learning’ occurs in great part first in the next world, namely, in that we remember our experiences from this world and come to understand them really for the first time, so that our this-worldly experiences are – so to speak – only the raw material for learning.And he elaborates further: Moreover one must of course assume that our understanding there will be substantially better than here, so that we can recognise everything of importance with the same infallible certainty as 2 x 2 = 4, where deception is objectively impossible. The next world, therefore, must be one that liberates us from our current, earthly limitations. Rather than recycling back into another earthly body, we must become beings with the capacity to learn from memories that are latently brought along into our future, higher state of being. The belief that it is our essence to become something more than we are here explains why Gödel was drawn to a particular passage in St Paul’s first letter to the Corinthians, which I discovered when perusing his personal library at the archives of the IAS. In a Latin, pocket-sized edition of the New Testament, Gödel jotted at the top of the title page in faint pencil: ‘p. 374’. Following this reference, one is led to Chapter 15 of St Paul’s letter where Gödel marked verses 33 through 49 with square brackets and drew an arrow to one verse in particular. In the bracketed verses, St Paul describes our bodily resurrection. Employing the metaphor of crops, St Paul notes that sown seeds must be destroyed in order to grow into plants that it is their nature to become. So too, he notes, will it be with us. Our lives and bodies in this lifetime are only seeds, awaiting their destruction, after which we will grow into our ultimate state of being. Gödel drew an arrow pointing at verse 44 to highlight it: ‘It is sown in weakness, it is raised in power. It is sown a physical body, it is raised a spiritual body.’ For Gödel, St Paul had apparently arrived at the correct conclusion, albeit by prophetic vision as opposed to rational argument. We are left largely to wonder about Marianne’s reaction to her son’s views on the hereafter, though it is certain that she was puzzled. In the letter dated 12 September 1961, Gödel assures his mother that her confusion about his position has nothing to do with her age and much more to do with his compact explanations. And in the last letter, from 6 October 1961, Gödel objects against the claim that his views resemble ‘occultism’. He insists, on the contrary, that his views have nothing in common with those who would merely cite St Paul or discern messages directly from angels. He admits of course that his views might appear ‘unlikely’ at first glance, but insists that they are quite ‘possible and rational’. Indeed, he arrived at his position through reasoning alone, and thinks that his convictions will eventually be shown to be ‘thoroughly compatible with all known facts’. It is in this context that he further presents a defence of religion, recognising a rational core to it, which he claims is often maligned by philosophers and undermined by bad religious institutions: N.B. 
the current philosophy curriculum doesn't help much in understanding such questions since 90 per cent of contemporary philosophers see their primary objective as knocking religion out of people's heads, and thereby work the same as bad churches.
Whether this convinced Marianne or not, we can only guess. For us who remain with both feet still in this world, Gödel's argument presents us with a fascinating take on why we might continue to exist after shuffling off this mortal coil. Indeed, his argument glows with an optimism that our future lives, if reason is to be satisfied, must be ones in which we maximise certain essential human traits that remain in a paltry state here. Our future selves will be more rational, and somehow capable of making sense of the raw material of suffering experienced in this life. Can we assume that Kurt and Marianne are now reunited? Let us hope so.
Alexander T Englert
https://aeon.co//essays/kurt-godel-his-mother-and-the-argument-for-life-after-death
https://images.aeonmedia…y=75&format=auto
Thinkers and theories
For Rachel Bespaloff, philosophy was a sensual activity shaped by the rhythm of history, embodied in an instant of freedom
Shortly after Rachel Bespaloff’s suicide in 1949, her friend Jean Wahl published fragments from her final unfinished project. ‘The Instant and Freedom’ condensed themes that occupied the Ukrainian-French philosopher throughout her life: music, rhythm, corporeality, movement and time. One of Bespaloff’s key ideas, ‘the instant’, is less a fragment of duration than a life-changing event, a moment of embodied metamorphosis. In the midst of a noisy world, torn between transience and eternity, the human being listens to the sound of history. Had she completed and published it, ‘The Instant and Freedom’ might have become the masterpiece of an important early existentialist thinker. Instead, her name is hardly mentioned today. Yet Bespaloff was a brilliant and original thinker, among the first wave of existentialists in France. Albert Camus, Jean-Paul Sartre and Gabriel Marcel all admired her. A professional dancer and choreographer, she had finely tuned ears for the musicality of philosophical writing. For Bespaloff, philosophy is a dynamic, sensual activity of listening to and engaging with the voices of others, including those long dead. In dialogue with Homer, Kierkegaard, Nietzsche and Heidegger, she found her own voice. At the heart of Bespaloff’s world is an original conception of time shaped by embodiment and music: the instant is a silent pause that suspends history’s repetitive rhythm. Through our bodies, we experience that break from history as a brief moment of freedom. Her more famous contemporary Simone Weil also used her body to express her philosophy: Weil eventually starved herself to death in solidarity with friends and compatriots in occupied France. Bespaloff shared Weil’s interest in attention, listening and waiting as mystical practices of the body. For both thinkers, philosophy was an existential embodiment of their ideas. However, Bespaloff did not use her body as a weapon against itself; rather, she was interested in dance as a creative alchemy of movement. Bespaloff’s philosophy of the body is closely linked to the experience of time: it is our embodied day-to-day existence that measures and gives rhythm to time. In an essay on Homer’s Iliad written during the Second World War, Bespaloff captured the experience of living through the horrors of exile and war. The human being, ‘bound to her time by disorder and misfortune, acquires a new perception of the time of her own existence.’ (All translations here from the French are my own.) Bespaloff’s own life was one of repeated displacement: she moved from Ukraine to Switzerland, Paris to southern France, to Mount Holyoke via New York. Born in 1895 in Nova Zagora in Bulgaria to a Ukrainian-Jewish family, she spent her childhood in Kyiv and then in Geneva where the family moved in 1897. Her mother Debora Perlmutter was a philosopher who taught at university; her father, Daniel Pasmanik, a surgeon, became a leading theoretician of Zionism in the Russian Empire. A fervent anti-Bolshevik, Pasmanik fought for the White Army in the Russian Civil War. In Switzerland, Bespaloff (then Rachel Pasmanik), studied piano and composition at the conservatory, philosophy at the university, and eurythmics with Émile Jaques-Dalcroze. These three areas of study are all entwined in her existential philosophy of embodiment. Dalcroze eurhythmics is a holistic method of musical education; it turns the body into an instrument. Different temporalities are concretised through movements, arm gestures and steps. 
For Bespaloff, eurythmics became an intimate practice of listening with her entire body. Dalcroze’s favourite student, she was sent to work in Paris in early 1919. She began teaching eurythmics at the Paris Opera while also publishing short texts on dance. Bespaloff’s ‘plastic dance’ aimed to restore a lost dynamism. Her method attracted the attention of Jean Cocteau and Sergei Diaghilev, who introduced this new corporeality to his Ballets Russes. If philosophy sharpened her ears, eurythmics sculpted her body towards an embodied experience of temporality. She believed that a more authentic sense of time, lost in modernity, still lurked beneath our skin. ‘She listened with her whole person: with her hands, with her lips, with her eyes’ In 1921, Bespaloff was the choreographer of the ‘Royal Hunt’ scene in Hector Berlioz’s opera The Trojans – a theme she would return to in her Iliad essay. In ‘Dance and Eurythmics’ (1924), Bespaloff wrote that dance is a universe with ‘its vocabulary, a fixed language, its own logic, its needs.’ Eurythmics is the system of this universe, turning movement into existential experiences. Through the plasticity of our bodies, we can reach new forms of being. In the fragment ‘The Dialectic of the Instant’, Bespaloff describes time consciousness as ‘nothing other than a certain way of grasping the relationship between finitude and infinity in the instant.’ The instant’s brevity points us towards a lost continuity that can be restored. Through music and dance, Bespaloff discovered what she calls the experience of ‘magic interiority’. By externalising movement, the subject of eurythmics plunges herself into an inner experience. Bespaloff met her second important teacher in 1925, the Jewish existentialist philosopher Lev Shestov (born in Kyiv as Yehuda Leib Shvartsman). The encounter with Shestov changed her life: Bespaloff the choreographer decided to become a philosopher. This was a radical move but, by then, she was already married to a Ukrainian businessman, which allowed her to quit her job at the Opera and soon after have a daughter. Shestov was a central figure in the philosophical émigré circles of interwar Paris. French existentialism gained fame much later through the works of Sartre and Camus. However, Sartre was deeply indebted to Shestov’s original synthesis of Nietzsche, Kierkegaard, Dostoevsky and Jewish theology. Shestov’s charisma and unsystematic thought magnetised young philosophers, among them Georges Bataille. In many ways, the Shestov circle was the hotbed of French existentialism. Along with the Romanian poet Benjamin Fondane, Bespaloff was at the centre of Shestov’s salon. Her friend Daniel Halévy described her sitting on Shestov’s sofa, completely motionless, while ‘she listened with her whole person: with her hands, with her lips, with her eyes.’ One of the few women in the circle, she soon became friends with the Christian existentialist writer Gabriel Marcel and the Jesuit theologian Gaston Fessard who both admired her work. A female philosopher in the 1930s was, as Olivier Salazar-Ferrer put it, ‘a bit like a woman in the 19th century wearing men’s clothes.’ However, Bespaloff would soon wear her own clothes. In 1929, she had dinner with Edmund Husserl whose phenomenology she confidently attacked with Shestovian arguments. Bespaloff caused another stir with the publication of her ‘On Heidegger (Letter to Daniel Halévy)’ in La Revue philosophique in 1933. It was among the very first discussions of Martin Heidegger’s thought in France. 
Fluent in German, Bespaloff had read Heidegger’s Being and Time (1927) in the summer of 1932. Heidegger’s greatness, she wrote, was that ‘he situates himself in the inextricable; he does not want to detach himself.’ Similar to the experience of eurythmics, Heidegger’s philosophy proposes our hopeless entanglement with the world. It is not difficult to imagine a 28-year-old Sartre being drawn to Bespaloff’s letter, where she wrote excitedly: ‘Existence projects itself into the possible: choice is its destiny.’ For Bespaloff, interpreting Heidegger, this choice is not a matter of free will but of irrevocable commitment. By actively choosing, we dash beyond ourselves into an uncertain future. As a musician, Bespaloff ‘listened’ to Heidegger’s text as if to a performance of Bach, a ‘monumental Art of Fugue’. She recognised that, as in a Baroque fugue, all the motifs ‘bring us back to the central theme of Being taken up in all its possible aspects, with increasing infinite variation, but always identical to itself.’ Bespaloff’s enthusiasm for Heidegger’s musical metaphysics was soon tempered by the discovery of another existentialist: Søren Kierkegaard. In 1934, she published notes on Kierkegaard’s Repetition (1843), a work that emphasised the musicality of repetition as continuous transformation. She declares war on her teacher’s total denial of any possibility of truth Repetition does not add anything, it only accentuates what is irreducible to human existence. Repetition in Kierkegaard is ‘the will to live again and the refusal to survive’. Only by repeating can we become authentic subjects. In Kierkegaard’s ‘beautiful moment’, Bespaloff found what she called ‘the instant’: an experience of absolute, eternal silence. The absence of a path, she wrote on Kierkegaard, is the only path his philosophy wants to follow. This Zen-like image also perfectly captures the meandering trajectories of her own thought, which Laura Sanò has called ‘nomadic’. A wandering cosmopolitan, Bespaloff was forced to traverse the boundaries of various countries, languages and cultures. Her philosophy mirrored that nomadism, with subtle attention to the embodied experience of movement, melody and metamorphosis. Bespaloff’s essay collection Paths and Crossroads (Cheminements et Carrefours) appeared in 1938. Dedicated to Shestov, the book includes texts on Julien Green, André Malraux, Marcel and two essays on Kierkegaard. The chapter ‘Shestov before Nietzsche’ declares war on her teacher’s total denial of any possibility of truth. By refusing to think, she writes, Shestov had returned to another dogma – a radical relativism that ultimately turned into nihilism. Against Shestov’s rejection of reason, Bespaloff poses Nietzsche’s attempt to reach truth through and within one’s life. Nietzsche’s concept of the Will to Truth, she thought, could reconcile us to the tragedy of existence. Where Shestov saw an unbridgeable gap, Bespaloff made a leap: in the instant, happiness is in our reach. Bespaloff’s ‘happy consciousness’ made a deep impression on Camus who read the book closely in the summer of 1939. Bespaloff’s writings on Kierkegaard coincided with the publication of Wahl’s Kierkegaardian Studies (1938) – a testimony to their friendship and lifelong collaboration. Bespaloff and Wahl were trendsetters in Paris. Introducing Kierkegaard’s anti-Hegelian philosophy into France, they prepared the ground for the existentialism that flourished in wartime Paris. 
Their ventures into Christian existentialism directly reacted to Hegel’s revival in France instigated by Alexandre Kojève’s lectures, held between 1933 and 1939. Another émigré from the Russian empire, Kojève was as pivotal as Shestov to the formation of French modernism. It was these refugees from eastern Europe, among them Bespaloff, who shaped the course of French culture by importing new currents to Paris, including Surrealism, Marxism, phenomenology and existentialist philosophy. In the spring of 1938, Bespaloff began rereading the Iliad with her daughter Naomi. Her extensive notes turned into a brilliant essay on Homer’s epic poem. Shestov’s death that year deeply upset her. In a letter to Wahl, she calls Shestov one of the few truly noble men she knew. The family moved to her husband’s estate in southern France in 1939. Just before the Nazis occupied Paris, she wrote a letter to Marcel: ‘But the worse it gets, the more I realise that you can’t love life, the more I discover the urgent need to find new reasons to love it. And I am afraid that this time I won’t be able to, which would be worse than death…’ Her work on the Iliad essay became an existential ‘method of facing the war’. She soon became aware of a similar text, written coincidentally, that appeared in Cahiers du Sud in 1940: Weil’s ‘The Iliad, or the Poem of Force’. Bespaloff began to revise her essay; she critically responded to Weil’s condemnation of any use of force. Living as a Jew in Vichy France, Bespaloff became increasingly desperate, and with good reason. In November 1941, she wrote to Marcel: ‘I feel as if I am stuck in a sad, restless, absurd dream. And I am very afraid of waking up.’ Her friend Wahl, also Jewish, had been imprisoned and tortured by the Gestapo, and worse was to come for many Jews in Paris. In 1942, Bespaloff managed to escape, boarding one of last ships to leave Nazi-occupied France, with her mother and daughter, her library and grand piano. Having narrowly fled a concentration camp outside of Paris, Wahl joined them. With his encouragement, Bespaloff began to rework her essay on the Iliad. She eventually finished her notes in yet another exile, this one in New York. Published in English translation in 1943, On the Iliad framed war as an absolute ‘question of losing it all to gain it all’. In the words of Fondane’s letter to his wife, war became ‘the moment to live our existential philosophy’. According to Bespaloff, Homer felt both intense love and intense horror of war. Where Weil claimed that force transforms subjects into objects, Bespaloff, emphasises brief moments of beauty that occur in the midst of violence. With war being waged all around, there are flashing instants of generosity and grace. In the Iliad, force is both a supreme reality and an illusion. It is the superabundance of life itself, ‘a murderous lightning stroke, in which calculation, chance, and power seem to fuse in a single element to defy man’s fate.’ This does not mean that Bespaloff glorified violence. Far from it. But the experience of the Second World War made her realise the inescapability of force and its power to transform an individual’s understanding of the human predicament. At the heart of her essay is Hector, the ‘resistance-hero’ who embodies justice and courage. Like every human in the Iliad, Hector cannot flee his fate – and he knows it. Hector’s flight from force is short but has ‘the eternity of a nightmare’. That is the horrifying temporality of war that Bespaloff experienced first-hand. 
Hannah Arendt’s reading of Kafka echoed Bespaloff’s existentialist despair The most crushing parts of Bespaloff’s Iliad essay are dedicated to Helen, a woman with whom she clearly identifies. Clothed in long white veils, she is the most austere character of Homer’s poem. Both unbearably beautiful and unfortunate, Helen awoke in exile and felt ‘nothing but a dull disgust for the shrivelled ecstasy that has outlived their hope.’ She is the prisoner of her own passivity, forced to live in horror of herself. Ultimately, Helen’s promise of freedom, like Bespaloff’s own, remains unfulfilled. Helplessly, Helen watches the men who went to war for her, observing ‘the changing rhythm of the battle’. The breaks that interrupt the fighting are rare instants of silence: The battlefield is quiet; a few steps away from each other, the two armies stand face to face awaiting the single combat that will decide the outcome of the war. Here, at the very peak of the Iliad, is one of those pauses, those moments of contemplation, when the spell of Becoming is broken, and the world of action, with all its fury, dips into peace.While in New York, Bespaloff preserved her ties to Parisian intellectual life from her exile by exchanging letters with Fessard and Marcel. She got a job with the Voice of America’s French broadcast before moving to Mount Holyoke College in Massachusetts, where she taught French literature. Mount Holyoke became an important outpost for French culture in the US during the war. At gatherings of exiled scholars organised by Wahl, Bespaloff met Jacques Maritain, André Masson, Marc Chagall and Claude Lévi-Strauss. This ‘small, dark lady who wore white gloves’, as her translator Mary McCarthy described her, also made an impression on Hannah Arendt who visited in August 1944 to deliver a lecture on Franz Kafka. Arendt’s reading of Kafka, later published in Partisan Review, echoed Bespaloff’s existentialist despair. Under the dark shadow of war, Arendt describes humanity as inescapably trapped in history’s meshes. Kafka’s ‘nightmare of a world’ had become reality. In an essay on Camus, her last published work, Bespaloff describes how history forced her generation ‘to live in a climate of violent death’. After the war, despite previously having been fêted by them, Bespaloff became a vocal critic of the new generation of French existentialists, especially Sartre. In a 1946 letter to the musicologist Boris de Schloezer, Bespaloff wrote that ‘the hollowness of subjectivity that Sartre opposes to what I call “magical interiority” is much less the foundation of a new humanism than the harbinger of a new conformity.’ She argued that, instead of liberating the individual, Sartre’s existentialism destroyed the magical interiority through which humans can authentically connect with one another. For Bespaloff, Sartre degraded the subject into an object under the gaze of the Other. This objectified ‘subjectivity curiously aligns with American “individualism”, which unleashes itself in action to mask the absence of the individual.’ Like Helen’s Troy, the US felt both dull and hostile to Bespaloff. Bespaloff’s journey to Mount Holyoke was her final exile. During term break, in April 1949, for reasons not entirely clear, she sealed her kitchen doors and turned on the gas oven. Her own complex fugue ended with a tragic cadence. She had written earlier of the happiness that can be found in an instant. 
In her final note, alluding to Camus’s claim, she wrote: ‘One can imagine Sisyphus happy, but joy is forever out of his reach.’
Isabel Jacobs
https://aeon.co//essays/for-rachel-bespaloff-philosophy-was-a-sensual-activity
https://images.aeonmedia…y=75&format=auto
Architecture
Architectural drawing speaks of mathematical precision, but its roots lie in the theological exegesis of a prophetic book
Years ago, my professor would make his architectural history students prepare for seminars by pinning large sheets of paper to a noticeboard. Each had finely printed plans and elevations on them. Over the week, I’d stand in front of those sheets for at least an hour looking at the various drawings, as instructed. Back in class, students took turns to explain what exactly the drawings represented, determining the building’s appearance from the drawings alone and describing how a person might move through the space as if we were there. Those well-spent hours were among my favourite during my degree; the language of drawing was a catalyst to my imagination, creating worlds beyond what words could ever do. In learning about this language, I realised that we know remarkably little about how it developed, as if it arose fully formed in the 13th century, since no single drawing can be linked to a specific building project until that century’s end. This baffled me. How could monuments like Durham Cathedral, the renovated basilica of Saint-Denis outside Paris (the genesis of the Gothic style), and all the High Gothic churches in northern Europe have been made without something so simple as a drawing? Visually communicating the appearance of a building seems a natural thing to do – an easier way of planning. As it happens, drawings were used in the construction process before the 13th century. In the 1st century BCE, the architect Vitruvius wrote his De architectura in an attempt to elevate the practice of architecture to the level of the liberal arts; that is, work derived from the mind rather than the mindless graft of one’s hands. Near the beginning of the treatise, Vitruvius describes three types of architectural drawing: plans, elevations, and (very likely) drawings in perspective (his precise meaning is hotly contested). Despite this evidence for the use of drawings, none survive from antiquity. The only examples to weather the test of time are monumental plans inscribed on stone or mosaics, but these could have been decorative objects – simple maps or sculptural monuments: their purpose is not clear. Also, most were done after the buildings they depict were completed, so they cannot have been used in the construction process. After the decline of the Roman empire in the 5th century, the infrastructure for educating and training architects vanished in the West. Not until the 13th century do we get a designer who oversees several projects simultaneously – a sort of proto-architect. Prior to their emergence, there was a master mason who’d make certain geometrical constructions on the ground or in plaster, allowing him to construct the layout of a building. This master mason is an obscure historical figure. He likely did not have a formal education but started his career as an apprentice who learned structurally sound forms from his master. He would have travelled across building sites learning and picking up new designs and ideas. At 12th-century Canterbury, for example, the original designer of the Gothic building was William of Sens, who likely had experience of working on the new Gothic elements of the French cathedral. He could promise that and more to his new patrons in England, not necessarily using a drawing but by describing what exactly he would do over the coming years. Later, in the 15th century, the artist and architect Leon Battista Alberti, in his brief mention of architectural drawings, assumes that they are done only by architects. 
This leaves us with a story of architecture that follows a well-worn narrative: the decline of Rome led to a dearth of advanced practices, which were picked up again only in early modern Italy. But this is not the real story. Towards the middle of the 12th century, a Scottish theologian named Richard moved across the Channel to Paris and to the Abbey of Saint Victor on the left bank, about a 20-minute walk from where Notre-Dame Cathedral stands today, but outside the walls of the medieval city. Here, Richard penned a commentary on the Book of Ezekiel, filled with more than a dozen plans and elevations that systematically represent the buildings the prophet describes. These are key to understanding the beginning of architectural drawing in the West. Richard is the first person to use the term ‘plan’ with reference to a drawing that would be recognised as a plan today. He was the first person we know of to represent a building more than once, offering a three-dimensional view of the structure; and the first to provide a clear sectional elevation, where part of the building is sliced through to give a view of the interior. His commentary suggests that architectural drawings were in use a full century earlier than is conventionally held, complete with a fully fledged language for the representation of three-dimensional objects.

The Abbey of Saint Victor was established in 1108, at the beginning of what the historian Charles Homer Haskins in 1927 called a ‘Twelfth-Century Renaissance’. This renaissance was characterised by a renewed focus on classical literature and a drive to understand the physical world. Latin translations of ancient Greek and Arabic works on mathematics, geometry and every other subject gave scholars the energy to interrogate the world a little more deeply. Works by Aristotle, Euclid and Plato, authors only dimly known and whose writings were thought lost forever, began to arrive on the shores of Europe. Within this swirling intellectual storm, the members of Saint Victor had one of the best libraries in Europe and a commitment to teach whoever wanted to learn. They were spiritual centrists, never veering close to zealotry and never losing their minds to the new fashion for pure logic, characterised by the infamous self-promoter Peter Abelard.

The Abbey of Saint Victor, Paris, etching, 1702. Courtesy of INHA, Paris

With the fine resources and stimulation that Saint Victor provided, Richard began his commentary on the Book of Ezekiel, a daunting task for anyone who has read it. Ezekiel prophesied during the Israelites’ exile in Babylon, where the once-captive Jews remained in the centuries following the destruction of the First Temple in Jerusalem. It is a book of consolation and of hope, especially the last section, which contains a detailed architectural description of a new temple that would descend from the heavens onto the mountain when the Israelites returned to their homeland. Ezekiel describes meeting a man with a ‘brazen complexion’ holding a measuring stick, who accompanies the prophet around the buildings, measuring every detail. At first glance, the buildings Ezekiel describes, and their arrangement, seem straightforward. There are three courtyards of diminishing size, set into one another, and each section is accessed via an elaborate gateway. The new temple at the centre of everything (and modelled on Solomon’s original) is perched on the mountain’s plateau.
Out front in the smallest courtyard there is an altar for sacrifices, while the Temple contains three spaces: a vestibule, a long narrow hall and a smaller room called the ‘Holy of Holies’. But the details are impossible to follow. For example, the description of the gateways ranges over different chapters and, though Ezekiel specifies their parts, the measurements do not always make sense. We are told the breadth of the building is one reed (six cubits), that the threshold to the gate is six cubits, and the ‘porch of the gate’ is eight cubits. As the description continues, it is tempting to pick up a pen and draw while reading, the better to follow along, but the layout is difficult to grasp. In the 6th century, Pope Gregory I concluded that it was impossible to understand the architecture in a literal way, and that the lack of sense within Ezekiel’s words was a sure sign that they could only be allegorical in nature. Gregory gave the example of a door described as wider than the wall to which it is attached.

For Richard, brought up on an intellectual diet defined by the rigours of Saint Victor’s school, it was important to understand the facts of Ezekiel’s words. He needed to know exactly what the prophet saw. And so Richard’s commentary on this part of the Book of Ezekiel included more than a dozen plans and elevations to help realise the prophet’s vision. Explaining to his readers why he included the drawings, Richard says that, however ‘simple’ they might be, he wanted to show the truth of his argument: that these buildings had a tangible existence, and that Ezekiel’s description makes sense if the reader has the wit to follow it.

Richard’s drawings are like nothing made before. They are precocious in pointing to a masterful visualisation of space long before the language of architectural drawing was systematised. Richard used recent developments in geometry to fully articulate the relationship between the plans and elevations: in fact, the drawings represent the beginning of architectural abstraction in the West, not because he uses plans and elevations, but because he uses them together to give readers a real sense of the buildings’ three dimensions. As far as we know, no one in Europe had done this previously.

Richard’s final plan, with the vestibule running through the middle

In his commentary, Richard takes his readers through the envisioned Temple complex carefully, starting with a very general sense of the entire layout, allowing us to situate ourselves properly. Then he zooms in to focus on one building type in particular: the gatehouses that connect the three atria surrounding the temple. He provides three plans and two elevations for the gatehouses on three sides of the complex. The three bird’s-eye plans show, in turn, the general layout, a detailed rendering of half the building, and its complete internal footprint. The final plan (above) shows a vestibule running through the middle with long rectangular rooms perpendicular to it. If we set this plan alongside the elevation of the building (below), we can see these same rooms perched upon each step of the vestibule. To aid readers, Richard labelled the rooms in Latin, making it easy to work from one drawing to another. We can take the complexity a little further, since, in the elevation, the viewer can see the interior of the ground floor as if the drawing were a section where part of the building is cut open.
This would make it the first clear sectional elevation, and an important development in architectural drawing.

Richard’s drawing of the sectional elevation

None of Richard’s innovations are accidental. Rather, they are rooted in the language of geometry: by including the elevation, the viewer sees that the gatehouses are located on a mountainside. Yet having to reconcile the plan with the elevation disturbed Richard’s attempt at accuracy. He knew that measurements taken along a flat surface and on a sloped one would be different when compared with one another, and result in discrepancies between his plans and elevations. To combat the problem, Richard proposed a method by which a ‘plan’ measurement could be translated into one that accommodated the mountain’s slope, using something very similar to the Pythagorean theorem, which was then circulating around western Europe: the sloped distance is, in effect, the hypotenuse of a right-angled triangle whose other two sides are the flat distance and the vertical rise. The plans include measurements that assume the site was flat, and so he calls them ‘planum’. This is the first time this term was used in reference to a drawing. For Richard, a ‘plan’ was a two-dimensional drawing that showed the layout of a building on a plane (ie, flat) surface – language that we still use today.

A number of tantalising plans, predating Richard’s commentary, survive, but they lack a systematic approach to representing reality. The best known is the 9th-century Saint Gall plan (below). This shows a monastery laid out in red ink, complete with church, cloister, abbot’s house, medicinal garden and everything else a monastery could need, right down to the number of beds in the dormitory. The note attached to the plan refers to it as an exemplata, a word that could mean anything from ‘copy’ to ‘proof’. The Saint Gall plan likely survived because a couple of centuries later someone wrote a life of Saint Martin on its reverse side – which for a medieval audience held much more value than this diagram. The Saint Gall plan is impressive, but it’s not entirely clear what any of the proposed monastery buildings would look like, since there are no elevations and it is a single drawing, unlike Richard’s more expansive approach.

Saint Gall plan, 9th century

There are other theological plans: images of the Celestial Jerusalem, of the Holy Sepulchre and a couple of others. Their survival suggests that, even in the context of so much medieval material that has been lost, there must have been more drawings that were destroyed or else deemed not important enough to save. Not all had the fortune of a life of Saint Martin written on their reverse.

It is tempting to imagine Richard in conversation with the masons who must have been a constant presence at Saint Victor. Richard held the position of prior, a sort of second-in-command in the abbey. One of his responsibilities would have been to oversee the masons’ work in the abbey and to interact with the builders on a regular basis. There was, however, a cultural chasm between the cloister and the building site that Richard would not have crossed lightly, even for the sake of an important work such as a biblical commentary. To do that, I think he crossed another type of boundary. Saint Victor was famous by the end of the 12th century, and one reason was the quality of its literal commentaries on the Bible, an approach advocated by its earliest superstar and Richard’s mentor, Hugh of Saint Victor. Hugh was a teacher and writer, very much the 12th-century renaissance man, and he is known to have spoken to nearby rabbinic scholars and their schools.
Hugh recognised the value of Jewish knowledge, especially when it came to understanding history: as one modern author put it, from a Christian perspective, talking to a Jewish scholar was like picking up the phone to the Old Testament. Richard, it seems, continued that tradition.

As if to underline the connection between the two traditions, one of the earliest Christian manuscripts containing Richard’s work includes a map of the Holy Land described in the Book of Ezekiel 48 (below). It shows the land belonging to the tribes of Israel on the left and right, with the Holy City and its environs below. The map is almost identical to one in a Hebrew commentary (below) by the famous scholar Rashi, who established his school not far from Paris. The only difference between the Hebrew and Latin maps is the language of the labels. Not long after Richard wrote his commentary, the Jewish scholar Maimonides included architectural drawings in his commentary on the tractate Middoth, which deals with the Second Temple. Not just plans, but elevations and even sections, just as in Richard’s commentary. The appearance of all these architectural drawings in theological texts of the 12th century does not prove that drawings were used in the construction process. What it does prove, though, is that people such as Richard and the many readers of his work could understand the language of architectural drawings if a master mason put one in front of them.

Richard’s map of the Holy Land, as described in the Book of Ezekiel 48, in Latin

Rashi’s map of the Holy Land, in Hebrew

So how could some of the great architectural monuments of medieval Europe have been built without drawings? The answer is that they could not. Although we don’t have the drawings made by 12th-century master masons, Richard’s commentary certainly suggests that the practice of architectural drawing was common enough across the religious divide, and that if a patron such as Richard could speak that visual language, then so could the people he hired to build Saint Victor.

Richard’s importance is clear but, since he was a theologian working in a cloistered community, his legacy within architectural history is difficult to quantify. He never built anything as far as we know. I do not want to suggest that the invention of architectural drawing was a top-down affair, where the language and visual forms were invented by a patron within theological contexts and imparted to the lowly builder. I do not even want to suggest that parallel traditions in Islamic countries, and further east in China, lagged behind the West: only that the practice as we know it in the West developed out of the complex relationship between patron and mason, and almost certainly predates the 13th century. Richard’s commentary helps us fill in the gaps. It demonstrates how a known unknown (the existence of drawings in the 12th century) can become known. Masons and, later, architects used and developed these drawings to a remarkable degree, perhaps not based on direct knowledge of Richard’s commentary, but certainly from the world that those drawings inhabited.
Karl Kinsella
https://aeon.co//essays/the-surprising-history-of-architectural-drawing-in-the-west
https://images.aeonmedia…y=75&format=auto
Stories and literature
Bereft and suicidal, I lay on my sofa. Only David Foster Wallace’s novel kept me tethered to life, and still does
In the surreal aftermath of my suicide attempt and amid the haze of my own processing, my best friend visited me in the hospital with a (soft-bound and thus mental-patient-safe) copy of David Foster Wallace’s Infinite Jest under his arm. It was the spring of 2021. A couple of months earlier, I had slipped in a tub, suffered a concussion, and triggered my first episode of major depression, and those had been the most difficult months of my life. Though I am a lifelong ‘striver’ and ‘high achiever’, nothing I’ve ever done was harder than waging that war against myself while catatonic on that Brooklyn sofa. This was an inarticulable and so alienating war, one during which, at every moment, it was excruciating and terrifying to exist at all.

I thought I knew the extent of my own mind’s capacity to torture itself, to hurt me, and what this thing we call depression can really be like. But I had been wrong. For anyone who hasn’t experienced it at its worst, I now think it is psychologically impossible to imagine. It may even prove impossible for those who have experienced it to still remember it after the fact, just as someone who temporarily perceives a fourth dimension wouldn’t really, fully remember what it was like once the perception is lost, only facets of the larger, unfathomable thing. So maybe I can’t really remember, either: but I can recall thinking again and again these staggered reflections I’m writing now.

Some of the swirling emotions that distressed and disoriented me on that sofa also remain faintly accessible, like the crippling inability to make any decisions, no matter how small, such that even contemplating a choice from some host’s warmly offered selection of teas would incapacitate me with self-loathing and breathless, gushing tears. I remember hopelessly trying to make myself feel even the glimmer of anything good, turning to everything – the music, the friends – that had brought me so much joy before, only to find that I could no longer feel any of it but rather just, from somewhere afar, see and long for it while watching as the ever-darkening blackness in me instead consumed it all. I remember the debilitating guilt and shame that emerged for everything I had ever done, including for having the audacity to keep existing for so long. And I remember an overwhelming empathy as I wondered how many others felt this way in the history of the world, imagining the vastness of all these solitary confinements within our minds across space and time. At the same time, it was unfathomable to me that anyone had ever felt like this, or that there could even be enough darkness in the universe to realise the experience more than this once.

From the days following my injury through the several months after, my ultimate challenge on that sofa was finding a way to endure the passage of time. I needed something to help me get through each moment and make it to the next one while still intact. I couldn’t actually do anything, but staring into space (or even watching TV) left me vulnerable, as the cognitive passivity left ample room for the darkness to seep in and swallow me away. After a few desperate weeks, I eventually found that reading fiction – filling my head with another world that left room for little else – was the one thing that made it more bearable to exist. My best friend then suggested (after having gently and generously recommended the book to me for years) that perhaps this was the moment to read Infinite Jest. I think every day about how grateful I am that he did.
I started reading and it soon became the case that so long as Infinite Jest was in my hands, it was possible, okay even, for me to stick around. The core themes of the book that would soothe and sustain me over the coming weeks can be conveyed, I think, by its two dominant and contrasting venues – a halfway house for addicts in recovery on the one hand, and an elite and high-pressure tennis academy on the other – in conjunction with an underlying and unifying thesis: all of us, whether we’re chasing substances, achievements or whatever else we hope will satisfy us and make it bearable to exist, are afflicted. We are all, for lack of a better word, fucked in the head in the very same ways.

With Infinite Jest in my hands, I was suspended afloat by a contradictory catharsis, this evanescent insight that I could hold on to so long as I just kept reading and rereading the book’s (blessedly many) pages: that I was not crazy, nor alone, precisely because I really was crazy, which is to say that this all wasn’t me but rather it – it was a human condition. The book assured me that this was just what it was like to be crazy in this way, was exactly how others crazy in the same way were made to feel, a crazy that made them feel just as alone as I now felt. The book witnessed me, affirmed me, and assured me that my experience was familiar to the world. I can’t put it any better than just saying the book was my friend.

Some passages can only speak for themselves, as they so articulate (and help me remember) facets of the thing I was facing on that sofa. On the ‘psychotic depression’ suffered by the character Kate Gompert, the most haunting and compelling personification of depression I have come across:

It is a level of psychic pain wholly incompatible with human life as we know it. It is a sense of radical and thoroughgoing evil not just as a feature but as the essence of conscious existence. It is a sense of poisoning that pervades the self at the self’s most elementary levels. It is a nausea of the cells and soul … It … is probably mostly indescribable except as a sort of double bind in which any/all of the alternatives we associate with human agency – sitting or standing, doing or resting, speaking or keeping silent, living or dying – are not just unpleasant but literally horrible.

No description that I’ve encountered has better conveyed, so clearly and directly, the precise nature of that moment-by-moment agony in which I had found myself.

Infinite Jest’s most famous lines are on suicidality, and the air-tight logic that it brings along. The book analogises it to the choice faced by those trapped inside a burning building and deciding whether to jump:

Make no mistake about people who leap from burning windows. Their terror of falling from a great height is still just as great as it would be for you or me standing speculatively at the same window just checking out the view; ie, the fear of falling remains a constant … It’s not desiring the fall; it’s terror of the flames. And yet nobody down on the sidewalk, looking up and yelling ‘Don’t!’ and ‘Hang on!’, can understand the jump. Not really. You’d have to have personally been trapped and felt flames to really understand a terror way beyond falling.

The suicidal person, in other words, is not misguided but rather literally facing different choices – ones unimaginable to those who do not also have flames slowly engulfing them.
I don’t think I can really explain what reading all this meant to me. The book could see me like a mirror at that moment and describe it all right back. More concretely, I can’t explain what it meant to find such forceful validations of my particular sense of this ‘mental illness’, not as some wrong or irrational reaction by me, a misapprehension or miscalculation on my part, but rather as something happening to me; it was a thing inside me – a billowing shape, as the book often calls it – to which all my dread and despair was actually just the reasonable and appropriate response.

But I can tell you that, once I finished Infinite Jest, my grip on this self-understanding – and so my self-preservation – quickly started to slip away, and it was only a few days later that I tried to kill myself. By then, I was back to being alone on that sofa, surrounded by those flames the book had managed to keep at bay. I think reading Infinite Jest had been keeping me alive. So that’s why, when he came to the hospital, my friend knew to bring along another copy of the book.

I remember looking up at him then, bleary-eyed with anxious shame for what felt like my most monumental failure, a profoundly self-absorbed act of weakness on my part – and, not to mention, a terrible inconvenience for all those I’d dared to drag into my life. He smiled softly while waving Infinite Jest in a silent reminder that these emotions, though compelling in their presentation and thus reasonable to be so compelled by, weren’t really reflecting the reality of the matter. And with a copy to share, in that secured visiting area, we then had our own little pop-up book club.

It all felt a bit like Bible study or something, in the fluorescent sterility and chaos of that strange space, and I remember my friend making some fittingly dark joke about how this was probably how DFW would’ve most wanted the book to be read anyway: like the word of God, among rock bottoms, being involuntarily held. It was a glimmer of Wallace’s raw hilarity, which fills so much of Infinite Jest (1996) – a grotesque humour, one that could punctuate my otherwise continuously unbearable tenure on that sofa with stitches of transcendent laughter, and which not only kept me alive but sometimes feeling alive, wanting to be, hoping I do somehow make it through it all, if for no other reason than because laughing still felt like something worthwhile. I was reminded, in our pop-up book club, that maybe this was still worth doing.

In truth, the reality of what had happened was only beginning to crash down upon me, and it was going to be a very long road ahead. But we at least managed to make it all a bit gentler and more intelligible in that moment. As of this September, it has been 15 years since Wallace’s suicide and two and a half years since my attempt. Like Wallace’s, my own decision to take my life had immediately followed an adjustment to my antidepressants. I remember it clearly: I’d been holding on so long as I’d still been reading, and when the reading was over and the enkindling darkness took its place, there was just barely enough left in me to pull myself up and pick up a phone, to articulate the necessary words and ask the professionals if they could possibly find some way to help me out. I’d still been searching in anguish for an escape as the walls closed in, a way to still win, to stick around.
Sadly, it was the prescribed dosage increase itself that hit me – as it is sometimes known to do – with another dark wave, knocking me back into the depths of myself, right as I’d been treading so very hard to reach a stable surface. I know Wallace’s suicide had been amid choppy chemical changes of his own, which is to say that we’d both still been fighting, and so these disparate outcomes were the product of random chance. There is a tragedy and humanity, I think, in one’s own desperate attempt at staying alive being the very thing that does one in – and I admit to sometimes feeling guilty for being the one who found salvation in his book instead of him, as though this salvation was itself cosmically predestined to be scarce.

When I’m asked what exactly I found in Infinite Jest, I limit myself to noting two things. I found powerful portraits of mental illness, and I also found empathy. Like I said, the book was my friend. But the thing is, I know that many others have very different things to say about Infinite Jest – about the book, its author, its ‘prototypical’ readers, the very idea of it, and the ethos it has come to represent. In her chapter ‘On Not Reading DFW’ (2016), Amy Hungerford defends her choice never to read it by arguing (among other things) that there’s no reason to think DFW could have anything valuable to say about women. More recently, in the London Review of Books this July, Patricia Lockwood said of Infinite Jest that ‘it’s like watching someone undergo the latest possible puberty. It genuinely reads like he has not had sex.’ Hungerford, Lockwood and the mainstream ethos generally dismiss the book’s intended and actual audiences as white, male and not to be trusted, driven by Stockholm syndrome, sunk costs or delusions of self-interested grandeur in calling the book genius or important.

I’m not exaggerating when I say that I find these critiques – so often snide or irreverent in their cadence – baffling, gaslighting, disempowering, at times even agonising. I can’t understand what they could possibly have to do with this book that I know as my friend, that I found myself in at my most alienated moment. And the bitter irony is that this ethos all concerns a man who, after writing such an empathetic book about mental illness, took his own life; for it is a collective instance of the very kind of empathy failure that I think Infinite Jest asks us to resist and helped me resist myself. I guess it is the least I can do for it now – and for my own survivor’s guilt – to join this ongoing chorus on the book with my own belting, discordant voice.

Infinite Jest was life-saving for me, but when I say this I don’t just mean that it had been saving me while I was reading it on that sofa, or even the times that I’ve read the book since. Infinite Jest is saving my life all the time. There’s a recurring motif in the book, a haunting symbol for all of our many mental demons: the Face in the Floor. It first appears in a second-person vignette as an evil presence that only you, the reader, can feel. You wake up from a nightmare, you look around, and you suddenly notice that there is the Face in the Floor beneath you. It is a Face that you know is evil, and you know this evil is only for you. But as soon as you notice this Face in the Floor, you are also convinced that it has actually been there all along.
You are certain of this, that its ‘horrid toothy smile [has been] leering right at your light all the time,’ and that it had simply been ‘unfelt by all others and unseen by you’ until now. In a later passage, this evil Face in the Floor – ‘the grinning root-white face of your worst nightmares’ – comes back, but this time, it’s your addiction. It ‘finally remove[s] its smily-face mask to reveal centerless eyes’, and you see that the Face in the Floor – your addiction – has now completely taken you over. The Face in the Floor has become your own. It’s ‘your own face in the mirror, now, it’s you’ for it has ‘devoured or replaced and become you’.

I think about the Face in the Floor every single day. I remind myself of it. One of the most harrowing things about mental illness is not anything captured by descriptions of its first-order symptoms, but rather the way it can convince you that these symptoms are just picking up on something that is and has always been the case, that was actually there all the time; and when you didn’t feel this way it was because you had been blind. Mental illness can persuade you that you’re now seeing the reality that had always been real, the Face that had always been there in the Floor – which is all to say that your epistemic position has simply been improved. So long as that is what you are being made to believe, then how can anyone expect you to also believe ‘this too shall pass’ (or anything of the sort), or to somehow just stop it from swallowing you up?

I’m no longer on that sofa or surrounded by those flames. But still, I’ll probably always be moving with and managing my own billowing shape. Mine is a synergistic and explosive Molotov cocktail of depression and ‘emotion dysregulation’. This basically means that my internal reality is prone to quickly and intensely turn itself upside down again and again – somersaulting through euphoria, despair, mania, shame, rage, paranoia, guilt, panic, bliss, self-aggrandisement, self-hatred, even within a single day. My challenge in the dissociated midst of these episodes will always be to find something from outside the moment to believe in, or to at least have faith that any such thing could even exist, and so to resist the recurring immersive insistence that only this moment and nothing before it is what’s real.

Maybe that’s why I needed to say all of this, to give my experience this reality and write it all down, and paper over at least one of the Floor’s Faces and preserve this here instead for myself; and maybe these revelations are also my redemption for that audacity to have been the one saved. But when I say that Infinite Jest is saving my life all the time, what I mean is that I still keep trying my very best to tell myself – because I still need and will keep needing to tell myself – what has become both my mantra and my prayer: it’s the Face in the Floor. It’s the Face in the Floor. It’s the Face in the Floor.

In the US, the National Suicide Prevention Lifeline is 1-800-273-8255, or text HOME to 741741 to reach Crisis Text Line. In the UK and Ireland, the Samaritans can be contacted on 116 123, or by email at jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at www.befrienders.org.
Mala Chatterjee
https://aeon.co//essays/how-infinite-jest-tethered-me-to-life-when-i-almost-let-it-go
https://images.aeonmedia…y=75&format=auto
History of technology
From its mythic beginnings in a Chinese garden, the story of silk is a window into how weaving has shaped human history
Some say that history begins with writing; we say that history begins with clothing. In the beginning, there was clothing made from skins that early humans removed from animals, processed, and then tailored to fit the human body; this technique is still used in the Arctic. Next came textiles. The first weavers would weave textiles in the shape of animal hides or raise the nap of the fabric’s surface to mimic the appearance of fur, making the fabric warmer and more comfortable. The shift from skin clothing to textiles is recorded in our earliest literature, such as in the Babylonian Epic of Gilgamesh, where Enkidu, a wild man living on the Mesopotamian steppe, is transformed into a civilised being by the priestess Shamhat through sex, food and clothing. Judaism, Christianity and Islam all begin their accounts of their origins with a dressing scene. A naked Adam and Eve, having eaten from the forbidden tree, must flee the Garden of Eden. They clothe themselves and undertake a new way of life based on agriculture and animal husbandry. The earliest textile imprints in clay are some 30,000 years old, much older than agriculture, pottery or metallurgy.

Persian Carpet Dealer on the Street (1888) by Osman Hamdi Bey (1842-1910). Nationalgalerie der Staatlichen Museen zu Berlin – Preußischer Kulturbesitz, Berlin. Courtesy Wikimedia Commons

In the 21st century, the Silk Roads have re-emerged as the catch-all name for a highly politicised infrastructure project across Asia. The name Silk Roads comes from the origin and spread of sericulture – the practice of making silk fibres – in which Chinese women have played a special role. The discovery of silk fibres is attributed to the Empress Ling Shih, known as Lei Zhu. Legend says a silk cocoon fell into her cup and began to unravel in the hot tea water while she sat under a mulberry tree. Another legend tells that it was a Chinese princess who brought sericulture out of China to the Kingdom of Khotan by hiding silkworm eggs in her hair when she was sent to marry the Khotanese king.

The Modern Chinese word sī (絲, ‘silk, thread, string’) goes back to a form commonly reconstructed as Middle Chinese *si. Linguists believe that the word journeyed via nomadic tribes in western China, who also adapted the Mongolian word sirkeg (‘silk fabric’) and the Manchu sirge or sirhe (‘silk thread, silk floss from a cocoon’). The Greek noun sērikón and Latin sēricum come from the same Chinese root. The English word silk, Old Norse silki and Scandinavian silke – transferred into Finnish and Karelian as silkki, Lithuanian šilkas, and Old Russian šĭlkŭ – all have the same origin in Chinese. It took approximately one millennium for the word ‘silk’ to travel from China to northern Europe via Central Asia and Iran: 10,000 kilometres in 1,000 years.

In ancient Asia, silk was valuable and coveted, even by the powerful. It is said that in the year 1 BCE, China paid off invaders from the north with 30,000 bolts of silk, 7,680 kg of silk floss and 370 pieces of clothing. Among the less powerful, textiles possessed even greater value. We know from 3rd- and 4th-century Kroraina kingdom legal documents (from Chinese Turkistan, present-day Xinjiang province) that the theft of ‘two jackets’ could occasion a legal case and that ‘two belts’ were significant enough to appear in wills. The classical Greek and Roman world thought of India as the site of great textiles and garments.
The Romans marvelled at Indian saffron (Crocus indicus), a precious spice and dye plant yielding a bright yellow. Indigo was among the most valuable commodities traded from Asia. Diocletian’s Edict of Maximum Prices of 301 CE tells us that one Roman pound of raw silk cost the same as nine years’ wages of a smith. In Rome, silk became the symbol of an extravagance and decadence that some saw as corrupt and anti-Roman. Cleopatra was also said to wear quite inappropriate clothing of Chinese origin, revealing her breasts and therefore also her vanity, and indicating loose morals and greed. The Roman emperor Elagabalus was described contemptuously by his contemporary Herodian, who wrote that the ruler refused to wear traditional Roman clothes because they were made of inferior textiles. Only silk ‘met with his approval’. The Roman poet Horace dismissed women who wore silk, arguing that its lightness meant that ‘you may see her, almost as if naked … you may measure her whole form with your eye.’

Wall painting of two young Roman women wearing fine translucent fabric. Roman, 1-75 CE. Gift of Barbara and Lawrence Fleischman. Photo by J Paul Getty Museum, Los Angeles

The technology behind silk had long been a historical puzzle. The recent archaeological discovery of a 2nd-century BCE Han dynasty burial chamber of a woman in Chengdu has now solved it. Her grave contained a miniature weaving workshop with wooden models of doll-sized weavers operating pattern looms with an integrated multi-shaft mechanism and a treadle and pedal to power the loom. Europeans wouldn’t devise the treadle loom, which enhances power, precision and efficiency, for another millennium.

Chengdu loom model (digital reconstruction). Photo courtesy China National Silk Museum, Hangzhou, Zhejiang province

This technology, known as weft-faced compound tabby, also emerged in the border city of Dura-Europos in Syria and in Masada in Israel, dating to the 70s CE. We can, however, be confident that the technique known as taqueté was first woven with wool fibre in the Levant. From there, it spread east, and the Persians and others turned it into a weft-faced compound twill called samite. Samites became the most expensive and prestigious commodity on the western Silk Roads right up until the Arab conquests. They were highly valued international commodities, traded all the way to Scandinavia.

Fragments of silk samite from fabric no 1 from Oseberg, as drawn by Sofie Krafft. Photo by Ann Christine Eek. © Museum of Cultural History, Oslo, Norway

In Norway in 834 CE, two women were buried in the large Oseberg Viking ship, loaded with silk textiles, including more than 110 silk samite pieces cut into narrow, decorative strips. Most of the Oseberg silk strips are of Central Asian origin and they were probably several generations old when they were buried. The Old Norse sagas speak of exquisite fabrics that were perhaps samites, even calling them guðvefr, literally ‘God-woven’. These samite strips could have come to Scandinavia via close contact with the Rus communities settled along the Russian rivers, who could negotiate favourable conditions of trade with Byzantium. We know from historical sources that if a Rus merchant lost a slave in Greek territory, he would be entitled to compensation in the form of two pieces of silk. However, Byzantium also set a maximum purchase allowance for the Rus, and the maximum price for silk was 50 bezants.
These silks that the Rus were trading in Byzantium, and then again with the Scandinavians, came from the Syrian cities of Antioch, Aleppo and Damascus. Most early medieval silks in Europe are Byzantine, not Chinese. The Scandinavians also exported fur products to Asia that fuelled luxury consumption in Byzantium and eastwards, including coats, but also trimmings for hats and boots, and hems for kaftans and collars. The combination of fur and silk remained popular in prestige clothing up to the Renaissance kings of Europe, and still exists in royal ermine robes.

Under the Muslim dynasties of the Umayyads (661-750), the Abbasids (750-1258), the Ilkhanids (1256-1335) and the Mamluks (1250-1517), diplomatic clothing gifts evolved into robes of honour. In Arabic, these are called khilʿa or tashrīf, and they are precious garments that a ruler would bestow upon his elites. They would then wear them to show loyalty. Silk gift-giving worked in both directions, it seems, and a caliph might receive hundreds of garments from one of his subjects. A huge textile industry, private as well as royal, flourished in Baghdad in the 9th to 10th centuries, employing at least 4,000 people in silk and cotton manufacturing alone. Precious dyes, such as kermes from Armenia, offered opportunities for exclusive designs of bright-red fabric. Early Islamic scholars praise Central Asia not only for its silk but also for its wool, linen, fur and especially fine cotton. The 10th century also saw the spread of Islam, and the advance of trade networks led to a renaissance in West African weaving and textile production.

The Rules and Regulations of the Abbasid Court state that, in the year 977 CE, the wealthy Adud al-Dawla sent the caliph gifts of 500 garments in a full range of qualities, from the finest to the coarsest – an excellent example of ‘silken diplomacy’. The Abbasid dynasty invested in palace textile workshops producing sophisticated patterns and techniques, such as the renowned tirāz. Originally a Persian loan-word, the term tirāz eventually became used for exquisite decorated or embroidered fabrics with in-woven inscriptions of the name of the ruler or praising Allah.

The purpose of tirāz textiles, at least to begin with, may have been a form of tax or tribute that was paid by provinces in Central Asia to honour new rulers when they took power. The term also came to be the name for a workshop where such exquisite fabrics with inscriptions were produced. The author Ibn Khaldūn, who wrote in the 14th century, dedicated a whole chapter to tirāz textiles in his book Muqaddimah:

Royal garments are embroidered with such a tirāz, in order to increase the prestige of the ruler or the person of lower rank who wears such a garment, or in order to increase the prestige of those whom the ruler distinguishes by bestowing upon them his own garment …

A 14th-century silk and metal-thread slit tapestry roundel. At its centre, an elegant ruler is seated on his throne, clad in a blue and gold robe or kaftan girded by a golden belt. He has a beard and a Persian-style crown, and is flanked by two seated noblemen, both wearing kaftans; on the right side is a Mongol prince or general, under whose foot is a blue tortoise, a typical Chinese symbol of longevity and endurance. Behind the throned ruler stand two guards wearing the same helmet-like hats.
The medallion is decorated with an outer band of good wishes woven in Arabic golden letters, and inner bands of animals and imaginary creatures. Photo by Pernille Klemp, courtesy of David’s Collection, Copenhagen/Wikimedia Commons

Abbasid rule ended in 1258 when Baghdad was conquered by the Mongols under the command of Hulegu, a grandson of Chinggis Khan. Hulegu took the title of Il-Khan to signal that he was subordinate to the Great Mongol Khans of China. One of his successors is portrayed in a silk tapestry roundel, uniting symbolic and aesthetic concepts from both the Islamic and Chinese realms (see image above). The depicted figures – Mongols, Persians and Arabs – manifest the union of ethnic and political groups in an idealised image of the Pax Mongolica. The technical features of this tapestry, made using a gold thread with a cotton core, suggest it may have been made in a cotton-growing region yet woven by Chinese weavers. The Mongols are famous for many things; it is less well known that they were great patrons of arts, crafts and textiles. The Ilkhanid dynasty ruled for some generations until it collapsed around 1335.

European imports of silks from China and Central Asia rose steadily in the Middle Ages. In 1099, after the capture of Jerusalem by the knights of the First Crusade, they increased again. The creation of Christian states in the Holy Land opened new trade routes, which facilitated the rise of the Italian city-states. The westward expansion of the Mongol Empire under Chinggis Khan and his successors also helped augment the power of these Italian trading centres. Great quantities of raw silk coming into Italy helped stimulate creative and technological progress in Europe, generating new techniques, patterns and technologies. Lampas and other figured fabrics especially fuelled innovation in patterning and the introduction of the treadle loom in medieval Europe. While China was an important source of silk and other goods, South Asia had long been part of exchange networks linking the Indian Ocean world with the Gulf, Africa, Europe, and South-East and East Asia.

Economic and political shocks from the 14th century led to surging prices for silk in European markets. The value of silk thread per ounce approached the price of gold. In the early 15th century, the Chinese white mulberry (Morus alba) began to be successfully cultivated in Europe, in particular in Lombardy in Italy. We should not think of European silk cultivation and silk weaving only as a short-lived business venture or a mere adjunct to Chinese or Asian dominance. Italy remained a leading global producer over several centuries, first of silk fabrics and then of silk threads, maintaining its position as the world’s second largest exporter of silk threads after China into the 1930s. To this day, Italian capacity and expertise in silk production survives.

New silk institutions also emerged. In Valencia in Spain, between 1482 and 1533, the ‘Silk Exchange’ was erected to regulate and promote the city’s trade. It served as a financial centre, a courthouse for arbitration to solve commercial conflicts, and a prison for defaulting silk merchants.

The Hall of Columns in the Lonja de la Seda or ‘Silk Exchange’ in Valencia, built 1482-1533. A UNESCO World Heritage Site of cultural significance, its impressive pillars are shaped like z-spun threads.
Photo by Trevor Huxham/Flickr

Many legends arose around silk, primarily because of its value, with the technology of sericulture and silk production jealously guarded in China for millennia. Perhaps the most famous legend tells of two monks who smuggled silkworm eggs to Europe, thus breaking the production monopoly and revealing how silk was made.

In the second half of the 17th century, Paris became the centre of European textile production, design and technique. This included the emergence of a luxury shopping environment of boutiques and fashion houses. Fashion magazines such as Le Mercure galant reported on style and new trends from the royal court. The largest Parisian fashion houses, such as the Gaultier family business, supplied the wardrobes of the royal family and the nobility, and held shares in the French East India Company. King Louis XIV and his minister Jean-Baptiste Colbert invested in fashion and textile production as an important innovative sector to showcase France’s greatness.

Illegal imports of foreign textiles and luxury copies posed a challenge for French trade and domestic production. French consumers had a strong appetite for foreign textiles, and colourful, cheap fabrics flooded the market. Illicit products from Asia arrived via trading posts in the Philippines and Mexico, putting pressure on European fabrics and fashionable goods in terms of price and quality. King Louis XIV of France and his grandson, Philip V of Spain, sent Jean de Monségur, an industrial and commercial spy, on a mission to Mexico City to collect intelligence on the legal and illegal trade between India, China and Europe. His detailed intelligence report addressed the trade in textiles, clothing and fashion. With great concern, he wrote:

[T]he Chinese have got hold of our patterns and designs, which they have utilised well and can today produce quality goods, although not everything that comes from over there can match the European standard … The times are over when one could assume that the Chinese are clumsy, without talent or trade talent, or that their goods are not in demand.

Monségur also noted that Chinese silks were highly competitive because of their lower prices. In Mexico, even commoners wore Chinese silk clothing.

When the victorious Mongols conquered new land, they selected artisans, especially weavers, and saved their lives because they were crucial to the expanding empire’s needs and ambitions. These skilled craftspeople were then ordered to settle where the empire needed them, hence the large-scale forced movements of textile workers within the Mongol Empire. Beginning in the 15th century, the colonisation of the Americas brought about the largest forced textile labour movement in history. It forcibly displaced some 13 million people, transporting them from West Africa to the Caribbean and North America. Coerced labour was central to the establishment and development of a textile industry heavily dependent on cotton and indigo. Even today, cotton harvesting is very labour intensive: every year from September to October, millions of workers pick cotton in Turkmenistan, Uzbekistan, Pakistan, India, the United States and China. Cotton pledges have been signed by textile and fashion companies committed to banning forced labour in the cotton harvests, yet the massive need for labour and the low price of cotton are obstacles to these efforts.

‘Christmas greetings from the Danish West Indies’: postcard from the cotton plantation Bettys Hope on the island of Saint Croix, a Danish colony until 1917 and today part of the US Virgin Islands. Courtesy of the Royal Danish Library, Copenhagen

Some 60 per cent of the 40 million people employed by the garment industry today are in the Asia-Pacific region. Working conditions and pay levels are often poor, in part because of the pressure to lower production costs. The implications for workers’ health and safety are often terrible: for example, when the poorly constructed Rana Plaza complex in Bangladesh collapsed in 2013, more than 1,100 garment workers lost their lives.

Everyone knows that clothing can symbolise power, legacy and glory, as well as ethnic or national identity and aspirations. In male power-dressing, we observe over time how clothing emphasises the ruler’s head, shoulders and torso, and a belt highlights bodily strength. Jewellery, weapons and other royal insignia serve as garnish. The choice of simple clothes, preferred by many Left-wing leaders, also projects meanings – and the source of their power.

The last emir of Bukhara, Alim Khan (1880-1944), dressed in a deep-blue silk robe. Photo by Sergei Prokudin-Gorskii. Courtesy Wikimedia Commons

Among the elite in many parts of Eurasia, Western dress practices became symbolic of a progressive mindset. In the late 17th century, Peter the Great imposed Western clothing on the civil administration of Russia. In Meiji-era Japan, the ruler and his family adopted full Western attire. The Japanese emperor would wear the sebiro, the Japanese term for ‘suit’ derived from Savile Row, the London street that was home to the finest gentlemen’s tailors.

Emperor Meiji in 1873, dressed in Western military parade uniform and with an admiral’s hat. Photo by Uchida Kuichi (内田九一) (1844-75). Albumen silver print from glass negative with applied colour. Courtesy of The Met Museum, New York

In the early 20th century, clothing became so accessible and cheap that rulers could demand that their subjects dress in a certain way and adapt their clothing to the ruler’s politics. They wanted the general population to mirror the rulers’ values, political beliefs and ambitions. For example, in 1925, the Greek dictator Theodoros Pangalos imposed a law stipulating that women’s dresses should not rise more than 30 cm from the ground. The same year, Atatürk’s Hat Law was passed in Turkey, another historical example of clothing regulations being used as a political instrument to orient, redress or change the mentality of an entire society. Wearing a Western hat and abandoning the traditional Ottoman and Islamic headgear of the turban and fez became a political act of adherence to the Kemalist republic. Men’s headgear became a potent symbol of ideology, and the ‘wrong’ hat was penalised with fines and, occasionally, even with capital punishment.

At the Yalta Conference in February 1945, Winston Churchill wears a civilian double-breasted wool coat; Franklin D Roosevelt, a civilian suit under a cape with tresses and a fur collar; and Stalin, a double-breasted Soviet uniform whose design mirrors both earlier Tsarist and 20th-century European uniforms. A Persian carpet from western Iran forms a connection between them all. Photo courtesy of Wikimedia

In the 20th century, military uniform design and cut followed those of the country’s allies and ambitions.
We can see this in the military uniforms used across Eurasia during the Cold War, with a ‘communist’ style in countries allied with the Soviet Union or China, versus the ‘capitalist’ NATO styles used by the West’s allies.

It is notable that textile metaphors gained currency to represent both the era of the Cold War, with its ‘Iron Curtain’, and the period’s historic end in 1989, with the ‘Velvet Revolution’ in Czechoslovakia. The expressions play on both the softness of fabric (velvet) and its capacity to cover and conceal (curtain). In popular culture, it was denim and blue jeans that caught the imagination of young people in the East, as symbols of youth and of political and moral freedom. The name ‘denim’ comes from the French city of Nîmes in Occitanie, a major producer of blue dye from woad (Isatis tinctoria) and synonymous with workers’ blue cotton cloth. The word ‘jeans’ connects to Gênes, the French name for the Italian city of Genova, from where such coarse fabrics were exported.

Throughout history, and throughout the world, rulers have tried to control people by regulating their clothing. Regulations can be prescriptive or proscriptive, and carry gendered and social meanings and ramifications. Dress codes – from the military to school uniforms – indicate political and social alignment, to visually express unity, loyalty and adherence. Meanwhile, bans, prohibitions or censure of the dress practices of certain individuals or groups aim to exclude. When the Chinese emperor Zhu Yuanzhang, the founder of the Ming dynasty, took the throne in 1368, he banned the former regime’s style of clothing, branding it ‘barbaric’, and ordered a return to the clothing style of the Han dynasty.

Clothing regulations can be social or legal, and across Eurasia many have attempted to regulate how people dress to enforce an ideal, or to protect national production from foreign imports. Sumptuary laws (from Latin sumptus, meaning ‘expense’) could regulate both manufacturing and trade, as well as national moral economies that would influence consumption patterns and values. They represented social, gendered and racial hierarchies, and expressed them visually. Many regulated the use of jewellery and the practices surrounding feasts or funerals. The main focus was always dress practices, with greater significance given to fabrics, fibres, weave and decoration than to cuts and tailoring. In Lima, Peru – in Spanish colonial America – sumptuary laws stipulated that women of African or mixed African and European descent were prohibited from wearing woollen cloth, silks or lace – though forbidden luxury fabrics often simply reappeared as cheaper copies, and trade labels were faked.

Fabric merchant in Samarkand, photographed between 1905 and 1915 by Sergei Prokudin-Gorskii. The merchant’s goods include striped silks, printed cotton, wool fabrics, and carpets. He wears a white turban and a silk kaftan adorned with Chinese-inspired floral motifs. Photo courtesy of the Library of Congress

As globalisation intensified, it brought about technological breakthroughs in transport, communication and trade, through which dress has become more standardised, with many rich and diverse clothing cultures of the world diminished. Fortunately, the early 20th-century photographers Albert Kahn and Sergei Prokudin-Gorskii captured the clothing of many glorious local traditions of Central Asia.
Today, we can see some of these local costumes only in tourist shows and museums. Not surprisingly, we know much more about the textiles and clothing of the elite than about the attire of ordinary people on the Silk Roads. Archaeology can help. The Chehrābād tunic belonged to a salt-mine worker, perhaps trapped and killed when the mine collapsed around 400 CE. It was woven of monochrome cotton, cut and sewn into a knee-length tunic with long sleeves. Perhaps the tailor knew the body size of the worker or about his hard toil in the salt mines, since gussets were inserted in the armpit areas and at the hips to provide him with greater freedom of movement. Weaving mistakes occur in many places, as if the cloth had been woven in a hurry, or maybe because this was, after all, a work outfit.

The history of textile production has always been linked to cheap labour. Shepherding, sericulture, and cotton and flax cultivation require many hands, time, constant tending, efficiency, and standardised tools and techniques. The mechanisation of the clothing industry and of textile production therefore produced dramatic change. Richard Arkwright’s inventions in the 18th century were put into industrial-scale production when the English entrepreneur introduced the spinning frame, adapted it to use waterpower, and patented a rotary carding engine. Arkwright’s achievement was to combine power, machinery, semi-skilled labour and a new raw material, cotton, to create mass-produced yarn.

The French city of Lyon enjoyed geographical advantages that helped it become the centre of a silk ‘tiger economy’. The hill of Croix-Rousse housed factories, with every street filled with the clamorous sounds of mechanical looms. With its 30,000 canuts (the nickname for Lyon’s silk workers), this industrious district turned Lyon into a major hub for textile production, especially silk-weaving, providing garments for the royal court and the nobility of Europe.

In the social world of the rising 18th- and 19th-century Western bourgeoisie, we find many products of the Silk Roads, both in textiles and designs. Ladies wore fashionable, soft pashmina shawls with Iranian and Central Asian paisley patterns, a style that had travelled far and came to represent the bonds between Britain and its empire in Asia. Young and fashionable women in European royal families would inspire others to wear these colourful soft shawls as a new accessory. One of the most iconic ‘influencers’ was Empress Joséphine of France, who integrated pashmina fabrics and paisley patterns into her wardrobe.

Portrait of Empress Joséphine (c1808-9) by Antoine-Jean Gros. Courtesy of the Musée Masséna, Nice/Wikipedia

Women of the Spanish Empire would wear the mantón de Manila, also known as the Spanish shawl, which takes its name from Manila in the Philippines, from where it was traded eastwards over the Pacific into the Spanish Empire of the Americas. Originally, it was a silk garment adorned with embroidery, woven in Southern China, and traded from the late 16th century via Manila and the Spanish-American colonies, then further into Europe via Spain.

Russian girls in a rural area 500 km north of Moscow, photographed by Sergei Prokudin-Gorskii in 1909. Industrially woven, colourful printed fabrics were accessible even in remote villages, and likely used, re-used, sewn and mended.
At this time, dyes were chemically synthesised, developed amid the industrial competition between Germany, France and the UK in the race to patent new synthetic dyes. Courtesy of the Library of Congress In An Inquiry into the Nature and Causes of the Wealth of Nations (1776), Adam Smith wrote that trade was not only mutually beneficial to trade partners but also to society as a whole. To illustrate his argument, he explored the competitive advantages of cloth compared with wheat. Textile production was to Smith a sign of economic dynamism. It was only following the French Revolution that clothing regulations were abolished and the nation’s citizens could dress as they wished: ‘Everyone is free to wear whatever clothing and accessories of his sex that he finds pleasing.’ However, the very same decree stipulated the obligation to visibly wear the cocarde knot of red, white and blue ribbons, emblematic of the French Revolution. It was implicitly asserted that clothing should be gender-appropriate and respect earlier dress regulations. Two Germans with particular textile histories would revolutionise the political landscape of the 19th century. Friedrich Engels was the scion of the family behind the cotton company Baumwollspinnerei Ermen & Engels in western Germany, and he settled in the English city of Manchester, a leading centre for global cotton trade and manufacture. Karl Marx was greatly influenced by his close friend Engels and by the textile industry in particular. In Das Kapital (1867), Marx illustrated his arguments about the working classes by referring to the Lumpenproletariat – or the ‘proletariat of rags’ – and by using the example of an overcoat as an allegory for the measure of labour, resources, technology and the uneven rewards of capitalism. ‘Drilling and training for the revolution, spinning and weaving for the people’: Chinese poster, 1974. Courtesy of the Landsberger Collection/chineseposters.net In the 20th century, political transformations and new economic conditions and ideologies have negatively impacted artisanal weaving and other kinds of traditional crafts globally. Much intangible textile craft culture has been lost; new technologies have made handicrafts obsolete or very expensive; urbanisation has standardised fashion; and people no longer want to carry out what is seen as tedious textile work. The word ‘text’ comes from Latin texere (‘to weave’), and a text – morphologically and etymologically – indicates a woven entity. We can therefore say that history starts not with writing but with clothing. Before history, there was nudity, at least in the Abrahamic tradition; clothing thus marks the beginning of history and society. The representation of nudity as part of a wild and pre-civilised life mirrors the European colonial perspective of the naked human as ‘wild’. Across the world today, there are two main ways to dress: gendered into male and female, and stylistically into clothing tailored to fit the body, or draped/wrapped around it like the Roman toga or the Indian sari. Fitted clothing dominates globally, especially after the Second World War, with blue jeans and T-shirts now ubiquitous across all continents. Today, a T-shirt on sale in any shop around the world is the result of a finely meshed web of global collaboration, trade and politics. 
From cotton fields in Texas or Turkmenistan, to spinning mills in China, garment factories in Southeast Asia, printers in the West, and second-hand clothing markets in Africa, a T-shirt travels thousands of kilometres around the world in its lifetime. On average, a Swede purchases nine T-shirts annually, and even if they are made to last 25 to 30 washes, consumers tend to discard them sooner. Greenpeace found that Europeans and North Americans, on average, hold on to their clothes for only three years. Some garments last only for one season, either because they fall out of fashion, or because the quality of the fabric, tailoring and stitching is so poor that the clothes simply fall apart. This is the impact of fast fashion that has taken hold since the beginning of the 21st century: for millennia, clothing had always been expensive, worth repairing and maintaining, and made to last. Along with the acceleration of consumption came falling prices and an ever-narrowing margin for profit. The fast-fashion business model requires seamless global trade, inexpensive long-distance transportation, cheap flexible labour and plentiful natural resources. That equation is changing in a world that is warming and where trade barriers are going up. The future of fabrics, textiles and clothing is bound up in the great themes of the present – and the future. This Essay is based on the chapter ‘The World Wide Web’ by Marie-Louise Nosch, Feng Zhao and Peter Frankopan, from the UNESCO report Textiles and Clothing Along the Silk Roads (2022) edited by Feng Zhao and Marie-Louise Nosch.
Peter Frankopan, Marie-Louise Nosch & Feng Zhao
https://aeon.co//essays/silk-is-a-thread-that-opens-up-the-weave-of-human-history
https://images.aeonmedia…y=75&format=auto
Earth science and climate
Three earthquakes hit Mexico City on the same date in 1985, 2017 and 2022. The coincidence left the city stranded in time
Shortly before 7:19am on 19 September 1985, time began to shift in Mexico City. It started with a tremor, emerging from the subduction zone on the Pacific coast, about 300 km southwest of the metropolis. The magnitude 8.1 quake took less than a minute to travel through the surface of southern Mexico before arriving beneath the city, where its shaking, dramatically amplified by the soft lakebed soils, killed, according to government data, around 10,000 people (the real number is likely much higher – perhaps as many as 40,000 people), and immediately caused 400 buildings to collapse (3,000 would eventually be demolished). Telephone lines went down, sewage contaminated the drinking water, roads into and out of the city became blocked. In the aftermath, up to 700,000 of the estimated 9.1 million residents in the Federal District of Mexico City were left homeless – the state response to the disaster was catastrophically incompetent. And behind it all, as the event receded into memory, time itself began to take on ever-stranger forms. During the following years, while city and federal governments grappled with the political fallout, the anniversary of the earthquake became a date on which the state expresses its contrition for the past and demonstrates its preparedness for the future. Every year since 1985, a minute’s silence is held on 19 September, followed by commemorative events, the unveiling of memorials and monuments, the inauguration of new preventative technologies and infrastructures, and the promulgation of risk-reduction legislation – all to ensure that similar disasters are avoided. These state performances are also met with protests from residents demanding the government be held to account for rampant corruption in the real estate industry, which had led to the substandard construction in many of the collapsed buildings. In the early 1990s, evacuation drills were added to the commemorative events of 19 September. And in the early 2000s, these anniversary evacuations followed the sounding of the city’s Seismic Alert System (Sistema de Alerta Sísmica Mexicano, or SASMEX), which was gradually being implemented across the metropolis. From loudspeakers on street corners, the alert begins as a pulsing, vibrating rhythm that is more eerie than alarming. Over this sound, a cold monotone voice repeats the words ‘Alerta sísmica’. The anniversary becomes a day for declaring that the events of 19 September 1985 will never happen again. It turns the earthquake into something to be memorialised: a historical event. All that changed in 2017, when the alarm sounded twice on 19 September. Once for the memorial and commemorative evacuations, and then again, two hours later, for a devastating magnitude 7.1 earthquake that killed more than 300 people and levelled dozens of buildings. For the survivors, the coincidence begins to create profound temporal disorientation. How, survivors ask each other, could this be happening again? How could the two most devastating earthquakes in Mexico City’s history strike on the same date? Some residents told me that when the second alarm sounded, they assumed it was another commemoration of the 1985 earthquake rather than a warning of a new tremor, and so they remained in their buildings until the city began to shake. 
Fernanda, a woman living in southern Mexico City, told me: I simply could not believe it… I heard the alert and thought to myself: ‘That’s strange, another drill.’ I did not think: ‘That’s another earthquake.’ I guess I thought that earthquakes would only come [during the other] 364 days of the year. The apparent impossibility of the coincidence wreaked havoc with the past and present: people ran home to check on their apartments, only to inadvertently run back in time to where they lived in 1985. My friend Eli told me that when the city’s second alert began sounding before the 2017 earthquake, he became ‘atascado’ (meaning stranded, as in jammed, stuck or overwhelmed). ‘Here was the alert saying that an earthquake will happen,’ he told me, shaking his head. ‘But a large part of me is just wondering: Where am I? Is this really happening?’ Another friend, Carlos, described a similar sense of confusion. Earthquakes that were once separate Earthly events were now interconnected ‘reminders that the Earth is always happening to us’. And for Elena, whom I met in 2019 at a protest for still-homeless victims from the 1985 and 2017 disasters, the earthquakes never really ended. Though the tremors stopped, their effects lingered. These responses all reflect a sensibility that is now common in Mexico City: when the 2017 earthquake struck, time itself shifted a little. Younger people, who ‘remember’ the 1985 disaster only through its annual commemoration, find themselves stranded between the inertia of human-historical time – of clocks, calendars and national anniversaries – and the demands of a looping geological moment. Since then, for many residents, it was as if the present became ceaseless and extensive, and the past and the future stopped being mutually exclusive temporal categories. A 2020 survey by the newspaper El Financiero showed that Mexico City’s residents were particularly fearful of earthquakes. But, during my time in the city, I noticed that this fear was new and different: after the 2017 event, people had become more afraid of earthquakes. Like people living in other seismic zones, the city’s residents are accustomed to experiencing earthquakes, but the coincidence in 2017 proved too strange to simply consign to Earth’s arbitrary movements. Alongside the increased fear of seismicity, that same survey showed that much of the city is now frightened of 19 September – as a date. In the city’s collective imagination, the earthquakes that have occurred on other dates since 2017 are mere geological flotsam; 19 September, however, is a day that now belongs to Earth. Each anniversary, residents will attempt to either work from home, find open spaces away from buildings, or leave Mexico City entirely. All are worried about being caught somewhere precarious when the next 19 September arrives. And each year, fewer and fewer attend the protests, not because they are losing interest in justice for those who lost their homes in 2017, but because they are terrified of being in the city on the anniversary. I am an anthropologist who writes about time, the state, and how people experience strange, inconceivable events. During the six years I conducted ethnographic research in Mexico City, I learned a little about how the geological coincidence in 2017 has shifted residents’ experience of time. One change is that the unlikelihood of the coincidence showed that time was never really under human control. 
Though a geological event can seem to have ended from a human perspective, it may, as Carlos told me, still be ongoing for Earth. In geological terms, the interval between 1985 and 2017 is just an instant. This dilation of human time by Earth becomes especially visceral as each anniversary approaches, and geological forces promise once again to gather human futures into an ongoing Earthly present. Though time might seem to be advancing for humans – the future becoming the present, the present the past – this temporal flow is contained in one duration for Earth: a long geological now. In the temporal imagination of Mexico City’s residents, it’s as if 19 September has become three things: a date on the calendar, a reminder of events in the City’s history, and a marker of the inhuman forces that have ravaged Earth in perpetuity. For many residents, the categories of past, present and future have become subject to the whims of a capricious Earth. Opening a geology textbook unleashes a torrent of metaphor and analogy in which Earth appears to live and breathe. Slopes are described as ‘retreating’, mountains can be ‘revived’, streams ‘defeated’, plains ‘undulating’, walls ‘hanging’, glaciers ‘pulsating’ and rocks ‘fatigued’. As a descriptive science, noticing deep-time processes that appear static in human lifetimes – like the movements of tectonic plates – requires a turn to the metaphoric. Earthly metaphors seem to provide a sense of stability in our everyday lives that liquid metaphors can’t. We ‘lay the groundwork’ for plans; we seek a ‘sure footing’ or a ‘steady foundation’ on terra firma. Perhaps this sense of stability is why seismology offers such potent metaphors for massive, sudden or irreversible change. We might hear about ‘tectonic shifts’ in values and meanings, or entrenched political ‘fault lines’, or of an event’s inevitable ‘aftershocks’. This last metaphor, aftershock, is particularly malleable. It is used to describe post-traumatic stress disorder, interest rate rises, the fallout of the COVID-19 pandemic, and many other moments in which changes produce lasting effects. ‘Aftershock’ is a metaphor that locates an event by pointing to its consequences, using a linear imagination of time to move from cause to effect. But looking at the experience of aftershock sequences can open our imaginations to how time’s coherence is contingent upon the stability of an erratic Earth. The term ‘aftershock’ comes from the Japanese seismologist Fusakichi Omori, who realised in 1895 that three major Japanese earthquakes – in Kumamoto (1889), Nōbi (1891) and Kagoshima (1893) – were in fact a single, staggered event. By identifying their contiguity, the similarity of their wave forms and other factors, he positioned these discrete events into an ongoing process that became known as the ‘aftershock sequence’. Omori’s Law, and Tokuji Utsu’s later amendments (known collectively as the Omori-Utsu Law) state that, after a shallow earthquake, the parts of the fault that slipped will readjust causing connected earthly movements, but over time, the probability of those events will diminish. In concrete terms, the likelihood of an aftershock the day after an earthquake will be half what it was the day of the earthquake, around one-tenth by the 10th day, and so on. In the mid-20th century, the Gutenberg-Richter Law added that the higher the magnitude of the mainshock, the more frequent and stronger are its aftershocks. 
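The decay described above can be written compactly. As a minimal sketch of the modified Omori formula – assuming the commonly cited textbook values of the decay exponent p ≈ 1 and the offset c ≈ 1 day, neither of which is given in the essay – the aftershock rate t days after a mainshock is:

\[ n(t) = \frac{K}{(c + t)^{p}} \]

With c = 1 day and p = 1, the rate one day after the mainshock is n(1)/n(0) = c/(c + 1) = 1/2, and by the 10th day it is n(10)/n(0) = 1/11, roughly one-tenth – matching the one-half and one-tenth figures described above.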
Importantly, though aftershock timings, numbers and locations broadly conform to these statistical rules, they remain stochastic. Paying attention to different forms of aftershock transforms the linear sequence of events into something strange and less determinate. Context renders metaphor uncanny. For instance, an important variable is the speed with which tectonic plates move. Along the San Andreas Fault, which moves around 37 mm per year, aftershock sequences tend to end about 10 years after an earthquake. In the New Madrid seismic zone of the eastern United States, however, tectonic plates move at close to 0.2 mm per year, so any earthquake that happened there before 2012 is considered an aftershock of an earthquake from 1812. In the 14 months after the 1891 Nōbi earthquake in Japan, 3,090 aftershocks were recorded; by 1975, three to four still registered each year. Some seismologists theorise that the magnitude 7.6 earthquake that struck Chiloé Island in Chile in 2016 was an aftershock of the magnitude 9.5 Valdivia earthquake in 1960. Moreover, mainshocks will often be preceded by foreshocks. The 1960 Valdivia earthquake was preceded by a magnitude 8.1 earthquake 33 hours prior; likewise, the magnitude 9.2 Sumatran earthquake of 2004 might have been presaged by a magnitude 7.6 foreshock in 2002. It’s not hard to imagine that a foreshock of that severity would have been considered a mainshock until a larger earthquake occurred. For example, on 24 August 2016, the municipality of Accumoli in Italy was struck by a magnitude 6.2 earthquake. Thousands of aftershocks followed it, hundreds each day, some up to magnitude 5.5, with a generalised decay in frequency and force. Then, on 30 October, a magnitude 6.6 earthquake struck the region – an aftershock that repositioned the August mainshock as a foreshock. An aftershock sequence has a slippery temporality, despite its use as a metaphor denoting a linear succession of events. The relationship between foreshock, mainshock and aftershock only becomes clear long after the extended event has died down, sometimes at a scale that troubles conceptions of causality. This form of seismic time is not knowable through the human experience of a single seismic event. Instead, it is a geological process instituted within an earthquake that endures beyond it, distributed across years or decades. Under such conditions, ideas like ‘past’ and ‘future’ are shifting, contingent categories. One cannot be sure if their ‘present’ is before or after a mainshock – before or after (or within) the geological ‘present’ of Earth. This describes how many people have experienced the uncanny geological temporality of Mexico City. Though the 2017 earthquake was not an aftershock of the 1985 tremor, the unpredictable temporality of an aftershock sequence, particularly the notion of an uncertain ‘present’, is a means of understanding residents’ sense that geological time had displaced the time of Mexico City. What might be ‘after’ for the city might still be ‘before’ for Earth. I was in Mexico City in 2021, in the weeks before 19 September. As the date approached, the sense of an uncertain present began to return once again. While travelling through the city, overhearing conversations or talking with friends, I began to get a sense of these growing temporal anxieties. Some residents began to worry that this 19 September would be Mexico City’s last. 
Shortly before 9pm on 7 September, as I sat reading in the window of my apartment, with the buzz of fat summer raindrops filling the street below, I heard the speakers of the city’s early warning system suddenly crackle to life. The other residents and I ran downstairs to await the coming earthquake, which arrived 20 seconds or so later. The electricity immediately cut out as a magnitude 7.1 quake, with long, rolling waves, turned the city’s surface into a piece of fabric billowing in a gentle breeze. It lasted for around two minutes. Fortunately, despite its magnitude and duration, the earthquake caused little damage. But, climbing back to our apartment, I received a text message from a friend: ‘No way, it’s September 7, again.’ Four years earlier, the earthquake of 19 September 2017 had been prefigured by an earthquake on 7 September. The tremor we had just felt began to appear like a geological promise: a confirmation that Mexico City was headed for another disaster on 19 September 2021. We had 12 days. The odds were so low that an earthquake would appear on the same date in 1985 and 2017 that some residents felt they had become exposed to a timescale that would make such geological coincidences possible. The unlikely becomes inevitable, precisely because of its improbability. And so, another earthquake was expected to arrive on 19 September 2021, as one had been every year since the 2017 coincidence. Back in 2018, when the first anniversary of the 2017 quake approached (and the 33rd anniversary of 1985), a cartoon by Victor Solís was shared widely on social media, particularly in the WhatsApp group chats I shared with earthquake victims and their advocates. (Since then, Solís says, it religiously wanders and arrives to him via WhatsApp on this day each year.) The cartoon shows a man in pyjamas praying at the side of his bed on the evening before 19 September. The text at the base of the image, the man’s prayer, translates as ‘… and that tomorrow would be nothing more than just the drill.’ Cartoon by Victor Solís This feeling underpinned the 12 days of expectation in 2021. Shortly after I received the message from my friend alerting me to the similarities with the 2017 earthquakes, a rumour went viral across Mexican social media reminding the city of its geological history. A common version of the message reads: Do you want to scare yourselves? On 7 September 1985, there was a strong earthquake in Mexico City; on 19 September, an earthquake gravely damaged the heart of Mexico City. On 7 September 2017, Mexico City trembled hard; then, on 19 September, another earthquake shook Mexico. And today, 7 September, it just trembled very hard. Strange coincidences. Residents were convinced that the city’s repeating geological loop had initiated. During the ominous 12 days between 7 and 19 September, Mexican seismologists appeared for interviews on television and in newspapers, reminding the city that Earth cares nothing for human calendars. Experts do this each September, but in 2021 they went so far as to publish seismograms showing unequivocally that there was no geological relationship between the dates 7 September and 19 September in 1985. Contrary to the rumours, there was no significant earthquake on 7 September 1985. And yet, despite the reassurances, the geological coincidence seems bound to return because Mexico City is in an ongoing geological present: after a before, but still before an after. 
This expectation sometimes transforms anxiety into outright hysteria and panic. Upon hearing the commemorative sounding of Mexico City’s earthquake alarm, some residents have nervous breakdowns, throw themselves out windows, or fall down stairs. Each year since 2019, the city’s governor has announced the number of injuries caused by the commemorative evacuation. The double anniversary is so heavy with Earthly time and human history that it is as if, until 2017, 19 September was a date on which earthquakes couldn’t happen, but after 2017, it became a date on which they had to. We wait nervously throughout the 12 days of anticipation in 2021. Ultimately, an earthquake doesn’t strike on 19 September. But relief is short-lived. Unfortunately for Mexico City, there is a 19 September every year, an annual promise that the city’s ‘after’ has yet to begin. We begin waiting and expecting. Will it happen again? In an earthquake, the time of Earth and the time of human experience intersect. As John McPhee suggests in Annals of the Former World (1998), thinking at these two timescales – ‘one human and emotional, the other geologic’ – induces a form of temporal schizophrenia because they are ‘so disparate’. Generally, the experience of geological time at a day-to-day level involves an abstracted, expanded frame of reference that demands leaps of imagination: Picture seeing emptiness where Mt Fuji once stood; envisage the landmass now known as India colliding with the continent of Asia; imagine the Himalayas swelling up. But in Mexico City post-2017, the peculiar convergence of these two timescales provokes a more visceral sense of Earthly forces that would otherwise remain abstract. This feeling of a deep-time present becomes especially acute on the most geologically unstable date in Mexican history. On 19 September 2019, I stood alongside earthquake victim advocacy groups in Mexico City while we waited for the commemorative evacuation drill. When the alert began sounding, many around me put their fingers in their ears to drown out the robotic voice blandly repeating ‘Alerta sísmica’ over the ghostly, pulsating tone of the alarm. As if the early warning alert itself somehow caused earthquakes, a woman said under her breath: ‘Que se quede tranquila la tierra hoy’ (‘May the earth remain calm today’) and we murmured our agreement until the alert drowned us out. One explanation of these fears of 19 September might be that residents are experiencing a kind of seismic PTSD – a fearful response to a date marked by the human grief and suffering that a volatile Earth can deliver. This may be true to some extent, but understanding the experience of being ensnared in geological time as a form of trauma is insufficient. ‘Trauma’ can psychologise experiences, obscuring the important role of structural and environmental factors. Trauma can also be an elastic concept, capable of describing the experiences of – as Ruth Leys points out in her 2000 book – both the attendees of a wedding bombed by a drone, and the pilot who did the bombing. But most importantly, trauma has a linear temporality, especially in experiences of post-traumatic stress, in which a past event determines the future. This linear sense of time and history can reduce contemporary experience to an epiphenomenon of the past, which risks discounting the strangeness of the present in Mexico City. 
To view residents’ fears as seismic PTSD would also require overlooking what happened an hour or so after the commemorative alarm sounded on 19 September 2022. When the third 19 September earthquake happened, and the ground began to tremble, the city lost power. The magnitude 7.7 tremor was felt in 12 states, damaging buildings and killing two people. It was relatively minor compared with the earthquakes of 1985 and 2017 but, as the city shook for a third time, the temporality of trauma shifted: fears and anxieties that might have appeared to result from past disasters could no longer be considered ‘post-traumatic’ because the ‘post-’ had yet to begin. Human time was being swallowed by an abyssal geological present. I ran to check on my apartment, then went to a cantina. With the power out, the bartenders were taking beers from their fridges and putting them in big buckets of ice on the street. Workers, holidaymakers, street vendors and police officers all sat on the footpath, drinking warm beer, and theorising what the hell was happening to Mexico City. There was a 0.026 per cent chance that the 2017 earthquake would happen on the anniversary of the 1985 disaster; we would later find out that the 2022 earthquake had about a 0.000751 per cent chance of happening. But for everyone I spoke with, its unlikelihood guaranteed that it would happen. The improbable was not impossible, least of all in Mexico City. I heard countless theories that explained the tremor: the city was in an Earthly loop, simply beyond human comprehension; residents’ fears of the date somehow manifested the earthquake; millions of people stomping out of their buildings during the commemorative evacuation upset the tectonic plates. But, above all, there was the idea that Mexico City’s residents were justified in their fears: 19 September no longer belonged to humans, and the city had been set adrift in the time of Earth. Though the three 19 September tremors are not formally defined as an aftershock sequence, for some residents, it feels as if ‘after’ will never begin. We are currently in a moment described as ‘the Anthropocene’, an epoch in which the actions of some humans register at the geological scale through traces of anthropogenic matter, such as nuclear radiation, plastics and carbon emissions. From this vantage, the future becomes an aftereffect of human action. But Mexico City, with its looping geology and its long 19 September, points toward a different relationship between the human and the geological, in which time itself is an effect of Earth. For the city’s residents, the disaster of 1985 is in the past. The 2017 earthquake is in the past. Even the 2022 event is now in the past. But as these discrete events slip into human memory, all three are folded into Earth’s geological present. In Mexico City’s geological now, the relationship between past, present and future is not preordained, and these temporal geometries contort human history into strange and terrifying shapes. Mexico City’s time is dislocated, its residents stranded after what was prior but still before what might yet come. And 19 September is now, like its own axis of time, a yearly reminder that humans might not be in charge of when ‘after’ begins.
Lachlan Summers
https://aeon.co//essays/the-earthquakes-that-shook-mexico-citys-sense-of-time
https://images.aeonmedia…y=75&format=auto
Subcultures
Bending a mysterious world to your will was the goal of esoteric practices. Now it’s the unashamed aim of the tech titans
Deep in the labyrinthine tags of TikTok, a group of teenage occultists promise they have the power to help you change your life. ‘Manifesting’ influencers – as they’ve come to be known – promise their legions of viewers that, with the right amount of focus, positive thinking and desire, the universe will bend to their will. ‘Most of these people [who manifest] end up doing what they say they’re going to do and being who they say they’re going to become,’ insists one speaker on the mindsetvibrations account (600,000 followers). Another influencer, Lila the Manifestess (70,000 followers) offers a special manifestation (incantation?) for getting your partner to text you back. (‘Manifest a text every time.’) Manifest With Gabby tells her 130,000-odd followers in pursuit of ‘abundance’ about ‘5 things I stopped doing when learning how to manifest’ – among them, saying ‘I can’t afford.’ It’s not just TikTok. Throughout the wider wellness and spirituality subcultures of social media, ‘manifesting’ – the art, science and magic of attracting positive energy into your life through internal focus and meditation, and harnessing that energy to achieve material results – is part and parcel of a well-regulated spiritual and personal life. It’s as ubiquitous as yoga or meditation might have been a decade ago. TikTok influencers and wellness gurus regularly encourage their followers to focus, Law of Attraction-style, on their desired life goals, in order to bring them about in reality. (‘These Celebrities Predicted Their Futures Through Manifesting’, crows one 2022 Glamour magazine article.) It’s possible, of course, to read ‘manifesting’ as yet another vaguely spiritual wellness trend, up there with sage cleansing or lighting votive candles with Ruth Bader Ginsburg’s face on them. But to do so would be to ignore the increasingly visible intersection of occult and magical practices and internet subcultures. As our technology has grown ever more powerful, our control over nature seemingly ever more absolute, the discursive subculture of the internet has gotten, well, ever more weird. Sometimes it seems like the whole internet is full of would-be magicians. ‘WitchTok’ and other Left-occult phenomena – largely framed around reclaiming ancient matriarchal or Indigenous practices in resistance to patriarchy – have popularised the esoteric among young, largely progressive members of Gen Z. The ‘meme magicians’ and ‘Kek-worshippers’ – troll-occultists of the 2016-era alt-Right – have given way to a generation of neotraditionalists: drawn to reactionary-coded esoteric figures like the Italian fascist-mage Julius Evola. Even the firmly sceptical, such as the Rationalists – Silicon Valley-based members of tech-adjacent subcultures like the Effective Altruism community – have gone, well, a little woo. In an article for The New Atlantis, I chronicled the ‘postrationalist’ turn of those eager to blend their Bayesian theories with psychedelics and ‘shadow work’ (a spiritualised examination of the darkest corners of our unconscious minds). As organised religion continues to decline in Western nations, interest in the spooky and the spiritual has only increased. Today, witches might be one of the fastest-growing religious groups in the United States. Magic, of course, means a host of things to a plethora of people. The early 20th-century anthropologist Edward Evans-Pritchard used ‘magic’ to describe the animistic religious sentiments of the Azande people, whom he deemed primitive. 
There is folk magic, popular in a variety of cultures past and present: local remedies for ailments, horseshoes on doors, love charms. There is fantasy magic, the spellcasting and levitation and transmogrification we find in children’s novels like Harry Potter. And there is magic-as-illusion, the work of the showman who pulls rabbits out of hats. But magic, as I mean it here, and as it has been understood within the history of the Western esoteric tradition, means something related to, yet distinct from, all of these. It refers to a series of attempts to understand, and harness, the workings of the otherwise unknowable universe for our personal desired ends, outside of the safely hierarchical confines of traditional organised religion. This magic comes in different forms: historically, natural magic, linked with the manipulation of objects and bodies in nature, was often considered more theologically acceptable than necromancy, or the calling on demons. But, at its core, magic describes the process of manipulating the universe through uncommon knowledge, accessible to the learned or lucky few. The canny reader may note that magic as I’ve defined it sounds an awful lot like technology, given a somewhat spiritualised sheen. This is no coincidence. The story of modernity and, in particular, the story of the quixotic founders of our early internet (equal parts hacker swagger and utopian hippy counterculture) is inextricable from the story of the development and proliferation of the Western esoteric tradition and its transformation from, essentially, a niche cult of court scientists and civil servants into one of the most influential yet least recognised forces acting upon contemporary life. From the Renaissance humanists onwards, nearly every major proponent of what we might loosely call modern, liberal, democratic, technologically saturated life was involved with, or at least influenced by, intellectual and philosophical movements – from Hermeticism to Freemasonry – that were laden with occult promise. That promise? That human beings could – indeed should – seek, contra Biblical fiat, to maximise their knowledge and technical capacity in order to transform themselves into gods. This differs from the Dan Brown vision of history, where a shadowy cabal of Freemasons (or Illuminati) secretly moves the gears of history. Rather, I’m suggesting that the once-transgressive ideology underpinning the Western esoteric tradition – that our purpose as humans is to become as close to divine as possible – has become an implicit assumption of modern life. At the extreme reaches of Silicon Valley culture, it’s an explicit assumption. Earlier this year, the tech titan and Braintree founder Bryan Johnson, who made headlines for his multimillion-dollar quest for life extension, boasted on Twitter of his status as a new Messiah. ‘I am not a tech tycoon or biohacker,’ he wrote, ‘I am playing for societal scale philosophical transformation, competing for the status and authority of Jesus, Satan, Budda [sic], and similar.’ More and more of us – regardless of religious affiliation – see our relationship to nature and culture alike as one of entitled control: Of course we should harness the powers of the universe to serve our own ends and live our best lives. Of course we are, or soon will be, functionally divine. But when did all this start? In the Renaissance, a controversial humanist scholar named Giovanni Pico della Mirandola penned his Oration on the Dignity of Man. 
Influenced by orthodox Christianity, the Jewish Kabbalah, Arab philosophy, and the revival and reimagining of classical Greek thought known as Neoplatonism, Pico believed that the defining characteristic of human beings was precisely that they are born to take the place of God. In his Oration, Pico retells the familiar story of Creation told in Genesis 1: God creating the world and ultimately humanity. But Pico’s God is a less omnipotent being than the Bible’s. He has only a limited number of mental ‘seeds’ – a Neoplatonic image signifying, essentially, divine implantation of purpose: or, the thing that makes, say, stems grow into flowers, or trees stretch for the heavens. By the time he gets to humanity, Pico’s fatigued God has exhausted his supply of such seeds. So he makes Man without one. Or: to put it more accurately, he makes Man to determine his own. ‘Adam,’ says God, ‘you have been given no fixed place, no form of your own, and no particular function, so that you may have and possess, according to your will and your inclination, whatever place, whatever form, and whatever functions you choose.’ Where other creatures have a ‘fixed nature’, God tells Adam ‘you, constrained by no laws, by your own free will … will determine your own nature.’ Pico’s writing can be read as a particularly extreme example of Renaissance humanism, as part of a general trend of early modern writing that emphasised human freedom and creative power, in contrast with medieval visions of human life as but a part of a wider, interconnected social and natural order – visions commonly associated with the theology of St Thomas Aquinas. But to understand Pico better, we must look at the texts that influenced him most: a mysterious compendium of writings known as the Corpus Hermeticum, or the Hermetica. Pseudonymously written in the first few centuries CE, likely in the philosophical melting pot of Hellenistic Alexandria, the 17-part Corpus Hermeticum purports to be the writings of a mysterious demigod, Hermes Trismegistus, associated with the Greek trickster-messenger god Hermes and the Egyptian god of writing, Thoth. Blending philosophy, scripture, natural science, alchemy, astrology and magic, the Hermetica as a whole represents a distinctive vision of human transcendence. The mysterious Hermes Trismegistus is a self-made god: a mage with near-divine control over both the scientific and magical worlds – they are, in the Hermetica, the same world. As Hermes learns in Book XI of the Hermetica (from the translation by G R S Mead): If, then, thou dost not make thyself like unto God, thou canst not know Him. For like is knowable unto like [alone]. Make, [then,] thyself to grow to the same stature as the Greatness which transcends all measure; leap forth from every body; transcend all time; become Eternity. The highest purpose of the human is to transcend humanity through knowledge, and become creator. The material (mortal) decaying stuff of our physical animal bodies exists only to be overcome via a spirit linked with knowledge and will. Central to Hermetic thought was the tenet: ‘As above, so below.’ Everything is connected, from the movement of the stars and the planets to the internal workings of an insect. Understanding these secret connections, and harnessing them, was the key to a successful magician’s art. Central, too, was the occult nature of the mage’s knowledge. 
The mage saw things, and connections, that ordinary or uninitiated people could not. Supposedly lost for centuries, the Corpus Hermeticum was ‘rediscovered’ in the 15th century, when another Renaissance humanist (and occultist) Marsilio Ficino discovered a manuscript in the library of his patron, Cosimo de’ Medici, and translated it into Latin. Its humanistic vision – its transhumanistic vision! – was enormously influential not just on Pico and Ficino, but on the Renaissance intellectual project as a whole. Human freedom, human intellectual endeavour, human progress – all these were not merely allowed by God, such that human beings might better fulfil God’s purpose for them, but were signs that humanity’s destiny was to become God, bending technological power to accord with their own desires and wills. As the Hermetic-influenced Renaissance humanist Giordano Bruno put it, man’s purpose is ‘to fashion, other natures, other courses, other orders’ so that ‘he might in the end make himself god of the earth’. Scientific progress was thus bound up with spiritual development – a development predicated, in opposition to the authoritarian Catholic Church, on the notion of making manifest one’s own desired purpose. While, for much of Western religious history, the mythic figure of the would-be knower who rebels against God was a cautionary tale (Lucifer, Adam and Eve, the Tower of Babel, Prometheus), here, the seeker of knowledge was a model for human advancement. Hermetic ideas diffused across a range of movements in the early modern period. The Rosicrucians, for example, dabbled in human self-transcendence and attracted scientific luminaries such as the German physician Michael Maier, the English mathematician Robert Fludd, and Isaac Newton, who spent decades of his research life trying to create the alchemical ‘philosopher’s stone’. Hermeticism’s tendrils could also be felt in the rise of ‘speculative’ Freemasonry, which swept the guild structure, rhetoric and imagery of medieval masons into the ‘free-thinking’ world of the 18th century to create a ritualistic structure at once distinctly anticlerical and thoroughly religious. Freemasons such as Benjamin Franklin and George Washington, as well as several signatories of the US Declaration of Independence, blended intricate ceremony with carefully crafted regalia as meticulous as any church’s vestments or liturgy into a kind of worship of human freedom. It would be a mistake to think of Hermeticism as a codified religion: with a clear and consistent set of tenets and membership criteria. The Rosicrucians, Masons and, later, Hermetic-tinged groups like the Golden Dawn and Theosophists each had their own rites, rituals and subgroups. Nor was Hermeticism the only magical system in play; Solomonic magic derived from Arab and Kabbalistic sources also stressed self-divinisation (controlling angels and demons alike by calling them by their proper, yet secret, names). What these movements shared was a faith in human self-transcendence as the highest spiritual good. Those who participated most fully in the project of self-divinisation through knowledge could, in some sense, be said to be the most human: the elect whose ability to understand reality was bound up in their ability to shape it. Politically as well as theologically, their ‘priestcraft’ set them against the Christian ecclesiastical establishment. 
In this, early modern occultists were not unlike today’s peddlers of meme magic: claiming a populist stance against the elite ‘cathedrals’ of academic and journalistic establishments, while affirming the distinctly esoteric ideal of the lone genius (or elite cabal) capable of seeing what the ‘sheeple’ cannot. Today’s meme magicians likewise claim access to the hidden forces underpinning the global order, which they seek to harness for their own ends. In the 19th and early 20th centuries, transhumanist magic began to focus less on knowledge of the world, natural or otherwise, and more narrowly on the power and control of the mage himself. The controversial diabolist Aleister Crowley’s Thelema (a movement as much influenced by visions of a Nietzschean Übermensch as by Hermeticism’s demigods) and the New Thought tradition from the US, for example, focused on mastering one’s own internal psychic energies. (Indeed, Thelema takes its name from the Greek word for will.) What we want – and how we focus that energy of wanting – doubles as the primary engine of reality. Which, of course, only the most godlike among us can shape. Whether the creator-God is absent, abdicated or usurped, Man’s role remains the same: to take his place. Crowley’s most famous maxim takes Pico’s vision of a self-fashioning self to its natural conclusion: ‘Do what thou wilt shall be the whole of the Law.’ In what is perhaps Crowley’s most powerful successor ideology, the ‘chaos magick’ that grew out of the 1970s London punk scene, we can find the most obvious genesis of modern internet culture. Heavily influenced by the writings of one-time Crowley acolyte Austin Osman Spare, chaos magick dispensed with Hermetic associations – and the lattice of meaningfulness that connected them – altogether. Rather, for the chaos magicians, meaning was not something to be discovered, but decided. Reality came to rest primarily with human perception, so that changing human perception was not to lie, but to reimagine reality itself. Or, as one chaos magician of the time put it: ‘chaos magick is the art of forming the unformed energies of creative chaos into a pattern leading to the outcome of the magician’s desire.’ The major tenet of traditionalism – that there was a secret initiatory truth underpinning all major world religions – collapsed into nihilism: there is no such thing as truth at all. All that matters is what we can make people believe. As the occult historian Gary Lachman writes in Dark Star Rising (2018), his account of magical tendencies in modern internet culture: ‘for chaos magick the idea of “truth” or “facts” is anathema.’ Whoever shapes the perception of others, in order to get what they desire, is practising magic. Here, magic is effectively denatured, stripped of its supernatural and mystical elements and revealed instead as the mage-like ability to bend the social imaginary to his will. ‘As above, so below’, in this context, refers less to the relationship between, say, plants and planets, than to the relationship between the human psyche and human cultural life. Change one person’s mind – and you might change the world. Enter our internet pioneers. Steeped in mid-20th-century counterculture, the futurists, technologists and inventors who would come to shape Silicon Valley culture shared with their Hermetic forebears an optimistic vision of human self-transcendence through technology. 
Freed of our biological and geographic constraints, and of repressive social expectations, we could make of cyberspace a new libertarian Jerusalem. As early as the 1960s, the futurist Stewart Brand, the publisher of the hippy counterculture bible the Whole Earth Catalog (1968), rhapsodised about how, in the modern world, the ‘realm of intimate, personal power is developing – the power of the individual to conduct his own education, find his own inspiration, shape his own environment,’ concluding that ‘We are as gods and might as well get good at it.’ Early cyber-enthusiasts and futurists – more than a few of whom, from Terence McKenna to Robert Anton Wilson, dabbled in occult, mystic or magical practices – saw in the prospect of cyberspace a new spiritual terrain for self-divinisation. Freed of bodily constraints and geographic limitations, the internet could help us at last achieve the magical dream of transcendence. In an article for Wired magazine in 1995, Erik Davis chronicled one ritual, performed by Mark Pesce – the founder of the early programming language known as VRML (virtual reality modelling language) – during an event that was equal parts technopagan ritual and scientific summit. Heavily structured along traditional Hermetic and Rosicrucian lines, the ritual involved four personal computers, taking on the customary role of elemental watchtowers, running a graphical browser that depicted a ‘ritual circle’, pentagrams and all. An observer chanted: ‘May the astral plane be reborn in cyberspace.’ The internet seemed to be a place where humanity could achieve a more democratic and collective magical rebirth. After all, it was a place where, in the absence of our physical bodies and social restrictions, we could exist solely as manifestations of our own will. The early internet became a gathering space for waves of magically inclined cybernauts. Technopagans, Discordians (essentially: worshippers of disorder), neopagans, Wiccans, transhumanists could find each other in cyberspace, shoring up the notion that digital life itself might presage the magician’s eschatological dream of a place where human creativity could shape the landscape of its world. In the 1990s, the Extropian transhumanist Max More hailed the internet as an evolutionary portal. ‘When technology allows us to reconstitute ourselves physiologically, genetically, and neurologically,’ he wrote, ‘we who have become transhuman will be primed to transform ourselves into posthumans – persons of unprecedented physical, intellectual, and psychological capacity, self-programming, potentially immortal, unlimited individuals.’ (More was explicit about the occult genesis of the Extropian movement, exhorting readers to praise Lucifer as a self-divinising rebel against a hierarchical creator-God.) The British philosopher Nick Land, later a major figure in the far-Right Dark Enlightenment scene, hoped that digital advancements would ‘accelerate’ capitalism and technological progress and precipitate a civilisational collapse that would hasten the post-apocalyptic world to come. A devotee of Crowley, Land moved into the magician’s former home after resigning from the University of Warwick. He also coined the portmanteau term ‘hyperstition’ (‘hyper’ plus ‘superstition’) to express the notion that an idea might become real merely by being thought, which sounds uncannily like a precursor of manifesting. 
Later waves of transhumanists include the philosopher David Pearce, whose World Transhumanist Association (later Humanity+) openly pursued ‘eternal life’. In an interview in 2007, Pearce said that, in order to do so, ‘we’ll need to rewrite our bug-ridden genetic code and become god-like.’ The internet has absorbed some of its techno-utopian luminaries’ foundational ideas to the extent that they are practically built-in. In some ways, it’s provided us with nothing more nor less than a magical canvas – a soul-space, to paraphrase the early internet historian Margaret Wertheim, where our desires, impressions and the forces that act upon them can be made ‘manifest’. In this shared collective hallucination, we can don ideal avatars, create untethered social and even erotic relationships, curate our self-image, and in turn allow the mystical algorithm to present us with a landscape – from news headlines to targeted advertisements – in which our desires determine all that we see. In the modern internet, desire is the secret undercurrent shaping our new reality. Our desire for dopamine hits – Likes, hearts, a few seconds’ TikTok entertainment – is inextricable from the wider economic enmeshment of desire within a capitalistic attention economy, where our time and clicks are monetised in the service of advertisers bent on stoking our desire further. Unencumbered by our bodies, or communities, we live in a miasma of yearning, willingly succumbing to an increasingly palpable form of spellcraft practised by the digital magi who profit from our attention. Like the old witches’ bargains of eras past, we agree to sell parts of ourselves – our eyeballs – in exchange for certain illusory fulfilments of desire packaged up by powerful corporate tech titans and memetically gifted shitposters capable of ‘going viral’ with a perfectly worded image or tweet. Memes, in this telling, become the modern interpretations of the magician’s sigil: a magical image empowered to convey the magician’s desired energy. Charged with the collective energy of each subsequent re-Tweet or repost, memes seep into our subconscious and influence what we think, how we act and who we vote for. Memes, like sigils, are replicated in the digital space, first through the mage’s ability to tap into our desires, marionetting us to Like and re-Tweet, then through our collective urge to add meme power to our own personal brand. And, by channelling our desire and rearranging our interior landscape through a clever working of our cybernetic geography, the digital magi have the very power over us that so fascinated Crowley and the chaos magicians. But has the internet betrayed the more idealistic principles of its early engineers, for whom human transcendence was a more collective proposition? Has the power of the few able to look behind the curtain replaced the goal of shared human liberation? Perhaps to a degree, but even in these more seemingly humanistic visions of internet culture, we find a chilling nihilism: a sense that magic is fundamentally about controlling other people’s perceptions. Speaking about the magic of being online, an early internet user, going by the handle legba, told Davis: Words shape everything there, and are, at the same time, little bits of light, pure ideas, packets in no-space transferring everywhere with incredible speed. 
If you regard magic in the literal sense of influencing the universe according to the will of the magician, then simply being [online] is magic. Put another way, digital ‘reality’ takes the magical principles of energy manipulation as its architecture. We are all caught up in the cult of Hermes, or Prometheus, or Lucifer, in which the secret truth revealed by transgression is that truth is only ever a fiction of fools: reality is only ever what you can make people believe. Our social lives, sexual lives, professional successes are all mediated, in part or in full, by a disembodied landscape that quite literally runs on the engine of desire. The hypercapitalist attention economy – which invites us to post pictures of ourselves for Likes, or tell compelling stories about ourselves for GoFundMes or Kickstarters, or turn our eyeballs to clickbait that, in turn, shows us advertisements for items on Etsy or Amazon that we’ve already been craving – doubles as a kind of manifestation of the principles of post-Crowley magick. It is desire that makes reality real. It’s hardly surprising that new spiritual movements have cropped up in this postmodern landscape – from Left-coded practices like WitchTok to the ‘meme magic’ of the 2016-era alt-Right. Reanimating esoteric ideas of self-divinisation, and harnessing ‘energy’ to ‘manifest’ reality by attending to and valorising our own desires, they insist that what we want makes us who we are. As such, modern internet culture seems more indebted to Crowley’s nihilism than to the promise of Hermes Trismegistus. Widespread disinformation, ‘engagement farming’, meme culture, Russian troll bots and other fragmented attempts at capturing and shaping our attention function like magic spells of their own, warping our perceptions to reflect the perceptions of those who wield the memes. You might say the ‘meme magicians’ have won. They have revealed, at last, the dark heart at the centre of Pico’s seemingly optimistic vision of humanity: that, when we fashion ourselves according to our desire, it is because there is nothing real, or meaningful, in this world except those desires. Scottish witches of the 18th century had a word for this: glamour – appearing to others the way we wish to be, so we might impress upon them that which we wish to impress. By 2019, the concept of glamour magick was sufficiently mainstream for Teen Vogue to publish a guide to the practice, exhorting teenage girls to ‘be a better you’. But, in 2023, we’re all doing ‘glamour magick’ – intentionally or not. Our participation in the spiritualised space of the internet, where energy, intention and vibes are indistinguishable from the memes and bots and Tweets and deepfakes that shape our collective consciousness, has made would-be magicians of us all in the service of becoming our best selves. As more and more of our online lives play out on platforms owned or controlled by billionaires convinced of their own divinity, we may find ourselves less mages than fodder for other magicians’ wills. More troublingly, many of us don’t seem to mind – or, if we do, we don’t mind quite enough to disenchant ourselves. We just keep pressing, playing, Liking and sharing. A Crowley devotee might think that this is because we are, after all, sheeple, lacking the mage-like temperament to determine our own destinies, or those of others. 
A more charitable read is that desire itself is asymptotic: it is never fully fulfilled. The longing for what we cannot have, for being more than we are, is as endemic to the human condition as death. The lure of the internet lies in the promise that this click, this article, this purchase will at last result in the final consummation we crave. We will be seen, paid attention to, and perhaps even loved, in just the way we wish to be. It is a promise as palpable as Eve’s apple.
Tara Isabella Burton
https://aeon.co//essays/how-the-internet-became-the-modern-purveyor-of-ancient-magic
https://images.aeonmedia…y=75&format=auto
Philosophy of mind
New research is uncovering the hidden differences in how people experience the world. The consequences are unsettling
On 26 February 2015, Cates Holderness, a BuzzFeed community manager, posted a picture of a dress, captioned: ‘There’s a lot of debate on Tumblr about this right now, and we need to settle it.’ The post was accompanied by a poll that racked up millions of votes in a matter of days. About two-thirds of people saw the dress as white and gold. The rest, as blue and black. The comments section was filled with bewildered calls to ‘go check your eyes’ and all-caps accusations of trolling. Vision scientists were quick to point out that the difference in appearance had to do with the ambiguity of ambient light in the photograph. If the visual system resolved the photograph as being taken indoors with its warmer light, the dress would appear blue and black; if outdoors, white and gold. That spring, the annual Vision Sciences Society conference had a live demo of the actual dress (blue and black, for the record) lit in different ways to demonstrate how differences in ambient light shifted its appearance. But none of this explains why the visual systems of different people would automatically infer different ambient light (one predictive factor seems to be a person’s typical wake-up time: night owls have more exposure to warmer, indoor light). Whatever the full explanation turns out to be, it is remarkable that this type of genuine difference in visual appearance could elude us so completely. Until #TheDress went viral, no one, not even vision scientists, had any idea that these specific discrepancies in colour appearance existed. This is all the more remarkable considering how easy it is to establish this difference. In the case of #TheDress, it’s as easy as asking ‘What colours do you see?’ If we could be oblivious to such an easy-to-measure difference in subjective experience, how many other such differences might there be that can be discovered if only we know where to look and which questions to ask? Take the case of Blake Ross, the co-creator of the Firefox web browser. For the first three decades of his life, Ross assumed his subjective experience was typical. After all, why wouldn’t he? Then he read a popular science story about people who do not have visual imagery. While most people can, without much effort, form vivid images in their ‘mind’s eye’, others cannot – a condition that has been documented since the 1800s but only recently named: aphantasia. Ross learned from the article that he himself had aphantasia. His reaction was memorable: ‘Imagine your phone buzzes with breaking news: WASHINGTON SCIENTISTS DISCOVER TAIL-LESS MAN. Well, then, what are you?’ Ross went on to ask his friends about what it’s like for them when they imagine various things, quickly realising that, just as he took his lack of imagery as a fact of the human condition, they similarly took the presence of visual imagery as a given. ‘I have never visualised anything in my entire life,’ Ross wrote in Vox in 2016. ‘I can’t “see” my father’s face or a bouncing blue ball, my childhood bedroom or the run I went on 10 minutes ago… I’m 30 years old, and I never knew a human could do any of this. And it is blowing my goddamn mind.’ There is a kind of visceral astonishment that accompanies these types of hidden differences. We seem wedded to the idea that we experience things a certain way because they are that way.
Encountering someone who experiences the world differently (even when that difference seems trivial, like the colour of a dress) means acknowledging the possibility that our own perception could be ‘wrong’. And if we can’t be sure about the colour of something, what else might we be wrong about? Similarly, for an aphantasic to acknowledge that visual imagery exists is to realise that there is a large mismatch between their subjective experiences and those of most other people. Studying hidden differences like these can enrich our scientific understanding of the mind. It would not occur to a vision scientist to ask whether being a night owl might have an impact on colour perception, but a bunch of people on the internet comparing notes on how they saw a dress inspired just such a study. The study of aphantasia is helping us understand ways in which people lacking imagery can accomplish the same goals (like remembering the visual details of their living room) without using explicit imagery. How many other such examples might there be once we start looking? There is also, arguably, a moral imperative for us to study and understand these kinds of differences because they help us understand the various ways of being human and empathise with these differences. It’s a sobering thought that a person might respond differently to a situation not just because they have a different opinion about what to do or are in possession of different knowledge, but because their experience of the situation is fundamentally different. For most of my research career, I didn’t really care about individual differences. Like most other cognitive scientists, my concern was with manipulating some factor and looking to see how this manipulation affected the group average. In my case, I was interested in the ways that typical human cognition and perception are augmented by language. And so, in a typical experiment, I would manipulate some aspect of language. For example, I examined whether learning names for novel objects changed how people categorised, remembered and perceived them. These were typical group-effect studies in which we compare how people respond to some manipulation. Of course, with any such study, different people respond in different ways, but the focus is on the average response. For example, hearing ‘green’ helps (most) people see the subtle differences between more-green and less-green colour patches. Interfering with language by having people do a concurrent verbal task makes it harder for (most) people to group together objects that share a specific feature, such as being of a similar size or colour. But most people aren’t everyone. Could it be that some people’s colour discrimination and object categorisation are actively aided by language, but other people’s less so? This thought led us to wonder if this could be another hidden difference, much like aphantasia. In particular, we began to look at inner speech, long thought to be a universal feature of human experience. Most people report having an inner voice.
For example, 83 per cent (3,445 out of 4,145 people in our sample) ‘agree’ or ‘strongly agree’ with the statement ‘When I read I tend to hear a voice in my mind’s ear.’ A similar proportion – 80 per cent – ‘agree’ or ‘strongly agree’ with the statement ‘I think about problems in my mind in the form of a conversation with myself.’ This proportion goes up even more when asked about social problems: 85 per cent ‘agree’ or ‘strongly agree’ with the statement ‘When thinking about a social problem, I often talk it through in my head.’ But 85 per cent is hardly everyone. What about those who disagree with these statements? Some of them report experiencing an inner voice only in specific situations. For example, when it comes to reading, some say that they hear a voice only if they deliberately slow down or are reading something difficult. But a small percentage (2-5 per cent) report never experiencing an inner voice at all. Like those with aphantasia who assume their whole lives that visual imagery is just a metaphor, those with anendophasia – a term Johanne Nedergaard and I coined to refer to the absence of inner speech – assume that those inner monologues so common in TV shows are just a cinematic device rather than something that people actually experience. People with anendophasia report that they never replay past conversations and that, although they have an idea of what they want to say, they don’t know what words will come out of their mouths until they start talking. It is tempting to think that there is a trade-off between thinking using language and thinking using imagery. Take the widespread idea that people have different ‘learning styles’, some being visual learners and others verbal learners (it turns out this idea is largely incorrect). When it comes to imagery and inner speech, what we find is a moderate positive correlation between vividness of visual imagery and inner speech. On average, those who report having more visual imagery also report experiencing more inner speech. Most who claim to not experience inner speech also report having little imagery. This raises the question of what their thoughts feel like to them. When we have asked, we tend to get answers that are quite vague, for example: ‘I think in ideas’ and ‘I think in concepts.’ We have lots of language at our disposal that we can use to talk about perceptual properties (especially visual ones) and, of course, we can use language to talk about language. So it is not really surprising that people have trouble conveying what thoughts without a perceptual or linguistic format feel like. But the difficulties in expressing these types of thoughts using language don’t make them any less real. They merely show that we have to work harder to better understand what they are like. Differences in visual imagery and inner speech are just the tip of the iceberg. Other hidden differences include synaesthesia, Greek for ‘union of the senses’, in which people hear lights or taste sounds, and Eigengrau, a German word for the ‘intrinsic grey’ we see when we close our eyes. Except not all of us experience Eigengrau. About 10 per cent in our samples claim their experience is nothing like Eigengrau. Instead, when they close their eyes, they report seeing colourful patterns or a kind of visual static noise, like an analogue TV not tuned to a channel. Our memory, too, seems to be subject to larger differences than anyone expected.
In 2015, the psychologist Daniela Palombo and colleagues published a paper describing ‘severely deficient autobiographical memory’ (SDAM). A person with SDAM might know that they went on a trip to Italy five years ago, but they cannot retrieve a first-person account of the experience: they cannot engage in the ‘mental time travel’ that most of us take for granted. As in other cases of hidden differences, these individuals tend not to realise they are unusual. As Claudia Hammond wrote for the BBC about Susie McKinnon, one of the first described cases of SDAM, she always ‘assumed that when people told in-depth stories about their past, they were just making up the details to entertain people.’ What is it about differences in imagery, inner speech, synaesthesia and memory that renders them hidden? It is tempting to think that it’s because we don’t directly observe them. We can see that someone is a really fast runner. But having direct access only to our own reality, how are we to know what another person imagines when they think of an apple, or whether they hear a voice when they read? Still, while we can’t directly experience another person’s reality, we can compare notes by talking about it. Often, it’s remarkably easy: for #TheDress, we just needed to ask one another what colours we see. We can also ask whether letters always appear in colour (a grapheme-colour synaesthete will say yes; others will say no). People without imagery will tell you they cannot visualise an apple, and those without inner speech will say they do not have silent conversations with themselves. It is not actually difficult to discover these differences once we start systematically studying them. Paradoxically, although language is what allows us to compare notes and learn about differences between our subjective experiences, its power to abstract may also cause us to overlook these differences because the same word can mean many different things. We use ‘imagine’ to refer to forming an image in the mind’s eye, but we also use it when referring to more abstract activities like imagining a hypothetical future. It is perfectly reasonable for an aphantasic to not realise that, in some cases, people use ‘imagine’ to mean actually forming mental images that have a perceptual reality. Much of our understanding of hidden differences relies on people’s self-report. Can we trust it? Modern psychology is sceptical about self-report, a scepticism I’ve inherited as part of my academic training. Recent reports of large individual differences in imagery and inner speech have often been accompanied by incredulity. How do we know that these differences reflect something real? Can we really just take people at their word when they say they don’t have an inner voice? Before tackling the more complex question of whether we should trust self-reports about internal subjective states like imagery and inner speech, let’s consider some simpler cases. When someone says they dislike cauliflower, they are reporting on their subjective experience, and we tend to take them at their word. But we don’t have to. We can easily set up an experiment where we observe how likely they are to eat cauliflower when given alternatives. It would be surprising if someone claimed to not like cauliflower but chose to eat it at every opportunity. There are, of course, cases where such ‘stated-vs-revealed preference gaps’ occur.
Many researchers have made their careers studying these gaps. For example, if one lives in a culture where cauliflower-eating is associated with higher status, people may be compelled to say they like it even though they don’t. Conversely, someone might eat cauliflower only to avoid offending their host. Such situations call for caution in interpreting people’s preferences – both stated and revealed – but they do not negate the observation that, in ordinary circumstances, taking people at their word regarding their preferences is a very good guide to their behaviour. Let’s take another case. You are in a shared office and your office-mate says they feel cold when the thermostat is set to 72°F (22°C). Do you take them at their word, or do you say ‘But 72°F is the proper indoor temperature? How can you feel cold?’ Suppose you take measurements of their skin temperature, core temperature, even an fMRI scan showing activation of their insula. None of these would allow you to claim that they don’t feel cold. None of these measures would negate their self-report. If one was concerned about hypothermia, relying on objective measurements may well be appropriate but, if the goal is to understand what a person feels, self-report trumps objective measurement. The same logic applies to other inherently subjective states such as loneliness, pain and awe. To measure loneliness, it is not sufficient to count how many people someone talks to or is friends with because one person’s active social life may be another person’s depth of loneliness. We can tell if there is a flu epidemic by using objective tests, but diagnosing a ‘loneliness epidemic’ requires taking into account whether people feel lonely. This is also why, despite all the available technology we have to measure people’s physiological states, when it comes to pain, we continue to rely on pain scales, a simple form of self-report. If we take introspective judgments seriously when it comes to preferences, emotion and pain, why would we be more sceptical about them in cases of phenomenal differences such as imagery and inner speech? One possibility is that we are able to reliably introspect about some things and not others. Perhaps we can reliably report on ‘basic’ states like pain and whether we like cauliflower (though, even here, there may well be differences in people’s ability to self-report), but in other cases our introspection fails. For example, most people think they are above-average drivers – one of the many examples of the so-called ‘Lake Wobegon Effect’. We can also be wrong in the other direction. In a typical implicit learning study, participants are exposed to sequences of flashing lights, sounds or shapes that obey a certain rule. They subsequently have to identify whether new sequences obey the same rule or not. Participants often feel like they are just guessing, that is, they think they have not learned anything. Their behaviour, however, can be far above chance level, indicating that they in fact have learned something. In such cases, the ‘incorrect’ self-report is still informative: it gives us insight into the person’s subjective reality (they think they are in the 80th percentile of driving ability, they think they are just guessing, they think they haven’t learned something that they, in fact, have). But at the same time, these self-reports do not reflect objective reality. They are poor guides to predicting what a person can or is likely to do. Lastly, consider dreams.
In a 1958 survey, Fernando Tapia and colleagues reported that only about 9 per cent of respondents indicated that their dreams contained colour. Other surveys done around this time reported similarly low proportions. A decade later, the tide turned and a large majority reported dreaming in colour. The philosopher Eric Schwitzgebel considers several explanations for this discrepancy. One possibility is that black-and-white photographs and television changed the content of dreams. As colour TV came to dominate, colour returned to people’s dreams (‘returned’ because, in a few studies from the more distant past, people did not claim to dream in black and white). The problem with this is that there is no reason to think TV should have such an outsized impact on the phenomenology of our dreams. After all, the world never ceased to be in colour. The alternative, argues Schwitzgebel, is that ‘at least some people must be pretty badly mistaken about their dreams.’ Our ability to report on the perceptual content of our dreams may simply be unreliable. And with no objective measures against which to measure the subjective report, we can’t really know whether these reports reflect any reality, subjective or not. Why then would there be any consistency in people’s reports at a given time? Perhaps because, in the absence of good access to their phenomenal states, people go with the response they think is most reasonable. In the 1950s, the dominant popular and scientific view was that dreams lack colour. And so, when queried, participants simply mirrored that dominant view. The same happened as the dominant view later changed. Neither case, Schwitzgebel argues, reflects ‘correct’ phenomenology because we simply do not have valid introspection when it comes to the colour of our dreams. If reports about phenomenal states like imagery and inner speech are like reports about dreams, we have every reason to remain sceptical about whether differences in introspective reports reflect actual differences in people’s experiences. If they are more like reports about our preferences and emotions, then we can (mostly) take people at their word. Even then, we must consider social pressures to respond in a certain way. If having vivid imagery were a requirement for admission to art school, we should not be surprised if aspiring artists all claimed to have very vivid imagery. If hearing a voice when one reads were considered a sign of mental illness, people would be less likely to say they hear a voice when they read. Establishing the validity of self-report can be done in several ways. First, we must show consistency. If one day people claimed they experience inner speech constantly and the next day they claimed they never did, we have a problem. As it turns out, people’s reports are highly consistent. Inner speech questionnaires taken months apart show high correlations. (At the same time, Russell Hurlburt’s work using descriptive experience sampling, which probes people’s thinking at random points during the day, does show that people overestimate how much of their thinking is in the form of inner speech.) We can also see whether differences in reported phenomenology predict differences in objective behaviour. This is not an option when it comes to dreams, but we can make specific predictions about behavioural consequences of having more or less visual imagery and inner speech based on existing theories of imagery and language.
Differences in self-reported phenomenology can be linked to differences in objective behaviour. Those with less inner speech have a harder time remembering lists of words; those with less visual imagery report fewer visual details when describing past events. There are also reported differences in more automatic physiological responses. More light entering the pupil causes it to constrict. But simply imagining something bright like the Sun also causes a smaller, but still measurable, constriction. Aphantasics show perfectly typical pupillary responses to actual changes in light. However, their pupils do not respond to imagined light. At the same time, many hypothesised differences in behaviour are not observed because, it seems, people compensate by, for example, discovering ways of remembering detailed visual content without engaging explicit imagery. Such compensation can prove beneficial. People with poor autobiographical memory find other ways of keeping track of information that can help stave off some of the cognitive decline in ageing. Another way to establish validity is to ask whether there are neural and physiological correlates of reported phenomenal differences. If differences in reported imagery were mere confabulations or the results of people just telling researchers what they think the researchers want to hear, it would be surprising if they had different brain connectivity and functional activation as measured by fMRI. Yet this is what we are finding. Fraser Milton and colleagues scanned groups of people identifying as aphantasics and hyperphantasics (those with unusually vivid visual imagery). When participants were asked to lie in the scanner and stare at a cross on a screen, the hyperphantasic group showed greater connectivity between the prefrontal cortex and the occipital visual network than the aphantasic group did. Participants were also asked to look at and imagine various famous people and places. The difference in activation between perception and imagery (in a left anterior parietal region) was larger in hyperphantasic compared with aphantasic participants. Those with typical imagery tended to fall between the aphantasic and hyperphantasic groups on many of the measures. Less is known about neural correlates of differences in inner speech. In work presented at the 2023 meeting of the Society for the Neurobiology of Language, Huichao Yang and colleagues found a relationship between how much inner speech people reported experiencing and resting-state functional connectivity in the language network. Lastly, even though we don’t know what it’s like to be someone else, we can compare how our phenomenology differs from one time to another. There are numerous reports of people with brain injuries that caused them to lose visual imagery, and some cases of losing inner speech. It is much harder to brush aside self-reports of someone who says they used to be able to imagine things, and now they can’t (especially when these are confirmed by clear differences in objective behaviour). Holderness’s caption introducing the world to #TheDress had a second part. ‘This is important,’ she wrote, ‘because I think I’m going insane.’ The idea that the same image can look different to different people is alarming because it threatens our conviction that the world is as we ourselves experience it.
When an aphantasic learns that other people can form mental images, they are learning that something they did not know was even a possibility is, in fact, many people’s everyday reality. This is understandably destabilising. And yet, there is a scientific and moral imperative for learning about the diverse forms of our phenomenology. Scientifically, it prevents us from making claims that the majority experience (or the scientist’s experience) is everyone’s experience. Morally, it encourages us to go beyond the ancient advice to ‘know thyself’, which can lead to excessive introspection, and to strive to know others. And doing that requires that we open ourselves up to the possibility that their experiences may be quite different from our own.
Gary Lupyan
https://aeon.co//essays/the-moral-imperative-to-learn-from-diverse-phenomenal-experiences
https://images.aeonmedia…y=75&format=auto
Psychiatry and psychotherapy
I was the victim of a carjacking. The trauma from that experience was unendurable. Then I discovered eye movement therapy
I wore leggings that Tuesday. I never wore leggings to work, but that winter three years ago the New Orleans heat was in hibernation. Ice climbed up my windows, and my sweater almost reached my knees. I whispered: ‘Be good, I love you,’ to the puppy sound asleep in his crate and the groggy cat still snuggled in bed before I stepped out into the unseasonably cold January air. A neighbour’s motion-sensor light blinked on to help me navigate the blackness. It was 12:50am; my shift at the news station started in 10 minutes. My fingers became numb quickly, making it difficult to turn the key in the lock. I speed-walked to my car, my beautiful, white Hyundai Kona, my college graduation gift from my parents. I twisted the heat all the way up, slapped the seat-heater button, and turned my Spotify to Maggie Rogers’s new album. With my hands pulled into my North Face sleeves, I grabbed the wheel. I did an illegal U-turn to get out of my one-way street, cutting a whole minute off my arrival time. As I slowed at the intersection, headlights warned me to step on the brake. The red car seemed to slow as it passed in front of me, eventually turning into my street, barely missing my car. I pulled out. ‘Turn signal, dumbass,’ I mumbled, still frozen and half asleep. My phone lit up as I drove. It was a short journey; my seat heater barely had any time to thaw my insides before I got to the news station. Christmas lights and the occasional working streetlamp lit up the neighbourhood: the narrow shotgun houses with too many plants on their porches, the doors bright yellow, sleepy blue or lively green. When I would drive home later that morning, the people of Mid-City would be bundled up, starting their days with frigid dog walks and coffee runs. I reached down to check the message from my boyfriend, Henry, warming my hand over my coffee tumbler. I slowed to turn and respond, eyes darting between my phone and the road. I had been in my car all of four minutes when I rounded the next corner, foot poised over the brake as I closed in on the stop sign. Flying headlights in my rear-view mirror caught my attention. The red car belted down the street. I stopped before the sign to let them go ahead. They must be in some hurry. Maybe they have somewhere to be. At one in the morning? The car screeched in front of me, cutting me off. Three doors flung open, guns pointing directly at me: a 23-year-old woman who had never really had anything horrible happen in her life. Suddenly, I was wide awake, but my mind was blank. I screamed, I cried, I shook. I rolled down the window. They shouted at me: ‘Put it in park, PUT IT IN PARK.’ Tears poured icy hot down my face. The mascara glued my lashes together. I begged for my life; something I had never considered having to do. ‘Please, please,’ I screamed over the banging of the guns on the roof and my skyrocketing heart rate. My car nudged theirs. ‘PUT IT IN PARK.’ I reached over and thrust the car into park. A hand slid along the seatbelt, unbuckling me from the live-action horror movie. He used just enough force to get me out. ‘Get out, get out! DROP THE PHONE.’ I obliged because what choice did I have? ‘WHERE ARE THE KEYS?’ he screamed. I cried out, telling him they were in my pocket. I could no longer control my sobs. He reached into my jacket, grabbed my keys, and got into the car – my car, warmed with Maggie’s voice leaking from the speakers. The other men with guns jumped in beside him. 
The red car’s tyres spewed gravel as their accomplices drove off; my white car squealed after. I stood in the cold, in the middle of a run-down street, surprised no one had heard what had just happened. No lights flicked on, no one stepped out to see who or what was making hysterical noises in the middle of the night. I had nothing. I felt naked. I ran to the nearest gas station – probably a quarter mile up the road. The chill wafted in from the bayou that separated the nice houses from the even nicer houses. My nine-year-old combat boots thunked onto the pavement. I could feel my feet rubbing the thin insides, a sole coming loose. Between the sobbing and the running, I could barely breathe, let alone talk, once I stepped into the artificial light. ‘I was just carjacked. Could I please use your phone?’ The cashier looked scared, taking in my heaving chest, mascara tears and red nose. I used the gas station phone to call 911; the operator pressed me for the street corner where it had happened. In that moment, the only clear thing in my mind was the scene, playing over and over. I called my mom. The first time I called, it went to voicemail. It was 2am her time. Fresh tears fought to escape, though my cheeks weren’t dry yet. I dialled again. ‘Hello?’ she answered, dreams dripping from her voice. But this was a nightmare. ‘Mom,’ I half cried. The relief ripped me open. The tears fell. It took me a minute before I could even tell her why I had woken her up. ‘I was carjacked on my way to work. They had guns, so many guns. I called the police. I am OK, but can you please call Henry and have him meet me here?’ She was calm, much calmer than I would have expected. Not that this was something I had ever expected. Every day at the news station, we ran stories about crime victims, people who had lost everything, sometimes their lives, to a group of strangers bent on destruction. Never had I allowed myself to think that I could be next. In the gas station, I shook in my tattered combat boots and leggings, overcome with fear. It had happened to me. The world shifted, and my mind fell down a deep black hole. My mom and I hung up, and I called work, letting them know I wouldn’t be able to make my shift that night. I stood awkwardly by the cash register waiting for the police to arrive, intermittently crying, not bothering to wipe the black streaks off my face. The guy behind the counter left to get me an old milk crate to sit on, his pity palpable. Twenty minutes later, Henry and his sister showed up. I took one look at his face and the dam broke. I heaved into his chest while he consoled me. He called the police again. By the time the officers showed up, I had reviewed the mental footage a hundred times, but I still couldn’t tell them what kind of red car it was or whether the men were wearing masks or exactly how many of them there were. I didn’t know. I knew only that the car was red, my things were gone, and there were so many guns. I felt as if I were watching myself from a vent in the ceiling, helpless, giving them what little information I could. For weeks, I lost myself. Time passed in a blur of fear and vulnerability. Seeing a reflection made me jump. I couldn’t be alone in my own apartment or my heart rate would shoot up. I didn’t drive myself to or from work, even after the police found my car.
The officers told me the men had been spotted the day after they’d stolen my car, joyriding at 90 miles an hour down a one-way street; they then ditched the officers and the car altogether. The police found it the next day, two days after the crime, undamaged and parked in a random neighbourhood with a Louisiana licence plate in place of my Michigan one. That was the last time I heard from the detectives. The men with guns could still be out there. I lay in bed at night, willing myself to sleep. When I finally did, I woke up crying or shaking or both. I retold the story to overly curious co-workers. I zoned out of conversations. I moved apartments. I never wanted to leave my bed. I wanted to stay in the comfort of Henry’s arms all the time. Even when I was there, I felt alone. The saying about how people who go through traumatic events end up going through the motions, watching themselves, trapped in their own heads, is true. I felt broken. I felt like I could never be fixed. Three weeks after the carjacking, I started seeing a therapist. Maybe this constant horror movie on the backs of my eyelids would stop. Dianne Markel welcomed me into her spacious office, decorated with a beautiful wooden bookcase behind her desk, thriving plants and a softly humming air purifier. The woman had a kind face, one that told you she was really listening. Markel worked with people who went through traumas, often using a technique called eye movement desensitisation and reprocessing, or EMDR. First developed in 1987, EMDR is an eight-phase psychotherapy technique that has mostly been used to treat veterans and others with symptoms of post-traumatic stress disorder (PTSD). Today, the approach is becoming more common, especially for people with drug or alcohol addiction, as its therapeutic benefits are recognised. EMDR not only helps patients to process their trauma, but also to develop coping skills, calm the stress response, and sustain ongoing self-improvement. It can also transform their beliefs, helping them let go of negative thoughts and become open to recovery. The first phase of EMDR is establishing trust. If the patient doesn’t trust or work well with the therapist, they might hold back during the process, not allowing themselves to fully heal. Markel’s comfortable leather couch moulded itself to me, even as I sat on the edge. Her soft voice matched her gentle demeanour, and she apologised for what had happened to me, struggling to find the words herself. She didn’t pretend to understand what I was going through or rush me. Her eyes seemed to smile as she shared bits of herself with me. She made me laugh with a story about how she had worn two different shoes to work the day before: a cheetah-print flat and a black ballerina slipper. It was easy to let my guard down. Well, ‘easy’ for someone who had just lost all their faith in humanity and badly wanted to reverse back into what life was like before the incident: naive, normal and devoid of a seemingly endless vulnerability. Markel assured me that what I was feeling was more than normal, and it was OK to be afraid. But she also promised a light at the end of the tunnel. During that same first visit, we entered the second phase of the treatment: preparation. Markel handed me a thick stack of positive mantras to repeat to myself when I got overwhelmed. I was supposed to pick one and say it slowly three times as I breathed in and out. 
Even though I chose a saying (I am OK; I am safe), in the weeks before our second meeting, the anxiety would usually be so consuming that the mantra did little to help. My therapist also explained what we would be doing and why it should help me recover. She would guide me through all the steps, but it would be up to me to open up fully to the treatment. ‘We can stop at any time,’ she said. I placed my trust in her. In our next session, we jumped right into the third phase: assessment. Markel had me hold a TheraTapper – two green rubber handles, one in each hand, that vibrated back and forth and connected to a small machine where I could choose the frequency and strength of the vibrations. Even though ‘EM’ in EMDR stands for ‘eye movement’, the tapper’s alternating, calming vibrations or tones in one’s hands, feet or ears have the same effect. They’re supposed to decrease the negative emotion associated with the traumatic event. My clammy right hand would feel the quiet buzz, then the left. I sank into the rhythm. According to the EMDR International Association, the TheraTapper’s rhythms connect with the biological mechanisms involved with rapid eye movement (REM), which helps those undergoing EMDR begin to process, digest and store the memory and trauma. Basically, the rhythms help to speed up the healing process by mimicking REM, which changes the neural networks that haven’t adapted to the trauma. The stimulation facilitates communication across the brain to help make sense of traumatic memories. I was not in a trance, but rather recognising the incident as if I were a bystander, taking myself out of the picture and replacing the fear with appreciation for the event as it was: a thoughtless crime against an undeserving victim. For two minutes, Markel had me close my eyes, grip the tappers and envision one part of the carjacking. It was not hard, as the scene played vividly through my mind every time I thought I was slipping into normalcy. The worst wake-up call. I watched myself succumb to the terror, become a victim over and over. My therapist had me focus on a negative thought that I associated with that part of the memory: I am weak. I am pathetic. I am helpless. I am scared. I am unsafe. I am broken. I counted the taps until it was over. Then came the difficult part. Once the two minutes were up, Markel had me rate how disturbing the negative thoughts felt on the Subjective Units of Disturbance Scale of 0 to 10, with 10 being the most disturbing. That fear, or negative thought, according to the EMDR International Association, is ‘locked in the nervous system’ after a traumatic event, which prevents the brain from processing it in a way that allows the victim to live without fear. My eyes focused on the spider plant sitting next to Markel, who was listening attentively. After I answered a solid 9 on the Subjective Units of Disturbance Scale, she asked me to describe what I saw, how it made me feel, why I felt the way that I did, and where I felt it in my body. A bomb went off periodically, melting my insides, forcing my heart to race, my hands to clench, my chest to tighten. That’s where I held the trauma. It felt like I had to physically pull myself out of that moment, away from the armed men jumping out of their car, from the guns pointed at me. I cleared my throat to avoid croaking out a sob. ‘I did nothing to stop it,’ I said during one of our first sessions. But then, what would I have done? I don’t own a weapon.
Even if I did, I was drastically outnumbered. But I didn’t fight them off. I allowed myself to be a victim. In the fourth phase, desensitisation, it was time to identify the negative emotions that had attached themselves to the crime. I breathed in, closed my eyes, and yearned for the TheraTapper to work its magic, still playing tag with vibrations. Terror, vulnerability, weakness, frustration, sadness, absence, anxiety, anger. I word-vomited up everything that I had been feeling about that night, surprised not to see a puke-coloured stain on the carpet. The second time I immersed myself into the rhythms, Markel had me detach my mind from the shaken version of myself struggling to put the car in park and instead watch from above, like my soul had left my body. Then, she listened to how I broke down the scene, how I felt, and turned around my phrases so I could see that this carjacking was not my fault. She spoke to me in a way that told me I would survive this and come back stronger. In the fifth phase, installation, Markel asked me to identify a positive belief about myself during the moment I had pictured. I don’t remember exactly what my belief was in that first session; mostly, I was concentrating on sharing as much as I could because I just wanted to get better. It was probably along the lines of I did the right thing, because the positive belief is supposed to reflect a more appropriate thought on what happened. For two more silent minutes, I focused both on the vibrations and seeing myself doing the right thing when I was attacked. While my eyes were closed, I willed the mantra to get stronger. Markel’s voice pulled me out of the trance. She asked me to rate how true my positive belief felt in that reflection on the Validity of Cognition Scale, where 1 equals completely false and 7 equals completely true. My answer wavered between 2 and 3. Then came the sixth phase: body scan. I identified the manifestation of the terror in my body as well as the emotions that bubbled up, so Markel could help me try to relieve them. First up, hands: my nails dug into my palms, white knuckles protruding. Why did I feel the terror there? What did my hands hold? An indescribable amount of tension. I shut my eyes again as my therapist and the TheraTapper guided me through a short meditation, targeting the terror in order to help resolve it. My hands had always been an outlet for anxiety – I grew up picking the skin around my cuticles until I bled or my mom got annoyed. For the past month, my hands had worked overtime. My fingernails looked like I’d just clawed my way out of a battlefield: bloody and raw. Slowly, I synced my breathing to the TheraTapper’s vibrations, allowing my fingers to unfold and my hands to relax and stretch. For the first time since the night of the event, the knot in my chest started to release. At the end of each session, the goal of the EMDR therapy was to feel better, generally, than when I’d walked in. My therapist and I breathed deeply together. In. Out. In. Out. During the first six phases, I was in control. In order for me to remain in control, Markel briefed me on what I could expect, back in the real world, as we transitioned into the seventh phase: closure. The scene would likely continue to play out, and there might even be times when a new detail would emerge. It’s all part of the process.
She gave me a series of self-calming techniques: supplements, journaling, meditation, mantras, just breathing. And then I was on my own until I set foot inside her cosy office again. After the initial two sessions, we began with the eighth phase, re-evaluation, then went back and covered phases three through seven again. Markel and I would talk about my past few weeks in the real world. Was I handling the anxiety and fear better? Did the scene play out less frequently? Was I healing? We repeated that sequence once every two weeks for months. The goal was to get my ratings on the Subjective Units of Disturbance Scale down to ‘not very disturbing’, and the Validity of Cognition Scale up to ‘very believable’. The therapy aimed to release the memory from the front of my mind, allow me to come to terms with what had happened, then store the memory in the back of my mind, without locking it away. I wasn’t burying the terror; I was learning to accept it and grow stronger. I was in control. It took us about four months of going through the phases before I got to that point, before I no longer needed EMDR. Some sessions were less challenging, while others still felt almost as difficult as the first. Reliving the carjacking, allowing the scene to play out, got easier, but the tension in my hands never fully dissolved. It was like a part of me never wanted to forget how I had felt in that moment. To this day, I refuse to wear leggings to work. I hate driving in the dark, but I’m able to drive my white Hyundai Kona without succumbing to a panic attack. I harbour a general distrust of male strangers, but I’m strong enough to venture out alone. I still sync my breathing to the ghost of a TheraTapper when the anxiety gets to be too much. I clench and unfurl my fingers to release tension. My life is nowhere near where it was before January 2021. I will never not be the girl who was carjacked on her way to work in the middle of the night. But now, when I look back at the memory, I no longer see a victim. I no longer beat myself up for not doing anything to stop it from happening. I see a survivor.
Madison McLoughlin
https://aeon.co//essays/how-emdr-therapy-helped-me-heal-from-the-trauma-of-a-carjack
https://images.aeonmedia…y=75&format=auto
War and peace
The US military’s greatest enemy worked in an institution saturated with military funding. How did it shape his thought?
Noam Chomsky rose to fame in the 1960s and even now, in the 21st century, he is still considered one of the greatest intellectuals of all time. His prominence as a political analyst on the one hand, and theoretical linguist on the other, simply has no parallel. What remains unclear is quite how the two sides of the great thinker’s work connect up. When I first came across Chomsky’s linguistic work, my reactions resembled those of an anthropologist attempting to fathom the beliefs of a previously uncontacted tribe. For anyone in that position, the first rule is to put aside one’s own cultural prejudices and assumptions in order to avoid dismissing every unfamiliar belief. The doctrines encountered may seem unusual, but there are always compelling reasons why those particular doctrines are the ones people adhere to. The task of the anthropologist is to delve into the local context, history, politics and culture of the people under study – in the hope that this may shed light on the logic of those ideas. The tribe shaping Chomsky’s linguistics, I quickly discovered, was a community of computer scientists during the early years of the Cold War, employed to enhance electronic systems of command and control for nuclear war and other military operations. My book Decoding Chomsky (2016) was an attempt to explain the ever-changing intricacies of Chomskyan linguistics within this specific cultural and historical setting. I took it for granted that the ideas people entertain are likely to be shaped by the kind of life they lead. In other words, I assumed that Chomsky’s linguistic theories must have been influenced by the fact that he developed them while working for the US military – an institution he openly despised. This was Chomsky’s impossible dilemma. Somehow, he needed to ensure: a) that the research he was conducting for the US military did not interfere with his conscience; and b) that he could criticise the US military without inducing them to cease funding his research. His solution was to make sure that the two Noam Chomskys – one working for the US military and the other against it – shared no common ground. He achieved this through a bold stroke of amputation. From the start of his academic career, no part of his scientific work would show up in his political activism, while no trace of his activism would be detectable in his science. Among the inevitable outcomes was a conception of language utterly divorced from what most of us mean by that term. Language, for Chomsky, is a computational module restricted entirely to the individual, and devoid of communicative, cultural or social aspects. If it has any remaining purpose or function, it exists merely for talking to oneself. This novel and allegedly ‘scientific’ model of language was so extreme in its individualism and abstraction that, in the end, it proved of no use to anyone. Not even the US military could make any of it work. Decoding Chomsky triggered a heated debate. Although reviewers were largely positive, Chomsky’s own response was that the ‘whole story is a wreck … complete nonsense throughout’. In a letter to the London Review of Books in 2017, he said that for anyone to suggest that the Pentagon once viewed his linguistics as important for future forms of war was too absurd to require comment. In 2019, in a considerably longer polemic, he accused me of continuing to spin a ‘web of deceit and misinformation’. 
More recently, in an online interview with the physicist Lawrence Krauss in 2022, Chomsky suggested that those of us who raise the issue of his work for the Pentagon are just accusing him of ‘working for the war machine’. I concede that if that were my book’s message, Chomsky’s hostility would be easy to understand. But, in fact, I am saying something quite different. Whether it’s Chomsky or anyone else, we all need to make a living. In a world where money talks, we’re often faced with a harsh choice – compromise on a point of principle or find ourselves out of work. One way or another, many of us have been there. To keep body and soul together, one version of ourselves colludes with the prevailing powers while another indignantly resists. In 1955, Chomsky found himself in just such a situation. He had a PhD in linguistics but was unable to get a job at Harvard. So he went to see Jerome Wiesner at the Massachusetts Institute of Technology (MIT). Wiesner was a self-described ‘military technologist’ who had helped set up the Sandia nuclear weapons laboratory and was now the director of MIT’s Research Laboratory of Electronics. He was impressed with Chomsky and gave him a job, but the young recruit had few illusions about where he now worked. As he has confirmed in various interviews, MIT was ‘90 per cent Pentagon funded’, ‘almost everybody’ was involved in defence research, and he himself ‘was in a military lab’. Chomsky was in no position to change any of this, but he could still avoid direct work on military technology. He refused to get security clearance and made no attempt to understand electronic devices, describing himself as a ‘technophobe’ who couldn’t handle anything more complicated than a tape recorder. Of course, Chomsky had to do some work to keep his job. The solution he found was to confine himself to certain alleged yet previously unsuspected grammatical principles underlying every language in the world. If he succeeded, this would be an achievement on the scale of James Watson and Francis Crick’s stunning discovery of the molecular structure of DNA. It was this search for an invariant underlying pattern – which Chomsky termed Universal Grammar – that sustained his MIT career for more than six decades. For anyone familiar with Chomsky’s powerful anti-militarist writings, it’s astonishing to imagine that the US Department of Defense once considered his linguistic theories as a means to enhance their computerised systems of weapons command and control. Their dream was that commanders could type instructions in ordinary English instead of having to master specialised computer languages. Astonishing, certainly, but such hopes are made quite clear by US Air Force scientists from the period. Take, for example, Colonel Edmund Gaines.
In 1971, Gaines referred to the kind of language research that Chomsky had pioneered in these words: ‘We sponsored linguistic research in order to learn how to build command and control systems that could understand English queries directly.’ That same year, Colonel Anthony Debons wrote: ‘Much of the research conducted at MIT by Chomsky and his colleagues [has] direct application to the efforts undertaken by military scientists to develop … languages for computer operations in military command and control systems.’ Lieutenant Jay Keyser was a linguist recruited by Chomsky to MIT who later became Chomsky’s close friend and his ‘boss’ as head of MIT’s linguistics department. In articles from 1963 and 1965, Keyser highlighted various problems with the artificial languages then being used in the military’s command and control systems. He recommended instead an ‘English control language’, based on Chomsky’s ideas, that would enable commanders to use ordinary English when communicating with their weapons systems. Keyser illustrated his argument with references to missiles and B-58 nuclear-armed bombers, using sample sentences such as ‘B-58’s will refuel’, ‘B-58’s must be on base’, and ‘The bomber the fighter attacked landed safely.’ An Air Force-sponsored offshoot of MIT called the MITRE Corporation was particularly interested in such ideas. MITRE’s linguists were led by the former MIT researcher Donald Walker who, in 1969, explained: ‘our linguistic inspiration was (and still is) Chomsky’s transformational approach’. As many as 10 of Chomsky’s students played ‘a key role’ in MITRE’s linguistics research, and, in a report from 1962, Walker and his colleagues were quite clear that they intended to enhance ‘the design and development of US Air Force-supplied command and control systems’. MITRE’s original mission had been to design such systems for nuclear war but, by 1967, almost a quarter of the corporation’s resources were focused on the Vietnam War. MITRE’s role in that war included overseeing the technical side of the McNamara Line. This was a massive hi-tech project consisting of a barrier of sensors, mines and cluster bombs along the border between North and South Vietnam – a barrier that was intended to finally crush the Vietnamese resistance. In light of all this, the one place we might have expected the fiercely anti-militarist Chomsky to avoid would be MITRE. But it appears that the career pressures he faced at MIT meant that, from 1963, Chomsky felt obliged to work directly for the corporation. We know this because two MITRE research papers name Chomsky as a ‘consultant’ and both papers are quite clear that this research concerns the ‘development of a program to establish natural language as an operational language for command and control’. We also know from Chomsky’s former students that he visited MITRE’s laboratories on several occasions in this consultancy role. One of these students, Barbara Partee, told me that Walker convinced the military to hire her and other MIT linguists on the basis that: ‘… in the event of a nuclear war, the generals would be underground with some computers trying to manage things, and that it would probably be easier to teach computers to understand English than to teach the generals to program.’ Partee qualified her statement by saying she is not sure anyone quite believed this justification.
She also pointed out that any ‘basic research’ that might help the military might also benefit wider society. This is true. But it’s also true that the ability to communicate with computers in English would have given the US an important military advantage. Consequently, Chomsky’s students had to try to convince themselves that they weren’t guilty of colluding with the military. As Partee says: ‘For a while, the Air Force was convinced that supporting pure research in generative grammar was a national priority, and we all tried to convince ourselves that taking Air Force money for such purposes was consistent with our consciences, possibly even a benign subversion of the military-industrial complex.’ One student, Haj Ross, even told me that he ‘never had any whiff of military work at MITRE’. But this all rather reminds me of the biologist Jonathan King’s comments about the level of self-delusion among MIT’s students in the 1980s: There were hundreds and hundreds of physics and engineering graduate students working on these weapons, who never said a word, not a word … So you’d go and have a seminar on the issue they’re just working on; you know, they’re working on the hydrodynamics of an elongated object passing through a deloop fluid at high speed. ‘Well, isn’t that a missile?’ – ‘No, I’m just working on the basic principle; nobody works on weapons.’ In the 1960s, MITRE weren’t the only specialists in nuclear war command and control who were interested in Chomsky’s ideas. Researchers at the System Development Corporation were also trying to develop machines that could understand English commands, examples being ‘Blue fighter go to Boston’ and ‘Where are the fighters?’ According to A History of Online Information Services, 1963-1976 (2003) by Charles Bourne and Trudi Bellardo Hahn, these researchers ‘were paying close attention to Chomsky’s work and sometimes used Chomsky as a consultant.’ Fortunately, none of these military scientists managed to get Chomsky’s theories to actually work. Although MITRE’s linguists did produce what they called a ‘transformational grammar’ for ‘military planning files’, they don’t appear to have got much further, and the Pentagon’s generous funding for Chomsky’s linguistics eventually fell away. Chomsky still seems to regret this loss of funding, claiming that it came without strings attached. As he explained in his 2022 interview with Krauss: ‘The Pentagon was the best funder ever. They didn’t care what you were doing … Nobody in the Left can understand that. They assume that if you’re working on problems of philosophy, and for the defence department, you must be working for the war machine!’ Chomsky made similar points in a 2015 talk where he also mentioned that ‘a couple of generals’ would sometimes visit his workplace at MIT but otherwise there wasn’t much surveillance. Evidently, these generals were following in the tradition of General Dwight Eisenhower who, in 1946, directed that military scientists must be given ‘the greatest possible freedom to carry out their research’. Chomsky’s claim that the Pentagon ‘didn’t care’ what he was doing is one that he has made on several occasions. But it is in stark contrast to the documentary evidence. It seems that being an anti-militarist working in a military lab created a situation in which Chomsky has no choice but to hold contradictory ideas about his working environment.
So while he has always known, as he said in a debate with Michel Foucault in 1971, that MIT was ‘a major institution of war-research’, he also needs to believe that ‘the Pentagon was not funding war work’ at MIT, as he said in an interview with Rebecca Schein in 2011. Chomsky seemed equally conflicted when, in 2019, I raised the issue of his consultancy work for MITRE. While he usually dismisses any suggestion that the military funded his linguistics in the hope of military applications, on this occasion he resorted to a quite different argument: MITRE’s linguists, he said (while summarising Barbara Partee), always understood that ‘any imaginable military application would be far in the remote future’. While this sort of reasoning might have reassured Chomsky’s students, it is unlikely to have reassured Chomsky. Consider his response when his wife Carol began working on an Air Force project in 1959. This MIT-based project was intended to enable people to communicate with computers in ‘natural language’, one aim being to enhance ‘military command and control systems’. We have it from the project’s head, Bert Green, that Noam was ‘very nervous’ about all this and needed reassurance that Carol wasn’t working on ‘voice activated command and control systems’. If Chomsky was nervous then, he must have been even more nervous when he found himself working for MITRE and the System Development Corporation, both of which were committed to designing computer systems for use in a nuclear war. To appreciate quite how much this must have troubled Chomsky, we need only recall his response when he heard the news of the Hiroshima bombing in August 1945. As he said in an interview with C J Polychroniou in 2019: I was then a junior counsellor in a summer camp. The news was broadcast in the morning. Everyone listened – and then went off to the planned activity – a baseball game, swimming, whatever was scheduled. I couldn’t believe it. I was so shocked I just took off into the woods and sat by myself for several hours. Chomsky was similarly shocked when Philip Morrison, a scientist who had worked on the Hiroshima bomb, told him that he couldn’t remember any discussion about the consequences of what he and his colleagues were doing until after the bomb had been used: These are some of the most brilliant human beings in the world – very humane, European culture, high culture – not just engineers … [But they’re] so immersed in the challenging technical problems of getting this thing to work that they were simply not considering what the effects would be until afterwards! Chomsky was always dismayed at how ‘brilliant’ people could so guiltlessly stoke up the possibility of destroying the human race. He was also well aware of the role of MIT’s managers in organising and giving focus to such brilliance. Take MIT’s vice-president in the early 1960s, General James McCormack. He supervised the university’s Center for Communication Science which naturally included MIT’s linguists. Perhaps McCormack’s interest in linguistics was purely intellectual – but I doubt it. After all, he was the general who had supervised the creation of the Pentagon’s entire nuclear weapons stockpile. Or take Wiesner, who not only recruited Chomsky to MIT but who, in 1960, co-founded the university’s linguistics programme. Wiesner later became MIT’s provost and then president, which, in effect, made him Chomsky’s boss for more than 20 years.
Now, maybe Wiesner’s interest in linguistics was purely intellectual. But, again, I doubt it, considering he played a significant role in setting up the Pentagon’s entire nuclear missile programme, as well as its computerised air-defence systems. By 1961, Wiesner had become President John F Kennedy’s science adviser. According to one of his MIT colleagues, Wiesner was well suited for the role as he was ‘soaked’ in military work such as ‘submarine warfare, air defence, atom bombs, guerrilla warfare, civil defence, and psychological warfare’. By the mid-1960s, Wiesner’s air-defence research at MIT had evolved into what Life magazine described as ‘the backbone of the American field communications in Vietnam’. Meanwhile, various laboratories at MIT continued to research helicopter design, radar, smart bombs and counter-insurgency techniques for use in that brutal war. While Chomsky could sometimes ignore what was going on all around him, he couldn’t do this all the time. We know this from his own words, in a letter from 1967, published by The New York Review of Books: I have given a good bit of thought to … resigning from MIT, which is, more than any other university, associated with activities of the department of ‘defense’. So why didn’t Chomsky resign? Partly, I suspect, it was because MIT’s managers were so impressed with his linguistics work that by 1966 they’d given him a named professorship, which, as Chomsky recalled in a talk in 1995, ‘isolated me from the alumni and government pressures’. This meant that, although there was still a risk of prosecution and even imprisonment for his anti-war activism, there was now no direct risk to his MIT career. This fortuitous situation enabled Chomsky to throw himself into campaigning against the Pentagon while remaining in a career largely funded by that same Pentagon. Among various motives for this shift into activism was undoubtedly a sense of guilt that this career had been so generously funded by the very institution that was, at this time, so brutally attacking Vietnam. As Chomsky told Ron Chepesiuk in 1992, he had reached a point, by 1964, where ‘it got so horrible over there that I couldn’t look at myself in the mirror anymore.’ By 1968, he was telling various journalists not only that he felt ‘guilty’ for waiting so long before protesting against the Vietnam War, but that he felt ‘guilty most of the time’. Of course, if Chomsky’s linguistic theories had actually worked – if they had enhanced the Pentagon’s ability to inflict death and destruction across the globe – then he would have had still more reason to feel guilty. Such disturbing thoughts can only have deepened Chomsky’s determination to critique the US military-industrial complex – a critique whose credibility was only strengthened by the fact that he was someone from MIT, someone from inside that very complex. Chomsky’s critiques were particularly inspiring to MIT’s more radical students and, by 1969, these students had pushed the university into a major crisis over its ongoing war research – a crisis that Chomsky did his best to resolve by opposing student demands to simply end this research. Instead, he proposed that MIT should restrict itself to war research ‘of a purely defensive and deterrent character’. Of course, the US Department of Defense describes almost all its activities in terms of defence and deterrence.
Indeed, Chomsky’s position had some similarities with that of Wiesner, who himself became quite critical of both the Vietnam War and the nuclear arms race. Although Wiesner’s opinions never stopped him from continuing to administer a huge military research programme at MIT, his liberalism did help create an atmosphere in which it was quite acceptable for MIT’s scientists to criticise the Pentagon for misusing the weaponry that they themselves had invented. Now perhaps Chomsky was also content to do military research, secure in the knowledge that he could later criticise the military if they ever misused his work. But I doubt whether such wishful thinking could really have appeased Chomsky’s conscience. It seems to me more likely that his anxieties would have kept narrowing his focus to the more abstract, other-worldly and ‘beautiful’ yet unrealistic aspects of his linguistics – resisting any pressure to delve into the kinds of messy practicalities that might actually have led to weapons. When the Pentagon funded basic research on MIT’s campus, it was always in the hope that it might lead to the development of actual weapons in various off-campus labs. But maintaining a clear distinction between basic research (on-campus) and practical applications (off-campus) was never going to be easy. As Chomsky himself says, academics and students were moving between MIT’s campus and its off-campus military labs ‘all the time’. Despite this, the illusion of a distinction felt comforting to many at MIT. As we’ve seen, it enabled the university’s physics and engineering students to claim that they were ‘just working on the basic principle; nobody works on weapons.’ Chomsky felt he needed to take this idea as far as anyone could. And if the issue of MIT’s military work did come up, the convenient on-campus/off-campus distinction enabled him to claim, as he did at a conference hosted by University College London in 2017, that: MIT itself did not have war work, war-related work, on the campus … In fact, the only exception was, at that time, the Political Science Department. Chomsky is on firm ground here in pointing to the military work of MIT’s political and social scientists, some of whom advised US policy-makers on counter-insurgency and bombing campaigns in Vietnam. But to imply that MIT’s natural scientists weren’t also complicit is quite wrong, especially when we know that Wiesner recruited 11 natural scientists from MIT to work on the McNamara Line. Chomsky must be aware of this, but he was determined to see his linguistics as a particularly ‘pure’ form of natural science on a campus where this kind of science was considered – at least officially – free of military involvement. On a political level, this approach seems to have helped quieten Chomsky’s conscience. On a scientific level, however, you can get only so far by conducting linguistics as if, like maths or physics, it were a branch of natural science. Since language is intrinsically a social phenomenon, it simply cannot be understood this way. In the 1940s and ’50s, when computing was new and exciting, it was tempting to explore the idea that there might exist in the human mind/brain a computer-like ‘device’ or ‘mechanism’ that could account for our ability to speak. But from the 1960s onwards, as these investigations kept failing, dissenters among Chomsky’s supporters kept breaking away, insisting that historical, social and cultural phenomena had to be brought back in.
Chomsky, however, refused to move even an inch in that direction, his justification being that natural science is the only genuine kind of science, so-called ‘social science’ being nothing more than reactionary ideology. With this in mind, Chomsky made the striking claim that a rigorously ‘natural’ science of language is realistic in view of the fact that language itself is not social at all, having no significant function in terms of the communication of thoughts or ideas. In his book On Nature and Language (2001), he writes: [L]anguage … is not properly regarded as a system of communication … [although it] can of course, be used for communication, as can anything people do – manner of walking or style of clothes or hair, for example. So, according to Chomsky, language did not evolve to facilitate communication any more than people’s legs, clothes or hair did! Most readers of Aeon will assume that our capacity for language must have evolved among our distant ancestors through natural selection. Most will assume that language is not so much a system for thinking in private as a means of expressing our thoughts so others can share in them. You will probably take it that language is inseparably connected with social life and hence with history, politics and culture. You might also assume that, although children are genetically equipped with the necessary linguistic capacities, they actually acquire their first language by learning from and interacting with those around them. Chomsky, however, rejects each one of these ideas. For example, in the paper ‘Three Factors in Language Design’ (2005), he claims that the biological capacity for language did not evolve but appeared suddenly when the brain of a single early human was ‘rewired, perhaps by some slight mutation’. From that moment, this mutant individual supposedly used language not to communicate with others but only for silent thinking. In interviews with James McGilvray in 2012, Chomsky argues that, even today, people use language 99.9 per cent of the time for talking to themselves. Chomsky’s determination to free language from all connection with society, politics, history or culture – all connection, in other words, with the political activist side of his life – is evidently what drove him to these bizarre conclusions. It eventually drove him to the claim that words, or the concepts behind them, are lodged in the brain from birth – having become fixed in our genes at the moment when our species first emerged. When challenged to explain how this idea could possibly apply to words such as ‘bureaucrat’ and ‘carburettor’ – things that clearly didn’t exist when humans first evolved – Chomsky held his ground. Like all lexical concepts, he insisted in his book New Horizons in the Study of Language and Mind (2000), they must have been genetically installed thousands of years before real bureaucrats or carburettors had been invented. When MIT’s Jerry Fodor took Chomsky’s side on this issue, his rival philosopher Daniel Dennett expressed astonishment, writing in Consciousness Explained (1991): ‘Thus Aristotle had the concept of an airplane in his brain, and also the concept of a bicycle – he just never had occasion to use them!’ Perhaps ‘Aristotle had an innate airplane concept,’ Dennett continued, ‘but did he also have a concept of wide-bodied jumbo jet?
What about the concept of an APEX fare Boston/London round trip?’ Despite the hilarity, Chomsky has continued to defend the idea. Chomsky embraces genetic determinism in an equally extreme form when discussing how a child acquires its first language. He claims that no child needs social learning to do this. Since all the world’s languages have been genetically installed in each individual from birth, says Chomsky, the child just needs to run through its internal library of languages and, by a process of elimination, compute which particular one to activate. As Chomsky said in a lecture at the University of Rochester in 2016: It’s pretty clear that a child approaches the problem of language acquisition by having all possible languages in its head. It doesn’t know which language it’s being exposed to. And, as data comes along, that class of possible languages reduces. So certain data comes along, and the mind automatically says: ‘OK, it’s not that language, it’s some other language.’ Yet even while championing such extreme genetic determinism, Chomsky has in recent years happily swung over to the opposite extreme, suggesting that the role of distinctively human genetics may in fact be zero. This would be the case if Universal Grammar turned out to be a fundamental principle of language across the entire Universe. On this basis, bizarrely, Chomsky has since extended his claims to the languages of extraterrestrials, arguing at the International Space Development Conference in 2018 that Universal Grammar may prove to be universal not just among Earth-dwellers but on any planet in the Universe. In ‘Rethinking Universality’ (2020), Chomsky and his co-author Jeffrey Watumull suggest that ‘any language anywhere in the Universe would resemble human language’. Not only that, they and their co-author Ian Roberts go on to argue in ‘Universal Grammar’ (2023) that any intelligent extraterrestrials would likely be endowed with ‘human-style linguistic “software”, thus eliminating any principled limit to effective communication [between aliens and humans].’ Certainly no one could accuse Chomsky and his supporters of being too cautious in their claims! I mentioned at the outset that my job as an anthropologist isn’t just to describe Chomsky’s strange ideas or find fault with them. It is to understand why he found it necessary to arrive at them. The only explanation that makes sense to me is that, given his institutional situation at MIT, Chomsky felt obliged to follow two basic principles: firstly, he would pursue natural science to the total exclusion of politically suspect social science; and, secondly, he would keep his natural science ‘basic’ or ‘pure’ – that is, uncontaminated by the moral danger of any practical military applications. Even while continuing to admire Chomsky, most of his former supporters would now agree that, when tested in the light of how language actually works, not one of his ever-changing theoretical approaches has survived the test of time. Their most fundamental flaw was always their abstraction, in particular their insulation from social engagement and from the messy complexities of human life. In Explain Me This (2019), the influential theoretical linguist Adele Goldberg makes the point that to study written sentences in isolation – the Chomskyan strategy favoured by most theoretical linguists until recently – may be ‘akin to studying animals in separate cages in a zoo’.
Writing in 2016, the prominent evolutionary linguist and child psychologist Michael Tomasello and the developmental psychologist Paul Ibbotson summed up the prevailing consensus by observing that Chomsky’s ‘Universal Grammar appears to have reached a final impasse.’ Tomasello and Ibbotson are right. Not one of Chomsky’s models of Universal Grammar has proved workable. Each new variant has turned out to be not just mistaken but fundamentally useless. Although the Pentagon’s enthusiasm for artificial intelligence has rekindled some interest in Chomskyan grammar for what they call ‘future combat systems’, there’s no reason to believe that today’s military linguists will be any more successful than their predecessors. This raises an interesting question. If the entire Chomskyan paradigm was a mistake, then how can we explain its lasting influence? Even when they proved unworkable, Chomsky’s theories retained their initial aura of promise and excitement, as if some extraordinary breakthrough was about to be achieved. In likening his intended reconstruction of linguistics to the accomplishments of Descartes and Galileo, Chomsky raised himself to a plane far higher than any rival theoretician, offering hope for nothing less than a world-changing scientific revolution. In the early days, transformational grammar’s apparent endorsement by the Pentagon played a decisive public relations role. Previously, a linguist would most likely be some kind of anthropologist making notes about the language spoken in some marginalised community or little-known tribe. The prospect of such a scholar enjoying funding from the military would have seemed absurd. Chomsky’s arrival changed everything. Few people knew precisely why the Pentagon were so interested in his thinking, but the fact that they seemed interested did his institutional status no harm. But there is more to it than that. My own suspicion is that, for Chomsky’s institutional milieu, his ideas just had to be true. Endorsing Chomsky meant endorsing his picture of language as a digital computational device. To any computer scientist, that was an attractive idea. Chomsky’s programme promised to elevate a generation of military-sponsored computer scientists to the status not merely of electronics engineers but philosophers in the tradition of Plato and Descartes, geniuses delving into the greatest of all mysteries – the ultimate nature of human language and mind. Right or wrong, it was clearly too attractive a vision to be lightly set aside. Even to this day, despite decades of disappointment and failure, the vision still enjoys passionate support. For anyone in my position as an admirer of Chomsky’s political activism, it feels risky to say things that can so easily be misunderstood. No part of my account can detract from Chomsky’s unparalleled record as an activist. Neither can it detract from his persistence in putting up with the pressures and contradictions that inevitably came with a career at MIT. Chomsky alongside members of the Student Mobilization Committee at a Boston University ‘Laos’ teach-in in Boston on 9 February 1971. Photo by Cary Wolinsky/The Boston Globe via Getty. Many of Chomsky’s activist supporters have been shocked to discover that their hero has been on friendly terms not only with the former head of the CIA, John Deutch, but also with the sex offender Jeffrey Epstein. But it would have been impossible for Chomsky to maintain his position at MIT for so long without associating with all sorts of dubious establishment figures.
As Chomsky told The Harvard Crimson in 2023 of his meetings with Epstein: ‘I’ve met [all] sorts of people, including major war criminals. I don’t regret having met any of them.’ For me, Chomsky’s association with Epstein was a serious error. I also believe, however, that had Chomsky been so principled and pure as to refuse to work at MIT, then he might never have gained the platform he needed to inspire so many of us to oppose both militarism and the even greater threat of climate catastrophe. There are times when we all have to make compromises, some more costly than others. In Chomsky’s case, it was his attempt at a new understanding of language that suffered most from the institutional contradictions he faced. Despite the failure of his attempted revolution in linguistics, Chomsky’s political activism remains an inspiration.
Chris Knight
https://aeon.co//essays/an-anthropologist-studies-the-warring-ideas-of-noam-chomsky
History of science
From the Irish Giant to the Ancient One, is it ever ethical for scientists and museums to study bodies without permission?
In 1786, Joshua Reynolds painted a portrait of the surgeon and anatomist John Hunter. Reynolds depicted Hunter gazing into the distance, caught in mid-thought, quill in hand. On the table in front of him, apart from inkwell and paper, are some books, one propped open to a page comparing the skulls and arm bones of humans and apes. Next to the books is an anatomical specimen under a glass dome. In the upper right-hand corner is a mantel holding another anatomical specimen in a glass jar. A pair of large skeletal feet suspended in the air next to the jar hint at the large skeleton attached to them and hanging from the ceiling. The painting was well known, particularly after an engraving of it was made in 1788, and the dangling feet were also famous. Their inclusion in the portrait indicated that Hunter owned them and the skeleton to which he had re-attached them. However, in life, they had belonged to the ‘Irish Giant’ Charles Byrne. I first saw Byrne’s skeleton decades ago in the Hunterian Museum of the Royal College of Surgeons in London, among jars of preserved anatomical specimens that included an 18th-century bishop’s cancerous rectum. The last time I saw the skeleton, in 2016, I was not allowed to photograph it. The following year, the outcome of the case of an ancient skeleton in the United States brought to a head long-simmering issues surrounding the rights of the dead against the rights of museums to display their remains and of scientists to learn from them. The contrasts are many between Byrne and the man who died 8,500 years ago and was buried along what is now known as the Columbia River near Kennewick in Washington state. Nonetheless, there are similarities between them. They share stories of identity and ownership in the aftermath of colonialism. More broadly, both men seem to epitomise science’s appropriation of individual identities in the service of a larger impersonal goal of knowledge that presumably will benefit humankind. By this argument, a dead body has no value other than as a source of information. Yet the stories of these two men, vastly separated in place and time, are not merely stories of a cold and rapacious science, but of the intertwined desires and beliefs that the living project onto the dead, both in the 18th century and today, both among scientists and among others who lay claim to the bodies of the dead. The dead themselves know nothing about it. For several years, Charles Byrne, born in 1761 in County Derry, exhibited himself for a fee to the public at fairs, in taverns and coffeehouses, and at private homes across Ireland and Britain. Newspaper advertisements and broadsides breathlessly proclaimed him to be the tallest man in the world, at a height of 8 ft 2 in. We know little about him apart from his height, which was actually about 7 ft 7 in (2.31 m). His parents were of normal size but, since he hired himself out for money while still in his teens, they were most likely poor. His handlers dressed him in the height of 1780s fashion, with silk stockings and lace cuffs, and broadsides posted on walls across London announced his arrival in April 1782. The broadside described ‘The Modern Living Colossus, Or, Wonderful IRISH-GIANT’, noting his ‘admirable Symetry [sic] and Proportion’ and his ‘Vivacity and Spirit’. Leaflet advertising appearances by Charles Byrne, the Irish Giant, c1781. Courtesy the Wellcome Collection, London. Byrne’s Irishness was part of his appeal.
He was sometimes referred to, or referred to himself, as O’Brien, invoking an ancestry going back to the semi-mythical 11th-century Irish king Brian Boru, who in some tales was a giant. In Hilary Mantel’s novel The Giant, O’Brien (1998), Byrne is a storyteller, a myth-spinner, in contrast to Hunter’s cold rationality. Byrne was not the first Irish giant to display himself, and he would not be the last. Patrick Cotter, who really was eight feet tall, assumed Byrne’s role after his death. He also called himself O’Brien, and the two merged in the popular imagination. Patrick O’Brien, a giant (1803), etching by J Kay. Courtesy the Wellcome Collection. By the spring of 1783, only a year after his triumphant entry into London, Byrne was in constant pain, his bones cracking under his weight. Newly destitute, robbed of his life savings, he was busily drinking himself to death. He was 22 years old, and he knew that Hunter wanted his body after he died. The anatomist had offered to pay Byrne a sum of money if he would bequeath him his body. This was not an uncommon practice. Forty years earlier, the ‘Irish Dwarf’ Owen Farrell had made just such an arrangement with a surgeon. The value of Farrell’s bones, at least initially, was as a curiosity: they entered the collections of the Duke of Richmond. Hunter’s brother William, who purchased the skeleton at an auction after Richmond’s death, was more interested in Farrell’s abnormally bony cartilage. Owen Farrell (1742), engraving by J Hulett. Courtesy the Wellcome Collection, London. Byrne knew that if he granted John Hunter’s request, his body would share the fate of Farrell’s: Hunter would eviscerate it, dismember it, boil it until the flesh came off, and assemble the bones with wires and rods into a skeleton. Byrne refused Hunter’s offer, and took steps to ward off this fate, arranging with friends to seal his dead body in a lead coffin and pay for its transport to the coast for a burial at sea, far from Hunter’s predatory hands. Or perhaps the burial was to be in Ireland. There are many accounts and they say different things. The announcement of Byrne’s death in Parker’s General Advertiser in June 1783 noted that ‘His remains are secured in his coffin, which measures upwards of eight feet four inches.’ Although his friends were ‘determined to have him carried to Ireland in a few days’, they first offered to show the coffin to the public, for a fee of two shillings and sixpence apiece. The coffin never made it, either to Ireland or to the coast. Accounts also differ as to how Hunter obtained Byrne’s body; the London Morning Chronicle declared that Hunter paid 125 guineas, more than £20,000 today, for the body. Others claimed he paid £500. Possibly the friends who exhibited the coffin sold the body to Hunter. In any case, Byrne’s next public appearance was his skeletonised feet in Reynolds’s portrait. Hunter declared that his interest in Byrne was purely scientific. Examining the extremes of nature, as well as normal specimens, gave him insights into its inner workings. He amassed an enormous collection of human and animal skeletons, skulls and anatomical preparations housed both at his home in Leicester Square and his country house at Earl’s Court, where he also maintained a menagerie. While his collections were primarily for his own research, from 1788 onward he opened his home at Leicester Square for public viewing a few times a month.
He put the skeleton of Byrne on display as part of a case depicting the growth of bones. Hunter died in 1793, and in 1799 his anatomical collections, including Byrne’s skeleton, were purchased by the British government and transferred to the Company of Surgeons, which later became the Royal College of Surgeons of England. The College’s museum opened in 1813, and Byrne’s skeleton remained on display there for the next two centuries. In 1909, a US surgeon sawed open Byrne’s skull and found evidence of the pituitary tumour that caused the release of excessive amounts of growth hormone. Later, DNA was extracted from one of Byrne’s teeth. At some point in the 19th century, another skeleton joined Byrne’s on display. Known as the ‘Sicilian Fairy’, Caroline Crachami died in 1824 at the reputed age of nine. At the time of her death, she was around 20 in (50 cm) tall – the size of a newborn infant – making her one of the smallest humans ever recorded. The two skeletons survived the Blitz in 1941, which destroyed much of Hunter’s collection, but were put in storage in 2017 when the museum closed for renovations. Portrait of Caroline Crachami (1826), by John James Chalon. Courtesy the Hunterian Museum. A second story of identity and ownership of a dead body began in the summer of 1996, when some teenagers found a skull along a bank of the Columbia River. The man who had been buried there some 8,500 years ago was tall for his time, at around 5 ft 7 in (1.7 m), but he was not a giant. His burial was deliberate, with his body laid on its back, parallel to the river and with his head pointed upstream. When his skull eventually emerged from the riverbank, time had already removed his skin and hair, his eyes and tongue. The teenagers who found it assumed it belonged to the victim of a murder and called the police. The nearest town, Kennewick, is near the confluence of the Columbia with the Yakima River to the west and the Snake River to the east, and the waters here are bisected by the McNary Dam. The skull came to the local coroner, who called James Chatters, a local archaeologist, to determine if it belonged to a crime victim. Chatters recognised that the skull was quite old and, judging by its shape, he at first believed it had belonged to a white settler. Returning to the site, he found some 300 bones and fragments: nearly a complete skeleton. Carbon-14 testing on a finger bone initially established its age at around 9,000 years old. The emergence from the muddy shore of these bones – which came to be called either ‘Kennewick Man’ or ‘the Ancient One’, depending on the speaker – opened a 20-year debate about ownership, heritage and science. Although this story was in some ways uniquely American, it also spurred global disputes that are still ongoing. The McNary Dam site is owned by the US government and managed by the US Army Corps of Engineers. The Corps issued the permits that allowed Chatters to find more of the skeleton and, once the age of the bones became clear, the Corps took possession of them. Since they dated from long before 1492, the course of action for the Corps seemed clear. Under the provisions of the six-year-old Native American Graves Protection and Repatriation Act (NAGPRA), if the bones were pre-Columbian, they were by definition Native American. If the bones were Native American, the Corps was obligated to repatriate them to local tribes. Five local Native American tribes had already claimed Kennewick Man as an ancestor, ‘the Ancient One’.
At this point, only a few months after the discovery of the bones, Kennewick Man was already renowned in the archaeological world. At that time, few bones of such an advanced age, and only one other skeleton, had been found in North America. Theories of the populating of North America, including when it occurred, where it originated and by which route, were in flux after years of consensus. New methods of analysis, including CAT scans and 3-D modelling, helped to reconstruct what skeletal bodies looked like when alive, while stable isotope analysis – which considers the ratios of stable isotopes of carbon and nitrogen in bones – reconstructed the diets, and the origins, of the long-dead. These and other methods were changing the picture of late Ice Age America. Most exciting was the potential of DNA analysis, which was in its infancy in the 1990s. Chatters had just sent a finger bone out for DNA analysis when the Corps ordered him to stop any further testing. A group of anthropologists and archaeologists sued the Corps to prevent repatriation, arguing that the identity of the bones had not yet been determined, and that therefore NAGPRA did not apply to them. They pointed to the need for further study, which the five tribes strenuously opposed, calling for the bones to be buried without further analysis. Ironically – and this case is drenched in irony – because the Federal court agreed that the identity of the bones could be determined only scientifically, the scientists were freed to conduct studies to determine that identity. These studies mostly focused on craniometry, the size and shape of the skull, which many believed to indicate a non-Native American identity. Chatters published images of the skull in 1996 and 1997, and identified its shape as ‘Caucasoid’, an archaic racial classification that some construed as racist. The facial reconstruction Chatters made from a resin cast of the skull, published in Science magazine in 1998, most resembled the actor Patrick Stewart (Chatters admitted that he was a fan of Star Trek). White supremacist groups in the US seized on the ‘Caucasoid’ label to argue that white Europeans had settled the Americas before any ancestor of modern Native Americans. By then, the bones had been deposited in the Burke Museum at the University of Washington, where they remained securely stored and were never displayed. The 2004 judgment in the lawsuit against the Corps found in favour of the scientists. The ruling opened the door to a more detailed study of the remains over the next decade, resulting in what seemed to be the definitive account of the bones. Ten years later came the publication of Kennewick Man: The Scientific Investigation of an Ancient American Skeleton (2014), edited by the anthropologists Douglas Owsley and Richard Jantz. The study described Kennewick Man as relatively tall, ‘broad-bodied, and massive’. But was he an ancestor of modern Native Americans? Amid an avalanche of data on diet, overall health and burial conditions, the critical question of identity remained focused largely on the skull. Some anthropologists thought it resembled the modern Ainu of Japan, an Indigenous people who predate the Japanese but whose ancient origins remain debated. Additional analysis suggested that the skull showed most affinity with Polynesians, indicating a common set of ancestors.
This theory in turn fit new ideas of multiple migrations from Asia to the Americas, with some travelling across the sea and perhaps predating those who came overland from Siberia across the Bering Land Bridge. Owsley and Jantz concluded that ‘Kennewick Man … differ[s] from modern American Indians in systematic ways … their difference is mainly genetic and as such carries information about their history and biological affinities,’ although the proofs they presented were mainly morphological. Throughout the volume, Kennewick Man is referred to as a ‘Paleoamerican’ rather than the more common name of ‘Paleoindian’, in an attempt to demarcate differing ancient groups. A new reconstruction of his face resembled photographs of Ainu men, with a full beard and lightly tanned skin. Kennewick Man’s skull and recreation. Courtesy Brittney Tatchell/Smithsonian Institution. A little more than a year after the publication of Owsley and Jantz’s volume, the carefully constructed chain of identification fell apart when the results of DNA analysis showed that ‘Kennewick Man is closer to modern Native Americans than to any other population worldwide.’ An article in Nature magazine in July 2015, authored by a team led by the Danish scientist Eske Willerslev, reported the results of the analysis. The team had sequenced the genome of Kennewick Man and compared it with other populations, particularly other Native American populations. The closest relationship appeared with the Confederated Tribes of the Colville Reservation, one of the five tribes who had claimed Kennewick Man as the Ancient One nearly 20 years earlier. The Colville tribe was the only one of the five that agreed to provide DNA samples for comparison. The development of research on what is known as ancient DNA or aDNA had progressed rapidly in the decade before the Nature article. Ancient DNA consists of short and degraded fragments that persist in materials such as bones and teeth, as well as hair, mummified skin and dental calculus. Although the first sequencing of aDNA occurred in 1984 from a museum animal specimen, the technological challenges to obtaining meaningful results from fragmentary and often contaminated evidence took years to overcome. Moreover, the interpretation of aDNA evidence required collaboration between bench scientists and social scientists, who had to overcome their mutual suspicion and develop a common vocabulary as well as a chain of ownership between excavation site or museum and laboratory that minimised contamination. For example, the bones of Kennewick Man had at various times been in a box, in plastic bags and in an evidence locker before their arrival at the Burke Museum, increasing their chances of contamination with modern DNA at each stop. Perhaps most important, the analysis of aDNA is necessarily destructive. The sample to be analysed is reduced to a powder and treated with chemicals that purify the DNA from contaminants, isolate it, and extract and recover fragments. While improved technology has reduced the size of the sample required, the cultural significance of removing even a small sample from an ancient specimen cannot be ignored. The results of the aDNA analysis fulfilled NAGPRA’s requirement of a preponderance of evidence, allowing the Corps to declare definitively that Kennewick Man belonged to the five tribes who had originally claimed him, and particularly to the Colville. The Corps confirmed what the five tribes had said all along: that the Ancient One was their ancestor.
In February 2017, the Burke Museum turned over the bones to the tribes. As required by law, that meant all the bits and pieces, including what remained of the finger bone used for DNA testing, thus precluding any further testing. The Ancient One was reburied in an undisclosed location on the Colville Reservation along the Columbia River. Such a resting place was not to be allowed to the skeleton of Charles Byrne. Calls for the burial of his bones – either in Ireland or at sea – began years before the Hunterian Museum closed for renovations in 2017. These calls intensified during the museum’s closure, with Hilary Mantel taking a prominent role. Every inch of Byrne’s skeleton had been measured and examined. A tooth had been extracted for DNA testing in 2011. Surely, Mantel and other advocates argued, science had learned all there was to learn. ‘He’s waited long enough,’ she said in 2020. DNA testing had revealed that the pituitary tumour that caused Byrne to be a giant had a hereditary component. It was not merely a spontaneous mutation, but had a genetic basis, and the frequency of Irish giants was not merely apparent or coincidental but quite real. The giant Knipe twins, with whom Byrne had been depicted, had grown up in a neighbouring village to Byrne’s, and by this evidence they were most likely distant cousins. The genetic evidence showed that the giants’ trail went back for generations. Ireland really was a birthplace of giants. The Knipe Twins depicted with Charles Byrne, etching by John Kay (1784). Courtesy the NPG, London. However, this evidence, far from persuading the museum that science had learned all there was to learn, led it to conclude that the bones still had more to tell. The scientists who had examined Kennewick Man would certainly agree with this assessment. DNA analysis, and particularly the analysis of ancient DNA, has been refined further and further in the past decade. Unlike Kennewick Man, Byrne has not been buried, although his skeleton was no longer on display when the Hunterian Museum reopened in May 2023. He remains in the museum as Osteo.223, available for future research. The tiny skeleton of Caroline Crachami, Osteo.227, also remains in storage at the museum. These stories affirm that the way the living have treated dead bodies throughout history is never about the dead but about themselves. The living give the bodies of the dead – and thus their own bodies – meaning, whether as relics, museum displays, scientific subjects or ones buried in the ground. Their status as colonial subjects made the remains of Byrne and the Ancient One particularly vulnerable to exploitation. In Byrne’s time, certain fields in Ireland were still littered with bones and skulls, the result of Oliver Cromwell’s brutal reconquest of Ireland in the 1650s. Byrne and his contemporaries assumed they owned their own bodies, believing therefore that they could control their disposition after death. But in Byrne’s case, he rejected the coldly transactional arrangements made by his countryman Owen Farrell, hoping to escape the fate of Farrell’s bones. Although Mantel portrayed Byrne as a deeply spiritual man steeped in Irish mythology, in fact we know little of his spiritual or emotional life, or of his wider community. In contrast, the transformation of Kennewick Man into the Ancient One was due to the unwavering values of Native American communities who claimed the Ancient One as their own.
However, the US government accepted these values only when science affirmed the claims. Moreover, although the Ancient One sleeps beneath the banks of the Columbia once more, forever lost to science, Native American bones and artefacts remain in anthropology departments and museum collections around the world. The values of the community who have demanded the burial of Byrne are more diffuse, and the laws that govern the display of human remains in Britain are loosely interpreted unless the remains are under 100 years old, in which case they are subject to stricter regulation. Byrne’s bones hang in a storage room, awaiting further tests as the science of aDNA advances. Perhaps scientists will learn more about pituitary tumours from Byrne, and his body, racked with pain in life, will be able to prevent a similar fate for others. In place of the skeleton itself, visitors to the museum can view Reynolds’s depiction of his feet, hanging over Hunter’s mantel.
Anita Guerrini
https://aeon.co//essays/do-the-dead-have-a-right-to-keep-their-bodies-out-of-museums
Comparative philosophy
A 17th-century classic of Ethiopian philosophy might be a fake. Does it matter, or is that just how philosophy works?
In 2017, the Australasian Journal of Philosophy issued a rare retraction, informing their readers that one of their articles was not in fact written by a cat. The short article, a critique of David Lewis’s ‘Veridical Hallucination and Prosthetic Vision’, was published in 1981 under the name of ‘Bruce Le Catt’, a figure with no discernible institutional affiliation or track record of publishing, but who appears to have been familiar with Lewis’s work. As indeed he might have been, being the beloved pet of the great American philosopher. It may not have come as a surprise to those familiar with Lewis’s work that ‘Bruce Le Catt’ was not the pseudonym of an astute critic, but of Lewis himself. The playfulness of Lewis’s writing is well known: for instance, the paper ‘Holes’ (1970), co-written with Stephanie Lewis, is a dialogue between two characters, ‘Argle’ and ‘Bargle’, on the ontological status of holes as found in Gruyère, crackers, paper-towel rollers and in matter more generally. Nevertheless, the attribution of the 1981 paper to a cat seemed to cross a line. It may have been playful, but it was also deceptive, hence the retraction. Lewis was not the only 20th-century philosopher to publish using an invented persona. The contents page of the book Explaining Emotions (1980), edited by Amélie Oksenberg Rorty, features the essay ‘Jealousy, Attention, and Loss’ by one Leila Tov-Ruach, listed on the Contributors page as ‘an Israeli psychiatrist, who writes and lectures on philosophic psychology’. Some readers might have noticed that this is a rather unusual name – a pun on laila tov ruach or ‘goodnight wind’ in Hebrew – and might have had their suspicions confirmed by the fact that there is no discernible trace of this psychiatrist elsewhere on the medical or academic record. Indeed, as an erratum on the University of California Press website drily notes, Amélie Oksenberg Rorty and Leila Tov-Ruach are indeed one and the same person. The case of Tov-Ruach is somewhat different to that of Bruce Le Catt. Rather than playfully externalising the critique of the originating philosopher’s own arguments, Tov-Ruach’s paper is included side by side with Rorty’s own contributions to a volume that she herself edited. The two write on different topics and have their own biographical entries in the volume but are not in opposition. It is certainly a more elaborate and less obviously tongue-in-cheek intervention than Lewis’s use of Bruce Le Catt as an antagonist. What are the ethics of this kind of pseudonymous publication? When they realised what had happened, the Australasian Journal of Philosophy and the University of California Press evidently felt it necessary, as a matter of academic ethics, to issue a clarification on the identity of the true authors. They were prompted to do so by the unflagging work of Michael Dougherty, the Sister Ruth Caspar Chair in Philosophy at Ohio Dominican University, who has spent years unmasking cases of misattribution and downright plagiarism, along with murkier, quirkier cases like these. For Dougherty, such cases are primarily about disciplinary morality, amounting to a wilful obstruction of the scholarly endeavour. On the Rorty/Tov-Ruach case, he writes: It’s odd to have a dialogue with yourself under two names in the published literature. I have no idea why she is doing this. Dr Rorty is a distinguished philosopher, and the use of pseudonyms can impede a genuine history of philosophy. It is the implied question in Dougherty’s statement that interests me: why is she doing it?
Why would any philosopher write under somebody else’s name and pretend to be someone that they are not? If plagiarism is the intellectual sin of taking credit for someone else’s ideas, what are we to think of its opposite: pinning one’s own ideas on somebody else who doesn’t even exist? While it might seem odd in the world of contemporary journal publication, smuggling ideas under someone else’s name is rather more common in the history of philosophy than you might think. Medieval philosophy in particular abounds with texts that blur the boundaries between anonymity, pseudonymity and straightforward authorship. Consider the various ‘pseudos-’ – from pseudo-Augustine and pseudo-Aristotle to pseudo-Dionysius the Areopagite – that proliferated in the late antique and medieval periods. Many of the medieval scholars used this kind of device to invoke the authority of an older figure for their ideas: humble monks who wrote (if writing under any name at all) under the names of the mighty dead to gain intellectual clout and authority. Indeed, in a slightly different form, this practice has far deeper roots. Any philosophical dialogue using the names of real figures does something similar: is Plato’s Socrates the ‘real’ Socrates, or a mouthpiece for Plato’s own views, or somewhere in between? Was Plato’s Protagoras the ‘real’ Protagoras, or just a foil for Plato’s own ideas? And, if the latter, is there really anything wrong with this? And what about when the name under which a philosopher writes does not refer to a real individual? Søren Kierkegaard wrote under a great many names: Johannes Climacus, Constantin Constantius, Victorin Victorius Victor, Johannes de Silentio are a few of them, none of whom is anything but the creative imagining of Kierkegaard himself. In fact, it is perhaps more proper to call these personages ‘heteronyms’, as developed later in the works of Fernando Pessoa, in which the different names are not simply alternative labels for an identical author hiding behind the label, but denote fully conceived individuals, each with their own personality, appearance and distinctive literary style. Pessoa himself conjured more than 60 such persons, in addition to two ‘semi-heteronyms’ that constituted a ‘mere mutilation’ of his own personal style, and finally the single ‘orthonym’ that referred to the origin points of all of these names: Pessoa himself. Rorty’s use of an alias is in many ways easier to understand, mainly because she tells us precisely why she wrote under a name that was not her own. Indeed, Leila Tov-Ruach was not her only pseudonym.
In addition to an Israeli psychiatrist, Rorty also tried her hand at writing as a Chinese Platonist and, in her edited collection Philosophers on Education (1998), she explains why she chose to write her article on ‘Plato’s Counsel on Education’ under the name of Zhang LoShan: Ever since teaching a course in the history of philosophy in the People’s Republic of China in 1981, and finding students and colleagues there passionately interested in Plato, I had been trying to see him through their eyes, with their preoccupations … Although I wrote that essay, it is, in a perfectly straightforward way, not strictly speaking mine … It is an experiment I strongly recommend to all serious scholars: surprising features emerge from the exercise. The aim of writing under the name of this nonexistent philosopher was, in Rorty’s words, ‘intellectual empathy’, understood as the attempt to enter into the mind of another thinker, a kind of exercise. This thinker, who does not exist, nevertheless takes up a particular perspective on the world, a perspective that rests on a different set of assumptions and preoccupations from the author’s. When the pseudonymous author imaginatively occupies such a perspective through the processes of intellectual empathy, they might thereby see things differently (as might readers). Today, some people might object to the case of Rorty-as-Zhang-LoShan on grounds of cultural appropriation, and perhaps Rorty would admit that this is precisely the point: to appropriate a perspective that is not one’s own, that is not anybody’s at all (though, for her, presumably that wouldn’t have the negative connotations of ‘cultural appropriation’). And perhaps this is why she – and Kierkegaard and Pessoa, but not Plato or pseudo-Augustine – chose names of thinkers who never existed: so as to have the freedom not only to appropriate an existing perspective, but also to create and inhabit one anew. But none of these examples, from philosophical felines to pseudo-Augustine or imaginary Chinese Platonists, is quite as perplexing as that of the Ḥatäta Zera Yacob. The Ḥatäta, or ‘enquiry’ (the root of which, ሐ-ተ-ተ, in the ancient Ethiopian language of Geʽez literally means ‘to investigate, examine, search’), is an unusual work of philosophy for a number of reasons. It is not only a philosophical treatise but also an autobiography, a religious meditation and a witness of the religious wars that plagued Ethiopia in the early 17th century; it presents a theodicy and cosmological argument apparently independent of other traditions of Christian thought; it employs a subtle philosophical vocabulary that is virtually without precursors. Finally, and most perplexingly, the progenitor of these ideas, the Zera Yacob who is the subject of the autobiography and gives his name to the title, may never have existed. Why might we think this? The text is composed in the voice of one Zera Yacob, a man born to poor parents in ‘the lands of the priests of Aksum’ in northern Ethiopia around the turn of the 17th century.
Driven from his hometown by religious conflict between Orthodox ‘kopt’ and Catholic ‘ferenj’, our eponymous narrator Zera Yacob flees to the hills and finds a cave in which he ‘meditated all day on humanity’s quarrels and wickedness, and also on the wisdom of the Lord their creator, who keeps silent when they act wickedly in his name, persecute their neighbours, and kill their own brothers and sisters.’ The basic problem of his philosophy is how to understand how God allowed this violent conflict to take place – a version of the classic problem of evil – and further to understand what, if anything, is true in religion. Zera Yacob poses the problem by asking how we can decide between two religions whose justifications and standards of justification are internal to their own systems of thought – who ‘decided everything according to [their] own creed’: Where will I find someone who will decide [on the religions and creeds] truthfully? Because [just as] my religion seems true to me, so does another’s religion seem true to them. The problem is not only that different groups disagree, but that there seems no way to resolve these disagreements without bloodshed. His answer is remarkable. The only thing that can decide between competing religious claims is something that every human has inside them: the god-given faculty of lebbuna (variously translated as ‘reason’, ‘intelligence’ and ‘understanding’) that allows us to perceive what is right and wrong, good and bad by virtue of its attunement to a kind of pre-established harmony between the creator, creation at large and this faculty itself. Lebbuna is common to kopt and ferenj, man and woman, young and old: truth and goodness are accessible to all, equally. And yet humans do not use it. It is onerous to apply one’s reason, and mankind is by nature lazy, preferring to be led by received wisdom. The most strident chapters of the book follow, with Zera Yacob using the normative standards set by lebbuna to critique the religious practices and social organisation of his day. He criticises slavery for treating man like a beast; asceticism for perverting natural desires; and the practice of marriage for treating a wife as the slave of a husband. When the civil unrest ends with the death of the emperor, he returns to society, settling in the town of Enfraz where he finds work and eventually an intellectual disciple in the form of a youth named Walda Heywat, who urges his teacher to write down his reflections before his death. He presents here a vision of the good life as living in harmony with the natural order of creation, earning his sustenance and that of his family by honest work. The historical details of the political background are all accurate, the language of the text beautiful, lyrical Geʽez. So why think that this character, so convincingly evoked, may never have existed? The troubled afterlife of the text begins when the work is ‘discovered’ in 1852 by a lonely Capuchin monk named Giusto da Urbino in the highlands of Ethiopia. Before this date, there is no mention of the text in the historical record. The work was sent off to da Urbino’s patron back in Paris, the Irish-Basque explorer, linguist and astronomer Antoine d’Abbadie, and placed in the Ethiopian collections of the Bibliothèque Nationale de France. Over the next couple of decades, scholars flocked to consult this fascinating, seemingly unprecedented text.
The Ḥatäta was edited and translated into Russian and Latin, and began to gain a wider readership among European intellectuals. Then in 1920, an Italian Orientalist named Carlo Conti Rossini published an article in the Journal Asiatique, claiming that, far from being a masterpiece of 17th-century Ethiopian thought, the Ḥatäta was in fact a forgery, composed by the man who had claimed to discover it: da Urbino. Conti Rossini had been tipped off by an Ethiopian convert to Catholicism that da Urbino had been scheming with local scholars to create ‘heretical’ and ‘masonic’ works to undermine Catholicism and Ethiopian orthodoxy alike. Conti Rossini now started seeing proof everywhere, adducing philological arguments and cultural speculations in equal measure to the conclusion that this book was written by an Italian in the 19th century, not an Ethiopian in the 17th. Conti Rossini was the pre-eminent Ethiopianist of interwar Europe, and his arguments were eventually accepted by almost all scholars, including those who had spent so long translating and commentating upon the work. But Conti Rossini was also a colonial administrator in Italian East Africa, and a supporter of Mussolini’s invasion of Ethiopia, even going so far as to publish an article in 1935 titled ‘Ethiopia Is Incapable of Civil Progress’, arguing that the country could, indeed should, be colonised by a ‘civilising’ power, explicitly invoking his refutation of the Ḥatäta as part of his argument. The argument has raged for more than a century now, with new arguments being made on both sides. Claude Sumner, a Jesuit missionary who called himself ‘Canadian by birth, Ethiopian by choice’, made a passionate case for an Ethiopian authorship in his five-volume Ethiopian Philosophy (1974-8), building on the arguments of Ethiopian scholars like Alemayyehu Moges and Amsalu Aklilu. The French historian Anaïs Wion has produced an ingenious argument against an Ethiopian authorship in her series of articles The History of a Genuine Fake Philosophical Treatise (2013), and these arguments have been taken up by scholars like Fasil Merawi and Daniel Kibret back in Addis Ababa. Finally, the late, great scholar of Ethiopian manuscripts Getatchew Haile reversed his position, held for half a century, that the work was a forgery, in a paper published shortly before his death in 2021. It is no exaggeration to say that today, as interest in the Ḥatäta begins to peak once again with a series of new books, podcasts and the publication of a new translation, the question of its author’s existence is in limbo. The difference between the cases of Leila Tov-Ruach and Zera Yacob is that the identity of the author of the Ḥatäta really seems to matter. Many Ethiopian intellectuals are understandably proud of the work, holding it up as a masterpiece of 17th-century literature and a foundation of an alternative, specifically Ethiopian path to modernity. And they are understandably furious at the idea that the writings of a fascist intellectual might deprive one of their greatest geniuses of his rightful credit. In Europe and the United States, philosophers keen to diversify and decolonise their curriculums have seized on Zera Yacob as evidence of an ‘African Enlightenment’, as an African Descartes or Kant.
As Sumner put it, the Ḥatäta demonstrates that ‘modern philosophy, in the sense of a personal rationalistic critical investigation, began in Ethiopia with Zera Yacob at the same time as in England and in France.’ If the work is a forgery, it seems that the Ḥatäta cannot fulfil this lofty role allotted to it. The implication seems to be that, if it is not written by a 17th-century Ethiopian scholar, it is not all that interesting or important after all. So it seems that we do very much care who wrote it. But should we? The assumption on the side of both the proponents and opponents of authenticity is that either the work is totally genuine, in which case it can be used to diversify and decolonise, or else it is totally fake, a ‘mere forgery’ and of little interest, other than perhaps as a case of late-colonial cultural appropriation (or immersion, if one prefers). But what is a ‘mere forgery’ anyway? If you forge a passport, you are creating a fake document that permits you to cross borders as if it were real. If you forge a work of art, you are creating a convincing (and therefore lucrative) fake that can be attributed to a known artist and sold as if it were genuine. But what might the forging of a work of philosophy be, beyond attributing the work to someone else, à la pseudo-Augustine or pseudo-Aristotle? If faking a painting gets you something and faking a passport gets you somewhere, what does a fake work of philosophy get you? Presumably, what we care about most in a philosophical text are its arguments, its attempts to get at the truth and its means of getting there. If the argument is what interests us, then should the authorship matter, given that the argument is exactly the same, regardless who wrote it? Of course, historical context is important, both for understanding how the text might have come to be and what the text means. But unless this exploring of context is employed in the service of understanding and elucidating the arguments, we are treating the work as a historical curiosity rather than a source of insight. In the case of the Ḥatäta Zera Yacob, this would be a mistake, for the arguments are powerful and abidingly relevant. These arguments – about the causes of human suffering and conflict, the epistemology of disagreement and the twin temptations of relativism and blind absolutism, the relation between the world and our cognitive faculties – are precisely what tends to fall out when the discussion of the Ḥatäta focuses exclusively on the topic of authenticity. We might conclude by offering a different sense of philosophical forgery, one less concerned with the cultural politics of a particular text than the words it leaves on the page. Forging in this sense might have more to do with the work of the blacksmith than of the counterfeiter. Rather than forging as deception, we might think of forgery as creation, namely as the creation of new words and, with it, new ideas. Consider that whoever wrote the Ḥatäta did so in a language, namely Geʽez, that previously quite literally did not have the words for expressing its most central ideas. Whoever wrote the Ḥatäta forged a philosophical-conceptual vocabulary. This process of linguistic innovation, of coining new terms and adapting existing words to new meanings is by no means unique to Geʽez. 
It is more than 20 centuries since Cicero attempted to ‘teach philosophy to speak Latin’, not only by importing originally Greek words into Latin (dialectica, politica), but by teaching philosophy new terms (moralia, naturalis) from his native language. In a way, it takes place every time philosophy learns to ‘speak’ in a new language, including our own: we owe a great many words, both arcane (‘quiddity’, ‘apperception’) and commonplace (‘politics’, ‘nature’ and ‘self’) to the translation of philosophy into English in the 16th and 17th centuries. But rarely has it happened so suddenly, in such a concentrated way in a single text. This is impressive enough if its author is a 17th-century Ethiopian named Zera Yacob. If it’s the work of a 19th-century forger, it is an utterly astounding work of linguistic and cultural immersion. Ultimately, the words on the page should be more philosophically interesting than the identity of the person who wrote them, and therefore the Ḥatäta (and, by extension, other such contested texts) should be judged on the philosophical quality and linguistic innovations, not on the name at the top of the page. There is a sense in which the identity of an author matters. Rorty wrote Tov-Ruach and Zhang LoShan into existence, and in doing so created two distinct philosophical voices, just as Kierkegaard conjured countless original perspectives. Plato wrote the perspectives of Glaucon, Protagoras and Thrasymachus in a way that may or may not have corresponded to their real views. Zera Yacob may be one such voice that is an unknowable mix of real historical individual and literary creation. But, then again, so is Socrates.
Jonathan Egid
https://aeon.co//essays/from-the-pseudo-to-the-forger-the-value-of-faked-philosophy
https://images.aeonmedia…y=75&format=auto
Stories and literature
Devon, 1970s: I’m a rector’s son, hanging out with Boz the biker. My life is about to open up – what does it promise for him?
‘If you believe you’re a citizen of the world, you’re a citizen of nowhere. You don’t understand what the very word “citizenship” means.’ – Theresa May, Conservative Party conference, 2016. I grew up in a small Devon village nestled in a remote, crooked valley below the wilderness of Dartmoor. My father was the local rector, with a group ministry of four parishes in the valley. I attended a grammar school in Exeter. The village boys went in the other direction, to the grammar in Newton Abbot or, more often, the secondary modern in Kingsteignton. I hated school, felt alienated from study, teachers, fellow pupils, the lot. The village lads, rural working class, were my friends. We played football, explored the woods and streams around; played table tennis in the youth club; went to the village disco; learned to smoke and drink together. But I was never one of them. I spoke with a different accent, and our expected futures had divergent trajectories: they would leave school at 16, find manual work on a farm or at the local quarry, while I would sit A levels and follow my elder sister to a university, and a life, far away. As it happened, I failed to follow this course, and, adrift between social classes, I left school at 16 too. I could say the teaching was poor (our O-level texts were ruined by a lazy and arrogant English teacher. It would be 10 years before I could bear to revisit those works, to discover with astonishment that this teacher had, with a kind of anti-pedagogical genius, alienated his entire fifth form from two of the jewels of the English language: Wilfred Owen’s war poetry and William Shakespeare’s Twelfth Night). I could say I was unhappy (our mother had left the family home three or four years earlier). I could say I was uncomfortable in my own skin (I’d had operations and a long spell in an orthopaedic hospital). And, though these were true, they weren’t the whole story. My father was a ‘man’s man’ who’d had extraordinary wartime experiences liaising with communist partisans in Yugoslavia; he’d played rugby for Bath. He was a priest committed to his ministry, for whom doubt was an intrinsic element of faith; but also, a fierce intellectual, who loved political or religious argument over gin and cigarettes. And he was also a gentle man, who’d brought up my sisters and myself alone, a tender, loving father. I rebelled against him the only way I could: by disappointing him. I challenged the way his political convictions had been compromised; I told him Christ had been right, he should give up everything, live simply, among the people. My father was perplexed. I took on the first of a succession of menial jobs, on building sites and elsewhere. I travelled a little, without breaking away. In the summer of 1977, I was still living at home. Boz had joined the village football team the season before, unusual in that he was a biker, generally a breed apart, who shunned social norms like sport. We played together at centre-back. Boz was tough; his every tackle and header were made with fearless commitment. I cleared up around him, but he also made me braver than I was (if only because I had the assurance that, if an opponent kicked me, Boz would sort him out). It was months into the season before I realised that, besides his physical power, Boz was a highly intelligent reader of the game.
He flew into a challenge, where the temptation was to see simply the challenge, but he could make it only because he’d anticipated how play would unfurl to that point. Boz was soon made captain. Before each game, he drew us into a tight huddle, and said the same thing: ‘Boys. Get that fuckin ball before they do.’ I had a car, and gave Boz a lift to away games. When I called on him, walking to their council house set back from the road, his father often answered the door. The man didn’t trust me. Suspicion emanated from him, as if he thought I had designs on his son. Looking back, he was right; I just didn’t realise it. The taboos of the time – an internal taboo strongest of all – meant that only in hindsight was it obvious that Boz was the first love of my life. I was walking through the village early one summer evening when I heard the rich throaty growl of bikes behind me. Two went past, but one dawdled. ‘Jump on, bud,’ Boz said. ‘We’re goin up the moor.’ I was generally timid, cautious; even now, the spontaneity with which I responded was passive. I did not act of my own volition but did as I was told, and climbed on behind Boz. It was the first time I’d been on a motorbike. I’d observed how passengers put their hands on a metal bracket at the back of the seat. It had struck me as an unnatural position (like watching present-day footballers avoiding handball by defending with their arms behind their backs). Now, as Boz accelerated into the curves of the unwinding road before us, it felt suicidally precarious. I did what I knew I shouldn’t and put my arms around Boz. This was what a female passenger could, indeed should, do. I understood how inappropriate it was for me to do so, but chose shame over certain death. The greater shame would be Boz’s, though, and I prepared myself for him to bring the bike to a halt and tell me to get off. He did not. The terrifying speed at which Boz rode was incomprehensible. Any second, an unforeseen hazard – oil on the road; a bird flying out of a hedge – could inflict death or agonising mutilation. Why ride so close to that edge? But gradually my brain adjusted. The landscape hurtled by less alarmingly. I leaned closer to Boz, inhaled the patchouli scent impregnating his leather jacket. We caught up with his companions, and rode to a farm beyond Manaton whose taciturn tenant sold us plastic flagons of cheap rough cider. Jim on one bike, Sharon on the back of Benjy’s, I knew by sight. We nodded to each other. We rode past North Bovey and then, manoeuvring the bikes through a gate, onto a track and up to Easdon Tor. We could have parked up and walked, but it seemed a point of honour for these boys to get their bikes all the way to wherever they were heading; to go on foot would have been a comedown. But the grass was dry and cropped close to the ground by the ubiquitous sheep, so passage was easy enough. At the Tor, there were four other bikers – three boys, one girl, on three bikes – from Moretonhampstead, apparently. They too acknowledged me with an incurious nod. We sat around, drinking, smoking. One of the Moreton lads had a lump of Lebanese dope; he intermittently requested normal cigarettes, which he broke open, using the tobacco, sprinkled with hash, to roll large spliffs. We watched him silently.
The ritual reminded me of the way my father’s curate carried out various arcane blessings and preparations of the bread and wine at a communion service. Then the spliff was passed around. Jim pulled a battery-operated radio/cassette player out of his pannier and set it on a boulder. He inserted a tape of Sin after Sin, Judas Priest’s latest album. ‘Can you turn that squeaker up, bud?’ Boz asked. ‘It’s on max,’ Jim told him. Now and then, someone got up and went off for a piss. When they returned, they often danced in a perfunctory manner for a bit, getting into the groove, before resuming their place on the grass. The two girls danced together for a while. The rest of us watched lazily. ‘Hear that Polson cunt give Johnny Sidwell a hidin t’other week,’ said one of the Moreton crew. ‘He’s a mean fucker.’ ‘He’s a hard fucker.’ ‘I’d like to see Boz here get a hold of him.’ ‘Reckon you could take him, Boz?’ Boz shrugged modestly, and took a swig of cider. ‘Why did he beat up Johnny, anyways?’ ‘Reckoned he was eyeing up his bird.’ ‘Was he?’ ‘I don’t know why anyone would,’ said Sharon. ‘Have you seen her?’ Everyone laughed at that. ‘Cat’s got her claws out,’ said Benjy. ‘Come here.’ He leaned over and pulled Sharon towards him. ‘I love you when you’re mean, babe.’ Conversation limped along like this: a bit of gossip, widening into opinion and banter, then dying away. The sun was setting. Silence prevailed once more. Two of the Moreton boys got into an argument and one suddenly went for the other, and they were on the ground, wrestling furiously. It looked serious to me; I felt an urge to pull them apart, as I sometimes did at football, but the others barely took any notice except to laugh and jeer at the boys’ inexpert grappling. Then one appeared to prevail, they heaved each other up, dusted themselves off, and embraced, before hitting the cider. I was clearly the outsider. I wore a tracksuit top over my T-shirt rather than a leather jacket, I had no home-made tattoos and, if I was to speak, then my standard middle-class accent would have announced my alienness. I said nothing. No-one addressed me, asked me anything, questioned my presence. They knew I’d come with Boz – and we all knew that his girlfriend, Mo, refused to ride on his bike – and that was enough. Might I be admitted to the gang? Could I be, if I proved myself? If I had LOVE and HATE inked across my knuckles? If I bought a leather jacket and invited them to help me baptise it – and by extension myself – with whisky, piss and chicken’s blood; then, later, patchouli oil? If I got my own wheels? If I wrestled with Boz? I felt a thrill at the notion of acceptance into this society of rural outlaws. Even if the activity, like the music, was kind of boring. At home, we had a family tree, printed on a sort of pseudo-parchment. I believe my paternal grandfather had visited the British Museum when it mounted an exhibition on genealogy, and, if you could trace your ancestors back a generation or two, you stood a chance of connecting with one of a set of lineages the Museum had prepared. Thus, some weeks later our grandfather received in the post this proof of our heritage going back to and beyond Charlemagne. To be precise, to Charlemagne’s great-grandfather, Pepin the Fat. As a boy, I occasionally consulted this family tree (hundreds of close facsimiles doubtless moulder at the back of attics around the country).
It excited me to gaze at Frederick the One-Eyed, Duke of Swabia or Grayza, Duke of Hungary; Anne of a noble Bulgarian family or Adelheid of Alsace. I traced the countries these fanciful ancestors came from on maps (seeding a lifelong cartographical enchantment) and assimilated the liberating message that I was not merely an Englishman; a Devon boy. Already part Welsh through our maternal grandfather, this heterogeneous lineage meant my sisters and I had a right to multiple identities. A variety of genes had flowed through the generations, and we had a claim of inheritance on as many of them as we wished. Boz had no family tree, fictional or otherwise. His father worked at Trusham Quarry, his grandfathers were both farm labourers, and going back further was to get lost in mists of family myth: a great-grandmother who bred hens that folk came from distant counties to buy; a great-grandfather who laid out a gypsy boxer at the Okehampton fair. What Boz took for granted was that they all toiled on the land for meagre reward, going back through time to peasants and serfs and no doubt beyond, unchanged, forever – and that this was both an inevitable and an honourable state of affairs. Boz himself had a job on a pig farm, over Hennock way, which he considered an improvement on his father’s position at the quarry; an authentic reconnection with the past. I took this at face value, appreciating the worth of working with animals, on the land, in the open air. Until, driving up that way, I realised that the farm I was passing, a large Union Jack on a flagpole by the house, was the one where Boz worked. And that the ominous row of low buildings meant these pigs were kept indoors; it was a factory. Boz’s father was a known bully. He knocked his boys about (Boz was the eldest of three) and beat his wife when he was inebriated, which was increasingly often. Sometimes, Boz came to my house to hang out and he’d be seething. I came to recognise the particular mood: it was always to do with his father, and Boz didn’t wish to talk about it. I was disturbed and fascinated by the man, so unlike my own father – and notwithstanding his evident distaste for me. He was an implacable force in my friend’s family. Women rarely left their abusive husbands in those days – the mother who had scandalously abandoned her family was mine, and she’d done so less for escape than excitement. Children, needless to say, had no appeal against a domestic tyrant. I occasionally learned about him from other people. He’d had a dispute with a neighbour. Rather than bring in the council, or the police, he’d sorted it out with his fists. Except that he hadn’t: the neighbour did call the police. Boz’s father was arrested and charged, and received a suspended sentence. When we’d come to the village, my father had visited one household after another to introduce himself. Boz’s father refused to see him: Boz’s mother had to explain that a previous rector had ‘upset’ him as a boy, by choosing a different child to light the candles before a church service. ‘He swore he’d have nothin to do with the Church,’ she said. ‘Once his mind’s made up, you can’t change it.’ Parenting has a curious way of providing an example that the child doesn’t even notice. Even as I tussled with my father, I began reading the Russian novels in his study. He bequeathed me his curiosity; my cultural antennae were up. BBC2 was showing foreign films on a Friday night.
One evening, Boz was round, and we watched Bernardo Bertolucci’s The Conformist (1970). For me, it was a revelation. By the end, my jaw ached from smiling with what I can only describe as aesthetic pleasure. Boz had fallen asleep. I woke him and he stumbled out, muttering: ‘Them subtitled films ain’t for me, bud.’ As with our concocted family tree and the translated novels in my father’s library, the foreign language of the film – and others I now devoured – enticed me. The subtitles of foreign films were a window, while for Boz they were a barrier. How come I had this appetite for the exotic, the multiplicity of experience, while Boz did not? Meanwhile, punk had burst on the scene. I bought vinyl records at the Left Bank shop in Exeter and wore out needles playing them on my record player at home. This urban, working-class music spoke to a shy, rural middle-class boy; it gave me an impetus and a confidence to start writing short stories. The Clash were playing Torquay Town Hall: I bought tickets, and played their exhilarating first album to Boz. He listened, and shrugged. ‘Reckon I’ll stick with metal, bud,’ was his verdict. I’d been born at the end of an era of high and low culture, distinctions that were already crumbling: the middle and upper classes abandoned their prejudices to enjoy popular music, cinema, comedy. Bob Dylan’s lyrics were as dazzling as John Keats’s poetry; the Beatles were making music for the ages. Television democratised access: my middle-class generation may not have been taken to football matches by our rugger-playing fathers, but the game was given to us through the cathode-ray tube. What was clear was the degree to which this democratisation of culture was a one-way ticket. Yes, the privileged saw the value of what they’d considered beneath them, and demanded access. What largely failed to materialise was movement in the opposite direction. Despite libraries and museums, despite the best efforts of teachers, curators, publishers and many others, only a select few – largely working-class grammar-school kids – seized on serious literature, opera, art. Cultural class barriers remained in Britain, peppered with turnstiles that let people through in one direction but not the other. My father was an authoritative figure, both because of his position as rector and on account of his natural bearing. It was noticeable how the villagers changed how they spoke when addressing him: what they thought of as properly, stiff and wary, with a grammatical formality. Boz was different. He spoke with my father just as he spoke with me (minus the swearing, I guess) and the two of them got on well. My father attended our football games when he could, and I used to wonder whether he’d played rugby more elegantly, like me, or fiercely, like Boz. The Teign river came down from the moor, snaking through a long, wooded valley between Chagford and Dunsford. Boz gave me a lift there one August evening. It was my fourth such excursion, still an outsider, a more or less silent guest. Bad Reputation by Thin Lizzy played on Jim’s cassette player. We drank rough cider for a while; they indulged in their desultory, inconclusive conversation. ‘Hey,’ I said, during a long pause. ‘Anyone fancy a bit of target practice?’ I pulled the pistol out of my pocket.
‘It’s only an airgun,’ I admitted, ‘but we might have a laugh?’ I could hardly have made a finer contribution – other than a bigger gun. The other guys responded with immediate enthusiasm, which they masked by poking fun at my ‘pea-shooter’, as Benjy called it. I’d brought some cardboard targets and we spiked these on tree branches, and had intense competition for as long as the box of pellets I’d brought lasted. Poor shots were ridiculed mercilessly. The girls – there was an extra Moretonhampstead lass that evening – were given a turn, and teased when (perhaps just as well, given the sexism of the subculture and the era) they missed the target. I won the shooting competition, such as it was. Of course, I had an unfair advantage: it was my gun and I’d had a lot of solitary practice, which the others were not slow to point out – along with plenty of euphemisms along the lines of Watch out, girls, this one knows how to shoot straight. Still, I could feel a new warmth towards me. We’d resumed drinking and smoking dope. Boz stumbled off for a piss, and Jim said: ‘You all hear what matey boy done?’ ‘Boz? No.’ ‘Give him a good fuckin hidin, that’s what.’ ‘Who? Polson?’ ‘No. His old man.’ ‘Jesus wept.’ ‘You are pullin my plonker.’ ‘The bastard started up again, Boz wouldn’t take it no more.’ Boz had said nothing to me. I was shocked, but I shouldn’t have been: Jim was his much older friend. I’d noticed a bruise or two on his knuckles, but paid them no heed; he often carried wounds from his work. ‘Yeh,’ Jim said. ‘Our boy’d had enough.’ ‘I don’t blame him.’ ‘Me neither.’ When Boz came back into the clearing, he received, to his surprise, a round of applause. ‘Well done, buddy.’ ‘Good on you, Boz.’ Boz looked wryly at Jim, and said to the assembled company: ‘Someone’s been talkin.’ ‘Did you beat him bad?’ ‘No,’ Boz said, modestly. ‘Well, ask Jim, he’s seen him.’ ‘Cunt’s got a hell of a shiner,’ Jim informed us. ‘Swollen’s so he can see fuck all out of it.’ ‘He had it comin, Boz,’ Benjy said. ‘Yeh,’ Boz agreed. ‘I don’t reckon he’ll knock us about now.’ ‘Includin your Mum?’ Sharon asked. ‘If he hits her again, I’ll kill him,’ Boz replied, matter of factly. I don’t think any of us doubted him. It was my last outing with the bikers. As for my own father, my absurd rebellion couldn’t last. In fact, it wasn’t rebellion at all. The fledgling writer inside me had simply intuited that he’d be best left alone to incubate whatever talent he might have, to read and write without the mediation of tutors, much less the distraction of a proper job. Psychologically, I belonged among the losers, not the winners, but temperamentally I just needed time and space. I finally left home, with years of menial jobs ahead, scribbling away. Mobile phones were some years in the future, but back then few homes in the village even had their own landline. The council houses set back from the lane had a red phone box situated halfway up: you could dial the number and let it ring until someone was walking past; they’d pick up and, if you were lucky, fetch the person you wanted. Late on Saturday morning on the day before I left, I called the number. No one answered. I pictured the phone ringing in the empty phone box, in a vacant world. So I walked up through the village to Boz’s house. His father answered the door. He did not say anything, simply stood there, staring at me, assessing me, daring me to speak.
His black eye was almost healed: a dark purplish line was still visible beneath it. ‘Is Boz in?’ I asked. ‘Who wants him?’ his father demanded. He knew full well who I was. Why was he asking? What did he want? I was at a loss. He was being pointlessly antagonistic, and I lacked some male reflex, some pocket of testosterone, that enabled one to respond to machismo. Tall and skinny (‘A streak of piss,’ according to our stout goalkeeper, Kendo), I’d never learned to fight. The idea of punching someone, never mind being struck myself, was dreadful. I’d actually asked Boz for advice, but all he’d said was: ‘Hit ’im first, bud, hit ’im hard.’ Which didn’t really help. Boz’s father stared at me through hooded eyes – at midday, on a Saturday, perhaps he was meanly drunk already. A spasm of defiance swam through my nervous system. ‘Me,’ I said. ‘I want him. Is he in?’ ‘He might be.’ Our little stand-off was resolved abruptly. Boz appeared through the gloom of the house interior. Reaching the doorstep, Boz performed two contradictory actions: he ignored but also shoulder-barged his father. Off balance, the old man tottered forward, past me. ‘Come on in, bud,’ Boz said. I stepped inside. Boz let me past, then tossed some coins onto the ground outside. ‘Fuck off down the pub,’ he told his father, and closed the door. He led the way, into the front room. I was ready to discuss his father, but Boz appeared to have dismissed him from his mind already. ‘How you doin?’ ‘Not bad,’ I said. ‘I’ve come round to tell you I’m heading off, tomorrow.’ Boz grinned. ‘Finally. I was wonderin if you was ever goin to get round to it.’ I’d never spoken with Boz of plans to leave – not that I’d ever really had any. ‘Were you?’ ‘You don’t belong round here. You wants to get out. The world is yours, bud.’ ‘I’m going somewhere even more remote, actually. A cottage in Wales.’ Boz frowned, then he grinned again, like he’d sussed my secret intention. ‘You’re gonna write that fuckin book a yours.’ ‘That’s the idea. Well, get started, at least.’ ‘Good,’ Boz said. ‘Good.’ He nodded, in agreement with himself, his approval of my plan. I smiled. He smiled. I nodded. It was awkward. I’d never been inside his, or indeed any of the other village boys’ houses. Playing host, to a guest, in his parents’ home, was clearly a novel experience for Boz too. Inspiration came to me. ‘I could kill a cuppa,’ I said. Relieved, Boz bounced to his feet. ‘How d’you take it?’ ‘Milk, no sugar, thanks.’ While Boz was gone, I looked around the small, tidy front room. It felt unused; there was a layer of dust on the mantelpiece. Communal life presumably took place in the kitchen. Two armchairs faced a settee across a bare coffee table. There were paintings on the walls, landscapes with lurid sunsets. The TV must be in the kitchen, I figured, but there was something else missing. I scanned the room, and it struck me: there were no books. Not a single one. Sitting there, I understood that the ladder of Boz’s upbringing had a rung missing, and though he was just as intelligent as me, he would likely never read a novel or volume of poetry; not watch a serious play or another subtitled film; go willingly to any of the great art galleries or museums.
Was it simply that he had not been provided with a crucial example, as I had, so that his cultural tastebuds had not been awakened, but lay dormant, his access denied to an immeasurably rich realm of human experience that I was just beginning to get a glimpse of? Boz came back with two mugs of tea. ‘What are your plans?’ I asked him. Boz rolled a cigarette. ‘Just sold the bike,’ he said. I was gobsmacked. Boz adored his Triumph Bonneville, he worshipped it. ‘What are you getting?’ I asked. ‘Don’t tell me you’re going Japanese?’ ‘Never,’ Boz spat. ‘No, I’ve gone and got myself four wheels, haven’t I?’ ‘A car? You are kidding.’ ‘I’ll need it, with Mo. She won’t go on the bike.’ ‘I didn’t know you two were so serious.’ Boz shrugged. ‘She’s up the spout, bud.’ He opened his palms in a helpless gesture, grinning, as if he had no idea how it’d happened. ‘I thought she was takin care of all that. She’s on about us gettin hitched; reckons we’ll get on the housin list easy.’ ‘Sounds like she’s sorting you out, man.’ Boz smiled. After swallowing a mouthful of tea, he lowered his head, clearly considering something. Then he raised it and said: ‘I can tell you; I can’t tell the others, they won’t understand. They’re like kids. You, you’re leavin, and anyway, you’re not, you know, one of us, bud, no offence. But the truth is, I can’t hardly wait to have kids. Mo’s a brilliant bird and she’ll do for me. She’ll do me good. This place’ll do me, too, this fuckin beautiful country. Mac up the farm wants to step back from the day-to-day and let me take over.’ ‘Wow,’ I said. ‘Really? Amazing, Boz.’ ‘Don’t get me wrong, bud, I loved the bikin and the birds and the dope, all that. But, fuck it, I’m 20 years old. It’s time to grow up.’ Boz took the empty mugs into the kitchen. How strange it was: my life was about to open up, to expand, into unknown terrain; his was about to contract into a limited future that was just what he wanted. I was on my feet when Boz returned, and he accompanied me to the door. We hugged each other. I inhaled his masculine smell of sweat, unwashed clothes, a faint trace of patchouli. Neither of us said ‘Keep in touch’ or ‘Don’t be a stranger.’ Instead, Boz reached up, grasping the back of my head with both hands. He pulled me to him and kissed my forehead. Then he relaxed his grip, a little, so that we were almost pressing our brows against each other; in a huddle, a secret space. ‘Fuckin fly,’ he whispered. ‘Do it for me, brother.’ Boz let go, we looked at each other. I nodded. ‘Thank you, man,’ I said, and turned, and walked away down the garden path, and into the lane.
Tim Pears
https://aeon.co//essays/a-story-about-who-grows-up-and-who-goes-away-in-1970s-england
https://images.aeonmedia…y=75&format=auto
Anthropology
I wanted to visit Colombia’s sacred mountains. But there are some places we cannot go – and some things we cannot know
I am standing on the beach in Santa Marta, a small port city on Colombia’s humid Caribbean coast. Around me, brightly dressed families are eating ice cream and grilled meat. Venezuelan refugees beg for coins, and shredded plastic bags are snagged in the cactuses. Offshore, cargo vessels idle on blue-grey waves, perhaps heading east towards the Atlantic, or west to Panama and the Pacific. The industrial port bristles with cranes and gantries. Looking inland, my view is curtailed by palm trees and crumbling apartment blocks. But somewhere beyond the urban sprawl, densely forested foothills rise towards the summits of a mountain range called the Sierra Nevada de Santa Marta. This is the reason I’m here. I’m not the first foreigner to have stood on this coast and imagined the forests and misty highlands that lie beyond. Near me on the seafront is the statue of Rodrigo de Bastidas, the Spanish explorer who founded this city in 1525, laying claim to the ancestral lands of the Indigenous people who lived here: a civilisation of farmers and goldsmiths known as the Tairona. In the centuries since the arrival of de Bastidas, Santa Marta has been the starting point for explorers, conquistadors, settlers, farmers, miners, loggers, narcotraffickers and, more recently, tourists. In their various ways, all have gazed towards that hinterland and seen the gleam of treasure. Even before my journey starts, I wonder if my presence here is also part of that extractive, questing lineage. I am a travel writer. I have come looking for treasure, too. I have come to Santa Marta in search of a way into those mountains, to learn about a culture that has remained uncolonised. I have come to encounter the sacred landscape that culture has been protecting. At least, as I sweat here on this beach, that’s what I think I have come here for. In the end, this story isn’t about that journey at all. The city that de Bastidas founded was one of the first Spanish settlements in what would later be named Colombia, and the second oldest Spanish city in South America. It marks a cultural ground zero. This is where the meteor hit. You can still feel its impact. From this point, the European invasion rolled across the continent, collapsing civilisations as it went and dragging silver and gold from the rubble, a pressure wave of devastation that reached almost everywhere. Everywhere but the Sierra Nevada de Santa Marta. At first, the Tairona of the coast resisted the seizure of their lands, with a major uprising taking place in 1599. But by the mid-17th century, they had been overwhelmed. In the face of slavery, genocidal violence and disease, the survivors of the invasion fled to mountain settlements, protected by dense tropical jungle and precipitous terrain. The Spanish saw no need to pursue them, as riches could be found more easily elsewhere. Stripped of its coastal territories, the Indigenous civilisation collapsed but, in the fastness of the mountains, descendants of the Tairona survived. Over the centuries, those descendants separated into four tribes. The Wiwa, the Kangwama and the Arhuaco each found their niches in the range, living at different altitudes with varying degrees of acculturation to the settlers around them. The Kogi, meanwhile, living nearest to the tops of the mountains, cut themselves off from the world below and maintained almost total isolation. Incredibly, that isolation endures today. 
An uncolonised Indigenous culture has thrived less than 50 kilometres from one of South America’s first colonial Spanish settlements, literally within the invaders’ sight. For all that time, two separate worlds have existed side by side. On maps, the mountains bear their Spanish name, Sierra Nevada de Santa Marta. But in the language of the Kogi they are called Gonawindúa. Colloquially, they are also known to the Indigenous people who live there as El Corazon del Mundo, which means ‘the Heart of the World’. Rising like a pyramid to a height of more than 5,700 metres, the Heart of the World is the tallest coastal mountain range on Earth. It stretches 160 kilometres across the north of Colombia, with foothills that touch the Caribbean Sea and peaks crowned year round with snow. Between those two extremes exist vastly different ecosystems, ranging from glaciers and misty highlands to steamy tropical jungle. It is one of the most biodiverse places on the planet and a UNESCO Biosphere Reserve. Because of its extreme elevation and changes in topography, the range is a haven for more than 600 species of birds, including 36 found nowhere else, and 189 species of mammals. In 2013, the International Union for Conservation of Nature named the Sierra Nevada de Santa Marta National Natural Park the most ‘irreplaceable’ site in the world for endangered species. The mountains are geographically and ecologically important but, for the Kogi, the Heart of the World must also be understood cosmologically. According to their traditions, this is where all life began. They regard themselves as the ‘Elder Brothers’, protecting the Heart’s vitality by tending its sacred sites with rituals, prayers and offerings to maintain the wider ecological and spiritual balance of the planet. This balance is threatened by the actions of the ‘Younger Brothers’, the name that the Kogi, and others in the sierra, call us, the foreigners who invaded their land. Given the damage of colonisation, it is a generous description. It paints us not as monstrous or malevolent, but as toddlers wreaking unfathomable destruction, unable to comprehend the consequences of what we do – poisoning rivers, burning down forests, pillaging precious metals and coal. For centuries, the Kogi have gazed at their violent siblings below with a mixture of alarm, bewilderment and dismay. Like many people, I first encountered the Kogi in a documentary. In the late 1980s, the tribal priests, or Mamos, made the difficult decision to end their centuries of isolation – the only thing that had saved their culture from destruction – and invited a journalist into the sierra. Alan Ereira, a British filmmaker, collaborated with the tribe over the next two decades to produce From the Heart of the World (1990) and Aluna (2012). In these documentaries, the Kogi delivered a message to the Younger Brothers, warning us in simple terms what our plundering ways were leading to. The sacred rivers were drying up. The glaciers were melting on the peaks. The Heart of the World was growing sick. Perhaps it was even dying. In Ereira’s book The Heart of the World (1990), he describes how his first experience with the Kogi ended. After work on the documentary was finished, the camera crew crossed a small bridge and: [The Mamos] symbolically closed the bridge behind us. The Younger Brother should not return, we were told. The message had been given, no one else should come.
Only I was to come back, with the finished film, so that they could see that I had done what I had promised. Ereira did what he promised, but ecological breakdown has only accelerated in the years since. The warning, in the Elder Brothers’ eyes, has been ignored. The films have a complex legacy. They have made the Kogi iconic among ecologically minded Westerners and, through increased visibility and an international profile, have given the tribe a measure of unofficial protection against further encroachment from those who might seek the Heart’s resources. The Colombian government, mindful of its image, now has more stake in their survival. But inevitably this new-found fame has attracted tourists, as well as anthropologists and spiritual seekers of all kinds, who venture to the mountains in the hope of meeting an ‘untouched’ Indigenous culture and being enlightened by its wisdom – or at least taking some photographs. Of course, this threatens the very isolation on which the Kogi have depended. As a travel writer, I have long been fascinated by stories of the Heart of the World. But I know this fascination presents a deep dilemma: the prohibition against outsiders visiting could not be clearer. This is spelled out by one of the Mamos in Ereira’s first documentary: [W]e want Younger Brother to know that he can’t come here again, he can’t come back. We are putting a barrier here so that he respects us. Understand, we don’t want him coming up here and interfering with us. He has destroyed so much. But, as it turns out, the Mamos did not close that barrier completely. Since Ereira first visited in the late 1980s, a small number of outsiders have been admitted to the sierra as guests rather than gatecrashers: academics, linguists and ecologists whose motives the tribe can trust. Official permission must be gained through the Gonawindúa Tayrona Organisation (OGT), which acts as the representative body of the Indigenous government. Its offices are in the Casa Indígena (‘Indigenous House’) in a suburb of Santa Marta. That is why I am here: to ask for the OGT’s permission to enter the sierra; to encounter that sacred landscape for myself; to tell a story about a different way of being in the world. But the story I end up telling will be different. I do not get to the Heart of the World. I do not climb the mountain. I do not hack my way through impenetrable jungle with a machete, or hike up steep hillsides, to reach its unspoiled villages, or its places of spiritual power, or the springs where its sacred rivers start, hidden high above the clouds in a land few Westerners have seen. This is not a travel story – or, rather, it’s a different type of travel story. Rather than being about my journey into a sacred landscape, it’s about a sacred landscape expanding beyond me. The Casa Indígena is situated in a district called Los Naranjos, ‘The Oranges’, but there are no orange trees to be seen. My mototaxi drops me off in front of a compound with high walls. Standing outside is a group of men, women and children all dressed in white. The men wear loose cotton tunics and trousers, and the women wear cotton shifts. All have woven bags made of cactus fibre slung around their chests. One of the men wears a conical white hat like a helmet. He is a Mamo, a Kogi priest. His hat represents the snow-capped peaks of the sierra.
Inside the compound, I sit across from the OGT spokesman Jose Manuel, a softly spoken man also dressed in white cotton with two white bags around his chest. Fluent in Spanish and well travelled, he is an ambassador, an intermediary between the spheres of the Elder and Younger Brothers. He begins by explaining why his people first chose to make contact. By the 1960s, the long-delayed tide of colonisation was finally lapping at the foothills of the sierra. Colombian farmers had flocked to the region, burning back forests to create grazing land. Once the soil was depleted, many of them turned to more lucrative cash crops: marijuana and, later, cocaine. In the subsequent decades, narcotrafficking fuelled both sides of a brutal civil war between communist guerrillas and Right-wing paramilitaries, both of whom had established bases in the surrounding jungle. As the Younger Brothers spread further and further up the mountains, burning things and fighting each other, the Kogi saw that their continued isolation was threatened as never before. ‘Maltreatment. Murder. Enslavement. Sexual violence. Dispossession. Banishment.’ These are Jose Manuel’s words for what the tribe feared was coming. Without some form of representation, his people would have no voice with which to advocate for their rights and legal protection. ‘The Kogi were in danger of disappearing,’ he says. The first step was to learn the language of the invaders. The neighbouring Wiwa spoke Spanish, so already had a foot in both worlds. Now some Kogi stepped across the linguistic divide. Early contact was fraught with difficulty, not least among the Kogi themselves, as different communities scattered across a wide area of mountainous terrain were now presented with the challenge of speaking in a united voice. The OGT was founded in 1987 as their mouthpiece. It is the channel through which communication flows – as well as requests to visit the sierra, such as mine. When I ask Jose Manuel to describe the Heart of the World, he answers with a picture. On a sheet of paper, he draws a wobbly pyramid divided into four quarters labelled ‘Kogi’, ‘Arhuaco’, ‘Kangwama’ and ‘Wiwa’. This is a map of the sierra, but it is more cosmological than cartographic. Near the summit he draws four dots, connected by meandering lines to four different dots at the range’s base, along the Caribbean coast. He labels these dots ezuamas. Ezuamas, he says, are sacred sites such as springs and river mouths, connected by a spiritual current that flows invisibly through, and beyond, the sierra. The Spanish word he uses to describe this current – Mother Earth’s life-energy – is intocable, ‘untouchable’, but its existence is completely real to the Kogi. The role of the Mamos is to guard the ezuamas and ensure that they stay healthy, that the life-energy flows. But increasingly, as shown in Ereira’s films, the Mamos are concerned that the sites are growing sick. The reason is environmental devastation, from the pollution of rivers to the extraction of metals and coal. These are not ‘resources’ in the Kogi’s eyes, but the living organs of Mother Earth’s body. If the body of the world grows sick, the Heart of the World grows sick. Each localised point of damage affects the system as a whole. In a feedback system that is intricately connected, nothing is divisible from or independent of anything else.
Despite their apparent segregation, ensconced in a land above the clouds, the Kogi have never been isolated but rather connected to everything in subtle, intocable ways that outsiders cannot see. ‘Now the ezuamas are disappearing, the knowledge is disappearing,’ Jose Manuel says simply. ‘We are very sad about this.’ Despite these disappearances, the region is more stable today than it was when Ereira first visited. Most of the guerrillas, paramilitaries and narcotraffickers have moved on. The Indigenous peoples of the sierra have jurisdiction over their land, and officials like Jose Manuel – in his traditional white clothes, but with a smartphone in his hand – act as two-way transmitters, conversant with both cultures. We pause while an assistant brings us both a cup of Kogi coffee, harvested from wild plants in the tropical forests of the foothills. We sip in silence for a while. Then at last Jose Manuel comes to the subject of my visit. Patiently, he explains that I have come at a sensitive time. Growing numbers of Younger Brothers have been finding their way to Kogi land, and the OGT has decided that restrictions should be tightened. In the past, outsiders have come to the mountains to extract Indigenous knowledge, removing it from its legitimate owners without permission. This is a type of theft, he says, like extracting precious metals or coal. In order to receive permission, I would need to write a proposal explaining what knowledge I am seeking to gain, what I intend to do with that knowledge, and who I plan to share it with. A decision would then be made in the mountains; the answer could take months. Unfortunately, I don’t have months, only weeks here in Colombia. My request to visit the Heart of the World is gently but firmly declined. Back in my hostel in Santa Marta, in the wet heat of a Colombian spring, I ask myself why I came, what I wanted to find in the Heart of the World. Like others who came before me, I wanted to learn more about the Kogi, how they have survived for all this time. In crude terms, I suppose I wanted to understand their secret. Perhaps I also wanted to assuage my ecological grief and fear. In a damaged world that is spiralling ever deeper into catastrophe, who wouldn’t desperately want to learn – from some of the few people who never forgot – how to stay more connected to the earth, how to live without destroying it? Who wouldn’t want to be admitted to the world’s heart? But I also came as a travel writer, looking for a story. I came in search of observations, descriptions and local colour. I came looking for material, and to take that material home with me, to be refined into a product that other people would consume. When foreigners first came to these lands to extract material wealth, the knowledge of those they invaded was rejected and discarded. At best, that knowledge was seen as useless; at worst, it was evil. Now that the easily extractable resources have been taken, another market is booming in Indigenous wisdom. Academics and researchers need data. Others seek spiritual salvation. They come to find knowledge of medicinal plants and routes to a higher consciousness; to learn techniques for reconnecting with nature; to understand different forms of healing and transformation. Given the depth of our appetites, no wonder the Kogi are wary of anyone coming to take something away – even an idea. In this context, I am somewhat relieved that my access has been refused.
Clearly the Kogi are taking the steps that are needed to safeguard their culture. But before I leave the Casa Indígena, clutching my map with its scribbled ezuamas, Jose Manuel tells me I am welcome to talk with his brother, Simigui, who also works with the OGT. He does not live here in Santa Marta but 80 kilometres further up the coast, in the mountains near Palomino, on the outskirts of Kogi territory. The following day, I am headed east to meet Simigui. For the next week, I stay in a village called Rio Ancho, a short mototaxi ride from Palomino, situated on a rocky river that flows from the mountains to the sea. Greenery surrounds it, and its roads are made of mud. Its houses have breezeblock walls, and their windows contain no glass – partly because glass is expensive, and partly because it never gets cold here, only a few degrees north of the equator. My first impressions are of brightly painted walls, amplified music, motorbikes, dogs, many children, and the ever-present crash of the river. This is a settlement of colonos, literally ‘colonists’, Colombian farmers who have moved to the region in the past few decades. Much of the original forest has been cleared for crops and grazing land. But a few Wiwa families live here, too – the people who first found a footing in both worlds – and 15 kilometres up the road is the Kogi village of Tungueka. Rio Ancho is also home to several foreigners, including gringas like Nina Dahlgreen. Dahlgreen is a Dane who has lived here for a decade, having fallen for a Colombian man when she was travelling. Her house is airy, with a wide veranda and a garden containing tropical flowers, banana trees and coca plants whose leaves, when chewed or steeped in tea, are a medicine and stimulant. Next door is a family of Evangelical Christians who sing joyous hymns, and a parrot that shouts ‘Hola!’ in a cracked voice. Dahlgreen rents me two adjoining rooms with a mosquito-netted bed and a tin roof that thunders when it rains. From her, I learn that Rio Ancho is safe because ‘only one paramilitary group is active in this region now.’ They have a base somewhere in the hills but also live openly in the village, and every mototaxi driver, hotel owner or property developer must pay them protection money. ‘Safe’ in Colombia is very much a relative term. On the highway that runs to Palomino, she points to a spot where a body was dumped just a couple of months ago – someone who didn’t pay his dues. Sometimes, there are irregular targeted killings by vigilante groups. Two friends of hers, a husband and wife who were campaigning for land reform, were shot dead several years ago as payback for their activism. In Colombia, environmental defenders and land activists, many of whom are Indigenous, are murdered at one of the highest rates in the world. Tungueka, the Indigenous village on the periphery of the Heart, is a two-hour walk away. Kogi people from the village are often seen, distinctive in their white clothes, buying supplies in Rio Ancho’s bodegas, leading mules back up the mountain or taking rides on mototaxis, their long hair flying behind them. Dahlgreen says in the past few years some have bought their own motorbikes. Living in close proximity to the Younger Brothers, unlike their cousins higher up the mountains, these Kogi are becoming ‘civilised’ – at least, that is the term that the local colonos use.
When I arrived in Rio Ancho, I assumed that the incomers, whether peasant farmers or foreigners, were encroaching on Tungueka, and eroding the boundaries of Kogi land. But the truth is more surprising. Far from being an ancient settlement whose long isolation is under threat, Tungueka is only 15 years old. Another Kogi village nearby was established only in the past five years. During the past decade, the area around both villages has been reallocated as Indigenous land but, like everything in Colombia, the situation is complicated. Land in this region was originally granted by the state to poor farmers, many of whom then sold it on to foreigners for a higher price, which means that the reallocation of land back to its traditional owners is not without dispute. But the result is that, seemingly against all odds, the Kogi are moving back down the mountains to resettle ancestral lands, effectively expanding their territory for the first time in five centuries. Tungueka consists of 100 conical huts, with mud walls and thatched roofs, clustered around a larger construction called a nuhue, a sacred ritual gathering place only for Kogi men. (Women, understood as being intrinsically closer to Mother Earth, are considered already sacred.) Tungueka, I have learned, is an ‘open’ village that accepts visitors, and which seems to function as a cultural interzone. Just as the Kogi visit Rio Ancho to buy products that are useful to them, Colombian and foreign tourists are allowed into Tungueka. There is a small entrance fee, and a Kogi guide gives a tour. Toddlers reach out pudgy hands for biscuits, and women accept bags of rice, both of which visitors are advised to bring as offerings. In The Elder Brothers’ Warning (2009), Ereira describes Kogi culture as one of silence and secrecy. ‘Communication with the outside world is taboo,’ he writes, ‘children are taught to hide from strangers, and adults regard all outsiders as dangerous.’ Clearly things have changed in the 14 years since the book was published – or at least they have here, on the outskirts of the sierra. The Kogi adults who visit Rio Ancho seem to be on friendly or neutral terms with the local colonos, whom children see as a source of biscuits. Many aspects of my visit make me uncomfortable, especially the distribution of gifts, with the colonial power dynamic implicit in this exchange. It turns me into another tourist, gawking at the life of an ‘authentic’ Indigenous village, at people going about their day, washing clothes, and preparing food. But the Kogi of this region, on the border of a re-expanding domain, also seem to exercise a high degree of control. Entrance is at their discretion. Going further up the mountain is banned. Behind the huts, the foothills rise abruptly in an imposing wall, dense with jungle and veiled by cloud – the beginning of the guarded sierra that is firmly closed to me. Against that backdrop, Tungueka appears less like a village under siege and more like an outpost in decolonised land. A hair-raising mototaxi ride brings me to Palomino, 12 kilometres to the west, for my appointment with Jose Manuel’s brother Simigui. Palomino, like Rio Ancho, takes its name from a river. The river takes its name from Rodrigo Álvarez Palomino, a Spanish explorer who drowned in its rapids in the early 16th century, after helping de Bastidas establish Santa Marta. To the Kogi, this river – called Wazenkaka – is sacred. 
Palomino, the colono village that grew up at its mouth, had until recently a violent, sleazy reputation as a battleground for armed groups. Though the violence has diminished, the sleaziness remains. Today, the village has been reinvented as a backpacker tourist resort, squeezed between a highway and a white-sand beach. There are hostels and thatched beach bars, ambient techno music thumping from behind bamboo walls, and the tropical air smells of sewerage and marijuana. Posters advertise full moon parties, DJ sets and magic mushrooms, as well as ‘Indigenous tours’ to Tungueka: ‘Experience the Untouched Culture of the Kogi’ says one. Pierced and tattooed foreigners browse shops that sell ethnic souvenirs, and the whole place has an atmosphere of lazy indulgence.
In the village of Tungueka. Photo supplied by the author
Simigui. Photo supplied by the author
I meet Simigui in a café garden shaded by banana trees. He has travelled down the sierra for a couple of hours to be here. In his bright white clothes and tall straw hat, he is instantly recognisable. His lips are green from chewing coca leaves, and in his hand is the poporo used by all Kogi men: a sacred gourd containing burnt seashells, the lime of which combines with the coca to amplify its narcotic effect. Its application is believed to foster harmony with Mother Earth. Using a stick, he brings the powder to his mouth and wipes the excess on the gourd, which is coated with residue from many years of use. Like the white hats worn by the Mamos, and the conical roofs of the huts, this thick pale-yellow crust represents the sierra’s snows. Throughout our conversation, he chews the small, bitter leaves as naturally as he blinks and breathes. While his brother spoke of how the Heart of the World is being constricted, Simigui speaks of the ways in which it is also expanding. ‘Recovery comes from the sea,’ he says. Ever since the Spanish invasion severed access to the coast, the Kogi have been cut off from some of their most sacred places. They have kept a connection to the sea by ingesting the lime of shells, but sourcing these shells and transporting them back to the mountains was often dangerous. In 2013, with the backing of the Colombian government and international charities, the tribe raised enough money to buy back a sliver of coastline. The sacred site of Jaba Tañiwashkaka, 32 kilometres east of Palomino, was formerly a degraded wasteland covered in tonnes of plastic trash; after a decade of restoration, mangrove forests have been replanted, and fish, caiman, crabs and capybara have returned. The ezuama at the mouth of the Jerez River is under a Mamo’s protection again. After 500 years, the Kogi have regained the sea. Simigui tells me of the linea negra that is central to Kogi thought. Like many things in the Kogi world, this ‘black line’ can be understood both physically and intangibly. In one sense, it is a line on a map, a border that skirts the base of the sierra, demarcating the traditional boundary of Indigenous lands. In 2018, this was used as the basis for the Black Line Decree, in which the Colombian government gave formal recognition to the ancestral territories of the Kogi, the Wiwa, the Arhuaco and the Kangwama. But in another, intocable sense, the black line is spiritual. 
It binds together sacred sites on the periphery of the Heart – ezuamas, such as the one at Jaba Tañiwashkaka – to ensure the continued flow of life-energy between them. To these explanations, Simigui adds a further layer of meaning. The black line, he says, can be understood not only as a border, and a link between local sacred sites, but metaphorically as a connective thread reaching beyond the sierra to other points in the world – Europe, Africa, Asia. He conjures an image of the mountains as a node in a web of connections that stretches between distant ecosystems, and places of spiritual energy, across the planet. ‘The sierra is the Heart of the World,’ he says, ‘but she has arms.’ In this sense, the violence of colonialism did not just break the line around Kogi territory, but a more fundamental connection that once joined all things to all things. By reconnecting the mountains with the sea to maintain contiguous ecosystems between glaciers, highlands, montane forest, jungle and tropical coastline, the Kogi have taken on the task of weaving the threads back together. That task, clearly, is immense. The damage is still being done. Perhaps the most urgent signs of this are the glaciers on the mountaintops – the eternal snows that are represented everywhere in Kogi culture, from hats to huts to the yellow-encrusted gourd in Simigui’s hand. Today, they are rapidly melting, causing landslides, droughts and vanishing rivers. The scientific view attributes this to climate change, reducible to the parts-per-million of carbon dioxide in the atmosphere, but Simigui’s explanation turns this framing on its head. The sacred places are not sick because of distant climate change; rather, climate change is occurring because the sacred places are sick, the ezuamas having been neglected and abused for too many years. This explains the deeper importance of reclaiming ancestral lands. The damage is physical, but the restoration must be spiritual. To the Kogi, saving the world begins exactly here. But, Simigui says, the Mamos can’t help Mother Earth recover on their own. There must be dialogue between the Elder and Younger Brothers. ‘Reciprocity. Understanding. Balance. Harmony.’ These are the words he uses. After we shake hands at the end, he takes one of the cactus-fibre bags from around his neck. A few stray coca leaves flutter down as he empties it out. ‘Un regalo,’ he says. A gift. Then we walk back to the street and go our separate ways. After our meeting, I follow a winding path away from the tourist town. I have no destination in mind, no mountain to ascend. Soon the jungle envelops me: a living mass of fern-covered trunks, palms, vines and leaves, labyrinths of entangled roots, things growing on other things. The canopy whoops, whirrs and wails. Leafcutter ants march everywhere. At every turn, I scatter clouds of iridescent blue butterflies. I have never been anywhere so astonishingly alive. In the jungle, the path traces the contours of steep slopes, drops down to the wide green river and steeply climbs up again. I pass a Mamo with his poporo, a barefoot woman with a wrinkled face, and a young girl who is weaving a bag with white thread as she walks. I also pass a party of tourists carrying giant inflatable tubes. They’ll use them to drift down the green river in a leisurely flotilla, pumping out music and drinking beer as they float toward the sacred ezuama at its mouth. 
To the Kogi, Ereira writes, the end of the world will come when ‘Columbus reaches his final goal’, penetrating the Heart of the World and plunging the environment into chaos: ‘The snow will melt on the peaks, the waters will dry up. The balance of nature will be overthrown.’ For the first time, I have a glimpse of what the Kogi are protecting. After an hour comes a break in the canopy ahead. There are distant plots of maize, cooking smoke rising from conical roofs. Above soar the summits of the sierra, so close yet unreachable. At first, I think I am seeing cloud in the whiteness of their peaks; in this dripping heat, it takes some time to recognise it as snow. This is as close as I will come to the Heart of the World. There are some places we cannot go; some things are not ours to know. After centuries of exploration, colonisation and exploitation, perhaps we are entering a time when travellers (and travel writers) must recognise the extractive impulses that drive us forwards. But as I leave, retracing the path through the jungle back towards Palomino – and from there to Santa Marta, and my flight home to England – it seems that something else is travelling with me as I walk. I am struck by an image of the sacred landscape behind me expanding, its threads being rewoven by Kogi hands, refilling the space that was left when the meteor hit.
Nick Hunt
https://aeon.co//essays/on-the-intangible-border-of-the-kogis-sacred-mountains
https://images.aeonmedia…y=75&format=auto
Space exploration
Space junk surrounds Earth, posing a dangerous threat. But there is a way to turn the debris into opportunity
Every human-made object sent into orbit around Earth will meet a fiery death. It will fall out of orbit, and be promptly eradicated by our atmosphere, or else be left for dead in an orbital graveyard of decommissioned spacecrafts, destined to pollute our exosphere and slowly but surely follow the same sacrificial path back to our home planet. The first, more violent choice is preferred. When a spacecraft’s life cycle ends and it is formally decommissioned, its mere presence in orbit is a hazard. To sustain it would be costly. Required resources include the labour force to follow and study its movements, along with fuel to keep it on track. But to what end? And the effort would be greater for crafts that have been up there for years if not decades – because their technology would already be outdated. Governments and investors legitimately question preserving stations and satellites instead of investing in the development of new ones. Such discussions took place last year regarding the Hubble Space Telescope, which narrowly escaped decommissioning following the successful launch and operation of the James Webb Space Telescope. But if such an object is left for dead in space without supervision and direction, it eventually becomes debris – threatening, uncontrolled metal that could crash into other functioning crafts, including inhabited ones. The size of the debris would not matter; even a speck could be enough to cause a catastrophic collision. In 2016, such a speck was enough to chip the window of the International Space Station’s Cupola module; if it had been any larger than a centimetre, it could have penetrated the shields of the Station’s crew modules. The threat is ever-growing. As of July 2023, the North American Aerospace Defense Command was tracking more than 44,900 space objects, and that number concerns only objects of a significant size. Furthermore, low Earth orbit harbours a multitude of space debris pieces accumulated over 65 years of space missions, including defunct satellites, fragments resulting from collisions, and miscellaneous debris from activities like stage separations. A dead Soviet spy satellite and a used Soviet rocket stage came within 6 metres of each other in January this year; a full-on collision between those two objects would have created thousands of dangerous new pieces of debris. As of 2020, 8,000 metric tons of debris are estimated to be in orbit, a figure that is expected to increase. With no concrete plan yet for cleaning up this debris, controllers have no choice but to manoeuvre spacecrafts when potentially hazardous debris approaches, and hope for the best.
Tiny CubeSat satellites launched from the ISS on 4 October 2012. Photo courtesy of NASA
I’ve been studying the problem for years. Given my background in architecture with a focus on space structures, I started by envisioning lunar bases and extraterrestrial habitats. Every blueprint I created was tailored to a specific lunar landscape; at the heart of this pursuit was the development of construction techniques grounded in technology we could deploy now. However, as I dived deeper into the practicalities of these projects, a formidable conundrum unfurled – one that transcends the realm of architecture and echoes the intricacies of space exploration at large. The quandary revolved around the complex logistics of ferrying essential materials across the cosmic expanse. 
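The danger posed by specks of debris, mentioned above, is easy to underestimate, and a back-of-the-envelope calculation makes it concrete. The fragment masses and closing speed below are my own illustrative assumptions, not figures from the essay or any tracking agency; the point is simply that kinetic energy scales with the square of velocity, so even milligram objects carry real destructive power at orbital speeds.

```python
# Rough, illustrative estimate of impact energy for small orbital debris.
# Masses and the closing speed are assumed values chosen for the example,
# not measurements from any tracking agency.

def impact_energy_joules(mass_kg: float, closing_speed_m_s: float) -> float:
    """Kinetic energy of a fragment relative to its target: E = 0.5 * m * v^2."""
    return 0.5 * mass_kg * closing_speed_m_s ** 2

CLOSING_SPEED_M_S = 10_000.0  # ~10 km/s, a plausible order of magnitude for collisions in low Earth orbit

examples = [
    ("paint fleck (~1 mg)", 1e-6),
    ("1 cm bolt (~10 g)", 1e-2),
    ("10 cm fragment (~1 kg)", 1.0),
]

for label, mass_kg in examples:
    energy = impact_energy_joules(mass_kg, CLOSING_SPEED_M_S)
    print(f"{label:22s} -> {energy:12,.0f} J (~{energy / 4.2e6:.3f} kg of TNT equivalent)")
```

On these assumed numbers, even the milligram fleck arrives with tens of joules, enough to pit a window like the Cupola's, while a 10 cm fragment carries the energy of roughly a dozen kilograms of TNT.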
The crux of the matter extends beyond architectural innovation, encompassing the critical challenge of establishing an enduring and reliable connection between our home planet and these distant extraterrestrial outposts. While using resources on the Moon for some of the construction holds promise, a stable tether to Earth remains essential throughout each habitat’s lifecycle. This multifaceted puzzle spurred an alternative avenue of investigation, one that combines engineering ingenuity with sustainable principles to tackle the mounting predicament of space debris – what if the tether could be the space debris itself? The first reason to halt the creation of space debris is the inherent wastefulness of the practice. As it stands, traditional rocket launches are a significant impediment to sustainable space exploration. Even supposedly reusable tech, for instance, SpaceX’s Falcon 9, is unsustainable – wasting not just fuel, but the rocket stages that are cast off into Earth orbit and the payloads themselves. The space launch industry produces fewer atmospheric emissions than aviation, and perhaps that has contributed to a lack of urgency. However, this comparison can be misleading, because rockets release pollutants at higher altitudes, potentially leading to longer-lasting atmospheric effects. There is also the concern that carbon particles from rocket emissions might inadvertently contribute to geo-engineering Earth, absorbing heat and exacerbating climate change; black carbon emissions from rockets have nearly 500 times the heat-trapping capability of all other sources of soot combined, intensifying the warming effect. Current rocket launches, whether reusable, horizontal or traditional, all rely on the same propellant, collectively releasing around 1 gigagram of black carbon into the stratosphere each year. With launches projected to increase, this annual amount could surge to 10 gigagrams within a few decades. And with launches increasing, a greater number may fail – even if the percentage of failures stays the same. Because crewed missions with astronauts are meticulously prepared and safeguarded, catastrophic events are exceptionally rare. However, the same cannot be said for commercial missions. Between 2000 and 2016, almost one in three small-satellite missions failed, with the rate increasing as yearly launches rose. Those failed missions contribute to the quantity of space debris in a steady way. Could we recycle all this space trash and use it again? ‘In the modern sense, recycling is inexorably bound to concepts of sustainability, dwindling reserves, ecological destruction, and essentially cost, both pecuniary and environmental,’ write Victoria Sainsbury and Ruiliang Liu, two experts in archaeology and the history of art. But it’s also an ancestral practice driven by necessity and symbolism. Artefacts made from stone, minerals, mortar, textiles, pottery and bones demonstrate physical reshaping and repair, embodying the practical reuse of the same material. While materials like metal and glass, which can be melted and recast, may not visibly display these changes, they too have been subject to invisible recycling over centuries. Such objects changed purpose and use throughout each iteration and rebirth, and the symbolism or history behind them usually isn’t factored in. 
Spolia, the Latin word for ‘spoils’, are defined as architectural fragments taken out of their original context and reused in a different context; essentially, pieces of structures transplanted into different structures. An example of unintentional usage of spolia is the Mausoleum at Halikarnassos (modern-day Bodrum in Turkey). Following its burial due to an earthquake, both the Knights of St John and the Turks, who later settled in the region, viewed the former monument as a convenient source of construction materials, using spolia to build a castle and houses, respectively. Occasionally, specific fragments and artefacts were intentionally selected for symbolic meaning. For instance, at the Arch of Constantine in Rome, sections of sculptures originally dedicated to Marcus Aurelius and Trajan were incorporated to symbolise the comparable greatness of Constantine. Similarly, during the Byzantine era, it became common to repurpose columns from ancient Greek temples and integrate them into churches, symbolising the triumph of Christianity. Preserving the history of space exploration is a powerful reason to keep old space structures intact. Back in 2007, the Chinese military used a ground-based missile to hit and destroy its ageing Fengyun-1C weather satellite while in orbit. The anti-satellite test created more than 100,000 pieces of debris orbiting Earth, with about 2,600 of them more than 10 centimetres across, according to a NASA estimate. In 2009, the abandoned Russian satellite Kosmos 2251 collided with a working satellite deployed by the US telecommunications corporation Iridium, resulting in a massive release of space debris. That incident, together with the 2007 Chinese anti-satellite missile test, is responsible for the majority of spacecraft fragments currently hurtling around Earth. Now the crafts – relics of an important and fascinating past – are shattered to bits. Preserving such structures instead would provide insights into the space race, the Cold War, global politics, government and business expenditures, cutting-edge technologies, and the interconnected stories of generations. As tangible links to significant milestones in space exploration, they offer a window into the triumphs, challenges and endeavours that have shaped humanity’s journey beyond our home planet. These space remnants offer a multifaceted narrative about human exploration of the cosmos and our values here on Earth. At the same time, polluting space with debris tells a different story, the tale of a consumer-driven, throwaway culture with little perspective on its own legacy, intelligence or drive. As we venture back into space exploration, the idea of recycling has significant obstacles to overcome. We know that launching equipment and supplies into low Earth and geostationary orbit comes with a hefty price tag, and adding recycling equipment to the mix only adds to the expense. Furthermore, recycling materials in the microgravity environment entails unique technical challenges and risks. Despite these hurdles, the motivation is there. The European Space Agency, for example, has recognised the potential of recycling technologies in space and is exploring circular economies beyond Earth’s atmosphere. 
The main focus is in-orbit servicing technologies, geared initially towards smaller-scale objects, which would pave the way for repairing, upgrading and refuelling satellites in orbit, while also addressing the growing concern of space-debris removal. Northrop Grumman’s Mission Extension Vehicle (MEV) programme, for instance, involves spacecrafts designed to rendezvous with and dock to a commercial geosynchronous satellite to provide life-extension services. MEV-1 successfully docked with the 20-year-old satellite Intelsat 901 in 2020, while MEV-2 docked with Intelsat 10-02 in 2021. The first practical benefit is the potential reduction in launching new materials from Earth; we could also save valuable space and weight on spacecrafts, enabling longer and more complex missions. Soon, satellites are expected to carry less fuel and larger instruments. The goal is for them to be made of modular, easy-to-assemble parts, designed with features that allow service and disposal of worn-out parts in orbit. And, for the same reason that electronics plugs and sockets have standard shapes, discussions on standardised docking mechanisms have begun to make it easier for one model of servicing spacecraft to latch on to different satellites. Despite the initial promise, extending the life of satellites offers only a temporary solution to the space junk problem, and merely delays the inevitable: the rapid obsolescence of technology and the escalating need for repairs, in which the servicing spacecrafts would consume more fuel, potentially offsetting any gains. It’s become increasingly clear that space constructions have a finite lifespan under the current approach. In this context, the only practical option that remains is raw material extraction: salvaging valuable components and materials from space debris to fabricate new spacecrafts and undertake repairs. And, once the material is extracted, the carcasses would be discarded into the graveyard orbit once again.
With [Mir] will perish a Bible, a Koran, 11 tonnes of scientific equipment, more than 100 books left behind by astronauts, a ‘greenhouse’ of experimental crops, personal items left by more than 100 visitors, uninvited fungi, a photograph of Yuri Gagarin, the first man in space – and the hopes and dreams of a nation.
Thus wrote Amelia Gentleman and Tim Radford in The Guardian in 2001, when the Russian space station was decommissioned and fell to Earth. The Russian space agency chose to see Mir as a defective and decaying mass of rusting machinery, and as Sergei Gorbunov, the agency’s spokesman, told the paper: ‘The technology is outdated and we don’t have the money for repairs. No research is being carried out on board and recent missions have been devoted exclusively to repairing Mir. There’s no longer any point to Mir.’ Yet, to the public, Mir embodied an era of cosmic achievements, 140 metric tons of intellectual metal. This intellectual quality similarly permeates the very material of the International Space Station (ISS); its metal has seen more than 250 individuals from 21 countries who have visited, an astounding 269 extravehicular activities conducted as of now, and the completion of more than 3,000 experiments while traversing billions of miles in space. Its significance, though, extends far beyond its scientific accomplishments. 
The ISS, the joint project of five space agencies from 15 countries, was first launched in 1998, and has been continuously occupied since November 2000. It was the crowning achievement at the beginning of the 21st century, the symbol of science, diplomacy and international collaboration in a post-Cold War world, where space is viewed as the common inheritance of all people. The ISS is the largest artificial object in orbit around Earth, weighing 420 metric tons, and it cost an estimated $150 billion to build and operate.
Approach view of the Mir Space Station viewed from the Space Shuttle Endeavour during the STS-89 rendezvous. Photo courtesy of NASA
It is also expected to be decommissioned in the 2030s. To be crashed, in other words, into the ‘spacecraft cemetery’, Point Nemo in the Pacific Ocean, the point that is farthest from land. The ISS’s annual $3 billion cost, coupled with technical challenges in maintenance after the end of the space shuttle programme, and the difficulty of transporting large replacement parts, all contribute to the possibility that it will become the biggest source of space junk of all time. There is a solution to all of this: a skyhook, a space structure that could lift materials from Earth to near-Earth orbit and beyond. The ability to deal with junk and construct structures in space requires a paradigm shift, one that encourages us to seek opportunities and potential that might otherwise go untapped. In this case, the untapped resource is the common denominator shared by all the space objects we’ve created: mass. From the early days of meticulous payload calculations to the present, the accumulation of mass in Earth’s orbit could be a surprisingly valuable asset. Consider the sheer complexity and expense of sending objects into space. Rockets, with their massive fuel requirements and substantial failure rate, still face a formidable challenge to reach orbital altitudes. This alone makes any object that manages to overcome this barrier an invaluable asset. Moreover, the presence of manufactured space objects in orbit comes with another advantage – control. Unlike natural objects like asteroids, whose orbits could be unpredictable and chaotic, the objects we’ve generated can be precisely positioned and tracked. With control over their orbits, we can predict collisions, optimise trajectories and avoid potential hazards, easing the logistical burden. Envisat, for example, one of the largest Earth observation satellites ever launched, with a length of 26 metres and a width of 10 metres, ceased communications in 2012 and transformed into one of the largest pieces of space junk in orbit; its trajectory is still being closely monitored. Envisat has the potential to become a linchpin for the next step in spaceflight – non-rocket space-launch systems like a skyhook, which would require an anchor, a massive object already in space, to provide stability. The idea of a skyhook has been under study for half a century now; it would take the form of a long and strong tether extending from a base station on Earth’s surface into space. The other end of the tether, a counterweight like Envisat, would remain in orbit around Earth. As the tether rotates, the counterweight generates centrifugal force, creating tension in the tether. Spacecrafts and payloads can then be attached to the tether and released into space when they reach the desired velocity, essentially ‘hooking’ them into orbit. 
The counterweight’s substantial mass and its fixed position in space would act as the pivot point for the entire system, allowing the tether to maintain tension and transfer momentum. Depending on the tether’s length, materials and the specific rotational characteristics of the skyhook, the momentum it imparts to payloads could potentially extend their reach beyond Earth’s orbit to reach other celestial bodies. Further into the future, skyhooks could span across three celestial bodies – Earth, the Moon and Mars – forming a seamless interconnected network.
Diagram illustrating the HASTOL skyhook concept. Courtesy of Stanford University
In 2000, Boeing conducted the Hypersonic Airplane Space Tether Orbital Launch (HASTOL) study, which investigated the feasibility of using a skyhook to launch payloads into orbit. The authors of the study stated that we do not need ‘magic materials’ like carbon nanotubes to make a skyhook’s tether, and that existing, already commercially available materials will do. The main challenges are in the design and construction of the skyhook, such as ensuring that it is strong enough to withstand the forces involved and that it is protected from the effects of atomic oxygen. In the follow-up phase, the study concluded that there are no ‘fundamental technical show-stoppers’ to the idea. This study was primarily focused on the feasibility of using a skyhook to launch payloads into orbit, and it didn’t delve deeply into the specifics of the counterweight. But once we start designing space-based tether systems, we’ll reach a point where the concepts of recycling, cultural preservation and spaceflight converge into a single, comprehensive solution. The most promising candidate for a counterweight turns out to be the International Space Station, but marshalling that potential requires several key steps. First, we’d have to shut down the station and remove hazardous elements. Next, we’d have to depressurise and reinforce the structure to maximise stability. Additional propulsion would be required to position the station at a higher altitude as well. Once the ISS is in position, designing attachment points capable of withstanding the forces imposed by the rotating cable would be critical. When those points are in place, the skyhook cable could be deployed and attached to the station. According to the latest skyhook designs, the counterweight must be significantly larger than the payload, with a minimum ratio of 1,000 times the payload’s size. So, in order to boost the skyhook’s capability to handle larger payloads, additional counterweights, such as defunct satellites, might be strategically attached to the ISS, effectively increasing its mass and efficiency, a retro-fusion of sorts. What began as a combination of technological progress and waste – the accumulation of mass in space – has now become a valuable asset, reshaping the foundation of spaceflight. Embracing the untapped potential of space objects and debris would lead us to a design shift that goes beyond viewing objects as potential pieces of a greater whole. This shift would guide their conceptualisation from the bottom up, fostering a new approach where each object is informed, designed and constructed with the intention of preserving and repurposing it within a sustainable and resourceful paradigm of spaceflight, effectively preserving cultural value and transforming our current wasteful system. 
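One way to see why that counterweight-to-payload ratio matters is a simple momentum budget. The sketch below is not taken from the HASTOL study; the payload mass, the boost it receives and the re-boost allowance are assumptions of mine, and the counterweight mass is the ISS figure of roughly 420 metric tons quoted earlier. It only applies conservation of momentum: whatever velocity the tether tip hands to a payload, the counterweight gives up in proportion to the mass ratio.

```python
# Back-of-the-envelope momentum budget for a rotating-tether ('skyhook') boost.
# The payload mass, its delta-v and the re-boost allowance are illustrative
# assumptions, not values from the HASTOL study; the counterweight mass is the
# ISS's roughly 420 metric tons.

COUNTERWEIGHT_MASS_KG = 420_000.0   # ISS used as the skyhook anchor
PAYLOAD_MASS_KG = 420.0             # assumed payload, giving the ~1,000:1 ratio cited above
PAYLOAD_BOOST_M_S = 2_000.0         # assumed velocity handed to the payload at release

# Conservation of momentum: the counterweight slows by (m_payload / m_counterweight) * delta_v.
counterweight_loss_m_s = PAYLOAD_BOOST_M_S * PAYLOAD_MASS_KG / COUNTERWEIGHT_MASS_KG
print(f"Counterweight velocity loss per boost: {counterweight_loss_m_s:.2f} m/s")

# Number of boosts before the anchor has shed, say, 100 m/s and its orbit must
# be raised again (by electric thrusters, electrodynamic tethers, or by catching
# payloads arriving from higher orbits).
REBOOST_ALLOWANCE_M_S = 100.0
print(f"Boosts per re-boost cycle: {REBOOST_ALLOWANCE_M_S / counterweight_loss_m_s:.0f}")
```

On these assumed numbers, each boost costs the anchor only about 2 m/s, which is the practical argument for a massive counterweight: the heavier the mass already in orbit, whether the ISS alone or the ISS with defunct satellites lashed to it, the less each payload transfer disturbs the system.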
Clearly, the imminent destruction of the ISS represents a telling decision of our modern ways, as well as a missed opportunity to preserve an invaluable historical artefact of human space exploration. A missed opportunity, also, to accelerate progress. By integrating retired structures into new systems, we construct a dynamic archive, a living tapestry showcasing our progress in space expansion and enabling exploration of new worlds.
Angelos Alfatzis
https://aeon.co//essays/space-junk-could-have-a-transcendent-purposeful-afterlife
https://images.aeonmedia…y=75&format=auto
Ageing and death
When loved ones are traumatically lost, bereaved families become accidental activists by turning grief into grievance
Today, I filled out our US Census form. The dead are not counted. My girl does not count. I suppose that’s why I must MAKE her count somehow. Now that I’ve failed my job of keeping her alive, of growing her up, my job is now to make her short life mean something. This is the work of the bereaved parent, I suppose. Our drive to keep going is really our desire to do our children’s unfinished work.I wrote these words almost 10 years to the day since my 17-year-old daughter was killed by a distracted driver. In the hours after her death, her stepfather and I began conceptualising an organisation in our daughter’s name. We felt strongly – almost instinctively – that we had to make some good from our tragedy, to make her life – and her death – count. Gracie was so much more than the victim of a fatal car crash but, undeniably, she was that, too. This fact carried a near-magnetic pull. Wasn’t it my job to prevent more senseless deaths like hers? A resounding yes pounding in my chest, more questions churned: how do I carry her story forward? How do I honour her? I wrestled mightily with ambivalence about what to do. I was wary about engaging in safe-driving advocacy because, if we went that route, our daughter’s memory would be reduced to that of a girl who died in a car crash. I also worried that repeatedly revisiting her death would keep me in a painfully dark and ugly place, compounding my daily suffering. Around this time, I attended a survivors’ panel organised by the Massachusetts Office of Victim Assistance. Each panellist offered suggestions for coping with the violent death of a loved one. One of the panellists – the brother of a murdered young man – paused during his remarks, sighed heavily, and said: ‘And whatever you do, don’t start another fucking foundation. Let the professionals do that. Your job is to take care of yourselves.’ His emphatic statement tapped right into my ambivalence. His words offer one way to think. The bereaved activist mother Mamie Till-Mobley shows us another: when her Black teen son Emmett was abducted, tortured and lynched in Mississippi, she insisted that his casket remain open so that ‘the world [can] see what they did to my baby.’ Her act is often credited with launching the civil rights movement in 1955. I thought about the tragically endless stream of current-day Till-Mobleys – all those Black mothers of children shot dead by the police who become, overnight, a temporary fascination, as they struggle to find words in a sea of microphones flanked by Ben Crump and Rev Al Sharpton. I wondered if, given the opportunity, these mothers would prefer to grieve privately, like me – buried under the duvet, scrolling through family videos and weeping. Is this instantaneous activist role their choice? Do they feel exploited, their private pain transformed into public spectacle? Doesn’t reliving the story of their tragedy compound their trauma? Are they weary of turning their pain into purpose? Would they rather someone ready their bed? I am a grieving parent. I am also a scholar of social movements, so I took my grief and my questions and, instead of starting that foundation, I did what I do. Together with my students, I collected data and used it to make sense of the who, why, what, when and how of grief-induced activism. While traumatic loss is ubiquitous, our productive engagement with its consequences lags far behind. 
There is little systematic investigation into how trauma shapes activism, even while activist efforts flourish in diverse areas, from gun law reform and police brutality to opioid addiction, suicide and COVID-19. I agree with the writer-activist Malkia Devich-Cyril who insists: ‘To have a movement that breathes, you must build a movement with the capacity to grieve.’ If it is true that grief is love with nowhere to go, then activism provides a place for that love. Accidental activism agitates for a better, changed tomorrow. But it is wider than that. It makes meaning and instils purpose. It knits people into communities. And it enables those living with a hole in their hearts to remain in relationship with the loved ones lost. By transforming their grief into grievance, they carve a sustainable path forward in the interminable wilderness of traumatic grief. Rather than letting go, bereaved activists are holding on, and they are challenging us all to sit with painful truths instead of running from them. To do my research, I sought out grief- and trauma-induced activists (borrowing from the moniker that the bereaved activist Kim Witczak claims for herself, ‘the accidental advocate’) – those driven to establish charities, organise fundraisers, and/or launch awareness campaigns to address the causes of death, all in their loved ones’ names. Jenny Stanley is among those interviewed. She lost her six-year-old daughter Sydney when Sydney climbed into the family car – parked in the driveway – by herself. Sydney could not get back out, and no one could find her until she was dead of heatstroke. Her mother has become a national authority on the fatal dangers of hot cars. My students and I spoke to people who, now, in the wake of their traumatic loss, became experts in hazing, partner violence, suicide, drug addiction, medical errors of various kinds, hospital-acquired infections, natural disasters, car and truck crashes, illness and pulmonary embolisms linked to hormonal birth-control use. This list – which I often recite – invariably generates head-shaking because it captures our worst fears come to life – and death. These losses are preventable and thus impossible – for most – to rationalise. Especially for the bereaved parents, they are also a direct affront to the natural order of things, so much so that we lack a common term for bereaved parent. Some of those interviewed are familiar, such as Candace Lightner, founder of Mothers Against Drunk Driving (MADD); Judy Shepard, mother of Matthew Shepard, and Susan Bro, mother of Heather Heyer (both victims of high-profile hate crimes); and Cindy Sheehan, who reignited the antiwar movement in the US in the early 2000s after her son Casey Sheehan was killed in the war in Iraq. Others are connected to events that continue to grab national attention, such as parents of children murdered in mass shootings in Uvalde, Texas in 2022; in Parkland, Florida in 2018; in Newtown, Connecticut in 2012; and in Aurora, Colorado in 2012. Others still are members of the #SayHerName Mothers Network, family of Black women and girls killed by police. The accidental activists long for a deeper cultural recognition of loss. Through their work, they make their grief legible in a culture that turns away from their pain. As activists, they can grieve out loud. Their activism is a potent means of survival. When others are pressing them to let go, they use their activism to hold on. 
The accidental activists taught me how they hold on in three interrelated ways. First, through their activism, they craft a purpose that gives shape and meaning to their lives. Second, their activism connects them with a supportive community of other bereaved people – a real lifeline. And third, as activists, they continue their role – as parent, sibling, partner – with their dead loved one. Their lessons also point us toward a new way of thinking about grief. Grief is not a problem to be solved, but rather an emotional reality that demands acknowledgement. As the psychiatrist Judith Herman asserts in Truth and Repair (2023), trauma is a social problem, not an individual one. It requires collective and sustained acknowledgement. While on a college service trip to a Haitian orphanage in 2010, Britney Gengel was killed in a catastrophic earthquake. Just hours before the quake hit, Britney texted her mom with her wish to someday return to Haiti and set up her own orphanage. Soon after Britney’s death, the Gengel family founded the Be Like Brit foundation, realising Britney’s vision. The orphanage is shaped in the letter B for Brit, best visible from the sky. In our interview, Britney’s dad, Len Gengel, shared:
For me, continuing the activism has given her death meaning. Without it, it feels completely senseless, I can make no sense of it. But with it, it helps me to make some sense of a devastating and senseless death. That if her death, and her life, can in any way help other people, and it has helped thousands and continues to help thousands. It really helps me with my grief. It’s about finding purpose and meaning. And if there can be meaning made from her death, it is much easier to hold.
Like Len says, if there can be meaning, it is much easier to hold. The accidental activists are crafting purpose, searching for meaning – what David Kessler (a bereaved parent and long-time collaborator of the celebrated grief expert Elisabeth Kübler-Ross) calls the ‘sixth stage of grief’. To be clear, they are not seeking a meaning for the death of their loved one, but rather a way to transform their pain into purpose, a process that restores much-needed control for the traumatised. As Nicole Hockley, the mother of one of the victims of the Sandy Hook Elementary school shooting in Newtown, Connecticut, explained: ‘What happened, the murder of my youngest son, was not something I could control. What happened after? That was my choice.’ Hockley, now a nationally recognised expert on school safety, chose to co-found the Sandy Hook Promise foundation to prevent school-based violence. She told me that, every morning when she rises, she kisses the urn that holds her son Dylan’s ashes. This daily act renews her purpose and enables her to live with the unrelenting pain of her son’s senseless murder. The accidental activists expressed frustration because their trauma and grief often went unacknowledged. They complained about family gatherings, for instance, where the person missing goes unnamed. They told tales of running into old friends who breezily asked: ‘What’s new? Everything good?’ without so much as a nod to the loss they are bearing. So, what’s a bereaved person to do? In short, find others who share your experience. Forging community means finding that ‘soft place to land’, a phrase I gratefully adopt from Sandy and Lonnie Phillips who founded Survivors Empowered after their daughter Jessica Redfield Ghawi was murdered in the 2012 mass shooting in a movie theatre in Aurora, Colorado. 
In 2014, Tanisha Anderson was in the midst of what family members call ‘one of her bad days’ when her brother called 911 for mental health support. When the police arrived, she panicked and ran into the street. The police cuffed and forced her on to the pavement outside her home in Cleveland. As an officer kneeled on her back, Tanisha died of a heart attack. Her death was determined a homicide. Tanisha’s mother Cassandra Johnson was a proud member of the #SayHerName Mothers Network until her own death in 2021. Cassandra helps us understand the healing power of the network, directly challenging the assumption that being encircled by others who also suffer compounds our misery. In a video posted on the #SayHerName website, through tears, she shared:
Oddly as it may sound, to be around these mothers … makes you feel better. Most people, they might say: ‘Oh I know how you feel.’ No, you don’t!! So, to be around mothers that really do … makes you feel a lot better, as oddly as it may sound.
We collected dozens of comments like Cassandra’s. For bereaved activists of colour, once they located their purpose, finding community was especially important. Activists described how they forged new connections through activism at fundraisers, legislative strategy sessions, lobbying days, press junkets and participation in issue-oriented conferences. It was in these spaces where they felt most at home or, as one bereaved activist put it, ‘where I could actually breathe’. Activism enabled them to connect with others who ‘got it’, and these linkages enabled them to keep going. The state of grieving is fertile ground for activism, but it needs community to sustain and grow. The greatest fear of the bereaved is that their dead loved ones will be forgotten. To resist this slow erasure, the activists assumed the role of caretaker of their loved ones’ legacy. Some, like the Gengel family, do this by continuing their unfinished projects or realising goals that their premature deaths prohibited. Many also work to control the narrative of their loved ones’ life and death, especially when the circumstances of the death are (unfairly) ripe for victim-blaming, such as in the case of police-involved murders or drug overdoses, especially when the victim is Black, Brown, poor, queer, mentally ill or otherwise socially marginalised. Activists will, for example, work to circulate images of their loved ones that challenge racist assumptions of ‘thugs’ and, instead, represent the deceased as respectable and productive members of the community. In some cases, surviving loved ones frame the death as a catalyst for change. As George Floyd’s young daughter Gianna shouted at a press event: ‘Daddy changed the world!’ Rae Ann Gruver travels the country speaking to fraternities and sororities about the dangers of hazing. She lost her son Max Gruver to a hazing incident in his first weeks of college at Louisiana State University in 2017. Unwavering, she exclaimed: ‘I want every member of every fraternity to know Max’s name.’ The impulse, yet again, is for the death to ‘mean something’. Through storytelling and advocacy, relational bonds are maintained. Throughout this work, the activists often speak of the dead in the present tense and endow them with the capacity to guide the work of the living. 
For example, Lori Alhadeff lost her teen daughter Alyssa when she and 16 others were shot dead at Marjory Stoneman Douglas High School in Parkland, Florida. Lori, now a school board member and founder of the NGO Make Our Schools Safe, declares: ‘Alyssa still, you know, is here making an impact and saving lives.’ Lori believes that her daughter is actually guiding her to make change. She told me: ‘Alyssa is pulling the strings.’ What’s more, for those left behind, especially bereaved parents, there is a persistent sense of obligation, amplified for those carrying regret. As Dru West, who lost her daughter Julia West Ross to a pulmonary embolism linked to birth control, and has since become a fierce health advocate, explained: ‘I am trying to do for others what I wish someone had done for my daughter.’ Importantly, they did not experience this responsibility as a burden, but rather as a means to maintain connection and, perhaps, make amends. The bereaved activists are clear they are not ready to end the relationship with their loved ones. Fundraising for a foundation, drafting legislation (typically named after the deceased) and speaking at rallies provide a platform where they can invoke their loved one in socially acceptable ways. In this context, they can still be ‘Gracie’s mom’ or ‘Woody’s wife’. Indeed, the bereaved activists challenge the very finality of death. But there may be downsides to accidental activism. Sybrina Fulton, the mother of Trayvon Martin, reflecting on her highly visible activism fighting for justice for her son, offered this in the documentary Rest in Power: The Trayvon Martin Story (2018): ‘Had the tragedy not been so public, I could have been given more time to grieve, but I was not given that privilege.’ The privilege to grieve? The notion of grief as anything but a right freely claimed and a healthy response to loss is perverse. But, in a grief-averse culture that offers little more to the grieving than sappy sympathy cards and a few weeks of casseroles delivered to their front door, activism seems an apt response. Fulton’s experience highlights how broken are our systems of support for the bereaved. We cannot forget that these activists are forged in the horrible crucible of their traumatic losses; they are transformed overnight, and they need support and acknowledgement of their daily struggle to survive. In 2016, Korryn Gaines, a 23-year-old hairstylist and the mother of two young children, was shot to death by Baltimore police after a nearly six-hour standoff in her home. Her five-year-old son, Kodi, was also shot but survived. Later, Korryn’s brother posted a memorial video to Twitter with the comment: ‘I miss her so much, but Im okay’ [sic]. Her mother, Rhanda Dormeus, re-Tweeted the video with the poignant rejoinder: ‘WE’RE NOT OK.’ These few words are a portal to the world of the bereaved who struggle against the culture of grief-aversion and a rush to closure. Increasingly, we pathologise grief, as seen in the latest edition of the DSM, known as the bible of psychiatry. It now includes a controversial new diagnosis: prolonged grief disorder. Yet there are consequences for our chronic discomfort with grief, even while trauma, paradoxically, gains traction in an ever-widening collection of narratives surrounding us. Traumatic grief is omnipresent in film, stand-up comedy, TV series, TED talks, music and writing of all sorts. 
It is so ubiquitous, writes Parul Sehgal in The New Yorker in 2022, that the ‘trauma plot’ has been reduced to a trope – reductive, stereotypic and distorted. This discomfort manifests in the rush to end grief, to ‘solve’ it. We call this ‘closure’, and it is, without a doubt, the dominant discourse of grieving. Yet the researcher of family stress and ambiguous loss Pauline Boss and the sociologist Nancy Berns both assert that chasing closure is counterproductive. Every one of the individuals interviewed for my study categorically – emphatically – refused the goal of closure. When asked what they thought about closure, they laughed, rolled their eyes, shook their heads or threw up their hands. West, whose daughter died from a hormonal contraceptive’s side-effect, put it best: ‘Closure is for cupboards.’ Then why do people assume we want it and need it? In short, others want us to find closure; others think that grief has an expiration date because our grief makes them uncomfortable. And as Nelba Márquez-Greene said on the 10-year anniversary of her daughter Ana Grace’s murder at Sandy Hook Elementary School: ‘I walk into a room and I still make people cry.’ So what do these grieving activists tell us? In short, we need to resist the cultural aversion to grief, and we need to stop pushing closure as the salve that will heal the broken heart. These moves are not repairs. Instead, they are invalidations of true suffering, and they block honest reckoning. When bad things happen, we struggle with what to do, what to say. People react with pity – which is a distancing move. Or with sympathy, which also puts space between us. In an RSA short video in 2010, the renowned speaker and author Brené Brown says: ‘Empathy fuels connection. Sympathy drives disconnection.’ Rarely, if ever, does an empathetic response begin with ‘at least’. ‘Silverlining it’ (Brown’s tongue-in-cheek term) does not help. And neither does putting our grief in a box on a shelf. Grief is powerful, necessary and enduring. When we embrace grief in our social movements, we inch closer to accountability and justice. Devich-Cyril’s question hits the mark: ‘What becomes possible when movements are brought more healthfully to grief, and what can we do to support leaders, organisations and movements to get there?’ For bereaved people of colour, the need is especially urgent. Let’s be real. For those lacking privilege, (not too) angry shouts capture attention, but quiet tears do not. Racism erases some stories of trauma, often rationalised by victim-blaming; at the same time it renders stories of Black and Brown death as sources of prurient entertainment. As Rhaisa Kameela Williams explains in ‘Towards a Theorization of Black Maternal Grief as Analytic’ (2016), historically speaking, Black maternal grief has been rendered illegible until it is transformed into the more culturally acceptable form of grievance. However, what bereaved families and friends – of all races – want is accountability. Banded together with others who experienced similar traumatic losses, they are emboldened to fight for the justice owed. At the same time, I will not forget the bereaved brother who advised against founding ‘another fucking foundation’. Is activism in fact a road that should be less travelled? Can we build an emotionally literate and more community-minded world where the bereaved are held and heard without having to do extraordinary things? 
Márquez-Greene’s exasperated Tweet really underlines this question. After the massacre in Uvalde, Texas, she wrote: ‘One of the biggest mistakes made in covering the uniquely United Statesian epidemic of gun violence in media is the demand for a tragedy-to-triumph narrative because the reality is too hard.’ So, what needs to change in the wake of tragedy? Frustratingly, I offer no clear prescriptions other than this: meet the grieving where they are at, especially when that place makes you uncomfortable and even fearful. And when you encounter accidental activists, remember that, even though they are carrying the grief, it does not mean it’s not heavy. Offer to help. Show up. Contribute. Learn, and pass on the lessons. What about me? I hesitate to call myself an accidental activist, certainly in comparison with the people I’ve met through my work. While we have made some modest efforts to raise awareness about distracted driving, our work honouring Gracie has focused on what brought her joy in her life – figure-skating and art. We set up a scholarship for skaters for a few years, and we have funded a youth public-art project – a set of middle- and high-school student-designed beautiful banners that line our main street every spring since 2016. Each banner includes the line: ‘Made possible by the family of Gracie James’, and when I see that, my feelings are complicated: pride, deep sorrow, longing and wonder: what would my forever 17-year-old daughter think of this visibility? And then I tell myself: if it helps me – in some small way – to stay connected to my girl, to ‘do something’ in her name, and to remind the people in my community that there was a beautiful girl who mattered, someone who counted, then that eases my suffering just a bit. But my ambivalence troubles me. Should we fund this project indefinitely? No matter what we do, Gracie is forever gone. Am I seeking comfort in the wrong place? But I should not have to translate my grief into grievance or public art or skating scholarships to be heard and seen. I was profoundly altered after her birth, and then even more profoundly after her death. Who is willing to witness my pain – without expectation or judgment or, worse, shrinking away? Who will see and hear yours? After all, grief is coming for us all.
Chris Bobel
https://aeon.co//essays/why-bereavement-turns-to-activism-in-a-grief-averse-culture
https://images.aeonmedia…y=75&format=auto
Race and ethnicity
Training is a cheap solution to a hard problem. It is the systems that allow for biased behaviour that need to change
On a Thursday afternoon in April 2018 in a Starbucks in downtown Philadelphia, police handcuffed two African American entrepreneurs, Rashon Nelson and Donte Robinson. A manager had reported them for waiting inside the coffeehouse while not having purchased anything. About a month later, on 29 May, Starbucks closed its 8,000 stores nationwide – at a cost of an estimated $16.7 million in sales – so that its 175,000 employees across the United States could participate in a four-hour ‘implicit bias’ training session that day. Implicit bias was once jargon that academic psychologists used to refer to people’s automatically activated thoughts and feelings toward certain groups rather than others. Now, it’s a buzzword that regularly appears in news articles and, occasionally, presidential debates. Implicit biases stand in contrast to explicit biases, people’s conscious or self-reported thoughts and feelings toward certain groups over others, such as when people overtly voice dislike toward Asian people. Implicit biases are more subtle. You can think of them as tiny stories that flicker in our minds when we see other people. A pharmacy employee might see a Black woman crouching on the floor and zipping up a bag, and immediately think she’s attempting to steal, as indeed happened in 2015 at a Shoppers Drug Mart in Toronto (which was later fined $8,000 for the discrimination). Or a border patrol officer might enforce an identity check upon Black citizens, thinking they pose a threat, as happened in the Netherlands in 2018; the Dutch appeal court this year ruled that unlawful. The concept of implicit bias has captivated social psychologists for decades because it answers a perennial question: why is it that, while most people espouse diversity, they still discriminate? And why is it that, though they say – and genuinely believe – they want equality, they behave in ways that favour some groups over others? Indeed, a research study with more than 600,000 participants demonstrated that, while white participants self-report relatively neutral explicit biases toward Black people, they still hold anti-Black implicit biases; another research study found that citizens of 34 countries implicitly associate men with science, more so than they do women. The assumption that drives implicit bias research, then, is that these biases, unchecked, can substantially influence thoughts and behaviours, even among well-meaning people. For instance, foreign-sounding names from minority job applicants’ résumés receive fewer call-backs for job interviews than equally qualified white counterparts; men dominate leadership positions in fields like medicine even when there is no shortage of women. So, implicit bias is a problem. What do most organisations do to solve it? Implicit bias training, sometimes known as ‘anti-bias training’ or ‘diversity training’, aims to reduce people’s implicit biases (how people think), and thereby presumably reduce discrimination (how people act). 
While the structure and content of these trainings can vary substantially, what typically happens is that, in one or two hours, an instructor provides attendees with a primer on implicit biases, explaining, for instance, the theory and evidence behind the concept; attendees then complete an Implicit Association Test (IAT), used to measure implicit biases, and reflect on their scores; and, finally, the instructor briefs attendees on ways to mitigate these biases (for instance, the National Institutes of Health’s online implicit bias training module suggests employees ‘be transparent’ and ‘create a welcoming environment’). These trainings have become a burgeoning industry: McKinsey & Company estimated in 2017 that implicit bias training costs US companies $8 billion annually. Scores of criticisms around these tests already exist online, but I can give you my sense of why they’re so ineffectual. I completed an ‘unconscious bias training’ module as part of a work orientation from my alma mater. (Note: unconscious bias and implicit bias are not actually the same.) After spending about 30 minutes watching three modules of content that were supposed to last 90 minutes (I fast-forwarded most of the videos), and completing the quizzes after each module, I was left feeling the same way as I did after going through a workplace orientation module: bored, exasperated, like I had wasted my time on another check-box exercise or diversity-posturing activity. I’m also an implicit bias researcher, and here’s what the scientific literature says about these trainings: they largely don’t work. There are three main reasons why. First, the trainings conflate implicit biases with unconscious biases; this risks delegitimising discrimination altogether by attributing biased behaviour to the unconscious, which releases people from responsibility. Second, it’s very difficult to change people’s implicit biases, especially because social environments tend to reinforce them. And third, even if we could change people’s implicit biases, it wouldn’t necessarily change their discriminatory behaviours. Here’s where I land: while trainings, at best, can help raise awareness of inequality, they should not take precedence over more meaningful courses of action, such as policy changes, that are more time intensive and costly but provide lasting changes. If organisations want to effect meaningful societal changes on discrimination, they should shift the focus away from implicit biases and toward changing systems that perpetuate biased behaviour. To understand all of this, it’s important to know how the common measurement tool for implicit biases – the IAT – works. (My lab is devoted to improving these kinds of tools.) The easiest way to understand what the test entails is to do one: the standard version measuring racial biases is publicly available through the website Project Implicit, a domain that houses IATs for a variety of topics (race, gender, sexual orientation). Otherwise, here’s a quick rundown. The IAT flashes on your screen two kinds of stimuli: faces, either of Black people or white people, and words, either good words (‘smile’, ‘honest’, ‘sincere’) or bad words (‘disaster’, ‘agony’, ‘hatred’). In some trials, you’re then asked to press ‘E’ on your keyboard if either a Black face or bad word is shown, and ‘I’ on your keyboard if either a white face or good word is shown. But here’s where it gets tricky: what’s associated with each key mixes up as you progress. 
If in earlier trials 'E' means Black or bad, it can now mean Black or good (and 'I' white or bad). Let's say that you're now slower to press 'E' when it pairs Black with good than when it pairs Black with bad. That could suggest you hold more negative implicit biases toward Black people compared with white people because you're slower to respond to Black when linked with good than with bad. (The 'compared with' is important here; the standard IAT evaluates one group relative to another.) At the end of the test, people receive their IAT test score, which tells them which group they have an 'automatic preference' for. This is the part that can incite shock or horror because, when people see that they hold an automatic preference toward white people, it might lead them to believe that, while they thought they preached equality, they were subconsciously biased the entire time. What some people get wrong, though, is that an automatic preference is not the same as an unconscious bias. Unconsciousness presumes an absence of awareness and thus conscious control. But an automatic preference doesn't necessarily require either of those qualities. It's like a habit, say nail-biting: you've associated stress with nail-biting so strongly that it doesn't take long for stress to trigger you to bite your nails, but that doesn't mean you're not aware of it, that you can't predict when it happens, or that you can't, with effort, stop it when it happens. Numerous studies have shown that people can be aware of their implicit biases. One 2014 study by the psychologist Adam Hahn and his colleagues shows that people can generally predict their own IAT scores with a high degree of accuracy. They found an average correlation of r = .65 between participants' predictions of their IAT scores and their actual IAT scores – a correlation that is typically considered large in psychological research; for instance, the heritability of IQ and education are also around that mark. If it were the case that people generally aren't aware or conscious of their implicit biases, they wouldn't be able to predict their IAT performance. Insofar as the IAT measures implicit biases, these biases are likely not unconscious. Unfortunately, this misunderstanding remains widespread. For instance, an article by Christine Ro on the BBC in 2021 uses 'implicit biases' and 'unconscious biases' synonymously, as does an article on the website of the Office of Diversity and Outreach at the University of California San Francisco, an article by David Robson in The Guardian in 2021, and an article by Francesca Gino and Katherine Coffman in the Harvard Business Review in 2021. To be clear, unconscious biases may exist, and just because someone might be aware of their implicit biases doesn't mean they're conscious of the effects of their biases on other people or that we can effectively control them. But here's why it's important not to conflate 'implicit bias' and 'unconscious bias': claiming that discrimination arises from the unconscious psychologises it, presents discrimination as an unintentional act rather than a preventable consequence – and thereby enables people to feel less morally culpable for discriminating. One study from 2019 demonstrates this experimentally. 
The social psychologist Natalie Daumeyer and her colleagues at Yale showed participants a fabricated article in which both Democratic and Republican doctors demonstrated bias, based on their own political ideology, toward patients who engaged in somewhat politicised health behaviours (say, gun ownership or marijuana use). In one condition, participants read that the doctors were somewhat aware that they were treating patients differently. In the other condition, bias was defined as unconscious bias – the 'attitudes or stereotypes that affect our understanding, actions, and decisions in ways that we are typically not aware of' – and participants read that the doctors had no conscious knowledge that they treated their patients differently based on their own political views. Finally, participants completed a questionnaire measuring whether the doctors should be held responsible and whether they deserved to be punished. What did they find? When the doctors were described as having no conscious knowledge of unfair treatment, participants rated them as less accountable, and less deserving of punishment, compared with when the doctors' behaviour was ascribed to conscious bias. Why the difference? Awareness signifies intentionality, and we generally pardon wrongdoers if their offence was accidental as opposed to intentional. This detail matters. If diversity practitioners perpetuate this notion that unconscious bias underlies daily acts of discrimination, they could reduce accountability toward perpetrators and prevent behaviour change. Even when implicit bias is conscious, it is notoriously hard to change. One study tested nine implicit bias interventions previously shown to reduce implicit biases, and found that the reductions subsided after several hours or, at best, several days. That's because, while biases might be an individual characteristic (similar to someone's personality type or temperament), they require people's social environment – work, family, political and technological circumstances, for instance – to make them accessible, as the social psychologist Keith Payne argues in 'The Bias of Crowds' (2017). If the environment does not change, the bias will return. To support this view, consider the fact that IATs generally measure individuals' implicit biases unreliably. In other words, the IAT score you receive today can differ from the IAT score you receive tomorrow. Psychometricians consider IATs 'noisy' measures: your scores can fluctuate depending on context, for instance, your mental state (tired, anxious), your physical surroundings (with friends, with colleagues), and what you were exposed to before doing the test (for instance, if you watched Barbie before doing the IAT, you might be primed to respond more positively to women in a gender-science IAT). So, trying to change individuals in order to shift their biases may be a futile exercise: since our social environment heavily influences our biases, short-term implicit bias interventions can hope to achieve only temporary effects before the environment re-instates our initial biases. It's one thing to know whether the IAT measures implicit biases. But how – if at all – do these biases relate to behaviour? This question has been studied thoroughly, with four meta-analyses (studies that compile and analyse other studies) synthesising the findings of hundreds of studies that largely use the IAT. 
They converge on a common finding: while implicit biases do demonstrate a reliable correlation with individual behaviour, this correlation is generally weak; that's why Project Implicit warns participants against using their IAT scores to diagnose anything meaningful about themselves. On the other hand, in line with the 'Bias of Crowds' model, aggregating the scores of many people taking the IAT can help us predict behaviour. The IAT poorly predicts the behaviour of one person, but what about taking the average IAT scores of an entire city or state, and correlating them with outcomes? One study, by the social psychologist Eric Hehman and his colleagues, provides some insight. They studied the implicit biases of more than 2 million residents across the US within their metro areas, and also drew from metro-area sociodemographic data using crowdsourced and fact-checked databases for measures like overall wealth, unemployment rate and overall crime levels. They found that, out of 14 variables, only one – greater anti-Black implicit bias among white residents of certain metro areas – significantly correlated with greater use of lethal force against Black people relative to the base rates of that metro area. For instance, metro areas in Wisconsin held higher anti-Black implicit bias on average, which correlated with higher use of lethal violence against Black people in those areas. These findings, in line with the 'Bias of Crowds' model, highlight that, whereas implicit biases aren't strongly associated with individual-level behaviour, implicit biases at a regional level can be strongly associated with regional-level behavioural outcomes, possibly because implicit biases reflect systemic, rather than personal, differences. Note, however, that most studies on the relationship between implicit biases and behaviour, including the study by Hehman and colleagues, are correlational. Even if we could change people's individual implicit biases, would that lead to a change in levels of discrimination? In other words, let's say implicit bias training successfully reduced individual police officers' implicit biases against Black people. Would that reduction in bias translate to them discriminating against a Black person less often? One meta-analysis looked at 63 randomised experiments that used an IAT and a behavioural measure; randomised experiments, unlike correlational studies, do allow us to infer some causation. Yet the meta-analysis just confirmed what others have found. Changes in measures like the IAT – at the individual level – do not relate to changes in individual behaviour toward other groups, demonstrating, again, that changing people's minds is unlikely to work. This finding shouldn't strike us as surprising given the gap between attitudes and behaviours that has been documented again and again. That gap usually follows a principle of correspondence: the extent to which an attitude predicts behaviour depends on how well the attitude matches the behaviour. For example, attitudes specific to organ donor registration ('How do you feel about registering yourself as an organ donor?') are better predictors of registration behaviours than general attitudes about organ donation ('In general, how do you feel about organ donation?'). 
IATs usually measure implicit biases toward broad groups, like Black people in comparison with white people, without more information about the context – what the person in question is doing or where they are. Furthermore, attitudes interact with context to predict behaviour. Most of us demonstrate a positive attitude toward exercise, for instance, but that doesn't mean we'll go to the gym this weekend: we don't feel motivated, the gym might be closed, or the weather rainy. In the same way, someone might show a negative implicit bias toward Asian people, but that doesn't mean they'll behave negatively toward an Asian person upon meeting one. A classic study in 1934 by the sociology professor Richard LaPiere at Stanford University illustrates this point: when he drove through the US with a Chinese couple, they stopped at more than 250 restaurants and hotels and were refused service only once. Several months later, the owners were surveyed on whether they would serve Chinese people and 92 per cent said they would not. Given all this, the question that emerges is: what can we really do? Here's what we don't need: more implicit bias trainings. In fact, as an implicit bias researcher, I think that organisations should de-centre, or do away with, the concept of implicit biases entirely. Implicit biases, as an empirical concept, are interesting and potentially valuable. But as a tool for diversity, equity and inclusion (DEI) pedagogy? The concept just confuses people and distracts from the actual problem. These trainings exist because they are cheap, easily scalable solutions that, from an optics standpoint, allow organisations to prop up an image that they care about DEI when the actions accompanying the imparted values are often vacuous. It's ironic, isn't it: the very notion of implicit biases stands on the discrepancy between values and actions, but the concept just perpetuates this problem. Before organisations preach the dangers of implicit biases, they should look at their hiring systems, policies and practices that actually discriminate against minorities by putting them at a disadvantage. Here's what I think: let's stop caring so much about how people think, and focus more on how people – and companies – behave. I'm partly inspired by the paper 'Stuck on Intergroup Attitudes: The Need to Shift Gears to Change Intergroup Behaviors' (2023) by the psychologist Markus Brauer. It argues that researchers and practitioners, rather than relying on interventions that change people's attitudes, should focus on interventions that directly target behaviour. For instance, rather than asking a hiring manager to participate in a workshop to change their attitudes toward women applicants, an organisation could instead set hiring criteria before seeing the applicant pool, to reduce the biasing effect of applicants' gender. Research shows that this approach has already been used successfully. Biases don't come from a vacuum: they're triggered by certain cues attached to people – the colour of someone's skin, their accent, or the clothes they wear. So, if we hide biasing information when it matters, we could also mitigate the effects of bias. Using hiring criteria is an obvious example, but behavioural science research reveals other creative ways to attenuate discrimination from the top down rather than the bottom up. For instance, besides concealing information, organisations can restructure the way they present choices to employees. 
In business, one common reason we don't see as many women becoming leaders is that the leadership selection process requires them to self-promote and self-nominate. Yet women who assert themselves can incur backlash for behaving in this counterstereotypical way, causing them to step back from competition. Here's where organisations can push back by leveraging a behavioural economics concept known as 'defaults': they can make self-nomination the default, something women would need to actively opt out of – and, if they don't opt out, they remain in the running for promotion. The management professor Joyce He at the University of California, Los Angeles and her colleagues demonstrated the efficacy of this intervention in their study in 2021. On the recruitment platform Upwork, they recruited 477 freelancers for a data-entry job. At one point, they gave the freelancers (who were unaware of the experiment) the ability to choose between two tasks: a standard data-scraping task, paid at $5 per hour base compensation with a $0.25 bonus commission, or a more advanced, higher-paying task, paid at $7.50 base compensation with a $1 bonus commission. The freelancers had to compete with other workers for the advanced task and, if they didn't win, they risked not earning any money at all. Here's where defaults come in: in the opt-in condition, freelancers were by default enrolled in the standard non-competitive task, with the option to opt in to the advanced task, whereas in the opt-out condition, the freelancers were by default enrolled in the advanced competitive task, with the option to opt out and take the standard task. They found a statistically significant gender gap between men and women freelancers in the opt-in condition (57 per cent of women versus 72.5 per cent of men chose to compete), whereas the gap was not statistically significant in the opt-out condition. To minimise biases and promote diversity and inclusion, we need to redesign biased processes to include more disadvantaged groups, rather than attempt to change people's minds. Still, I have two caveats. One is that structural 'behavioural interventions' are considered relatively low-hanging fruit compared with inclusive policies: policies that mitigate unequal wages between men and women, that increase access to paid parental leave, that reduce racial disparities, and that promote mentorship programmes for minorities – tackling the root causes of discrimination rather than symptoms. The other is that I don't think implicit bias training is useless, because, executed correctly (that is, using accurate science and emphasising behavioural strategies), it can be an effective awareness-building tool. And changing individual minds can catalyse structural changes. But I adamantly believe that these trainings should not exist until organisations try doing the structural work first. And here's another good thing about changing social structures: doing so can also shift individuals' biases – and at a large scale. For instance, changing legislation can also change biases within a populace. One of my previous colleagues at McGill University, the intergroup relations researcher Eugene Ofosu, asked whether same-sex marriage legalisation was associated with reduced anti-gay implicit biases across US states. His team studied US IAT scores between 2005 and 2016, and what they found was striking. 
While the implicit anti-gay bias for each state, on average, decreased at a steady rate before same-sex marriage legislation, these biases decreased at a sharper rate following legalisation, even after controlling for demographic variables such as participants’ age and gender, as well as state-level factors such as education and income. Legislation and policy don’t just tell us what to do, but what to think: they signal our social norms, the unwritten rules that define what’s acceptable and appropriate, that undergird our attitudes. Other studies also reinforce this point at an organisational level. Women working for companies perceived to have more gender-inclusive policies report more supportive interactions with their male colleagues, lower levels of workplace burnout, and a greater commitment to the organisation. Stop distributing implicit bias training as a cure-all. Stop with the meaningless virtue-signalling. Stop selling these trainings under the guise of research. I get it. Trainings are easy. They’re cost effective. But one-off solutions do not work, and implicit bias is not really the problem. Biased systems and structures that allow for biased behaviour are the problem. Real DEI requires rebuilding biased systems from the ground up. It takes time. It requires top-down, versus bottom-up, change. It requires real accountability and leadership. Don’t ask how people can change their biases to get at diversity, equity and inclusion; ask what organisations and institutions have done – in their hiring systems, their DEI policy, or otherwise – to embody these values and provide every group an equal opportunity at success.
Jeffrey To
https://aeon.co//essays/the-implicit-bias-problem-wont-be-solved-by-training-alone
https://images.aeonmedia…y=75&format=auto
Ethics
Ethical values can be both objective and knowable – torture really is wrong – yet not need any foundation outside themselves
Many academic fields can be said to ‘study morality’. Of these, the philosophical sub-discipline of normative ethics studies morality in what is arguably the least alienated way. Rather than focusing on how people and societies think and talk about morality, normative ethicists try to figure out which things are, simply, morally good or bad, and why. The philosophical sub-field of meta-ethics adopts, naturally, a ‘meta-’ perspective on the kinds of enquiry that normative ethicists engage in. It asks whether there are objectively correct answers to these questions about good or bad, or whether ethics is, rather, a realm of illusion or mere opinion. Most of my work in the past decade has been in meta-ethics. I believe that there are truths about what’s morally right and wrong. I believe that some of these truths are objective or, as they say in the literature, ‘stance-independent’. That is to say, it’s not my or our disapproval that makes torture morally wrong; torture is wrong because, to put it simply, it hurts people a lot. I believe that these objective moral truths are knowable, and that some people are better than others are at coming to know them. You can even call them ‘moral experts’ if you wish. Of course, not everyone agrees with all of that. Some are simply confused; they conflate ‘objective’ with ‘culturally universal’ or ‘innate’ or ‘subsumable under a few exceptionless principles’ or some such. But many people’s misgivings about moral objectivity are more clear headed and deeper. In particular, I find that some demur because they think that, for there to be moral truths, let alone objective, knowable ones, morality would have to have a kind of ‘foundation’ that, in their view, is nowhere to be found. Others, anxious to help, try to show that there’s a firm foundation or ultimate ground for morality after all. It’s my view that both sides of this conflict are off on the wrong foot. Morality is objective, but it neither requires nor admits of a foundation. It just kind of floats there, along with the evaluative realm more generally, unsupported by anything else. Parts of it can be explained by other parts, but the entirety of the web or network of good and evil is brute. Maybe you think that’s weird and even worthy of outright dismissal. I once thought the same thing. The purpose of this essay, which is based on my book Pragmatist Quietism: A Meta-Ethical System (2022), is to encourage you to start seeing this aspect of the world as I now see it. The first question we should ask is: what exactly is a ‘foundation’? We can get clearer on what a foundation is by querying whether a moral theory like utilitarianism might count as one. Utilitarianism says that actions are right to the extent, and only to the extent, that they promote overall wellbeing. So, is utilitarianism in the running for being a foundation for morality? Well, it certainly purports to explain a lot when it comes to right and wrong. Why give to the poor? Promotes wellbeing. Why not punch your neighbour in the face? Doesn’t promote wellbeing. Should the Bank of Canada raise interest rates this quarter? Not clear, because it’s not clear whether it promotes wellbeing. And so on, and so on. Nonetheless, utilitarianism is not what I have in mind by a ‘foundation’. This is not because utilitarianism is incorrect; it is because utilitarianism is a moral theory. But a foundation is not a moral theory. 
It’s the kind of thing that’s supposed to ground, or support, or justify, moral theories, and moral claims generally, without itself being a claim within the domain of morality. Here’s another way to think about it. Suppose that a moral sceptic were to declare, along with David Hume: ‘You cannot rationally infer an “ought” from an “is”!’ Now imagine that I replied: ‘Oh yes you can! Utilitarianism is true, and so, from the fact that an action promotes overall wellbeing, you can infer that it’s what you ought to do.’ I suspect that our sceptic would regard this response as unsatisfactory. ‘You can’t show that Hume was wrong about “ought” and “is” by just wheeling in some further “ought”,’ she might respond. ‘To show that the move from “is” to “ought” can be rational, you would need to step outside of “ought”-discourse entirely, and provide a…a…’ ‘And provide,’ I would finish the sceptic’s sentence, ‘what I’m calling a “foundation”.’ The right and the good have the feel of being supernatural, like ghosts and auras So a moral theory doesn’t count as a foundation. What would count? Here’s a possible candidate. One thing that philosophers of language try to do is to explain why terms and concepts refer to the things in the world that they do. Many of these theories of reference invoke the relation of causal regulation – regulation of our ‘tokening’ of the concept ‘cat’ or our use of the word ‘cat’, for instance, by the comings and goings of the long-tailed housepets that like to stretch out on the windowsill. Some philosophers have applied this theory of reference to moral terms and concepts, yielding a view on which a concept like ‘good’ refers to whichever property or cluster of properties causally regulates our employment of it. Anything that then had that property(-cluster) would therefore be good. Note that our starting point here is not a claim or theory that is, intuitively speaking, within the subject-matter of ethics. Rather we began with a theory of reference – something belonging to the philosophy of language – that purports to explain how terms and concepts across the board are anchored in the world. One might say that, in doing so, we gave ethics a foundation. Here is another theoretical move that might count as an attempt at offering a foundation for ethics. Many philosophers these days are leery about accepting the existence of objects, processes or properties that are outside the ‘natural’ order. This may seem to present a problem for ethics, because the right and the good have the feel of being supernatural, like ghosts and auras, rather than natural, like clams and carbon. But a few philosophers have suggested that this is too quick. There may be, in Philippa Foot’s words, ‘natural goodness’. Doctors speak of a well-functioning kidney, farmers of an underdeveloped calf, and nobody takes them to be dipping into the realm of, as they say, ‘woo’. And while some philosophers have expressed suspicion about so-called ‘teleological’ features like functions and ‘final ends’, others have argued that a closer look at scientific practice reveals their explanatory value. But if there is nothing problematic about goodness in the way of a heart, there should be nothing problematic about goodness in the way of a human being. On this, as it’s sometimes called, ‘neo-Aristotelian’ picture, then, ethical features are part of the natural world. 
What makes a semantic account like the causal theory of reference or a metaphysical view like neo-Aristotelian naturalism a candidate for being a foundation, while a theory like utilitarianism is not? They are capable of serving as foundations for ethics because, basically, they’re not ethics; they’re semantics – they’re about what words and concepts mean – or they’re metaphysics, cataloguing what sorts of things exist in the world. Utilitarianism, by contrast, is ethics, and ethics is no more capable of hoisting itself up by its own bootstraps than is anything else. I think we can go a little further, though. While a theory like utilitarianism offers a direct explanation – maybe a good one, maybe a bad one – of what is right or good or whatnot, our causal theory of reference does not. It offers a theory of what concepts and terms refer to, which has implications for which ethical claims are true, which in turn has implications for what’s right or good. But ultimately, it tells you about what things mean, while a theory like utilitarianism tells you what’s right. One indicator of the difference between the respective theories’ explanatory roles is the difference between them in terms of what we may call ‘domain generality’. Theories like ‘terms refer to the features that causally regulate their usage’ or ‘only things posited by the successful natural sciences exist’ have implications beyond ethics – into what ‘cat’ means, or about whether René Descartes’s postulated res cogitans exists — while utilitarianism is solely a theory of right and wrong, and that’s it. Now, if you were to go on the website formerly known as Twitter and search for ‘foundation morality’ or something similar, you’d turn up many threads about God or religion. So it’s worth asking: is God the kind of thing that people like me have in mind when we talk about a ‘foundation’? There’s much to be said on this matter, but on the face of it, no. If someone were to claim that an action is morally wrong if and only if God forbids it, I’d take this as an ordinary moral theory on a par with the claim that an action is morally wrong if and only if it fails to promote wellbeing. If utilitarianism isn’t the sort of thing that’s even eligible to be a foundation, then neither is this simple version of divine command theory. Now, to be sure, there are ways of beefing up divine command theory so that it might properly be regarded as a stab at a foundation – bringing in the metaphysics of ‘God’s nature’, for example. (It should be said: there are parallel ways of beefing up other normative ethical theories, too.) The only point I wish to make now is that ‘God commands X’ no more takes us ‘outside of ethics’ than ‘X maximises overall wellbeing’. The moral relevance of each one is up for dispute, and that dispute would take place in the arena of regular old first-order moral thinking, with the rest of the normative-ethical gladiators. So why is it so often thought that morality requires a foundation? It may seem difficult to explain a way of thinking that strikes one as so obviously correct. I, however, do not think it is correct, let alone obviously correct, and so let me try my hand. Basically, I suspect that many people think morality needs a foundation because they in some way or other assimilate the enquiry that gets called ‘normative ethics’ to ordinary factual enquiry, in which there do indeed seem to be foundations/explanations for the most argued-over claims. 
Whether or not you accept highfalutin philosophical positions like the principle of sufficient reason, my guess is that you would look askance at someone who said that it's going to snow tomorrow but then claimed that there was no explanation for that – that it's just a brute fact. But if that claim strains credulity, then the view on which ethics as a whole 'just floats there', as I put it, untethered from anything that might serve to explain it, is apt to strike you as downright absurd. Correlatively, the fundamental reason why I don't think that morality requires a foundation is that I deny that the relevant sorts of ethical disputes are akin to ordinary factual disputes. They have features that make it easy to be fooled into thinking otherwise, but in fact they're crucially different. More specifically, disputes that get called 'normative ethics' are most like disputes that many people have labelled 'merely verbal' or 'non-substantive'. A classic example comes from William James's book Pragmatism (1907). A man is chasing a squirrel around a tree. Is the man thereby going around the squirrel? One disputant says 'no', because the man is always behind the squirrel. Another says 'yes', because the man is first north of the squirrel, then west, then south, then east of it. The people in this dispute have different beliefs, to be sure; their conflict is not a conflict of desires or emotions. Still, there's a clear sense in which they're not really representing the world in different ways. The side you take in this dispute does not determine, either directly or indirectly by way of inference, the way you think any aspect of the world looks, smells, sounds, etc; nor would taking one side or the other of this dispute guide you to act in a way that achieves your aims, whatever these aims may be and whatever your powers may be. The belief, in other words, doesn't function in the way a representation like a map does. I think the debates that tend to get called 'normative ethical' are a lot like this. The way that the world will look, smell, sound, etc if utilitarianism is true is just the way it will look, smell, sound, etc if utilitarianism is false. Taking sides for or against utilitarianism does not help us to further our ultimate goals, whatever they may happen to be, in the way that a map does. Rather, it simply changes what our ultimate goals are. With that said, there are also some important differences between the 'utilitarianism' and 'squirrel' debates. I said that we sometimes call disputes like that about the squirrel 'merely verbal' or 'non-substantive'. We also sometimes say of them something like: 'You could say this, or you could say that. What's the point?' This is because not only is there no representational accuracy up for grabs in these debates; nothing else of value seems to be afforded by them either. They seem to be, again, pointless. Not so the majority of our debates about morality and politics. This is because such debates bear on our own and others' motivations, as well as on praise, blame, esteem and so forth in a way that debates like 'squirrel' seem not to. We might say that they are significant, but not substantive. Unlike 'squirrel', they matter. But then unlike ordinary factual disputes, the way that they matter is not by affording accurate representation of the world. 
It’s these connections with motivation and emotion that fool us into assimilating disputes about utilitarianism, or the ‘trolley problem’, or distributive justice, to ordinary factual disputes. Because they bear on what we do and how we feel, we do not reckon that we can simply ‘go either way’ on them in a willy-nilly fashion. We do not regard them as arbitrary, in other words, in the way that we regard ‘squirrel’. Nor do we think it’s acceptable to settle them by conceptual fiat, as we would settle disputes like ‘squirrel’. Here is what I mean by that. Were I to find myself embroiled in a discussion about whether the man is going around the squirrel, I would probably try to put a stop to it by saying: ‘Look, all I mean by “going around” is this…’. By contrast, suppose we were embroiled in a dispute about whether the media would be right to mothball a story in an attempt to ensure that a disfavoured candidate is not elected. Here I would not try to settle the dispute by saying, eg, ‘Look, by “right”, all I mean is “maximises overall wellbeing”…’ I’d see such a dispute as to be settled by argument, not by stipulation. And again, I think we can chalk up this difference to the fact that normative-ethical disputes, despite failing to afford representational significance just like ‘squirrel’, are significant in practical and affective ways that ‘squirrel’ is not. This all puts ‘normative-ethical’ disputes in a strange category, and makes it difficult to know what to say about them in terms of philosophical theory. I actually consider this an advantage, for it is manifestly not obvious what to say about truth and objectivity and knowledge when it comes to ethics! This is witnessed by the fact that some super-smart philosophers think that there are objective truths about ethics, some think ethics is bullshit along the lines of alchemy, some think ethical disputes are really conflicts of desire-like attitudes in disguise, and so on. Anyone who thought ethical disputes work in such a way that one theoretical interpretation is just utterly obvious and natural and easy to state would then owe us an explanation of how so many smart people could be getting it so terribly wrong at this late stage in intellectual history. And so, acknowledging that it is by no means obvious, here is my own theoretical interpretation. The reason why ethics neither requires nor admits of a foundation outside of itself is that, like ‘squirrel’ but unlike any ordinary factual disputes, the relevant kinds of ethical dispute are non-representational or, as I prefer to put it, fail to afford ‘representational value’. That is to say, one does not represent or mirror or copy the world in any robust sense that is worth caring about by coming to any conclusion rather than another pursuant to such a dispute. But the sorts of extra-ethical considerations drawn from metaphysics, semantics and so on that people typically call upon to serve as ‘foundations’ could be relevant to ethics only by bearing on which moral beliefs, if any, were good or bad in representational respects. They’re not ethically important in the ways that happiness, freedom, equality, dignity and other such things are. But since representational value and disvalue aren’t on the cards when it comes to normative-ethical disputes, these considerations regarding the metaphysics of moral properties, the sense and reference of moral terms and so on, are irrelevant to fundamental ethics. 
And so it would be a mistake to think, with so-called 'error theorists' or 'nihilists' about morality, that there are no such things as moral properties in the world, and so all attributions of rightness or wrongness are false. The world doesn't have to have these little moral doodads for things to be right or wrong; there just has to be happiness and unhappiness, freedom and tyranny, and so forth. It would be a mistake to think, with Elizabeth Anscombe in her influential paper 'Modern Moral Philosophy' (1958), that the moral 'ought' lacks sense, as it were, and so there is nothing that we morally ought to do. Whether something 'lacks sense' is a semantic matter, and semantics does not bear on normative ethics. It would bear on ethics only if it went towards determining the representational values of beliefs about ethics, but there are no such values at stake. As I said at the outset, my quarrel is not only with the sceptics. Someone who attempts to wring some positive moral conclusions out of claims in semantics (eg, about the sense or reference of moral terms) or metaphysics (eg, about what would best accomplish the reduction of morality to some cluster of suitably 'natural' properties) is making the same sort of basic error. They are treating normative-ethical enquiry as representational, even though it is not. But if neither side of a normative-ethical dispute is representing or 'mirroring' the world any more successfully than the other is, then why can't we 'go either way', as it seems we can in 'squirrel'? How can there be a truth of the matter, if there's no possibility of accurate or inaccurate representation in any robust sense? My basic answer is that what gives these normative-ethical debates the appearance of mattering – their conclusions' influence on motivation and affect – also makes it the case that they actually matter. There's value and significance up for grabs in these ethical disputes, then, but it's not value that inheres in representing the world in a robust sense. It's what I call 'specifically ethical value' – the value of doing the right thing for the right reason. And it's from this sort of value that I try to wring a kind of truth or correctness that's proprietary to ethics. Imagine a kind of advisor who's ideal in all non-moral respects – true beliefs about non-evaluative matters, perfect inferential abilities, etc. If we plug a particular moral belief into such an advisor, and she then advises us to do all and only right actions, then that belief counts as true in this proprietarily ethical sense, even though the belief does not 'picture' or 'mirror' the world. Note that my brief for ethical truth bottoms out in claims about 'specifically ethical' value, and that my argument for the irrelevance of metaphysics, semantics, etc to ethics bottoms out in claims about what I called 'representational' value. This might strike you as begging the question against the sceptic about evaluative truth and knowledge – in other words, as assuming at the outset just what I intend to demonstrate to such a sceptic. My rejoinder: yes, I do beg the question, but this, in itself, does not put me in bad company. 
Everyone who ventures a positive claim about some subject matter – the external world, induction, mathematical knowledge, what-have-you – rather than withholding judgment entirely, must at some point confront the so-called ‘Agrippan trilemma’: either posit certain facts as unexplained, or beg the question, or accept an infinite regress. If these are problems, they’re not problems for me specifically; they’re problems for anyone who thinks things. So I say that the true sin lies not in question-begging, but in failing to subsume aspects of the world within a more general vindicatory framework. For example, a theory of a priori knowledge that explains how knowledge of that very theory is possible might beg the question, but so long as it accounts for a priori knowledge in general – eg, of mathematics, logic and morality – and not just a priori knowledge of itself, it needn’t be problematic. A theory of accurate mental representation of the world that explains how our beliefs in that very theory accurately represent the world also begs the question, but this should not worry us insofar as it explains accurate mental representation across the board. These theories earn their keep by making sense of what would otherwise remain mysterious, and so it should not trouble us if they end up vindicating themselves in the process. I propose to attain a similar sort of explanatory unity by vindicating all claims and domains that are worthy of it – not just ethics, but everything from biochemistry to sports prognostication – fundamentally in terms of values, be these representational, specifically ethical, or other sorts of values. It is this values-first re-imagining of enquiry for which I reserve the label ‘pragmatism’. Pragmatism offers a way of making sense of ethical truth, objectivity and knowledge by ensconcing these within a more comprehensive world picture, but not in such a way that would count as providing a foundation for ethics in some allegedly more fundamental area of enquiry. What emerges is a free-floating evaluative sphere, coupled with an account of why this is not so odd or mysterious after all.
Andrew Sepielli
https://aeon.co//essays/ethical-values-can-be-both-objective-and-yet-without-a-foundation
https://images.aeonmedia…y=75&format=auto
Economics
It’s not in the interests of the ordinary person but it’s not a conspiracy either. A cashless society is a system run amok
Four centuries ago, a woman named Else Knutsdatter was executed in Vardø, a small coastal town in Norway. She was accused of having used witchcraft to raise an ocean storm that claimed the lives of 40 men. She wasn't the only one to fall victim to 17th-century folk who – in the absence of other explanations – could be convinced that disasters were conjured by malevolent sorcerers. Ninety others were executed for conspiring to produce the same storm. Today, we know that physics and atmospheric pressures produced those storms. So, in the realm of weather, we've moved to systemic thinking, where bad things don't need to be explained with reference to bad actors. When it comes to descriptions of politics and economics, the progress is not so unequivocal. Do bad things like climate change, conflict and corporate greed happen because powerful politicians and CEOs engineer them, or do they emerge from a vacuum of human agency – from the fact that nobody's actually in control? This is a question that confronts me in the campaign to protect the physical cash system against the digital takeover by Big Finance and Big Tech. For more than eight years, I've advocated for the protection and promotion of physical notes and coins. I wrote a book called Cloudmoney: Why the War on Cash Endangers Our Freedom (2023). In that book, I point out that the public has swallowed a false just-so story that says we are pining for a cashless society. All over the world, public and private sector leaders claim that 'our' desire for speed, convenience, scale and interconnection drives an inevitable digital transition. This is supposed to bring a 'frictionless' world of digital payment-fuelled commerce, done at the click of a button or scan of the iris. The message is: keep up or else face being left behind. The fact that so many leaders recite this script triggers some folks into thinking ulterior motives are guiding them, and it is true that the finance and tech sectors, for example, gain massively from the digitisation hype. Over the past few decades, they've launched various top-down attacks against the cash system, something I chronicle in my book. Physical cash is issued by governments (via central banks), whereas the units in your bank account are basically 'digital casino chips' issued by the likes of Barclays, HSBC and Santander. 'Cashless society' is a privatisation, in which power over payments is transferred to the banking sector. Every tap of a contactless card or Apple Pay triggers banks into moving these digital casino chips around for you. It gives them enormous power, revenue and data. They can share that data with governments but, more often than not, they're using it for their own purposes (such as passing it through AI models to decide whether you get access to things or not). By rejecting the story that cashless society is driven primarily from the bottom up, I sometimes get accused of being a conspiracy theorist. It's not hard to imagine the outlines of a 'conspiracy' when you look at who benefits most from payments privatisation. Not only are Visa, Mastercard and the banking sector big beneficiaries; the fixation on digitisation also extends the power of Amazon and other corporate behemoths that are moving beyond the internet into the physical world via smart devices and automated stores that plug into digital finance systems. It's a small jump to imagine how governments can piggyback on this digital enclosure to spy on us, or manipulate us. 
Angst about this creeping enclosure finds widespread expression on social media. In London, and other places where the use of cash has plummeted, it's turning up in the form of warning posters and pamphlets handed out by conscientious objectors against 'cashless' establishments. They warn against a looming digital takeover, but what they don't realise is that the powerful corporations leading this takeover are themselves led by a larger puppetmaster, and this 'puppetmaster of puppetmasters' is no conspiring group of elites. It's a system, and the dominant stories about digital progress are its ideology. Systemic thinking requires stretching out the mind to picture powerful but invisible forces. So, let's ease in through a simple thought experiment: imagine a million blindfolded people tied together, trying to find a direction to walk. They collectively form a system, but its interdependence is so complex that it's almost impossible for people to coordinate. This means they default to some lowest common denominator, vaguely stumbling in a direction without knowing why. This resembles how our global economic system works. We're all tied into complex webs of interdependency, and the system generates pressures that require it to expand and accelerate. Its logic demonstrates almost evolutionary properties, such that anyone who goes against its default tendencies hits a wall, while anyone who stumbles in the direction of its prevailing current doesn't. This may sound abstract, but we can see it clearly at work in the world with physical cash. For centuries, the capitalist system has been underpinned by nation-states that have fostered the growth of large firms. For a long time, cash helped that system to expand and accelerate. In the 1950s, corporates were more than happy to have adverts featuring people using cash to buy their products, but in the contemporary moment firms are turning against it. Cash is hard to automate. It cannot be plugged into globe-spanning digital infrastructures. It operates at human scale and speed within a system that increasingly demands inhuman scale and speed. It's creating 'friction' at a systemic level, so even if you like cash at a local level, you'll gradually find yourself coerced away from it. 'Coercion' in this situation doesn't mean a consortium of CEOs or politicians will force you to stop using cash. If you are tied into a system that contains processes beyond your control, then the system itself can just pull you along. Capitalism often operates on autopilot, with the players following a set formula to boost profits, and one part of that formula is to automate stuff. In 1759, Adam Smith introduced the metaphor of the 'invisible hand' to illustrate how all these movements, and these chains of interdependency, can be mapped. For example, Lloyds Bank, guided by shareholder demands for profits, shuts down physical branches to cut costs by pushing you on to automated apps. Having no branches makes it harder for small businesses to deposit cash, so they are nudged toward putting up signs saying 'We're cashless.' That then sends a message to customers that there's something newly unacceptable about cash. 
At the same time, people will notice that banks have shut down many ATMs, with the banks justifying this by saying their customers are 'going digital', but this creates a self-fulfilling prophecy because removing ATMs lowers public access to cash, making it harder to use. Lloyds and other banks then see the resulting up-tick in digital finance as implicit permission to close down further branches. What we have here are a series of feedback loops, all serving the prevailing systemic logic of expansion and acceleration. Cashless society, then, is not just a privatisation process, but also an automation process. Automated giants like Amazon in fact lack any infrastructure to process physical cash, and street-level shops are being drawn into this systemic recalibration. Hipster cafés in London have signs saying 'We've gone cashless'; what they are actually saying is 'We've joined an automation alliance with Big Finance, Big Tech, Visa and Mastercard. To interact with us you must interact with them.' The politics of the 'invisible hand' can be visualised as a pyramid, with a handful of powerful institutions at the top and the mass of ordinary people at the bottom. Where does power lie in this pyramid? Anyone who wishes to divert attention away from the top will likely claim that it resides in numbers, at the bottom. Appealing to legitimacy-from-below is a major tactic used by politicians, who present their governments as reflecting the will of the people, with industry following suit. Rather than admitting to their own interests, banks and fintech companies present the decline of cash as a bottom-up phenomenon driven by popular support. In this view, HSBC's decision to close ATMs must simply reflect the fact that ordinary people no longer care for cash; industry simply responds to our demands. Big firms turn to freemarket doctrine in these situations, which maintains that businesses survive only if they mould themselves to our needs. So the presence of thriving corporations can indicate only that they're serving us well. Left-wing thinkers reject this freemarket dogma, pointing out that some industries are powerful enough to effectively legislate the conditions of our lives. We all know that firms invest heavily in warping our perceptions via marketing, and often secure our consent only through tricks and misrepresentation. Left-wing calls for government regulation in turn compel freemarketeers to accuse them of stifling both popular will and business. Market conservatives paint a picture of consumers, workers and small entrepreneurs battling the clumsy state, while Lefties present workers, citizens and mom-and-pop shops fighting the corporate behemoths. Economic politics is all about painting these contrasting David-and-Goliath options. When it comes to money, though, the battle lines get more confusing, because the monetary system is a public-private hybrid. Physical cash is government money, but it has properties – like anonymity – that appeal to some anti-government libertarians. Privacy-invading card-payment systems, by contrast, have historically been run by the private sector, so those pro-business libertarians who are concerned by surveillance are forced to accuse banks of being phoney 'crony capitalists' collaborating with controlling governments. This collaboration can be seen in the case of the 2022 anti-vax 'Freedom Convoy' truckers, whose bank accounts were frozen by a Canadian government order. Libertarians rallied in support of the truckers, but there are many variations of these alliances between states and payments firms. 
For example, the US government agency USAID has funded programmes like Catalyst: Inclusive Cashless Payment Partnership, pushing Visa as a tool of empowerment in India. In its 2017 annual report, Visa talks about doubling its market penetration into India after it ‘worked closely’ with Narendra Modi’s government in its ‘demonetisation’ efforts in 2016, during which time certain banknotes were outlawed. The Indian prime minister’s open attacks on the public cash system also drew fawning praise from Indian digital-payments firms. It’s easy to get stuck in a binary of explaining cashless society as either a bottom-up phenomenon demanded by us, or a top-down enclosure pushed by power players. The reality is a more complex mix. Because at scale it’s cheaper to push billions of people through a handful of centralised players, almost every industry in the world is dominated by oligopolies of large firms. Those firms will inevitably build political connections, while smaller firms get relegated to the periphery. Oligopolistic firms fluctuate between collaboration and competition, but the evolutionary logic of our economic system is always towards greater automation. Corporate executives benefit if they nudge everyone in this direction, and they have a niggling insecurity that, if they don’t, competitors will leave them behind. The problem is that many people don’t love digital acceleration, and it takes a considerable effort over time to erode their resistance. This is why big retailers like Tesco start by tentatively testing cashless stores in certain locations to set a precedent. It took years for the airline industry to make it feel ‘normal’ to refuse cash, but that norm is still not universal. Even last year, I found myself seated next to a man on a flight who was humiliated and flustered when the attendants refused his banknote. The man wasn’t a frequent flyer and came from a working-class background, pointing toward an important fact: when a capitalist system is resetting to a state of higher speed and automation, it often does so first through social elites. In London, a hipster barber targeting yuppies may very well refuse cash, but a hair salon targeting working-class immigrants will almost certainly ‘still’ take it. Words like ‘still’ are loaded, because they imply that whoever is still doing the thing has yet to go through some evolutionary upgrade. Digital payments giants like Visa invest heavily in presenting ‘going cashless’ as a grassroots triumph for the small entrepreneur who wants to cut costs. In reality, this alliance between Big Finance/Big Tech and small and medium-sized enterprises applies only to businesses with middle-class customers. A decade ago, many of those customers didn’t even perceive cash as particularly inconvenient. Even now, they would prefer choice (the fact that I sometimes use my card doesn’t mean I asked a shop to remove its cash till). It’s businesses that remove our payments choice, but they rely on the fact that most middle-class people simply adapt their expectations and edit their memories to forget those old days when cash felt totally normal. Once new cultural norms are established, it compels compliance. Eventually, you get discriminated against if you insist on being that guy who complains that the London bar won’t accept your coins. The fact that people fall into line and begin displaying a preference for card payments is read by politicians as a signal to support the transition. They too are worried about being ‘left behind’. 
This pressure to go along with the transnational automation drive means that the average UK Labour Party politician doesn't challenge cashless society. Rather, they call for a slight slowdown in the imagined 'race' towards it, to give cash-dependent communities a chance to 'catch up'. So, capitalism has inherent trends, but it also has inherent contradictions. Here's one of them. Our cashless card payments rely upon 'digital casino chips' issued to us by banks, but – as anyone who has been to a casino knows – such chips have power only because you believe they can be redeemed for cash. In the total absence of cash, there could be a collapse in the public's belief in bank-issued digital money. Banks and corporates make private decisions that erode our cash infrastructure, but in doing so they are undermining the public basis of confidence in their private systems. This was accelerated by the outbreak of COVID-19, which gave companies a convenient cover to fast-track their automation plans. It's easier for a retailer to announce they don't accept cash because of COVID-19 than to admit that they're trying to shave a percent off their costs. For example, Visa entered a deal with the US National Football League to promote cashless Super Bowls. Signed in 2019 and piloted in 2020, it went public in 2021 during the pandemic, with attendant media coverage presenting it as a measure of public hygiene. Cashless pubs in London allow hundreds of unmasked people in their establishments while claiming to refuse cash to protect their employees from any coronavirus that may be stuck to the notes (a contention that is scientifically inaccurate). In 2020, such scaremongering, along with the fact that so many of us were forced into online shopping during the pandemic, caused a precipitous drop in transactional cash use. This raised the possibility of a financial stability problem, because cash psychologically (and legally) backs our cashless digital casino chips. This puts central banks in a bind. They know that the trajectory leads to a crisis-prone bank-dominated version of cashless society. So they think about how to maintain public access to government money without upsetting the transnational automation agenda. One way they are trying to resolve this is with a new form of 'digital cash' – central bank digital currency (CBDC). To understand CBDC, imagine being able to download a payments app on the iPhone App Store from your nation's central bank (like the US Federal Reserve or the Bank of England). Various countries have appointed teams to experiment with this hypothetical government payment system, but it creates a new problem. In a country like the UK, a state-issued digital pound would upset banking giants like Barclays, Lloyds and HSBC. They would rightly perceive it as competition to their own digital money empires. Given that central banks are supposed to maintain the stability of private banks, rather than directly compete with them, the Bank of England (and all other central banks) will have to make concessions: any future CBDC will be watered down to prevent disruption to the banking sector, and its operation will be outsourced to private partners… like the banks themselves. In 2015, I was one of the few people raising awareness of the dangers of cashless society from a Left-wing perspective. Then the pandemic hit, and a new generation of pro-cash activism emerged on the so-called populist Right. 
Libertarians seized upon early COVID-19 controls as evidence of a new era in totalitarianism. Social conservatives had already cast Big Tech firms as hives of ‘wokeness’. Conservative commentators began to weave these perspectives together. They presented themselves as rebellious champions protecting the everyman from an alliance of liberal corporate elites and authoritarian socialist governments. In May 2020, my mother was sent a video by her friend on Facebook. It claimed that Bill Gates had orchestrated COVID-19 to microchip us via vaccines and to usher in a cashless society where our every economic move could be monitored. Her friend was very excited to announce that ‘Your son is in this! You must be so proud.’ Sure enough, there was a clip of me (used without my permission), in which I was describing how financial institutions engage in a war on cash. It was followed by a clip of an evangelical pastor warning that ‘the Bible clearly links the mark of the beast with the emergence of a cashless society’. How is it that I end up in a video like this? Conspiracy theorists happily take my work out of context in order to push their version of events. Rather than analysing the logic of capitalism, many of them have decided that behind digital innovation-speak lie satanic overlords, paedophiles, Marxists, Jews or caricatured banksters smoking cigars. Ironically, it’s central banks’ response to the corporate attack on cash that has really spurred the new wave of pro-cash activism. The possibility of a state-controlled digital pound or digital euro replacing the battered cash system has galvanised the imagination of libertarian activists. Libertarians have always faced a tension when complaining about the surveillance that accompanies cashless society. This is because digital payment systems are pushed by private sector fintech entrepreneurs, and libertarians are supposed to be pro-entrepreneurialism. CBDC has enabled them to escape this bind. It allows them to rework the story of cashless society as being driven by an oppressive digital state. This mutated version of the cashless society story is now spreading virally. My dad recently forwarded me a video, which he received on WhatsApp, about the looming spectre of CBDC. The anonymous producers stitched together clips from libertarian activists, self-help gurus and even the populist UK politician Nigel Farage, all of whom cast CBDC as a new form of digital totalitarianism. They argued that this centralised digital money will be sold to us under the banner of convenience, but that the true agenda is to enable governments to micromanage us by controlling our payments. The conclusion? Say no to CBDC. Say yes to physical cash. They’re not wrong to point out the dangers of digital control, but their selective curation of the form and examples misrepresents why it is happening and how to oppose it. The cashless system is run by transnational corporations, and the actually existing examples of payments control often concern welfare recipients: for instance, the Australian ‘cashless welfare card’ was a Visa card system that blocked Indigenous Australians on benefits from buying non-approved goods in non-approved stores. These systems not only limit choice, but can be used to push people’s business to big retailers, rather than small ones. Farage and his contemporaries don’t focus on the payments censorship of Indigenous welfare recipients.
They fixate on conservative fears, like the hypothetical blocking of transactions for guns and meat. This is causing me problems, because moderate progressives – who previously would have expressed some concern about corporate power – have started associating a pro-cash stance with reactionaries, and with the broader suite of ideas that they espouse. In Germany, I’ve even been accused of being aligned with the neo-Nazi Reichsbürger movement, purely on the basis that they too are pro-cash. I’ve seen digital payments promoters use this disorientation to their advantage. They can suggest that critiques of their industry are the realm of crackpot antisemites. If conspiracy theorists are the ones leading the charge against digitisation, surely it must show the concern is built from the wild fantasies of paranoid flat-Earthers. Rather than fight cashless society, then, they suggest we should promote corporate financial inclusion: give a helping hand to all those people who have yet to be absorbed into Big Finance. Get them accounts. Help them become corporate consumers. Moderate progressives are often taken in by this story and, in backing away from the cashless society battle, they cede territory to the far Right. It’s an example of a trend in our post-pandemic moment, where the meeting of two sides of the political horseshoe has led to the spread of Right-wing ideas among people who previously considered themselves Leftists. The new Right has appropriated the rebellious language of Left-wing hacker culture, which pushed digital privacy for decades (for a pop-culture version of this, watch the TV series Mr. Robot, in which anti-capitalist hackers target the corporate giant ‘Evil Corp’). Top-down power has been re-ascribed to a generic blob of ‘globalists’, acting via institutions like the World Economic Forum (WEF), but anti-WEF campaigning was a standard part of Left-wing culture in the 1990s and ’00s. To Left-wingers, the WEF represented venal corporate capitalists, which is why the ‘alter-globalisation’ movement championed the World Social Forum as an alternative. In the midst of lockdowns, however, it was anti-mask and anti-vax campaigners who took on the aesthetics of Occupy Wall Street, holding street protests with placards warning about cashless society and digital ID. A surreal twilight zone has formed between the language of the old Left and that of the populist Right, and into it has stepped a character like Russell Brand. In 2013, he came out as an anti-corporate socialist and, back then, every Lefty activist I knew was clamouring to find his email address in the hope that he’d platform their cause. Fast-forward several years, and he renamed his podcast Stay Free, peppering it with libertarian language and topics that appeal to the Right. He presents himself as being on an open-minded search for the truth that the mainstream media won’t tell us, and it increasingly involves him having discussions with conservative edge-lords. In November 2022, he released an obligatory video about CBDCs, entitled ‘Oh Sh*t, It’s REALLY Happening’. Notably, no cashless establishments use CBDC, because it doesn’t exist yet. They all use the private sector digital payments system but, in choosing to focus on the fantasy version of cashless society, rather than the actual one, Brand signals that his allegiance lies with the Right wing. In the martial arts classic Kill Bill: Vol 2 (2004), the five-point palm exploding-heart technique is a precise sequence of five hits that cause an opponent’s heart to stop.
In the conspiracy world, the five-point punch of the globalists involves them hitting us with digital IDs, 5G technology, vaccines, COVID-19 passports and now CBDCs. This is supposed to trigger a global cardiac arrest called the ‘Great Reset’. The Great Reset is actually the name of a real programme convened by the WEF, in which they talk about the need for a post-pandemic digital and green transition. Those goals emerge from different sources because, while capitalism generates a digitisation agenda to speed things up, it doesn’t generate a conservation impulse to slow things down. Green transition rhetoric doesn’t emerge from market processes: it’s the result of decades of relentless campaigning from civil society groups, who pushed past the lobbying of the fossil fuel industry to showcase the economic risks of climate change. Big business and politicians now pay lip service to that. Nevertheless, they attempt to subordinate it to their automation fixation by proposing digital techno-fixes for climate change. This is a gift to our conspiracy theorists. They can now present CBDCs as being a future tool to force us to buy only low-carbon vegan sausages, under the control of Greta Thunberg and the Bank for International Settlements (a BIS video about CBDC is a favourite among them). Cashless society authentically sucks. It’s a world where your kid cannot sell lemonade on the side of the road without paying Mastercard executives in New York. It’s an attack on privacy, autonomy, local independence and casual informal interactions in favour of surveillance, dependence and centralisation of power in large institutions. I frequently interact with people who have very real concerns about it, but who – like our 17th-century folk who lost loved ones to a storm – have been steered into reactionary ideas about it. Our struggle to see large-scale systemic processes gives oxygen to conspiracy theorists. I frequently get asked to go on Right-wing media channels, such as GB News, to be interviewed by anti-woke libertarians or Christian evangelists. Many of them imagine capitalism to be the realm of the small individual, and present elites as being malevolent actors who attack the system from above. It’s an easy story to tell. But the reality is that elites are a by-product of our system. The invisible hand likes tapping the contactless card, regardless of whether you as an individual do, and the role of the elites in the war on cash is to simply unblock resistance to that. More often than not, they’re examples of Hannah Arendt’s banality of evil. They’re just people ‘doing their job’, serving a system that wants to commodify any aspect of our lives that remains un-commodified and un-automated. The dominant tendencies in capitalism pull upon all of us but it’s possible to demand space for other values. It’s been done before. There was a time when the automobile industry seemed ascendant, and bikes were pushed off the roads, but we built a cultural movement to demand bicycle lanes. That’s why we should see cash as being like the public bicycle of payments, and support efforts across the political spectrum to protect and promote it. Digital bank systems are the private Uber of payments: they may appear convenient, but total Uberisation unleashes demons that cash historically kept in check – surveillance, censorship, digital exclusion, and serious resilience and financial stability problems.
The point isn’t to argue that everyone must always use the ‘bicycle’. It’s to ensure that we don’t get totally ‘Uberised’ in private and public life. We need to promote a healthy balance of power between different forms of money in the system, and that’s within our collective political abilities.
Brett Scott
https://aeon.co//essays/going-cashless-is-a-bad-idea-but-its-not-a-conspiracy
https://images.aeonmedia…y=75&format=auto
Metaphysics
Being a twin (as our author knows) cracks open our ideas of the perfectly bounded self and might liberate us all
In Washington state in 2002, Lydia Fairchild nearly lost custody of her three children, when a test revealed that none of them shared her DNA. It turned out that Fairchild’s body was populated with cells from a non-identical twin she’d unknowingly had before birth, making her, in effect, the biological aunt of her own children. The technical term for Fairchild is a ‘human chimera’: a human being composed of cells that are genetically distinct. The phenomenon can happen artificially, through a transfusion or transplant, or naturally, as in Fairchild’s case, through the early absorption of a twin zygote. Only 100 cases of natural chimerism are documented, but there may be many more. Scientists estimate that 36 per cent of twin pregnancies involve a vanishing twin. Most such twins likely disappear without a trace, but some get partly absorbed into their neighbour. The survivor is unlikely to learn of their lost sibling’s genetic presence, unless an unrelated test or procedure inadvertently reveals it. Go in for a routine cheek swab, come out with a twin. Many find the idea of unknowingly carrying the vestiges of their twin unsettling. One person I told about Fairchild instantly burst into tears. I’m less perturbed by it, likely because I’ve known I have a twin for decades. My own twin Julia survived our joint gestation (rather than me, what, eating her? Gross!) If I find out I’ve got another one in there somewhere, it won’t be my first rodeo. What mainly interests me about human chimeras are the philosophical, not the personal, implications. What should we say, metaphysically, about Fairchild and her ilk? Journalists reporting on Fairchild’s case didn’t quite know what to make of it. ‘She’s her own twin,’ proclaimed ABC News. ‘The many yous in you,’ intoned Ed Yong in National Geographic. ‘A Guide to Becoming Two People at Once,’ wrote Maia Mulko in Interesting Engineering in 2021. Such headlines are clickbait because they challenge a standard presumption of modern Western culture, so basic as to go unstated. Westerners generally think that each person is physically discrete, cleanly distinguished from all other people by their location, solo, within an unbroken continuum of skin. Actually, though, human chimeras leave this assumption intact. Fairchild isn’t two people in one, because the mere presence of human DNA doesn’t indicate the presence of a person. Any stray hair you leave on your pillow overnight is biologically human, but that doesn’t mean that, every time you shed hair, you’re multiplying the number of people in the room. Personhood requires something more than a particular type of genetic material: it arises only with the larger-scale structural organisation of that material, which permits capacities like consciousness, thought and moral agency. At the macro level that matters for personhood, Fairchild is a singleton. Still, the one-person-per-body assumption is worth questioning, and there’s a much more convincing example of its violation at hand. Conjoined twins, unlike chimeras, contain only one genetic cell line. But (when two heads are present) they overwhelmingly consider themselves to be two unique, distinct beings, despite sharing a body. It’s typical for them to speak of themselves as individuals, and to develop a personality and tastes different from each other’s. Their families and friends, too, think of them as two people who just happen to be physically attached. 
The case of conjoined twins reveals the falsity of the assumption that bodies correlate one-to-one with people. Recognising this has large implications. If one body can contain two people, why couldn’t one person be spread across two bodies? Why couldn’t that person be me, or you? Singletons are always implying that twins aren’t fully distinct people, but rather a single person, split or duplicated. Antonio asks of Sebastian and Viola in Twelfth Night: ‘How have you made division of yourself? An apple, cleft in two, is not more twin than these two creatures.’ The Nuer people of South Sudan don’t hold a ceremony when one twin dies, because they believe the deceased lives on in their surviving twin. And any pair of twins you know will have tales of being given a single birthday present to share, or being referred to as ‘the twins’ instead of by their individual names, and being treated as essentially interchangeable by teachers, friends or relations. For much of my life, I’ve vigorously resisted this attitude. Sure, there are various ways in which one twin can be a stand-in, stunt double, accessory or control for the other. Julia and I never switched classrooms or sexual partners (a twin rumour that’s mainly fake news), but we once startled a customer of the bookstore chain we both worked at, when I sent him to her branch after he’d called in at mine, and I appeared to be waiting for him when he arrived at the other store. As kids and teens, Julia and I were pros at pooling resources, whether of the mental or material kind. We collaborated on creative projects, studied for exams together, and each saw our wardrobe magically expand when the other bought clothes. I outsourced many life experiments to Julia, my bolder counterpart: she tried out driving, sex and spinal surgery first, and her dalliance with peroxide helpfully took blondeness off the table for us both for the rest of our lives. But actual metaphysical merger? No way, I used to think. Julia and I have distinct personalities: she’s the assertive extrovert, Susan in The Parent Trap (1961); I’m Sharon, the amenable introvert, chiefly enthused about books and my cat. We now live independent lives in different countries, 19 airplane hours from each other. I don’t have access to Julia’s calendar, let alone her thoughts; when someone steps on her foot, I don’t feel it. If there’s any basis for thinking we’re one person, I’ve always assumed, it must be some incoherent or mystical conception of personhood that it’d be not only unprofitable but uncharitable to examine. Still, I’ve been thinking more about twins recently, and I’m no longer so sure about that. There now seem to me at least three ways in which twins can genuinely function as a single person. First, twins can share a mind. I’m not referring to telepathy here, which is a dubious matter of extra-sensory communication between minds. Instead, I’m referring to twins using each other’s minds – or, maybe better, using their own mind but outside the skull we normally associate with them. In their paper ‘The Extended Mind’ (1998), the philosophers Andy Clark and David Chalmers argued that, to identify something as an instance of thought, we simply need to identify a process that plays the functional role that thinking does. It doesn’t matter where the process is.
For instance, if your use of your phone’s calculator plays the same role for you as your tallying up the numbers internally does, we should see both acts as forms of thinking and, provided that your phone is deeply and reliably enmeshed in your life, it and your brain should be classed as a single cognitive system. If your mind can extend to an inanimate object, why not also an animate person? Some empirical work in social psychology supports the idea. Daniel Wegner’s studies of what he terms transactive memory explore how couples or groups use each other as repositories of distinct forms of information, allowing each to recall more than they would singly. Couples also ‘cross-cue’ each other, remembering in tandem by throwing prompts back and forth till they trigger each other’s recollections – ‘in a sense,’ as Clive Thompson suggested in Slate, ‘Googling each other.’ Any couple could think jointly this way, but close twins are surely among the world’s best instances. Julia and I did practically everything together till I left the country at 21: we attended the same schools, were interested in the same subjects, lived with our parents through college, had many of the same friends, and took all our vacations together. My memory for detail is embarrassingly bad, so it’s handy to have Julia at hand to recall all this for me. I trust her memories of our distant past as much as, if not more than, my own, and when I’m dredging up the more recalcitrant secrets of my personal history, it doesn’t feel all that different from asking her to do it instead. A second way in which twins can share personhood is by acting as a plural agent. Philosophers have spelled out the concept of plural agency in different ways, but according to Bennett Helm’s account, what’s crucial is that two or more people have genuinely joint concerns and values. They recognise a set of common aims, commit to acting as a group to pursue them, and care about the group itself, as an aspect of their own agency. In this way, they create and act from a new, unified entity alongside their own individual selves. Twins are a compelling example of a plural agent, if anyone is. As Laura Spinney wrote on twins in Aeon, ‘in the best instances’ they possess ‘absolute mutual trust, a highly developed theory of the other’s mind, and an ability to work together that surpasses that of any other human dyad.’ Julia and I were like this as kids, in a way that probably made our singleton friends envious. I could count on my twin to enthusiastically sign up for any plan I proposed, whether it was co-creating a novel (I wrote; Julia illustrated), throwing a party (‘sea-themed – in a lighthouse!!’), or making someone cool like us (who can resist the seductive power of twins?) We executed our various missions jointly, with almost no friction. It was like having an extra jetpack strapped to your will. Finally, twins can share not only cognition and action, but also an identity. People who regularly form a plural agent in important and extensive areas of their lives come to deeply identify with one another, and their relationship becomes central to who they each individually are. This is likely what Aristotle had in mind when he referred to a close friend as ‘another self’, and it explains why the death of an intimate can cause such deep mourning. In losing a dear friend, you’ve lost the plural person you formed together.
If you acted as that person in wide and deep domains of your life, it’s not purely metaphorical to say that part of your own self has been ripped from your chest. Not all twins get along but, when they do, the bond they share is special. When one twin dies, the surviving twin’s score on the Grief Experience Inventory is, on average, the highest on the planet. I read once about a conference held for twins that included a session on grieving a lost twin. Apparently, not one of the many conference attendees turned up to that one: they couldn’t stand it. I told Julia about this, and she just nodded. We wouldn’t go either. I now think that these three phenomena – the sharing of cognition, agency and identity – support the idea that (close) twins share personhood to a significant degree. But I still resist the suggestion that Julia and I are simply the very same person. That would imply some pretty wild things: for instance, that if Julia committed a crime, there’d be no moral difference between punishing her for it and punishing me; that I’m her kid’s mother, rather than her kid’s aunt; and that whoever I’m dating, she’s dating, too. Pure chaos! How can I reconcile my sense that my self is both separate from Julia’s and shared with her? Lately I’ve been thinking that the problem comes from seeing personhood as unitary and static. What if it’s dynamic and discontinuous instead? What if a person isn’t only something you are, but also something you do? Since what you do varies over time, you could then move in and out of shared personhood with another person, at different times and in different domains of life, and to different degrees, depending on how you’re interacting with them. My life bears out this picture of twins dipping in and out of shared personhood over time. Julia and I haven’t lived in the same country for more than two decades, and the occasions when the border between us seems to blur are rarer now than when we led our daily lives alongside each other and in concert. But those experiences of merger still arise, usually when we spend an extended amount of time together on vacation. In one striking recent instance, when Julia and I were both pressed for time, I found myself absent-mindedly offering to go to the restroom to pee on her behalf. Do I really think I share a bladder with my twin? No. Do I think I share personhood with her? These days, I’m giving a qualified yes. When people in Western culture imply that twins are one person, what they often seem to mean is that twins are less than one person: that neither I nor Julia, for example, achieves full personhood by virtue of our overly close enmeshment with each other. ‘It’s high time you quit being twins and began being people,’ says one sister’s boyfriend in the teen romance novel Double Trouble (1964). ‘Separate people.’ As if those two things were equivalent. Twins understandably react poorly to the suggestion that they’re less than full people, since a powerful set of norms tells us that only full people can be moral agents, rights-bearing citizens, and beings of dignity and worth. Being half a person, we assume, is like being no person at all. But what if the underlying premise that full personhood requires closely guarded separateness from others is wrong? It’s only relatively recently in our species that the best life has been portrayed as one of self-governed individual action, free of the influence and demands of others. 
For most of the human past, across most of the planet, personhood has been grounded in social relationships. Who you are has been seen as a function of how you fit into an interdependent network of kin and communal relations. Twins who share personhood can be seen as a problematic throwback to this benighted past. (‘Boundaries!’ we all scream.) Or such twins can be seen as a vivid reminder of the truth and beauty of the older picture. We don’t really need chimeras or twins to reveal the deeply relational nature of our species. The experience of merged personhood is common in many other types of close couples. New parents speak of the startling sensation of having a part of themselves exist outside their body: their infant, a piece of their actual heart, sleeping quietly in the next room. Frank Sinatra croons to his lover: ‘I’ve got you under my skin … so deep in my heart that you’re really a part of me.’ Michel de Montaigne wrote, after his best friend’s death, that he’d become ‘so formed and accustomed to being a second self everywhere that only half of me seems to be alive now.’ We can take all of this figuratively – as a poetic expression of strong feeling – or we can treat it as a literal and defensible metaphysical stance. After all, once we twins have embraced breaking the body barrier, what’s stopping singletons from doing it, too? What makes you so sure that all of you is contained within that single envelope of skin?
Helena de Bres
https://aeon.co//essays/being-a-twin-helpfully-cracks-open-our-ideas-of-individuality
https://images.aeonmedia…y=75&format=auto
Philosophy of mind
Consciousness science should move past a focus on complex mammalian brains to study the behaviour of ‘simpler’ animals
Twenty-five years ago, the burgeoning science of consciousness studies was rife with promise. With cutting-edge neuroimaging tools leading to new research programmes, the neuroscientist Christof Koch was so optimistic, he bet a case of wine that we’d uncover its secrets by now. The philosopher David Chalmers had serious doubts, because consciousness research is, to put it mildly, difficult. Even what Chalmers called the easy problem of consciousness is hard, and that’s what the bet was about – whether we would uncover the neural structures involved in conscious experience. So, he took the bet. This summer, with much fanfare and media attention, Koch handed Chalmers a case of wine in front of an audience of 800 scholars. The science journal Nature kept score: philosopher 1, neuroscientist 0. What went wrong? It isn’t that the past 25 years of consciousness studies haven’t been productive. The field has been incredibly rich, with discoveries and applications that seem one step from science fiction. The problem is that, even with all these discoveries, we still haven’t identified any neural correlates of consciousness. That’s why Koch lost the bet. If the easy problem is this hard, what does that make the ‘hard problem’? Chalmers described the hard problem of consciousness as understanding why material beings like us have experience at all. Solving the hard problem would give us a secure theory of consciousness that explains the nature of conscious experience. Philosophers and scientists alike want to solve the hard problem, and to do so many are focusing on the easy problem. But all that attention is making the hard problem harder than it needs to be. We might enjoy a hard puzzle but abhor a puzzle with pieces missing. Today’s consciousness science has more pieces than it did 25 years ago. But there is reason to think that key pieces are still missing, turning an intellectual puzzle into an intractable problem. To see why, we have to revisit the assumptions that launched the field of consciousness research. Only eight years before Koch and Chalmers made their bet, there wasn’t exactly a unified field of consciousness studies. A few scientists advocated studying animal consciousness, and while there was research on blindsight, amnesia and people with split-brains, these research programmes were largely independent from one another. Calls to study consciousness from within some sciences were met with scepticism and derision. For example, the ethologist Donald Griffin wrote four books advocating for the study of animal consciousness, starting with The Question of Animal Awareness (1976). Though Griffin was a highly respected scientist who had co-discovered echolocation in bats, he didn’t have much success promoting the study of consciousness in his field. Students were warned away from the topic, with one comparative cognition textbook deriding attention to animal consciousness, since ‘It seems positively foolhardy for an animal psychologist to blunder in where even philosophers fear to tread.’ For many, consciousness was a taboo subject, much like other fanciful questions about artificial intelligence, psychedelics or alien life (all of which are also enjoying scientific attention these days, interestingly enough). Arguably, it was Koch who helped turn consciousness studies into a real science with the publication of ‘Towards a Neurobiological Theory of Consciousness (1990). 
This paper was coauthored with Francis Crick, who comes with about as much scientific prestige as you could ask for – after all, Crick won the Nobel Prize in 1962 for his role in the discovery of the structure of DNA. The Crick and Koch manifesto had an enormous impact on the development of this new science, setting the stage for how it should proceed: We shall assume that some species of animals, and in particular the higher mammals, possess some of the essential features of consciousness, but not necessarily all. For this reason, appropriate experiments on such animals may be relevant to finding the mechanisms underlying consciousness … We consider that it is not profitable at this stage to argue about whether ‘lower’ animals, such as octopus, Drosophila or nematodes, are conscious. It is probable, though, that consciousness correlates to some extent with the degree of complexity of any nervous system. By supposing that ‘higher mammals’ possess some essential features of consciousness, Crick and Koch took up Griffin’s call to study consciousness in animals. By taking this courageous approach, Crick and Koch put aside the still-common Cartesian view that language is needed for conscious experience: [A] language system (of the type found in humans) is not essential for consciousness. That is, one can have the key features of consciousness without language. This is not to say that language may not enrich consciousness considerably. By rejecting the language-centrism of the day, Crick and Koch were giving scientists more puzzle pieces to work with. Specifically, they suggested that scientists focus on a capacity that humans share with ‘higher animals’ – vision. The reasons they give for this choice are pragmatic, but also explicitly anthropocentric and theory driven: At this point we propose to make a somewhat arbitrary personal choice. Since we hypothesise that there is a basic mechanism for consciousness that is rather similar in different parts of the brain (and, in particular, in different parts of the neocortex), we propose that the visual system is the most favourable for an initial experimental approach … Unlike language, it is fairly similar in man and the higher primates. There is already much experimental work on it, both by psychophysicists and by neuroscientists. Moreover we believe it will be easier to study than the more complex aspects of consciousness associated with self-awareness. Reading the Crick and Koch manifesto today is almost eerie, given how well it predicted the next 33 years of consciousness studies with its focus on vision in mammals. In jumpstarting the field of consciousness studies, Crick and Koch designated the range of acceptable research subjects and research questions. Their idea was that we can’t search for consciousness without relying on consciousness as we know it, and consciousness as we know it is human consciousness. The so-called ‘higher mammals’ are animals like us, social primates who rely heavily on vision to engage with the world. What had been set aside is that animals quite unlike us also use vision. The so-called ‘lower mammals’ also have eyes, since all mammals do. The same goes for birds and most reptiles and fish, with only some blind cave fish who lost the ability. But it isn’t just in these familiar species where we find eyes. The box jellyfish has 24 eyes, with four different types specialised for different tasks.
Scallops have around 200 eyes of the same type, which include pupils that can dilate and two retinas. When the study of consciousness is grounded in the study of human-like vision, it makes the field of consciousness studies unapologetically anthropocentric, discounting animal models that might be key puzzle pieces. More importantly, it also makes the field conspicuously neurocentric. By including only ‘higher mammals’ in the study of consciousness, Crick and Koch replaced the language-centric views of consciousness with a neurocentric one. Now it isn’t language that is presumed necessary for consciousness, but a nervous system. The theory behind Crick and Koch’s proposal presumes that there are similar neural mechanisms for consciousness across different regions of the human brain and, since some animals have neural systems that are similar to some of our neural systems, we can study the brains of those animals – animals like us. If we are committed to the idea that complex brains are needed for consciousness, we wouldn’t profitably study scallops, who don’t even have a brain, or jellyfish, who have a small net of approximately 10,000 neurons. The Chalmers-Koch bet was framed within this commitment, which is why it was over whether the science would discover the neural correlates of consciousness. While the last decades of research under this approach failed to support a particular theory of consciousness, the neuroscientific research did pay off in a very different, and surprising, respect – it was used to identify other conscious animals. In 2012, scientists held a conference memorialising the research of Crick, who had died eight years earlier. Here they publicly proclaimed the Cambridge Declaration on Consciousness, stating that there is sufficient evidence to conclude that ‘all mammals and birds, and many other creatures, including octopuses’ experience conscious states, and that: The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviours. The Declaration uses the term ‘substrates of conscious states’, which implies that there have been established discoveries about the source of consciousness – an answer to the easy problem. But, as the outcome of the bet made clear, we don’t have a secure theory. Instead, the Declaration identifies new consciousness markers, features that offer evidence that the system is conscious. In everyday life, humans rely on markers such as goal-oriented behaviour, communicative interaction and emotional expression when we see other humans as conscious agents. We also rely on these sorts of markers when we see our pets (and other ‘higher mammals’) as conscious. These ordinary markers help us interpret behaviour, seeing it as the result of actors’ desires and informational states, and they help us explain why individuals do what they do. Prescientific markers lead me to think that my dog Riddle enjoys going for walks, because he gets excited when I pick up the leash. They also lead me to think that he’s pretty attached to me, because if someone else in the family gets his leash, he looks nervously at me, worried that I won’t be accompanying the party.
The Declaration points to five consciousness markers that are the results of scientific research: homologous brain circuits; artificial stimulation of brain regions causing similar behaviours and emotional expressions in humans and other animals; neural circuits supporting behavioural/electrophysiological states of attentiveness, sleep and decision-making; mirror self-recognition; and similar impacts of hallucinogenic drugs across species. All five markers are derived markers – the outputs of scientific studies on humans and the higher mammals. The authors of the Declaration took having some of these markers as sufficient evidence for consciousness. For octopuses, their neurophysiology was deemed sufficiently complex to conclude that they are likely conscious, even though they haven’t demonstrated mirror self-recognition. Mirror self-recognition is a purely behavioural marker: it makes no assumptions about how the behaviour is supported physiologically. You can pass the mirror test by touching or rubbing a mark that was surreptitiously made on your body. Children pass this test around 18 months. Great apes, dolphins, cleaner fish, magpies, Asian elephants and, most recently, ghost-crabs have passed this test too. But mirror self-recognition is just one marker, and the other markers emphasise neurophysiology, reflecting the neurocentrism proposed by Crick and Koch. Passing a behavioural test can offer some evidence of consciousness but, for the Declaration, the robust evidence comes from having the right kind of neuroanatomical, neurochemical and neurophysiological properties. It’s this emphasis on the neurological that may be holding the science back. Similarity to human physiology can support conclusions that other animals are conscious, but we shouldn’t take our physiology to be necessary for consciousness. By engaging in animal research in the first place, researchers are already endorsing multiple realisability – the view that mental capacities can be instantiated by very different physical systems. When we look only at slightly different physical systems, we may be overlooking the key piece to the consciousness puzzle. The anthropocentrism in Crick and Koch’s original proposal, perhaps surprisingly, led to new conclusions about other conscious animals. This shift away from humans might be seen as an invitation for scientists to profitably study consciousness in new species such as octopuses. However, in the past 10 years there has not been a big shift in the species studied, with most labs still focusing on vision in humans and monkeys, and still committed to the idea that consciousness correlates to the degree of complexity of the nervous system. Change can be hard, and expensive, especially when it centres around primate research. But by not shifting attention to other species and other aspects of consciousness, we’re making the hard problem so much harder. Vision might seem like a simple conscious capacity to study if you compare it with self-consciousness as Crick and Koch did, but the mammalian visual system is a highly evolved feature of the nervous system, appearing more than 200 million years ago. That’s a long time for a system to gain complexity. The proposal to investigate consciousness by studying simpler systems follows standard scientific procedure, since much progress in science comes from studying simpler cases before moving on to more complex ones.
Our current neurocentrism is in tension with the method of studying simpler systems first. Studying convoluted examples of conscious animals to understand consciousness is like reverse-engineering the electric calculator to understand how machines perform addition, rather than starting with the abacus. In biology, model organisms like the nematode worm Caenorhabditis elegans have been significant players in many of our scientific discoveries of the past 80 years, with their simple nervous system and easily observable cell development and death. These microscopic worms are being used to study phenomena from nicotine addiction to ageing. Why not use them to study consciousness, too? The answer to that question is also simple: such animals are not presumed to be conscious. We see this view expressed again and again in the consciousness literature. The philosopher Michael Tye writes in Tense Bees and Shell-shocked Crabs (2016): ‘Since worms have only around 300 neurons, it seems a great leap of faith to suppose they could genuinely feel pain.’ The worry is repeated by the neuroscientist Anil Seth in his book Being You (2021): ‘By the time we reach the nematode worm with its paltry 302 neurons I find it difficult to ascribe any meaningful conscious status …’ The opinion that worms aren’t conscious is reflected in a 2020 survey of philosophers’ opinions about major philosophical issues, which included a question about what sorts of entities are conscious. A majority of philosophers accept (or lean toward accepting) consciousness in adult humans (95.15 per cent), in cats (88.55 per cent), in newborn babies (84.34 per cent), and in fish (65.29 per cent). There is quite a bit more scepticism about flies (34.52 per cent), worms (24.18 per cent), and plants (7.23 per cent). It is quite telling – and note this survey was conducted before the introduction of ChatGPT – that 39.19 per cent of philosophers surveyed think future AI systems will be conscious. If future AI systems are anything like current AI systems, they will not have neurons, but they will closely resemble us in terms of linguistic behaviour. Today, even as scientists approach the question of consciousness by examining neural correlates, we are wondering about nonbiological consciousness in AI systems. The question of AI consciousness sits uneasily next to the neurocentrism of current science. It may be that the anthropocentrism drives opinions about what is conscious more than the neurocentrism. Neurocentrism is a consequence of the anthropocentric reasoning that drives consciousness research, with mammalian-like nervous systems being identified as the key feature. If ChatGPT encourages researchers to move away from neurocentrism, we may end up back with the language-centrism that Griffin worked to undermine. That would not be productive science. But there is another move, and that is to extend animal research beyond the current laser focus on mammalian brains. Crick and Koch proposed the study of the visual system because we already know a lot about it, and it is similar across mammals. Presumably, they also proposed the study of the visual system because they thought vision generally involves conscious experience. Vision is one sensory modality, and though it is widely shared across many taxa of animals, it isn’t the only sensory modality, and it isn’t the sensory modality that evolved first in the animal lineage.
Chemoreceptors sense chemical properties, as in taste and smell, and these exist across animal taxa, including C. elegans. Such sensory capacities allow C. elegans to sense taste, smells, temperature and motion, and to learn through habituation and association. The worms habituate to manual taps, and learn to avoid salt ions that have been previously paired with garlic. They learn, they have memory, and they move through their environments toward the things they need and away from the things they don’t. A few researchers are studying consciousness in invertebrates, but this research tends to be focused on identifying markers that provide evidence that the animal is conscious. For example, recent research on bumblebee consciousness has focused on identifying markers for pain experience, and a report commissioned by the UK Department for Environment, Food and Rural Affairs presented evidence of pain experience in crabs and octopuses. What might we learn if our anthropocentrism didn’t lead us to focus on the brain as the relevant part of physiology needed for consciousness, but instead led us to examine the behaviours that are associated with experiences? We could then study the nature of consciousness by looking at bees, octopuses and worms as research subjects. All these animals have a robust profile of behaviours that warrant the hypothesis that they are conscious. Moving away from painful stimuli, learning the location of desirable nutrients, and seeking out what is needed for reproduction are behaviours we share widely with other animals. By studying other animals such as C. elegans, animals that show evidence of associative learning and that have sensory systems, we can significantly simplify research on consciousness. Studying consciousness in animal species that lack a mammalian nervous system doesn’t help the science avoid anthropocentrism. We are still starting with the case of humans, and considering the sorts of behaviours we engage in that are associated with conscious experience – perceiving the environment, sensing pain and pleasure. And that’s OK. Anthropocentrism is unavoidable in the science of consciousness studies as much as it is unavoidable in our other sciences. This is because we are human, and we see things from our human perspective. As the philosopher Thomas Nagel pointed out, there is no view from nowhere. Instead, there are views from various perspectives. As humans, we have some shared perspectives given typical human physiology and life history. But we also have perspectives that are incredibly different from one another. The perspective reflected in the survey of philosophers that infant humans are probably conscious, that fish may be, and that plants probably are not, is a cultural perspective that reflects the demographics of today’s professional philosophers. What might the starting assumptions about consciousness look like if professional philosophers were not overwhelmingly white, male and WEIRD? The science can start with understanding consciousness as a property of humans, and still sit with surprising and perhaps disturbing cases of consciousness in unfamiliar places – in animals whose lives are largely hidden from us due to their size, morphology or habitats.
Finding similarities between us and the smallest and simplest animals might make some uneasy, but such similarities also raise intriguing puzzles and give us more pieces we can use to solve the problem. Focusing on model organisms closely related to humans was perhaps important in the 1990s and 2000s, when Crick and Koch launched the scientific study of consciousness. At that time, there was still significant scepticism in some quarters about animal consciousness due to language-centrism. Today, we see that accepting the premise that ‘higher mammals’ are conscious hasn’t led to a theory of consciousness, but it has led us to accept more animals as conscious. It’s time we turn consciousness research to include these other species, too. Just as Crick and Koch pushed back on the popular view of their time that language is needed for consciousness, today we should push back on the popular view of our time that a complex brain is needed for consciousness. Maybe in another few years we will need to push back on another assumption, and at that time find it profitable to study consciousness in biological systems beyond animals, such as plants and fungi. If we recognise that our starting assumptions are open to revision and allow them to change with new scientific discoveries, we may find new puzzle pieces, making the hard problem a whole lot easier.
Kristin Andrews
https://aeon.co//essays/are-we-ready-to-study-consciousness-in-crabs-and-the-like
https://images.aeonmedia…y=75&format=auto
Psychiatry and psychotherapy
In the 1960s, psychedelic research was driven underground. Now it’s re-emerging – with lessons for the study of psychosis
‘A sense of special significance began to invest everything in the room; objects which I would normally accept as just being there began to assume some strange importance.’ ‘I became interested in a wide assortment of people, events, places, and ideas which normally would make no impression on me. Not knowing that I was ill, I made no attempt to understand what was happening, but felt that there was some overwhelming significance in all this …’ The first of these quotations is from an individual describing a psychedelic trip they had after taking peyote. The second is a person describing an experience of psychosis. While rarely looked at together today, experiences of psychedelics and psychosis share a lot of subjective territory. In the past, some scientists considered them to be different versions of the same experience. However, today, experiences of psychosis and psychedelics are seen as radically different. Examining the journey from that past approach to the current perspective reveals a great deal about our assumptions and values, and the limits and biases of the current day. In the mid-20th century, researchers thought of psychosis and psychedelics as deeply entangled, and scientific comparisons between the two experiences were common; entire academic papers were devoted to contrasting detailed descriptions of experiences of patients diagnosed with schizophrenia and experiences of research participants who had taken psychedelic drugs. As a result of the close resemblance between these descriptions, many researchers believed that psychedelics induced a short-term psychosis, providing a perfect scientific model for those who wanted to learn more about schizophrenia. By inducing a ‘temporary psychosis’, researchers could observe biological changes in research participants who had taken a psychedelic, and compare these with measurements of patients diagnosed with schizophrenia. In the hunt for the mysterious ‘substance-M’ that could explain what was underlying both experiences, various candidates were considered, including adrenaline, norepinephrine and serotonin. While the hunt for a common biological factor was unsuccessful, for decades many believed that these experiences were different versions of the same thing. There was widespread scientific belief in the similarity between psychosis and psychedelics in the mid-20th century. But the years since have led to a remarkable divide between our understandings of these phenomena. In the 1960s, moral panic related to psychedelic drugs set in. At the same time, requirements for evidence in medicine were becoming more rigorous, creating barriers for psychedelic research. Funding, access and permissions for research related to psychedelic drugs slowly dried up, and research into these fascinating substances was largely forgotten by psychiatry. In the intervening decades, research related to psychosis has continued unfettered and changed shape radically. Gone is much mainstream interest in detailed descriptions of the experience of psychosis that psychoanalytically trained psychiatrists often sought out in the past. Instead, psychosis research today shares with the rest of psychiatry an often singular focus on neurobiological and genetic research. Investigations related to childhood, trauma and social forces are given much less consideration and, importantly, much less funding.
Psychoanalysis has fallen out of favour, in part because of the difficulty it had fitting into novel models of evidence being adopted across medicine (it’s difficult to conduct randomised control trials on the talking cure), and in part because psychiatry needed to prove it was consistent and replicable (these are challenging features to demonstrate in an approach as complicated and variable as psychoanalysis). In its place, a neurobiological model of psychiatry has been taken up, seeking explanations and treatments for mental disorders largely at the level of genes and neurotransmitters. This approach is most apparent if we look at where money for psychiatric research flows. In the past two decades, the US National Institute of Mental Health, the largest funder of mental health research in the world, has introduced a new framework for research, the Research Domain Criteria, or RDoC. This framework is composed of domains and units of analysis. The domains include psychological phenomena like perception or a sense of acute threat, similar to symptoms, but including both positive and negative aspects. The units of analysis focus on things like genes, cells and circuits, directing researchers where to look to explain psychological phenomena like a sense of acute threat, in order to better understand both a well-functioning and a disordered response. The framework aims to do away with the issues and problems that the DSM (Diagnostic and Statistical Manual) is known for, including the way that many of the psychiatric categories within it include overlapping symptoms, or the many possible symptom profiles that can lead to a single diagnosis. These issues suggest that research relying on the DSM categories may not reflect real categories, which makes room for a lot more noise in the research process. The hope is that RDoC will provide more promising paths for research, particularly at the level of the brain and genes. Given this focus, it is no surprise that, within psychiatry today, experiences of hearing voices or of hallucinations are largely seen as symptoms to be managed, or ideally removed, rather than experiences that can be both challenging and meaningful. Meanwhile, after many years of hiatus, a resurgence of psychedelic research is taking place. Dedicated labs have popped up at Johns Hopkins University in Baltimore, Imperial College London and a number of other institutions, and research into the potential of psychedelic-assisted therapy to treat various mental disorders is charging forward. Conditional approval for patients with a terminal diagnosis who are seeking to relieve end-of-life anxiety through the use of psilocybin-assisted therapy has already been granted in many jurisdictions (including within the United States and Canada). The US Food and Drug Administration designated the use of MDMA-assisted therapy as a breakthrough in 2017, and Phase 3 trials for those experiencing post-traumatic stress disorder have been completed. A number of trials have looked at the potential of giving trial participants psilocybin or ketamine along with therapy, in hopes of improving their experiences of treatment-resistant depression. Others are looking into the potential of ibogaine to help those with substance use issues, or of psilocybin for smoking cessation. Still others are considering whether microdosing might promote creativity or whether psychedelic-assisted therapy can help those suffering as a result of racial trauma. 
In order to appease regulators, the medical community and potential patients, psychedelic research must overcome its problematic past associations and demonstrate that psychedelic therapy can fit the mould of evidence-based medicine, no small feat for a class of drugs that are infamously difficult to pin down. As a result of these pressures, any association between psychedelics and psychosis is a dangerous one. Indeed, Rick Doblin, the founder of the influential Multidisciplinary Association for Psychedelic Studies (MAPS), has suggested that references to psychedelics as models of psychosis should be abandoned because they imply that ‘psychedelic experiences can be discounted as crazy and distorted’. Of course, experiences of psychosis and psychedelics are not identical. Crucially, there are fundamental differences related to the voluntary nature of the experience as well as the timeline. In most cases, psychedelics are ingested by choice and their effects last only a number of hours. In contrast, experiences of psychosis usually come about without any willingness on the part of the individual, and can last for days, weeks or months. Subjective experiences are not identical either; even in the past century, some researchers noted that hallucinations and synaesthesia – when multiple senses are experienced simultaneously – are often more pronounced in those who have taken psychedelics than in those experiencing psychosis. Others thought the similarities between these experiences were exaggerated. At a neuroscientific level, there has been continued interest in the relationship between experiences of psychedelics and psychosis. It is still common to use psychedelics to model psychosis in animals, but the generalisation of this research to human beings is contested by some scientists. Others have been interested in the ways in which the same receptors (for serotonin, dopamine, opioid) play a role in experiences of both psychosis and psychedelics. It has also been suggested that early stages of psychosis involve a similar neurobiological state to the one brought about by psychedelic drugs. Yet today scientific approaches to these phenomena are far apart. Psychosis research has developed along with the rest of psychiatry into a largely biomedical science, seeking genetic or neuroscientific explanations for the symptoms associated with the experience. Psychedelic research has taken a different path, and still carries with it a variety of features that were present within investigations of these substances nearly 75 years ago. Among other things, the divergence reflects the assumptions and values guiding each field. A little bit of cross-fertilisation is warranted again. While we might accept that experiences of psychosis and psychedelics are different in important ways, we might also acknowledge that their phenomenological (‘what it’s like’) similarities make them worth looking at in tandem. Both kinds of experiences are highly variable, but can involve a number of overlapping psychological and physical sensations, including changes in sensory experiences related to vision, hearing and smell, as well as one’s sense of self, as ego dissolution and depersonalisation are common. A variety of emotional responses are also common across psychosis and psychedelics, including feeling paranoid, scared, euphoric or withdrawn.
Examining how a science approaches its targets of investigation can be highly revealing of the assumptions and values guiding the field. As such, looking across the radically different research programmes related to psychosis and psychedelics today can show us how our beliefs and goals related to psychosis and psychedelics differ, and give rise to these contrasting approaches. In turn, we can learn important lessons about how to understand, and respond to, psychosis from approaches to psychedelics. Take what we choose to measure as an example. Psychedelic scientists often focus on meaning and mysticism, asking participants how spiritually significant their experience was, or whether they had an experience they ‘knew to be sacred’ or if they ‘felt at one with the universe’. Headlines frequently refer to research participants describing their psychedelic experiences as among the most meaningful in their lives, helping them cope with past traumas or face impending ones. This contributes to a view of psychedelic experiences as intense, but also meaningful and transformative. In contrast, headlines related to psychotic experience focus not on the magical or ineffable qualities but rather the challenging and fearful aspects of the phenomenon. In part, this is because research related to psychosis has little interest in the meaning or mysticism that may arise within the experience. Instead, research overwhelmingly focuses on how to reduce symptoms and help individuals return to work or school. Research that looks closely at the qualitative aspects of experiences of psychosis, and makes space for individuals to share positive, negative and elusive components of such experiences, is quite rare. Inevitably, these different focal points shape what scientists, and the public, see when thinking about psychosis and psychedelics. Psychosis often brings to mind delusions and hallucinations, characterised as symptoms that ought to be reduced at all costs. In contrast, the hallucinations induced by psychedelics may be seen as a way to access remarkable, life-changing experiences. Another example can be found in the kinds of causal variables we look at in relation to psychosis and psychedelics. Research related to psychedelics expresses a considerable amount of interest in how experiences are impacted by various factors particular to the individual and the context they are in. The notion of ‘set and setting’, popularised by Timothy Leary, is widely accepted in psychedelic communities; it refers to how emotions, expectations and environment can have an enormous impact on psychedelic experiences. Ongoing research focuses on how factors like music and nature can influence psychedelic experiences. No concept parallel to ‘set and setting’ exists in psychosis research, which continues to search for the elusive physiological substance-M. Questions related to how psychosis is shaped by beliefs, culture, environments or relationships are rarely asked within psychiatry. Instead, the focus is largely on what causes psychosis and how it can be reduced, rather than what contextual features might give rise to more difficult or more positive experiences of psychosis. Again, these differences are revealing and impactful. 
While psychedelic researchers understand that set and setting can make psychedelic experiences better or worse for individuals, psychiatrists studying psychosis spend little time considering how individual factors related to one’s set and setting might impact an experience of psychosis. Intervention tends to steer towards psychiatric drugs as the primary approach to treatment. Perhaps the most alarming contrast is between how we currently respond to those having challenging experiences as a result of psychedelics or psychosis. Underground psychedelic communities have been developing forms of ‘psychedelic first aid’ for decades, approaches characterised by warmth and compassion, and that prioritise safety, comfort and nonjudgmental regard for someone having a bad trip. In the journal Dancecult, Deirdre Ruane, a researcher at the University of Kent, Canterbury in the UK, describes the kind of care provided when a psychedelic trip at a festival becomes challenging. That support starts with ruling out an acute medical need and then making the festivalgoer comfortable, offering ‘water or tea and a private, low-stimulus space if desired’. Afterwards, ‘sitters remain with visitors, talking, listening or simply sitting quietly with the visitor as desired. The aim is to create an atmosphere of safety in which the visitor feels able to confront and process difficult emotions.’ Such a picture contrasts sharply with the usual responses to those having a challenging psychotic experience. Those experiencing psychosis, or other mental health crises, are often met with restraint, seclusion and, far too often, violence. Police with little training are often sent to respond with force to those in mental distress – very rarely, those in crisis are supported by peers who know what it’s like and what might help, or even professionals who have been trained in empathic and nonviolent forms of care. The tragic case of Daniel Prude exemplifies these types of responses. Prude, who may have been experiencing psychosis, began behaving erratically one night in 2020, running naked through the streets of Rochester, New York. When police arrived, despite Prude being unarmed, they restrained him, placed a hood over his head, and pinned him to the ground. Video footage shows Prude struggling to breathe and then becoming unresponsive. He was later pronounced brain dead. This disturbing way of responding to individuals who are struggling leads to a significant number of unnecessary and heartbreaking deaths every year. In turn, these different responses engender different beliefs and expectations in the public. Funny stories of teenagers having a bad trip make it all the more acceptable for us to laugh at and with those who have taken magic mushrooms. Stories of police shooting a person in the midst of a mental health crisis make it more likely that we will fear those we believe are experiencing psychosis. In reality, individuals diagnosed with a mental disorder are much more likely to be victims of a violent crime than responsible for one. While there are important differences between experiences of psychosis and psychedelics, the similarities should make us stop and think – and correct course. There may be space for seeing the magical and mystical in psychosis, as well as the terrifying and distressing. While the term psychosis may bring to mind associations of fear and violence, many of those who have experienced psychosis have been advocating for a wider picture of the experience for years. 
A recent compilation from service users called ‘Psychosis Outside the Box’ includes a variety of descriptions that do not easily fit the mould drawn around the phenomenon of psychosis within psychiatry. For example, one individual describes a ‘very healing experience’ in which they are ‘transported to a rural village in Africa and I’m lying on the ground and a healer puts large leaves all over my body. All the people in the village (including children) take turns surrounding my body and laying hands on me and continuously singing 24 hours a day for seven days.’ Another writes: ‘I have felt I experienced an aspect of the divine usually a warm wind or sun breaking through the clouds.’ These types of experiences are not captured by symptom checklists and a focus on psychosis as merely pathological. While some experiences of psychosis are challenging and difficult, involving fear and paranoia, and isolation from loved ones, others could be described as beautiful, magical or mystical. However, clinicians and researchers largely fail to ask questions about these kinds of experiences. The researchers Nev Jones and Mona Shattell conducted interviews with a number of people who had experienced psychosis or had unusual experiences, in order to better understand these experiences and how they fail to map on to ‘conventional understandings of either psychopathology or healing’. The authors asked about experiences of agency, hearing voices, doubt and certainty, as well as experiences often characterised as hallucinations and delusions. Summarising feedback they received in their interviews, the authors write that they repeatedly heard people say ‘No one has ever asked me detailed questions about what I experience’ and ‘no one has tried to understand how this has affected who I am, no one has listened to the complications, to the richness, to the good things as well as the bad.’ Can psychedelic science provide a map for one way in which such richness and complexity can be taken seriously? Similarly, set and setting, so important in psychedelic experience, might influence the experience of psychosis – and we could pick up lessons there as well. How do aspects of one’s experience, expectations or environment influence the ‘what it’s like’ within an experience of psychosis? Research from anthropology and other fields indicates there may be other causal avenues that remain underexplored when it comes to psychosis; cultural expectations, for one, appear to have a significant impact on whether one hears voices that are experienced as distressing or comforting. For example, when the anthropologist Tanya Luhrmann and colleagues examined experiences of voice-hearing in the US, India and Ghana, they found that it was much more common for US participants to describe voices they found violent and upsetting, and these participants often referred to diagnostic criteria in describing their experiences. In contrast, patients in India and Ghana did not necessarily see voice-hearing as a bad experience. Those in Ghana often reported that the voice they heard was God and that it was positive. In India, voices were often family members, reminding them of things they needed to do. One’s relationship to one’s lived environment can make a big difference too. 
Increasing evidence indicates that when individuals have a different skin colour than the majority population where they live, they are more likely to develop psychosis. This suggests that understanding the phenomenological experiences of being a racialised minority and how that might relate to experiences of psychosis is worth examining more closely. These factors need to be considered if we are to conduct comprehensive research related to psychosis and learn how to best support those experiencing it. Communities like the Hearing Voices Network (HVN) are well aware of the influence of set and setting, and teach members to acknowledge these factors and work towards developing better relationships with their voices. Their goal is not merely to eliminate voices, as is often assumed within psychiatric research, but to experience voices as less distressing, and sometimes even as beneficial or comforting. With HVN groups now established all over the world, people who hear voices are finding peer-based support that allows them to understand their experiences as not merely symptoms of madness, but complex responses to life’s challenges that they can have some control over. Researchers are involved as well, investigating the ways in which voice-hearing is also experienced by individuals not designated as disordered, making space for a more complicated understanding of the phenomenon. And there are certainly lessons to be learned when it comes to how we respond to those in crisis. With 146 people experiencing a mental health crisis killed by police in the US in 2022 alone, and a disproportionate number of those being people of colour, there should be a significant sense of urgency when it comes to rethinking crisis response. If underground communities have spent years developing caring and compassionate ways to respond to those having challenging experiences as a result of psychedelics, there is no reason we can’t find similar ways to infuse care and compassion into the way we respond to those having a difficult experience of psychosis. Fortunately, promising alternative models of crisis response have been developed in several places, focused on providing safe and compassionate care to those experiencing distress and in need of respite. These include, among others, the Open Dialogue model of care from Finland, CAHOOTS (Crisis Assistance Helping Out on the Streets) in Eugene, Oregon, and the Soteria House therapeutic community in Vermont. These lessons may need to be taken up by psychiatry sooner rather than later, however. Increasing regulatory pressures mean that psychedelic research may not be able to hold on to these expansive and complex ways of thinking about psychedelic experiences. In the coming years, as psychedelic science works hard to demonstrate that these drugs can fit the mould established by regulated treatments, these differences may disappear. Outcome measures required for approval by regulators include symptom checklists, with no space for measures related to mysticism or oneness with the universe. For regulatory approval, treatment approaches are required to be streamlined and replicable, leaving little room to examine the influence of set and setting; instead, treatment settings, timelines and practitioner trainings will become solidified along with dosages. 
And psychedelic first aid is unlikely to be funded on a massive scale; instead, as these drugs become regulated and swept up into the medical model, underground use, and underground responses to crises, may be pushed further into hiding.
Phoebe Friesen
https://aeon.co//essays/what-can-psychedelic-science-teach-psychiatry-about-psychosis
https://images.aeonmedia…y=75&format=auto
Metaphysics
Neither atheism nor theism adequately explains reality. That is why we must consider the middle ground between the two
If you don’t believe in the God of the Bible or the Quran, then you must think we live in a meaningless universe, right? People get stuck in dichotomies of thought. If you don’t like Soviet communism, then you must be in favour of US-style capitalism. Well, not if there are political opinions other than those two (which of course there are). Another dichotomy is between traditional religion and secular atheism. Whose team are you on, Richard Dawkins’s or the Pope’s? Over a long period of time, I’ve come to think that both these worldviews are inadequate, that both have things about reality that they can’t explain. In my book Why? The Purpose of the Universe (2023), I explore the much-neglected middle ground between God and atheism. I was raised religiously, although the Catholicism of my parents was more about getting the community together than accepting dogmas. From an early age, the secular world all around me was much more of an influence than Sunday school, and by the age of 14 I self-identified as an atheist. It never occurred to me that there was a credible option between these identities: the religious and the secular. Of course, I was aware of the ‘spiritual but not religious’ category, but I was socialised to think this option was unserious and essentially ‘fluffy thinking’. And thus I remained a happy atheist for the next 25 years. This all changed a mere five years ago when I arrived as a faculty member at Durham University, where I was asked to teach philosophy of religion. It was a standard undergraduate course: you teach the arguments against God, and you teach the arguments for God, and then the students are invited to decide which case is stronger and write an essay accordingly. So I taught the arguments against God, based on the difficulty of reconciling the existence of a loving and all-powerful God with the terrible suffering we find in the world. As before, I found them incredibly compelling and was reconfirmed in my conviction that there is almost certainly no God. Then I taught the arguments for God’s existence. To my surprise, I found them incredibly compelling too! In particular, the argument from the fine-tuning of physics for life couldn’t be responded to as easily as I had previously thought (more on this below). This left me in quite a pickle. For me, philosophy isn’t just an abstract exercise. I live out my worldview, and so I find it unsettling when I don’t know what my worldview is. Fundamentally, I want the truth, and so I don’t mind changing my mind if the evidence changes. But here I was with seemingly compelling evidence pointing in two opposing directions! I lost a lot of sleep during this time. A few weeks into this existential morass I was peacefully watching some ducks quack in a nearby nature reserve, when I suddenly realised there was a startlingly simple and obvious solution to my dilemma. The two arguments I was finding compelling – the fine-tuning argument for ‘God’, and the argument from evil and suffering against ‘God’ – were not actually opposed to each other. The argument from evil and suffering targets a very specific kind of God, namely the Omni-God: all-knowing, all-powerful, perfectly good creator of the universe. 
Meanwhile, the fine-tuning argument supports something much more generic, some kind of cosmic purpose or goal-directedness towards life that might not be attached to a supernatural designer. So if you go for cosmic purpose but not one rooted in the desires of an Omni-God, then you can have your cake and eat it by accepting both arguments. And thus my worldview was radically changed. One of the most fascinating developments in modern science is the surprising discovery of recent decades that the laws of physics are fine-tuned for life. This means that, for life to be possible, certain numbers in physics had to fall within an incredibly narrow range. Like Goldilocks’s porridge, these numbers had to be just right, not too big and not too small. Perhaps the most striking case is that of the cosmological constant, the number that measures the force that powers the accelerating expansion of the universe. The cosmological constant is an odd number: it’s extremely small but non-zero. You don’t tend to find fundamental constants with that kind of value. But it’s a good job that this one does. Because, if the cosmological constant were a bit bigger, everything would have been forced apart so rapidly that no two particles would ever have met. We would have no stars, no planets nor any kind of structural complexity. On the other hand, if the cosmological constant were less than zero, it would have added to gravity, meaning that the entire universe would have collapsed back on itself within a split second. For life to be possible, this number had to be in the strange, highly specific category it in fact occupies: extremely close to zero without crossing over into the negative. There are many other examples of finely tuned constants in current physics. Fundamentally, we face a choice. Either it’s a coincidence that, of all the possible values that the finely tuned constants of physics may have had, they just happen to have the right values for life; or the constants have those values because they are right for life. The former option is wildly improbable; on a conservative estimate, the odds of getting finely tuned constants by chance are less than 1 in 10^136. The latter option amounts to a belief that something at the fundamental level of reality is directed towards the emergence of life. I call this kind of fundamental goal-directedness ‘cosmic purpose’. As a society, we’re somewhat in denial about fine-tuning, because it doesn’t fit with the picture of science we’ve got used to. It’s a bit like in the 16th century when we started getting evidence that our Earth wasn’t in the centre of the universe, and people struggled to accept it because it didn’t fit with the picture of the universe they’d got used to. Nowadays, we scoff at our ancestors’ inability to follow the evidence where it leads. But every generation absorbs a worldview it can’t see beyond. I believe we’re in a similar situation now with respect to the mounting evidence for cosmic purpose. We’re ignoring what is lying in plain view because it doesn’t fit with the version of reality we’ve got used to. Future generations will mock us for our intransigence. The most common response online to fine-tuning worries is known as ‘the anthropic response’: if the universe hadn’t had the right numbers for life, we wouldn’t be around to worry about it, and so we shouldn’t be surprised to find fine-tuning. 
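To get a feel for the scale of that number, here is a minimal back-of-the-envelope sketch in Python (an editorial illustration, not part of the essay’s argument). The 1-in-10^136 figure is the conservative estimate quoted above; the 10^80 ‘independent tries’ is simply an arbitrary stand-in for ‘an enormous number of chances’, roughly the commonly cited count of atoms in the observable universe.

```python
# Toy calculation of how far 'it was just chance' has to stretch.
log_p = -136    # log10 of the quoted probability of a life-permitting draw (1 in 10^136)
log_N = 80      # log10 of a very generous number of independent tries (illustrative only)

# For tiny p, P(at least one success in N tries) is approximately N * p.
log_chance_any = log_N + log_p
print(f"log10 P(at least one life-permitting draw in 10^{log_N} tries) ≈ {log_chance_any}")
# ≈ -56: even with that many tries, the fluke hypothesis remains a 1-in-10^56 long shot,
# which is why chance only starts to look viable given the vastly larger number of tries
# that the multiverse hypothesis, discussed later, proposes.
```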
The philosopher John Leslie devised a vivid thought experiment (here presented in a slightly modified version) to show where the anthropic response goes wrong. Suppose you’re about to be executed by five expert marksmen at close range. They load up, take aim, fire… but they all miss. Again, they load up, take aim, fire… again, they all miss. This happens time and time again for more than an hour. Now, you could think: ‘Well, if they had hit me, I wouldn’t be around to worry about it, and so I shouldn’t be surprised that they all missed.’ But nobody would think this. It clearly needs explanation why these expert shooters repeatedly missed at close range. Maybe the guns have been tampered with, or maybe it’s a mock execution. Likewise, while it’s of course trivially true that, if the universe wasn’t compatible with life, we wouldn’t be around to reflect on the matter, it still needs explaining why, of all the numbers in physics that might have come up, a universe ended up with one in the narrow range compatible with life. Could fine-tuning have been just a fluke? Sometimes, things come together in surprising and unexpected ways, without our feeling compelled to postulate an underlying purpose to reality. But there are limits to this. Suppose thieves break into a high-security bank and get the 10-digit combination right first time. Would it be an option to say: ‘Well, maybe they just randomly tried a number and it just happened to be the right one’? This would clearly be an irrational thing to think, as it’s just too improbable that they would get the combination right by fluke. But the fine-tuning being a fluke is massively more improbable than the thieves getting the combination right by chance. Taking fine-tuning to be a matter of luck is just not a rational option. But aren’t there many incredibly improbable things we accept as just chance? My existence depends on an incredibly finely tuned set of circumstances: my parents having met, and their parents having met, and so on back to the start of humanity. Indeed, if a different sperm had fertilised the egg that produced me, I would not be here. It can induce a sense of vertigo to reflect on how unlikely it is that one should ever have existed. And yet, while I believe there is a cosmic directedness towards life, my ego is not (yet!) inflated enough to suppose that there was a cosmic directedness towards Philip Goff coming into being. What’s the difference? The difference is that life has objective value, and hence is an outcome of significance independently of it being the outcome that happened to occur. A universe in which there are plants and animals, and people who can fall in love and contemplate their own existence, is much greater than a universe in which there is only hydrogen. In this sense, the numbers consistent with such valuable happenings are special in a way that other possible values of the constants are not. In contrast, there’s nothing particularly special about Philip Goff existing, as opposed to whoever would have been here if, say, my father had married someone else. To make the point clearer with an analogy, contrast the case in which some random person, Jo Bloggs, wins the lottery, with the case in which Mr Rich, the partner of the lottery boss, wins the lottery. Jo Bloggs is noteworthy only as a result of winning the lottery, and hence we can accept that her win was a fluke. This is a bit like the case of me being born as opposed to some other random individual. 
But there is a significance to Mr Rich independently of the fact that he won: he’s the partner of the lottery boss. And so, when, of all the people who might have won, Mr Rich wins, we suspect foul play. Likewise, when, of all the possible numbers that might have turned up in physics, we have a rare combination that allows for objective value to emerge, we rightfully suspect that this is more than just a fluke. I often find, when I discuss fine-tuning on Twitter, people express a sentiment that it’s brave to boldly accept something so improbable, like you’re not scared to take it on. But it’s not brave to believe highly improbable things, it’s irrational. In my view, a commitment to cosmic purpose is the only rational response to the evidence of current science. God provides an explanation of fine-tuning, but a very poor one. Maybe for our ancestors it made sense that a God who was so much greater than us could do what he liked with his creatures. But moral progress has taught us that each individual has fundamental rights that nobody, no matter how powerful and cognitively sophisticated, is permitted to infringe. In my book Why? I focus on the work of the great philosopher of religion Richard Swinburne in responding to the problem of evil. Swinburne argues that there are goods that exist in our universe that would not exist in one with less suffering. If we just lived in some kind of Disneyland-esque world with no danger or risk, then there would be no opportunities to show real courage in the face of adversity, or to feel deep compassion for those who suffer. The absence of such serious moral choices would be a great cost, according to Swinburne. Even if we concede that this is indeed a cost, I don’t believe that God would have the right to cause or allow suffering in order to allow for these goods. A classic argument against crude forms of utilitarianism imagines a doctor who could save the lives of five patients by killing one patient and harvesting their organs. Even if the doctor could increase wellbeing in this way, he would not have the right to kill and use the healthy patient, at least not without their consent. Likewise, even if God has some good purpose in mind for allowing natural disasters, doing so would infringe the rights to health and security of the individuals impacted by such disasters. Fortunately, there are other possibilities. Thomas Nagel has defended the idea of teleological laws: laws of nature with goals built into them. Rather than grounding cosmic purpose in the desires of a creator, perhaps there just is a natural tendency towards life inherent in the universe, one that interacts with the more familiar laws of physics in ways we don’t yet understand. For some, the idea of purpose without a mind directing it makes no sense. An alternative possibility is a non-standard designer, one that lacks the ‘omni’ qualities – all-knowing, all-powerful, and perfectly good – of the traditional God. What about an evil God? As Stephen Law has explored in detail, the evil-god hypothesis faces a ‘problem of good’ mirroring the problem of evil facing the traditional good God: if God is evil, why did God create so much good? I think a better option is a limited designer who has made the best universe they are able to make. 
Perhaps the designer of our universe would have loved to create intelligent life in an instant, avoiding all the misery of natural selection, but their only option was to create a universe from a singularity, with the right physics, so that it would eventually evolve intelligent life. Maybe our limited designer feels awful about how messy such a process inevitably is, but it was that or nothing. A supernatural designer comes with a parsimony cost. As scientists and philosophers, we aspire to find not just any old theory that can account for the data but the simplest such theory. All things being equal, it’d be better not to have to believe in both a physical universe and a non-physical supernatural designer. For these reasons, I think overall the best theory of cosmic purpose is cosmopsychism, the view that the universe is itself a conscious mind with its own goals. In fact, this is a view I first entertained in Aeon back in 2017, before deciding that the multiverse, the topic of the next section, was a better option. Having been finally persuaded that the multiverse is a no-go (more on this imminently), I was prompted to explore a more developed cosmopsychist explanation of fine-tuning in my book Why?, and this now seems to me the most likely source of cosmic purpose. Warning: the next section is a little technical, and can be skipped over, at least on first reading. There are many scientists and philosophers who share the conviction that the fine-tuning of physics can’t be just a fluke, but who think there is an alternative explanation: the multiverse hypothesis. If there is a huge, perhaps infinite, number of universes, each with different numbers in their physics, then it’s not so improbable that one of them would happen to have the right numbers by chance. And we surely don’t need an explanation of why we happened to be in the fine-tuned universe; after all, we couldn’t have existed in a universe that wasn’t fine-tuned. The latter part of the explanation is known as the ‘anthropic principle’. For a long time, I thought the multiverse hypothesis was the most plausible explanation of fine-tuning. But I eventually became persuaded through long discussions with probability theorists that the inference from fine-tuning to a multiverse involves flawed reasoning. This is a much-discussed issue in philosophy journals but, in a typical case of academics talking to themselves, it is almost entirely unknown outside of academic philosophy, despite huge public interest in the issue of fine-tuning. One of my motivations for writing the book Why? was to convey this argument, which changed my life, to a broader audience. There’s a crucial principle in probabilistic reasoning known as the ‘total evidence requirement’. This is roughly the principle that we should always use the most specific evidence available to us. Suppose the prosecution tells the jury that the accused always carries a knife around with him, neglecting to add that the knife in question is a butter knife. The prosecution has not lied to the jury, but it has misled them by giving them generic information – that the accused carries a knife – when it could have given them more specific information – that the accused carries a butter knife. In other words, the prosecution has violated the total evidence requirement. How does this principle relate to fine-tuning and the multiverse? 
It’s relevant because there are two ways of interpreting the evidence of fine-tuning: the generic evidence that a universe is fine-tuned; or the specific evidence that this universe is fine-tuned. The multiverse theorist works with the generic way of construing the evidence. They have to do this to infer from fine-tuning to a multiverse. The existence of many universes makes it more likely that a universe will be fine-tuned, but it doesn’t make it any more likely that this universe in particular – as opposed to, say, the next universe down – will be fine-tuned. Hence, the multiverse hypothesis is supported only if one works with the generic way of construing the evidence. But this is in conflict with the total evidence requirement, which obliges us to work with the more specific form of the evidence, namely that this universe is fine-tuned. Respecting the total evidence requirement, therefore, renders the inference to a multiverse invalid. We can make the point clearer with an analogy. Suppose we walk into a casino and the first person we see, call her Sammy Smart, is having an incredible run of luck, calling the right number in roulette time after time. I say: ‘Wow, the casino must be full tonight.’ Naturally, you’re puzzled and you ask me where I’m getting that idea from. I respond: ‘Well, if there are a huge number of people playing in the casino, then it becomes statistically quite likely that at least one person in the casino will win big, and that’s exactly what we’ve observed: somebody in the casino winning big.’ Everyone agrees that the above is a fallacious inference, and the reason it’s fallacious is that it violates the total evidence requirement. There are two ways of construing the evidence available to us as we walk into the casino: the generic evidence that someone in the casino has had a great run of luck; or the specific evidence that Sammy Smart has had a great run of luck. In the above scenario, my strange reasoning essentially involved working with the generic way of construing the evidence: it is indeed more likely that someone in the casino had a great run of luck if we hypothesise that there are many people playing well in the casino. But, again, the total evidence requirement obliges us to work with the more specific way of construing the evidence – Sammy Smart had a great run of luck – and, once we do this, the inference to a full casino is blocked: the presence or absence of other people in the casino has no bearing on whether or not Sammy Smart in particular will play well. The reasoning employed by the multiverse theorist makes exactly the same error. To respect the total evidence requirement, we need to work with the specific version of the evidence – that this universe is fine-tuned – and the presence or absence of other universes has no bearing on whether or not this universe in particular turns out to be fine-tuned. Many argue that this is where the anthropic principle kicks in. While we could have entered the casino and observed someone rolling badly, we could not have observed a universe that wasn’t compatible with life. It is of course trivially true that we could not have observed a universe incompatible with the existence of life. But no theoretical justification has ever been given as to why this would make it OK to ignore the total evidence requirement. 
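The casino analogy lends itself to a quick numerical check. The following Monte Carlo sketch is an editorial illustration, not part of the essay; the 1-in-1,000 ‘big win’ probability is an arbitrary choice. It estimates two quantities as the casino fills up: the probability that Sammy Smart in particular has a remarkable night, and the probability that someone does.

```python
import random

random.seed(0)
P_BIG_WIN = 1 / 1000          # chance that any one player has an extraordinary streak tonight

def simulate(n_players, trials=100_000):
    sammy, someone = 0, 0
    for _ in range(trials):
        players = [random.random() < P_BIG_WIN for _ in range(n_players)]
        sammy += players[0]                # player 0 is always 'Sammy Smart'
        someone += any(players)
    return sammy / trials, someone / trials

for n in (1, 50):
    p_sammy, p_any = simulate(n)
    print(f"{n:>2} players: P(Sammy wins big) ≈ {p_sammy:.4f}   P(someone wins big) ≈ {p_any:.4f}")

# Filling the casino raises P(someone wins big) but leaves P(Sammy wins big) untouched:
# the specific evidence ('Sammy Smart had a great run') lends no support to 'the casino is full',
# just as the specific evidence 'this universe is fine-tuned' lends no support to 'there are many universes'.
```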
Moreover, we can easily insert an artificial selection effect into the casino example by imagining there’s a sniper hidden in the first room of the casino, waiting to kill us as we enter unless there is someone in the first room having an extraordinary run of luck. With this in place, the casino example is relevantly similar to the real-world case of fine-tuning: just as we could not have observed a universe with the wrong numbers for life, so we could not have observed a player rolling the wrong numbers to win. And yet, nobody disputes that the casino example involves flawed reasoning, reasoning that, in my view, is indiscernible from that of the multiverse theorist. But isn’t there independent scientific evidence for a multiverse? Yes and no. There is tentative support for what cosmologists call ‘inflation’, the hypothesis that our universe began with a short-lived exponential rate of expansion. And many physicists have argued that, on the most plausible models of inflation, the exponential expansion never ends in reality considered as a whole, but ends only in certain regions of reality, which slow down to become ‘bubble universes’ in their own right. On this model, known as ‘eternal inflation’, our universe is one such bubble. The problem is that there are two possible versions of eternal inflation: heterogeneous eternal inflation, on which, when a new bubble forms, probabilistic processes determine the values of the constants, and so the vast majority of bubble universes are not fine-tuned; or homogeneous eternal inflation, on which the values of the constants do not vary between bubble universes. Pretty much all multiverse theorists assume heterogeneous eternal inflation, which is probably because only this version can have a hope of explaining away fine-tuning. Only if there’s enough variety in the ‘local physics’ of different bubble universes does it become statistically likely that the fine-tuning is just a fluke. But there is zero empirical evidence for this. Moreover, if we respect the total evidence requirement, then the fine-tuning itself is powerful evidence against heterogeneous eternal inflation. Remember that the total evidence requirement obliges us to work with the specific way of construing the evidence of fine-tuning: the specific evidence that this universe is fine-tuned. According to our standard mathematical way of defining evidence – known as Bayes’ theorem – a hypothesis fits with data to the extent that the hypothesis makes the data probable. If heterogeneous eternal inflation were true, it would be incredibly unlikely that our universe would be fine-tuned, as the probabilistic processes that fix the constants of each universe are entirely random. But if we combine homogeneous eternal inflation with some form of cosmic goal-directedness towards life, then it becomes massively more likely that our universe will be fine-tuned. In other words, even if we adopt the eternal inflation multiverse, the evidence of fine-tuning still pushes us towards cosmic purpose. The Christian philosopher William Lane Craig has argued that, if the universe has no purpose, then life is meaningless. Along similar lines, the atheist philosopher David Benatar proposes that, in the absence of cosmic purpose, life is so meaningless that we are morally required to stop reproducing so that the human race dies out. At the other extreme, it is common for humanists to argue that cosmic purpose would be irrelevant to the meaning of human existence. I take a middle way between these two extremes. 
I think human life can be very meaningful even if there is no cosmic purpose, so long as we engage in meaningful activities, such as kindness, creativity and the pursuit of knowledge. But, if there is cosmic purpose, then life is potentially more meaningful. We want our lives to make a difference. If we can contribute, even in some tiny way, to the good purposes of the whole of reality, this is about as big a difference as we can imagine making. There are no certain answers to these big questions of meaning and existence. It’s possible the abundant evidence for cosmic purpose in our current theories will not be present in future theories. Even if there is a fundamental drive towards the good, without an omnipotent God, we have no guarantee that cosmic purpose will ultimately overcome the arbitrary suffering of the world. But it can be rational, to an extent, to hope beyond the evidence. I don’t know whether human beings will be able to deal with climate change; in fact, a dispassionate assessment of the evidence makes it more likely perhaps that we won’t. Still, it’s rational to live in hope that humans will rise to the challenge, and to find meaning and motivation in that hope. Likewise, I believe it’s rational to live in hope that a better universe is possible. Why? The Purpose of the Universe (2023) by Philip Goff is published via Oxford University Press.
Philip Goff
https://aeon.co//essays/why-our-universe-can-have-cosmic-purpose-without-god
https://images.aeonmedia…y=75&format=auto
Logic and probability
Some have thought that logic will one day be completed and all its problems solved. Now we know it is an endless task
Maria is either at home or in the office. She’s not at home. Where is she? You might wonder why I started with such an unpuzzling puzzle. But in solving it, you already used logic. You reasoned correctly from the premises ‘Maria is either at home or in the office’ and ‘She’s not at home’ to the conclusion ‘Maria is in the office.’ That might not seem like a big deal, but someone who couldn’t make that move would be in trouble. We need logic to put together different pieces of information, sometimes from different sources, and to extract their consequences. By linking together many small steps of logical reasoning, we can solve much harder problems, as in mathematics. Another angle on logic is that it’s about inconsistency. Imagine someone making all three statements ‘Maria is either at home or in the office’, ‘She’s not at home’, and ‘She’s not in the office’ (about the same person at the same time). Those statements are jointly inconsistent; they can’t all be true together. Any two of them can be true, but they exclude the third. When we spot an inconsistency in what someone is saying, we tend to stop believing them. Logic is crucial for our ability to detect inconsistency, even when we can’t explain exactly what has gone wrong. Often, it is much more deeply hidden than in that example. Spotting inconsistencies in what is said can enable us to work out that a relative is confused, or that a public figure is lying. Logic is one basic check on what politicians say. To put your pattern of reasoning in the simplest form, you went from premises ‘A or B’ and ‘Not A’ to the conclusion ‘B’. The deductive action was all in the two short words ‘or’ and ‘not’. How you fill in ‘A’ and ‘B’ doesn’t matter logically, as long as you don’t introduce ambiguities. If ‘A or B’ and ‘Not A’ are both true, so is ‘B’. In other words, that form of argument is logically valid. The technical term for it is disjunctive syllogism. You have been applying disjunctive syllogism most of your life, whether you knew it or not. Except for a few special cases, logic can’t tell you whether the premises or conclusion of an argument are true. It can’t tell you whether Maria is at home, or whether she’s in the office, or whether she’s in neither of those places. What it tells you about is the connection between them; in a valid argument, logic rules out the combination where the premises are all true while the conclusion is false. Even if your premises are false, you can still reason from them in logically valid ways – perhaps my initial statement about Maria was quite wrong, and she is actually on a train. The logical validity of forms of argument depends on logical words: as well as ‘or’ and ‘not’, they include ‘and’, ‘if’, ‘some’, ‘all’, and ‘is’. For instance, reasoning from ‘All toadstools are poisonous’ and ‘This is a toadstool’ to ‘This is poisonous’ illustrates a valid form of argument, one that we use when we apply our general knowledge or belief to particular cases. A mathematical instance of another form of argument is the move from ‘x is less than 3’ and ‘y isn’t less than 3’ to ‘x is not y’, which involves the logical principle that things are identical only if they have the same properties. In everyday life and even in much of science, we pay little or no conscious attention to the role of logical words in our reasoning because they don’t express what we are interested in reasoning about. 
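The passage above describes, in effect, the truth-table test for validity: a form is valid when no way of filling in ‘A’ and ‘B’ makes the premises true and the conclusion false. As a rough illustration (mine, not the essay’s), a few lines of Python can run that test mechanically for disjunctive syllogism.

```python
from itertools import product

def valid(premises, conclusion):
    """A form is valid if no assignment makes every premise true while the conclusion is false."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

# Disjunctive syllogism: from 'A or B' and 'not A', infer 'B'.
premises = [lambda a, b: a or b,          # A or B
            lambda a, b: not a]           # not A
print(valid(premises, lambda a, b: b))    # True: no counterexample exists

# Swap the conclusion for 'A' and validity fails (A = False, B = True is a counterexample).
print(valid(premises, lambda a, b: a))    # False
```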
We care about where Maria is, not about disjunction, the logical operation expressed by ‘or’. But without those logical words, our reasoning would fall apart; swapping ‘some’ and ‘all’ turns many valid arguments into invalid ones. Logicians’ interests are the other way round; they care about how disjunction works, not where Maria is. Logic was already studied in the ancient world, in Greece, India and China. To recognise valid or invalid forms of argument in ordinary reasoning is hard. We must stand back, and abstract from the very things we usually find of most interest. But it can be done. That way, we can uncover the logical microstructure of complex arguments. For example, here are two arguments: ‘All politicians are criminals, and some criminals are liars, so some politicians are liars’; and ‘Some politicians are criminals, and all criminals are liars, so some politicians are liars.’ The conclusion follows logically from the premises in one of these arguments but not the other. Can you work out which is which? When one just looks at such ordinary cases, one can get the impression that logic has only a limited number of argument forms to deal with, so that once they have all been correctly classified as valid or as invalid, logic has completed its task, except for teaching its results to the next generation. Philosophers have sometimes fallen into that trap, thinking that logic had nothing left to discover. But it is now known that logic can never complete its task. Whatever problems logicians solve, there will always be new problems for them to tackle, which cannot be reduced to the problems already solved. To understand how logic emerged as this open-ended field for research, we need to look back at how its history has been intertwined with that of mathematics. The most sustained and successful tradition of logical reasoning in human history is mathematics. Its results are applied in the natural and social sciences too, so those sciences also ultimately depend on logic. The idea that a mathematical statement needs to be proved from first principles goes back at least to Euclid’s geometry. Although mathematicians typically care more about the mathematical pay-offs of their reasoning than its abstract structure, to reach those pay-offs they had to develop logical reasoning to unprecedented power. An example is the principle of reductio ad absurdum. This is what one uses in proving a result by supposing that it does not hold, and deriving a contradiction. For instance, to prove that there are infinitely many prime numbers, one starts by supposing the opposite, that there is a largest prime, and then derives contradictory consequences from that supposition. In a complex proof, one may have to make suppositions within suppositions within suppositions; keeping track of that elaborate dialectical structure requires a secure logical grasp of what is going on. As mathematics grew ever more abstract and general in the 19th century, logic developed accordingly. George Boole developed what is now called ‘Boolean algebra’, which is basically the logic of ‘and’, ‘or’ and ‘not’, but equally of the operations of intersection, union, and complementation on classes. 
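For readers who want to check their answer to the politicians-and-criminals puzzle posed above, here is a small model-searching sketch (an editorial illustration, not from the essay): it tries every way of assigning the three predicates to a tiny domain of individuals and reports whether any assignment makes the premises true and the conclusion false. Running it gives the answer away, so work the puzzle out first if you prefer.

```python
from itertools import product

DOMAIN = range(3)   # a domain of three individuals is enough to expose the flaw here

def some(p1, p2, model):
    return any(model[p1][x] and model[p2][x] for x in DOMAIN)

def every(p1, p2, model):
    return all((not model[p1][x]) or model[p2][x] for x in DOMAIN)

def has_counterexample(premises, conclusion):
    # Try every way of assigning Politician / Criminal / Liar to the individuals.
    for bits in product([False, True], repeat=3 * len(DOMAIN)):
        model = {"P": bits[0:3], "C": bits[3:6], "L": bits[6:9]}
        if all(p(model) for p in premises) and not conclusion(model):
            return True
    return False

arg1 = ([lambda m: every("P", "C", m), lambda m: some("C", "L", m)], lambda m: some("P", "L", m))
arg2 = ([lambda m: some("P", "C", m), lambda m: every("C", "L", m)], lambda m: some("P", "L", m))

for name, (prem, concl) in (("All P are C, some C are L, so some P are L", arg1),
                            ("Some P are C, all C are L, so some P are L", arg2)):
    verdict = "counterexample found (invalid)" if has_counterexample(prem, concl) else "no counterexample found here"
    print(f"{name}: {verdict}")
```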
It also turns out to model the building blocks for electronic circuits – AND gates, OR gates and NOT gates – and has played a fundamental role in the history of digital computing. Boolean logic has its limits. In particular, it doesn’t cover the logic of ‘some’ and ‘all’. Yet complex combinations of such words played an increasing role in rigorous mathematical definitions, for instance of what it means for a mathematical function to be ‘continuous’, and of what it means to be a ‘function’ anyway, issues that had led to confusion and inconsistency in early 19th-century mathematics. The later 19th century witnessed an increasing trend to rigorise mathematics by reducing it to logical constructions out of arithmetic, the theory of the natural numbers – those reached from 0 by repeatedly adding 1 – under operations like addition and multiplication. Then the mathematician Richard Dedekind showed how arithmetic itself could be reduced to the general theory of all sequences generated from a given starting point by repeatedly applying a given operation (0, 1, 2, 3, …). That theory is very close to logic. He imposed two constraints on the operation: first, it never outputs the same result for different inputs; second, it never outputs the original starting point. Given those constraints, the resulting sequence cannot loop back on itself, and so must be infinite. The trickiest part of Dedekind’s project was showing that there is even one such infinite sequence. He did not want to take the natural numbers for granted, since arithmetic was what he was trying to explain. Instead, he proposed the sequence whose starting point (in place of 0) was his own self and whose generating operation (in place of adding 1) constructed from any thinkable input the thought that he could think about that input. The reference in his proof to his own self and to thoughts about thinkability was unexpected, to say the least. It does not feel like regular mathematics. But could anyone else do better, to make arithmetic fully rigorous? A natural idea was to reduce arithmetic, and perhaps the rest of mathematics, to pure logic. Some partial reductions are easy. For example, take the equation 2 + 2 = 4. Applied to the physical world, it corresponds to arguments like this (about a bowl of fruit): there are exactly two apples; there are exactly two oranges; no apple is an orange; therefore, there are exactly four apples and oranges. Phrases like ‘exactly two’ can be translated into purely logical terms: ‘There are exactly two apples’ is equivalent to ‘There is an apple, and another apple, and no further apple.’ Once the whole argument has been translated into such terms, the conclusion can be rigorously deduced from the premises by purely logical reasoning. This procedure can be generalised to any arithmetical equation involving particular numerals like ‘2’ and ‘4’, even very large ones. Such simple applications of mathematics are reducible to logic. However, that easy reduction does not go far enough. Mathematics also involves generalisations, such as ‘If m and n are any natural numbers, then m + n = n + m’. The easy reduction cannot handle such generality. Some much more general method would be needed to reduce arithmetic to pure logic. A key contribution was made by Gottlob Frege, in work slightly earlier than Dedekind’s, though with a much lower profile at the time. 
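The bowl-of-fruit argument above can likewise be checked mechanically. The sketch below (my illustration of the point, not part of the historical story) brute-forces every possible small bowl and confirms that whenever the three premises hold, the conclusion holds too.

```python
from itertools import product

def exactly(n, flags):
    return sum(flags) == n

# Check the fruit-bowl argument over every possible small 'bowl':
# each of six objects may independently be an apple and/or an orange.
no_counterexample = True
for bowl in product([False, True], repeat=12):     # 6 objects x (apple?, orange?)
    apples  = bowl[0::2]
    oranges = bowl[1::2]
    premises = (exactly(2, apples) and exactly(2, oranges)
                and not any(a and o for a, o in zip(apples, oranges)))
    conclusion = exactly(4, [a or o for a, o in zip(apples, oranges)])
    if premises and not conclusion:
        no_counterexample = False
print("No counterexample found:", no_counterexample)   # True: the purely logical argument is valid
```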
Frege invented a radically new symbolic language in which to write logical proofs, and a system of formal deductive rules for it, so the correctness of any alleged proof in the system could be rigorously checked. His artificial language could express much more than any previous logical symbolism. For the first time, the structural complexity of definitions and theorems in advanced mathematics could be articulated in purely formal terms. Within this formal system, Frege showed how to understand natural numbers as abstractions from sets with equally many members. For example, the number 2 is what all sets with exactly two members have in common. Two sets have equally many members just when there is a one-one correspondence between their members. Actually, Frege talked about ‘concepts’ rather than ‘sets’, but the difference is not crucial for our purposes. Frege’s language for logic has turned out to be invaluable for philosophers and linguists as well as mathematicians. For instance, take the simple argument ‘Every horse is an animal, so every horse’s tail is an animal’s tail.’ It had been recognised as valid long before Frege, but Fregean logic was needed to analyse its underlying structure and properly explain its validity. Today, philosophers routinely use it for analysing much trickier arguments. Linguists use an approach that goes back to Frege to explain how the meaning of a complex sentence is determined by the meanings of its constituent words and how they are put together. Frege contributed more than anyone else to the attempted reduction of mathematics to logic. By the start of the 20th century, he seemed to have succeeded. Then a short note arrived from Bertrand Russell, pointing out a hidden inconsistency in the logical axioms from which Frege had reconstructed mathematics. The news could hardly have been worse. The contradiction is most easily explained in terms of sets, but its analogue in Fregean terms is equally fatal. To understand it, we need to take a step back. In mathematics, once it is clear what we mean by ‘triangle’, we can talk about the set of all triangles: its members are just the triangles. Similarly, since it is equally clear what we mean by ‘non-triangle’, we should be able to talk about the set of all non-triangles: its members are just the non-triangles. One difference between these two sets is that the set of all triangles is not a member of itself, since it is not a triangle, whereas the set of all non-triangles is a member of itself, since it is a non-triangle. More generally, whenever it is clear what we mean by ‘X’, there is the set of all Xs. This natural principle about sets is called ‘unrestricted comprehension’. Frege’s logic included an analogous principle. Since it is clear what we mean by ‘set that is not a member of itself’, we can substitute it for ‘X’ in the unrestricted comprehension principle. Thus, there is the set of all sets that are not members of themselves. Call that set ‘R’ (for ‘Russell’). Is R a member of itself? In other words, is R a set that is not a member of itself? Reflection quickly shows that if R is a member of itself, it isn’t, and if it isn’t, it is: an inconsistency! That contradiction is Russell’s paradox. It shows that something must be wrong with unrestricted comprehension. Although many sets are not members of themselves, there is no set of all sets that are not members of themselves. 
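Russell’s contradiction can be displayed in a couple of lines. The snippet below (an editorial illustration) simply checks that no truth value can equal its own negation, which is exactly what unrestricted comprehension would demand of the statement ‘R is a member of R’.

```python
# If unrestricted comprehension were sound, there would be a set R with:
#     (R is a member of R)  if and only if  not (R is a member of R).
# Whatever truth value 'R is a member of R' had, it would have to equal its own negation,
# and no truth value does:
for r_in_r in (True, False):
    print(f"'R in R' = {r_in_r}: equals 'not (R in R)'? {r_in_r == (not r_in_r)}")
# Neither line matches, so no set can satisfy the comprehension instance for
# 'set that is not a member of itself': Russell's contradiction in miniature.
```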
That raises the general question: when can we start talking about the set of all Xs? When is there a set of all Xs? The question matters for contemporary mathematics, because set theory is its standard framework. If we can never be sure whether there is a set for us to talk about, how are we to proceed? Logicians and mathematicians have explored many ways of restricting the comprehension principle enough to avoid contradictions but not so much as to hamper normal mathematical investigations. In their massive work Principia Mathematica (1910-13), Russell and Alfred North Whitehead imposed very tight restrictions to restore consistency, while still preserving enough mathematical power to carry through a variant of Frege’s project, reducing most of mathematics to their consistent logical system. However, it is too cumbersome to work in for normal mathematical purposes. Mathematicians now prefer a simpler and more powerful system, devised around the same time as Russell’s by Ernst Zermelo and later enhanced by Abraham Fraenkel. The underlying conception is called ‘iterative’, because the Zermelo-Fraenkel axioms describe how more and more sets are reached by iterating set-building operations. For example, given any set, there is the set of all its subsets, which is a bigger set. Set theory is classified as a branch of mathematical logic, not just of mathematics. That is apt for several reasons. First, the meanings of core logical words like ‘or’, ‘some’ and ‘is’ have a kind of abstract structural generality; in that way, the meanings of ‘set’ and ‘member of’ are similar. Second, much of set theory concerns logical questions of consistency and inconsistency. One of its greatest results is the independence of the continuum hypothesis (CH), which reveals a major limitation of current axioms and principles for logic and mathematics. CH is a natural conjecture about the relative sizes of different infinite sets, first proposed in 1878 by Georg Cantor, the founder of set theory. In 1938, Kurt Gödel showed that CH is consistent with standard set theory (assuming the latter is itself consistent). But in 1963 Paul Cohen showed that the negation of CH is also consistent with standard set theory (again, assuming the latter is consistent). Thus, if standard set theory is consistent, it can neither prove nor disprove CH; it is agnostic on the question. Some set theorists have searched for plausible new axioms to add to set theory to settle CH one way or the other, so far with little success. Even if they found one, the strengthened set theory would still be agnostic about some further hypotheses, and so on indefinitely. A working mathematician may use sets without worrying about the risk of inconsistency or checking whether their proofs can be carried out in standard set theory. Fortunately, they normally can. Those mathematicians are like people who live their lives without worrying about the law, but whose habits are in practice law-abiding. Although set theory is not the only conceivable framework in which to do mathematics, analogous issues arise for any alternative framework: restrictions will be needed to block analogues of Russell’s paradox, and its rigorous development will involve intricate questions of logic. 
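The power-set operation mentioned above (‘given any set, there is the set of all its subsets’) is easy to watch in action. A short sketch (mine, for illustration) iterates it a few times starting from the empty set, showing how the iterative conception keeps producing strictly bigger sets.

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

stage = frozenset()                         # start from the empty set
for step in range(4):
    print(f"stage {step}: {len(stage)} element(s)")
    stage = frozenset(power_set(stage))     # next stage: the set of all subsets of the previous one
# Sizes run 0, 1, 2, 4 (then 16, 65,536, ...): each power set is strictly bigger than the set
# it came from, one simple instance of how iterated set-building keeps generating new sets.
```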
By examining the relation between mathematical proof and formal logic, we can start to understand some deeper connections between logic and computer science: another way in which logic matters. Most proofs in mathematics are semi-formal; they are presented in a mix of mathematical and logical notation, diagrams, and English or another natural language. The underlying axioms and first principles are left unmentioned. Nevertheless, if competent mathematicians question a point in the proof, they challenge the author(s) to fill in the missing steps, until it is clear that the reasoning is legitimate. The assumption is that any sound proof can in principle be made fully formal and logically rigorous, although in practice full formalisation is hardly ever required, and might involve a proof thousands of pages long. A proof in a framework of formal logic is still the gold standard, even if you personally never see a bar of gold. The standard of formal proof is closely related to the checking of mathematical proofs by computer. An ordinary semi-formal proof cannot be mechanically checked as it stands, since the computer cannot assess the prose narrative holding the more formal pieces together (current AI would be insufficiently reliable). What is needed instead is an interactive process between the proof-checking program and human mathematicians: the program repeatedly asks the humans to clarify definitions and intermediate steps, until it can find a fully formal proof, or the humans find themselves at a loss. All this can take months. Even the finest mathematicians may use the interactive process to check the validity of a complicated semi-formal proof, because they know cases where a brilliant, utterly convincing proof strategy turned out to depend on a subtle mistake. Historically, connections between logic and computing go much deeper than that. In 1930, Gödel published a demonstration that there is a sound and complete proof system for a large part of logic, first-order logic. For many purposes, first-order logic is all one needs. The system is sound in the sense that any provable formula is valid (true in all models). The system is also complete in the sense that any valid formula is provable. In principle, the system provides an automatic way of listing all the valid formulas of the language, even though there are infinitely many, since all proofs in the system can be listed in order. Although the process is endless, any given valid formula will show up sooner or later (perhaps not in our lifetimes). That might seem to give us an automatic way of determining in principle whether any given formula is valid: just wait to see whether it turns up on the list. That works fine for valid formulas, but what about invalid ones? You sit there, waiting for the formula. But if it hasn’t shown up yet, how do you know whether it will show up later, or will never show up? The big open question was the Decision Problem: is there a general algorithm that, given any formula of the language, will tell you whether it is valid or not? Almost simultaneously in 1935-36, Alonzo Church in the US and Alan Turing in the UK showed that such an algorithm is impossible. To do that, they first had to think very hard and creatively about what exactly it is to be an algorithm, a purely mechanical way of solving a problem step by step that leaves no room for discretion or judgment. 
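The asymmetry between confirming validity and confirming invalidity is worth pausing on, because it is exactly the gap the Decision Problem asks us to close. Purely as an illustrative sketch (mine, not the essay’s), here is what the listing procedure licensed by Gödel’s completeness theorem looks like in Python; the parameters proofs and proves are placeholders for an endless enumeration of formal proofs and a mechanical proof-checker.

from typing import Callable, Iterable

def search_for_proof(formula: str,
                     proofs: Iterable[str],
                     proves: Callable[[str, str], bool]) -> bool:
    """Illustrative only: scan an endless enumeration of proofs for one that
    proves `formula`. Completeness guarantees success whenever `formula` is
    valid; if it is not valid, the loop never ends, and no point is ever
    reached at which we may safely answer 'no'."""
    for proof in proofs:              # proofs listed in order, without end
        if proves(proof, formula):    # mechanical check of a single proof
            return True
    return False                      # reached only if the enumeration runs out

The point of the sketch is the branch that is missing: nothing inside the loop ever licenses returning False, which is why listing all the valid formulas falls short of deciding validity, and why Church and Turing had to settle the question another way.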
To make it more concrete, Turing came up with a precise description of an imaginary kind of universal computing machine, which could in principle execute any algorithm. He proved that no such machine could meet the challenge of the Decision Problem. In effect, he had invented the computer (though at the time the word ‘computer’ was used for humans whose job was to do computations; one philosopher liked to point out that he had married a computer). A few years later, Turing helped design the machines used to break German codes during the Second World War, which made a major contribution to defeating German U-boats in the North Atlantic. The programs on your laptop are one practical answer to the question ‘Why does logic matter?’ Logic and computing have continued to interact since Turing. Programming languages are closely related in structure to logicians’ formal languages. A flourishing branch of logic is computational complexity theory, which studies not just whether there is an algorithm for a given class of problems, but how fast the algorithm can be, in terms of how many steps it involves as a function of the size of the input. If you look at a logic journal, you will see that the contributors typically come from a mix of academic disciplines – mathematics, computer science, and philosophy. Since logic is the ultimate go-to discipline for determining whether deductions are valid, one might expect basic logical principles to be indubitable or self-evident – so philosophers used to think. But in the past century, every principle of standard logic was rejected by some logician or other. The challenges were made on all sorts of grounds: paradoxes, infinity, vagueness, quantum mechanics, change, the open future, the obliterated past – you name it. Many alternative systems of logic were proposed. Contrary to prediction, alternative logicians are not crazy to the point of unintelligibility, but far more rational than the average conspiracy theorist; one can have rewarding arguments with them about the pros and cons of their alternative systems. There are genuine disagreements in logic, just as there are in every other science. That does not make logic useless, any more than it makes other sciences useless. It just makes the picture more complicated, which is what tends to happen when one looks closely at any bit of science. In practice, logicians agree about enough for massive progress to be made. Most alternative logicians insist that classical logic works well enough in ordinary cases. (In my view, all the objections to classical logic are unsound, but that is for another day.) What is characteristic of logic is not a special standard of certainty, but a special level of generality. Beyond its role in policing deductive arguments, logic discerns patterns in reality of the most abstract, structural kind. A trivial example is this: everything is self-identical. The various logical discoveries mentioned earlier reflect much deeper patterns. Contrary to what some philosophers claim, these patterns are not just linguistic conventions. We cannot make something not self-identical, however hard we try. We could mean something else by the word ‘identity’, but that would be like trying to defeat gravity by using the word ‘gravity’ to mean something else. Laws of logic are no more up to us than laws of physics.
Timothy Williamson
https://aeon.co//essays/more-than-argument-logic-is-the-very-structure-of-reality
https://images.aeonmedia…y=75&format=auto
Values and beliefs
The colourful Swiss sport of stone putting illuminates Aristotle’s insights into the shortcomings of conservative thought
Between my feet sits a 184 lb boulder. The rock has a slightly oblong, albeit uneven shape. It’s made out of granite from the Bernese Alps, and has the years 1805 and 1905 engraved into it – historic dates of the Unspunnenfest, a celebration of Swiss cultural traditions. A few hundred spectators are here to watch the early heats for stone put at the 2023 edition of the Unspunnen games. Only the best three athletes will advance to the televised finals in the games’ main stadium, which seats around 16,000 people. To make it there, I will need to throw near the 12 ft mark. I hold my breath and get into position. I lift the boulder above my head, find a comfortable position, and set my feet ready to start my run-up: I need to build up enough speed on the roughly 40 feet between me and the edge of the sandpit.
The author hurls the Unspunnen stone, hoping to reach the competition finals in Interlaken, Switzerland in 2023
The scale of the finals – the live television cameras, commentators, pundits and doping tests – clashes with the romanticised image of an Alpine life of pastoral idyll that the competition self-consciously preserves. Stone putters compete with historical boulders, and the games emphasise tradition and the pastoral imagery that goes along with it: athletes wear the traditional, light-blue shirts of Alpine dairymen, and compete for bells or ornately carved wooden furniture. They’re led into the arena by ‘ladies of honour’, also in traditional costume. The sport of stone put, then, seems to be first and foremost a tradition. The first Unspunnenfest was inaugurated in 1805 near Unspunnen Castle, as an explicit response to the instability that gripped Switzerland as part of the French revolutionary wars. Following the French occupation of Switzerland, the Bernese patriciate families (to whom all founding members belonged) were engaged in a prolonged renegotiation of their hegemony vis-à-vis the rural communities. Napoleon’s army had stayed in Switzerland until 1802, and under his reign the rural regions enjoyed unprecedented autonomy. When it left, these areas again fell under the government of the city of Bern: a state of affairs that had within it the potential for civil war and rebellion. The urban elites wanted to preserve the power structures of old, before the turmoil and ideas of the French Revolution upset the status quo. The founding fathers of the games tried to do this by emphasising the preservation of traditions. They made no outward mention of their true intention of bringing the rural communities back under their sway. Instead, they publicly invoked the reconciliatory power of tradition. In a newspaper announcement, the co-founder Franz Sigmund Wagner describes the ‘sole purpose’ of the games as that of ‘reviving and preserving the simple old traditions and joys of our forefathers [so as] to let ancient, mutual good will grow and bloom again.’ The patriciate embraced Alpine traditions because it was a way to sustain their conservative aristocracy. The Unspunnenfest marks one of the first occasions in Swiss history where genuinely old traditions were presented in a new context, with political goals, and under an explicitly conservative guise. The festival of 1805 thus betrays the hallmarks of what the British historian Eric Hobsbawm in 1983 called an ‘invented tradition’. Perhaps for this reason, the ostensible goal of the festival’s founders – reconciliation – failed.
The secret files of the Bern state council of the time reveal that the political opponents of the patricians saw through their deception. They contain a note from Friedrich Ludwig Thormann, another of the co-founders, according to whom a man from the town in which the Unspunnenfest was to be held had stockpiled gunpowder and ammunition. Johan Caspar Beugger, one of the champions of the inaugural games, was later arrested as a revolutionary in the 1814 uprisings. This is where the alliance between Alpine tradition, nostalgia and conservatism was first made: anything that didn’t fit the image of the Alpine traditions that the Unspunnenfest organisers had conjured up needed to be rejected. On 30 March 1804 (one year before the festival), Thormann accordingly vowed to ‘suppress everything’ that runs counter to and challenges the structures of the old. Things needed to be a certain way because they had always been that way. The ideas of the French Revolution brought dangerous innovations that – as Wagner writes in his personal notes – would spell ‘complete ruin [for] the old, venerable traditions of our Alpine people’. On this conservative view, a tradition is what needs to be preserved and handed down, unmixed with any alien intrusions. Tradition, therefore, stands in contrast to progress. But does conservatism really have a monopoly on tradition? Is tradition inevitably the property of nostalgia and the natural ally of those looking backwards, driven to preserve and keep things the way they were in the past? Can conservatism be uncoupled from tradition? The conservative concept of tradition finds its fullest and most influential philosophical expression in the work of the Anglo-Irish statesman and philosopher Edmund Burke. Like the Bernese patriciate, Burke reacted with disapproval to the French Revolution, as he explained in his Reflections on the Revolution in France (1790), a defining work of modern conservatism. It is here that he articulates the alliance of tradition with conservatism that remains popular today. Burke’s best argument in favour of tradition is based on its social utility: tradition provides a strong and perhaps even necessary basis for social solidarity. The dissolution of all tradition, he holds, will mean that nothing but force and the fear of punishment will hold society together. Burke accordingly equated the revolution in France with a self-consciously arbitrary and tyrannical rejection of tradition. As he wrote in a letter in 1791, what had taken place was ‘a revolt of innovation, and thereby the very elements of Society have been confounded and dissipated.’ Tradition shapes political structures and the social order, so adhering to it ensures the stability and ‘unchangeable constancy’ of the political system. A few years later, when the Reign of Terror descended upon France and thousands met their end on the guillotine, many accordingly saw Burke’s warnings as justified. It seemed that innovation and revolution, unmoored from the certainties of tradition, led to ruin and chaos. Burke defined ‘tradition’ as that which is an ‘inheritance from our forefathers’. This inheritance, which comprises civil institutions, ancestors, monuments and other cultural artefacts, instils in us a sense of dignity and nobility. We revere our traditional inheritance not because it has some intrinsic value, but because it is old: the mere fact of its age instils in us a natural sense of reverence.
This reverence also explains why we have to preserve it. Burke makes clear that preserving tradition is a matter of conserving it unmixed with any intrusions – as he puts it, we must take care to not ‘inoculate any scion alien to the nature of the original plant’. To graft a tree onto the stem of a different plant would be against nature and therefore destructive. While Burke is resistant to the idea of revolution and radical change, he does allow and embrace certain kinds of change: those that are based on tradition, and which are thus essentially modifications of traditions. Specifically, any reforms should be modelled on precedent, authority and example from our forefathers. Tradition allows us to orient ourselves with regard to the authorities of the past, and it is by relying on analogies with traditional ideas that we find solutions to current problems. In any given situation, one should first consult the forefathers and ask how they solved the problem. The kind of changes and developments, then, that Burke envisages are not progressive or innovative. They are fundamentally conservative. So, on the conservative way of thinking, tradition self-consciously stands in contrast to progress. In response to our question – whether conservatism has a monopoly on tradition – this dichotomy between tradition and progress is evidence that conservatism has successfully claimed tradition as its own. However, it is not evidence that their alliance is baked into the concept of tradition itself. We can see the latter point by going even further back in time to before this alliance became fortified, to Aristotle and his Sophistical Refutations. The Sophistical Refutations is the last book in the collection of Aristotle’s writings on logical analysis and dialectic. These are known as the ‘Organon’, a label that his students applied to this part of his corpus because these writings concern the instruments (organa) for knowledge and scientific enquiry. The concluding lines of the Sophistical Refutations provide Aristotle with an opportunity to reflect on the intellectual debts he owes to those who worked on these topics before him. In these lines, he articulates two concepts of tradition – one conservative and one progressive. Aristotle’s different concepts of tradition emphasise different aspects of what we today simply call ‘tradition’. One picks out the contents that are to be passed on (and which must be preserved). The other emphasises the action of passing something on for development or completion. Aristotle uses two different words for them: paradosis for the former, conservative kind, and epidosis for the latter, progressive concept. On this view, tradition and progress are not opposing forces. Aristotle is clear that a worthwhile contribution in any field will be part of a progressive tradition and not a conservative one. The first part of his case involves reflection on what a conservative tradition can and cannot accomplish. With its emphasis on the content that is passed on, the conservative concept of tradition envisages a model in which those at the receiving end of a tradition learn by rote what is passed down to them. Aristotle offers two examples of what conservative tradition looks like in practice. First, there is Gorgias, who ‘used to hand out rhetorical speeches to be learned by heart’.
Second, there are the teachers of eristic (ie, of arguments designed to dispute any and all arguments an interlocutor might produce, regardless of their truth), who ‘handed out speeches in the form of question and answer’ that, like Gorgias’ rhetorical arguments, are to be memorised by the recipients. The focus in both examples of the conservative tradition is on the contents that someone produces: they pass on the contents of their work to be preserved exactly as they were. The expectation of both Gorgias and of the teachers of eristic speech is that any problem or question could be adequately dealt with by consulting the memorised content. The consequence of this is that this concept of tradition warrants things staying a certain way: a certain kind of question will always receive the exact same answer. Tradition is a stock of wisdom that we inherit from our forefathers, consult in any situation, and on the precedent, authority and example of which we model our own solutions to problems. This is a view of tradition as inherently backwards looking: it is preoccupied with preserving things unchanged and unadulterated. Burke would wholeheartedly agree. But Aristotle recognises a danger of the conservative view. He compares this kind of tradition to passing down a collection of all the different kinds of shoes – rather than the art of shoemaking:
For they used to suppose that they trained people by imparting to them not the art but its products, as though anyone professing that he would impart a form of knowledge to obviate any pain in the feet were then not to teach a man the art of shoe-making … but were to present him with several kinds of shoes of all sorts.
This points to an important shortcoming of conservative tradition. The preoccupation with preserving specific contents deprives those at the receiving end of the ability to actually master the contents being passed down and make progress on them. Aristotle observes that, while a tradition that preserves and passes on specific contents to later generations has the advantage of being quick and relatively easy to implement, it also involves a pernicious kind of ignorance (atechnia). In particular, he suggests that, while those at the receiving end of a conservative tradition may be able to meet some of their needs by relying on the materials passed on to them, the very possession of these contents impedes their progress to actual competence and thus also their ability to make progress. It isn’t just that a conservative tradition involves a lack of insight, but that it equips those who rely on what is handed down or delivered to them with false views and ideas about a subject. Think of those who learned by heart the rhetorical arguments of Gorgias or the arguments in question-and-answer form from the teachers of eristic: even if they become very skilled at deploying these arguments, the very possession of that skill will hamper their ability to produce arguments of their own. By contrast, the progressive concept of tradition – which Aristotle himself endorses – emphasises not contents but the action of passing something on for development. This progressive concept of tradition fits into Aristotle’s broader optimism about human knowledge, which he believes to have a broadly teleological structure.
In this concept of tradition, the contents that are passed down to later generations are small in comparison with what will be developed from them later on, and the significance and importance of these traditional contents lies not in what they currently are but in the substantive later developments they facilitate:
[I]n the case of all discoveries the results of previous labours that have been handed down from others have been advanced bit by bit by those who have taken them on … As the saying is ‘the first start is the main part’; … for in proportion as it is most potent in its influence, so it is smallest in magnitude; … but when this is once discovered, it is easier to add and develop what remains.
Tradition is not the conservation of these small starting points in unchanged and unadulterated form but their development through passing them down to successive generations. The emphasis, in other words, in progressive tradition is not on the contents inherited and passed on, but on their development. Their power lies in their potential for progress: tradition makes investments in the future. However, given this focus on development, why think about this as ‘tradition’ at all? Aristotle thinks it is often easier to develop something further when a tradition (in this sense) exists than if we had to begin with a blank slate. Rhetoric ‘has attained considerable dimensions’ only because of this progressive kind of tradition (no thanks to Gorgias, Aristotle says). The ‘celebrities of today’ are in their position of prominence because, Aristotle says, they are ‘heirs (so to speak) of a long succession’ – ie, of a progressive tradition. By contrast, Aristotle considers himself to be making the very first contributions to dialectic. His achievement, as he sees it, is that he began with a blank slate and for the first time put something in place that can now be developed. Aristotle realises that it will strike some as implausible to claim that nothing at all existed on this topic before his own work: the eristics, one could argue, had already long been practising dialectic by the time the Sophistical Refutations were written. Aristotle rebuffs this objection by explaining that this earlier work is part of a tradition of the conservative type, that is, the tradition that emphasises the preservation of specific contents that are handed down and not the passing on of something for development and completion. Aristotle makes available to us the idea that the alliance of tradition and conservatism is not part of the concept of tradition itself. The progressive view of tradition he outlines is not just compatible with progress – the issue is not about how to reconcile the opposing forces of two sides of a dichotomy – but tradition is itself a way of making progress. Moreover, what Aristotle has to say about the conservative concept of tradition in the context of the eristics, Gorgias and the passing on of footwear suggests that the alliance between tradition and conservatism not only fails to be inevitable but is in fact an uneasy pairing to begin with. Which brings us back to the Alpine festival. Stone put is a tradition that consists in a sport.
Historically, this claim has been parsed through the lens of a conservative understanding of tradition and taken to express two irreconcilable ideas: as a conservative tradition, stone put is to be preserved unchanged; but, as a sport, it is about improving (both on oneself and on others) and so beholden to what athletic progress demands, whether that is better equipment, training methods or venues. The dichotomy between conservative tradition and sport is so sharp that a schism emerged in the 1980s: conservative traditionalists argued that athletes treating the Alpine games as a sport – which meant that they systematically trained and prepared for the competitions – violated the spirit of the festival, and led to them having an unfair advantage. However, following Aristotle, we can dispense with this old dichotomy, and think of stone put itself as a progressive tradition. Stone put should be thought of as a tradition in the sense that it passes its contents forward for improvement and ultimately perfection. What is passed forward can in principle include both what is central to the sport – its rules, records, equipment, techniques, as well as the implements with which the athletes compete – and what is incidental to it (such as the nature of the prizes, the specifications of the dress code, etc). The relevant criterion for whether something comes within the reach of a progressive tradition is only whether the concept of tradition emphasises the act of passing it forward for development or the preservation of specific content in unadulterated form. Of course, one can alternatively join the founding fathers of the festival and think of stone put in a nostalgic way, with the conservative concept of tradition that emphasises the preservation of particular contents. This lends itself to invented traditions, with their idealisation, romanticisation and fictionalisation of the past (which is where, historically speaking, the blue shirts of dairymen for athletes, bell prizes, historic costumes, ‘ladies of honour’ etc have their origin). The reason for this – as we saw with the patrician fathers of the original festival – is that what matters to those who define conservative traditions is not whether these traditions are invented or genuine (or whether successive generations can in fact successfully preserve what is ostensibly old and unadulterated) but whether these traditions accomplish for them what they intend them to accomplish in the present. By contrast, in a progressive tradition of stone put, the bells, historic costumes etc will not be part of the tradition of stone put at all. The athletic activity itself constitutes the tradition: throwers pass on their techniques, records and insights about training to later generations. The very nature of sport and athletic competition requires that those things are passed forward for successive perfection (rather than preserved unchanged). Moreover, due to one of its most distinctive features – the use of the same stone in competition – the concept of tradition that emphasises the act of passing things forward is not merely extrinsic to stone put but part of the athletic activity itself: since the festival aims to retain the same exact stone for competition, the history of the sport is saliently involved in the athletic act itself.
From the moment the stone putter uses the dates engraved in it to position the rock for the initial overhead clean, they know that this stone has been used in competition by many athletes before them and will be used by many more after them. Projecting the same stone – a reified metaphor for the enduring – into the sandpit as part of an athletic competition, they know that they are developing something and also laying the starting points for further development by later generations of athletes. We need the forward-looking concept of tradition to make sense of the sport. Even here, however, stone put is not a tradition in the sense that it seeks to preserve a specific item or content. This is illustrated by stone put’s reaction to one of the more bizarre aspects of its history. In 1984, the Unspunnen stone was stolen by a group of four Jura separatists (who seek complete independence of the Jura region from the canton of Bern). Those who took the stone – as well as those who complained most vocally about its disappearance – tended to argue their case in terms of the stone’s role as a relic of what I have been calling the conservative concept of tradition: a tradition, so it seemed to them, that had been disrupted. When the stone re-appeared in 2001, the separatists had made a series of further engravings in it. As a result, the stone had lost around 4.5 lbs in weight and, due to the position of the new engravings, was no longer the same to handle (opinions differ on whether it has become harder or easier to hold). In short, the stone was no longer usable for competition. Interestingly, however, this left the discipline of stone put largely unimpressed. The keepers of the stone in Interlaken immediately commissioned a replica, and when, once more, that replica disappeared in 2005, a long-time great of the sport went as far as saying that he was glad the stone was gone again, hopefully for good this time. The replica, he added, was much nicer as a sporting implement anyway. What happened here can be explained in terms of the two different concepts of tradition gripping past each other: the disappearance concerns the conservative tradition, but the sport of stone put ultimately embodies a progressive one. Conservatism, then, does not have a monopoly on the kind of tradition my fellow stone putters and I are engaging in on this day at the 2023 Unspunnen games. Uncoupling tradition from conservatism in the way that the progressive concept of tradition proposes to do promises a remedy against an anxiety that defines our time: how can we make progress without thereby becoming disconnected from where we began and without thereby growing fearful of the process itself? Aristotle’s concept of tradition makes available a model for how traditions survive the progress they are instrumental in facilitating, and how their content changes in the process. A progressive tradition gives us the tools to balance our own achievements and the achievements of our time on the one hand, and the value and authority of tradition on the other. In doing so, it enables us to constructively criticise the processes by means of which political and social progress is made, without damaging trust in them. As I reach the edge of the sandpit – boulder balanced overhead – I plant my left foot, jump and push up and against the 184 lbs to let it fly. I land on my right foot with my arms outstretched to keep my balance (overstepping or stepping onto the beam renders the attempt invalid). 
The rock lands with a thud, and the officials approach with a stick and measuring tape: 11.98 ft puts me fourth overall. This is, narrowly, not enough to take me to the finals tomorrow; but it is certainly a starting point.
Daniel Kranzelbinder
https://aeon.co//essays/how-swiss-stone-putting-shows-traditions-can-be-progressive
https://images.aeonmedia…y=75&format=auto
Cities
In Nanjing, Hong Kong and other Chinese cities, rapid urbanisation is multiplying a fear of death and belief in ghosts
On the 11th floor of a suburban Hong Kong tower, an 86-year-old woman lived alone in a tiny, decrepit apartment. Her family rarely visited. Her daughter had married a man in Macau and now lived there with him and their two children. Her son had passed away years earlier, and his only child now attended a university in England. One September evening, the old woman fell and broke her hip while trying to change a lightbulb. She couldn’t move, and no one heard her crying for help. Over the next two days, she slowly died from dehydration. It took an additional three days for the neighbours to call the authorities – three days for the stench to become truly unbearable. The police removed the body and notified the family. A small funeral was held. A few weeks later, the landlord had the apartment thoroughly cleaned and tried to rent it out again. Since the old woman’s death was not classed as a murder or suicide, the apartment was not placed on any of Hong Kong’s online lists of haunted dwellings. To attract a new tenant, the landlord reduced the rent slightly, and the discount was enough to attract a university student named Daili, who had just arrived from mainland China. On the first night that Daili slept in the apartment, she saw the blurry face of an old woman in a dream. She thought little of it and busied herself the next morning by buying some plants to put on the apartment’s covered balcony. She hung a pot of begonias from a hook drilled into the bottom of the balcony above. The next night, Daili saw the woman again. And so it went every night, with the old woman’s face becoming more detailed in each new dream. Sometimes the woman would speak to her, asking her to visit: Why don’t you come by? Where are you? How long until you come again? As the dreams persisted, Daili had trouble sleeping. Sometimes, rather than lying awake, she would go to the balcony to water her plants or look at the Moon. One night, the dreams were particularly vivid, but even after Daili woke up and went to the balcony, the woman’s voice didn’t stop. Come visit me. Where are you? Daili climbed a small step ladder to water her begonias at the edge of the balcony. I’m lonely. You never stop by. Daili poured some water into the flowerpot. I need your help, now! ‘OK,’ Daili replied. She looked out over the edge of the balcony, jumped from the stepladder, and fell 11 floors to her death. The police ruled the death a suicide, and the apartment was listed on the city’s online registers for haunted apartments. The landlord had no choice but to discount the rent by 30 per cent – and wait for a tenant who did not believe in ghosts. When a university student in Hong Kong first sent me this story, which I have translated from Chinese and slightly modified, I knew it wasn’t true. Many similar fictional tales of ghosts, hauntings and unnatural deaths can be found online. Though these stories are not factual reports, I have found they reflect the experiences and anxieties of many who live in urban China: elderly parents left without family at the end of their lives; ghosts harming strangers (even leading them to take their own life); a pervasive fear of death; and a strengthening relationship between a fear of ghosts and the real estate market. This may appear counterintuitive. In the official view, a belief in ghosts is mere superstition, a vestige of a traditional agricultural society that has been left behind in the name of progress. 
There is an assumption that people in cities should be less superstitious than their rural neighbours. But ghostly beliefs are integral to the experience of urban living and rapid urbanisation. Though a fear of ghosts may have a long history in China, I suspect that such beliefs both transform and deepen during the process of urbanisation. And, in turn, these fears are altering social life and urban space as they become tangled up with the remembrance and repression of the dead. Belief in ghosts takes an ambiguous form in contemporary urban China. Though not everyone admits to believing in them, almost everyone I spent time with during decades of ethnographic research in Nanjing, Shanghai, Jinan and Hong Kong has acted in ways that implied that ghosts exist. These people took special precautions when visiting cemeteries and funeral homes; they indicated that abandoned buildings felt haunted; they avoided talking about or having any association with death, including not renting or purchasing apartments that might be, in their words, ‘haunted’. I have been conducting anthropological research in China since the late 1980s. Back then, I lived in a rural area of Shandong province, at a time when few non-Chinese had the opportunity to live in a Chinese village. I came to Shandong province to investigate patterns of social interactions among village families, and it was here that I was first exposed to rural funeral practices, which are relatively similar across China. After someone dies, the deceased’s body is typically kept at home in a coffin – sometimes made from cedar, now often refrigerated – for a few days between the death and the funeral. People come by and pay their respects to the body, give a gift, and offer condolences to the family. The funeral itself is organised and conducted by familial elders. After the funeral, the body is either buried intact on village land or first cremated and then buried. But in all my time in rural China, I never heard anyone complain that their neighbour might be keeping a dead body at home. I never heard anyone say that the fields where they worked – and where their relatives were buried – were ‘haunted’. I assumed funerals and beliefs about the dead would be similar in the cities. But I didn’t really know much about urban funerary practices. In the years after living in Shandong province, I had attended only a few urban memorial services for friends and relatives (my wife is from the city of Nanjing). All of that changed when I began a research project on funerals in Chinese cities. In 2013, I began interviewing people who worked in China’s urban funerary sector and visited funeral homes and cemeteries in many Chinese cities, with a particular focus on Nanjing and Hong Kong. I found that funerary practice in urban China differed considerably from that in rural locales. In general, people in rural areas appeared less afraid of death, dead bodies and places of burial than people living in cities. As soon as a dead body is discovered in Nanjing, Shanghai and Hong Kong, it is removed from the home or hospital room and taken either to a hospital morgue or a funeral home. The funeral is organised and conducted by industry professionals rather than family members.
After the funeral, the body is cremated and the ashes are buried in a cemetery or a columbarium located far from the city centre – in Shanghai, it took me more than two hours by public transport to reach the popular cemetery, Fu Shou Yuan. When I described rural funeral practices to people in large Chinese cities – where everyone lives in apartment buildings – most found such practices distasteful. One man I interviewed in Nanjing was particularly disgusted by the idea of keeping a body in an apartment, even if it was kept in a refrigerated casket with no smell. Such a practice, he said, would bring bad luck and disrepute to the people who lived in the same building. And besides, he added, it would be illegal to keep a dead body in an apartment building. Indeed, when I asked government officials in Nanjing and Hong Kong about such a law, they confirmed that anyone who discovered a dead body in a home setting was required to notify the local government immediately and that the government would organise the removal of the body as soon as possible. In China’s largest cities, even practices that announce death publicly have been outlawed. Some of my students at the Chinese University of Hong Kong who came from small cities in central China told me about funerals in their hometowns where tents for attendees were set up outside apartment blocks. But such tents are no longer permitted in large cities such as Shanghai or Tianjin. In Nanjing, I have seen home altars in family apartments with pictures of the deceased as a replacement for keeping a body at home. Friends and relatives could visit these home altars and pay their respects. But because of the steady stream of guests coming and going, and the visible placement of symbols related to death on the apartment’s front door, other residents would often become aware that someone in the building had passed away. In Nanjing, though some people set up home altars, others said they found the practice distasteful. As one woman told me during an interview: ‘How dare a family be rude enough to announce that something as inauspicious as a death had happened in their apartment building!’ In the largest cities I visited, including Shanghai and Beijing, I was repeatedly told that no one set up home altars. In Tianjin, a city of about 15 million, I saw an official billboard explaining that it was now illegal to set up a home altar in an apartment. If neighbours notified the government that a resident had set one up, a large fine would be imposed. It seems that the larger the city, the more likely it is that neighbours will not want to know about a death in their apartment block, and the more likely it is that practices announcing death will become illegal. While interviewing funerary professionals in the 2010s, I learned that urban distaste for announcing death was matched by a cautious attitude towards visiting places associated with death. Urban funerary professionals often told families how to counter the ghostly energy, considered ‘yin’ in the yin/yang dichotomy, that permeates places like funeral homes and cemeteries. This yin energy can be countered with yang activities, including drinking warm, sugary liquids, going to places that brim with people, or performing a fire-stepping ritual. In Shanghai and other cities, places for stepping over fire are built into the exits of funeral homes.
After observing a funeral in Nanjing, I watched a funeral professional light a small grass fire on a metal platform they had set up in the parking lot. The mourners all stepped over the fire before leaving to absorb yang energy and counter the yin that comes from spending time around the dead. I never saw such a ritual at a rural funeral. People in China’s major cities, it seems, fear dead bodies and burial places more than people in rural areas. Even the thought of a death occurring in their neighbour’s apartment bothers them. Rapid urbanisation seems to intensify a fear of death. And this fear eventually leads to the removal of death-related infrastructure from urban areas. Throughout China, cemeteries and funeral homes are constantly relocated away from city centres. The rapid expansion of cities and their borders has necessitated the repeated relocation of state-run funeral homes and crematoria, requiring many cemeteries to be dug up. When I asked one Nanjing official why, he said:
People are still afraid of ghosts. The value of real estate near cemeteries and funeral homes is always lower than in the central districts. So, to protect the value of its real estate, the municipal government always attempts to keep funeral homes located far from the city centre.
I once told an official in Nanjing’s Office of Funerary Regulation about an American relative of mine whose ashes were scattered in his favourite park. The official replied:
[W]e cannot allow people to dispose of their parent’s ashes in public parks. People fear ghosts. People would not like Nanjing’s parks if they thought they had ghosts, so it is illegal to scatter cremated remains there, even if they do not pollute the environment and are indistinguishable from the rest of the dirt.
In Hong Kong, fears of the dead echo those on the mainland. This fear even impacts the operation of funeral parlours (places where funerary rituals may be conducted). As of 2022, Hong Kong has only seven licensed funeral parlours and approximately 120 licensed undertakers, who help with funerary arrangements but do not have the facilities to conduct funerals. Only those undertakers who started their businesses before the current regulatory regime began in the 2000s can openly advertise the nature of their business, display coffins in their shops and store cremated remains. These businesses have what are called type-A undertaking licences. Those with type-B licences cannot store cremated remains or display coffins in their stores if any other business or homeowners in their vicinity object. Those with type-C licences are even further restricted: they may not use the word ‘funerary’ in the signs displayed publicly in front of their stores. The logic here is the same as described by people I spoke with in Nanjing: if a neighbour fears death or dead bodies, or fears that other people’s fear could affect the value of their business or property, then they have the right to restrict the activities of an undertaker. In practice, this means that the business activities of all proprietors with type-B and type-C licences are affected. Currently, most of the undertakers with type-A licences are in the Hung Hom residential district of Hong Kong, and many apartments there have a window from which one can see an undertaker’s shop (and shop sign). These apartments rent for less than those without such a view.
In Hong Kong, as the story of Daili’s suicide reminds us, there are online resources one can use to locate ‘haunted dwellings’ where unusual deaths have occurred. These apartments also sell and rent for discounted prices. So why are modern Chinese urbanites so afraid of ghosts? Four factors seem important: the separation of life from death in cities, the rise of a ‘stranger’ society and economy, the simultaneous idealisation and shrinking of families, and an increasing number of abandoned or derelict buildings. What is important to note here is that all four factors are products of urbanisation itself. Urbanisation makes ghosts. There is also a fifth point, which is distinct from these other factors but still compounds the haunting of modern China: a politics of repression. The first factor is the increasing separation of life from death. People in cities don’t usually die at home. Instead, they die in hospitals, where staff do all they can to hide dead bodies. Even in cases when someone doesn’t die at a hospital, dead bodies are quickly taken to be stored at funeral parlours. The result is that many people in China’s major cities have never seen a dead body. This separation only increases as cemeteries and funeral homes are moved further and further away from city centres. The less people experience death, the more fearsome it becomes. For many, just mentioning death is inauspicious. More important, I believe, is the second factor: the rise of a ‘stranger’ society and economy. In village settings, relatives are buried together in the same vicinity, but in urban cemeteries strangers are buried side by side – a situation comparable to large apartment buildings where neighbours may not know one another. In urban China at least, the concept of ghost (鬼, Romanised as ‘gui’, refers to malevolent spirits of many types, but also, perhaps metaphorically, to malevolent people or even animals) is directly related to the notion of the ‘stranger’. Kin become ancestors; strangers become ghosts. Ghosts can do evil and must be feared. In the opening story, the ghost of the old lady leads Daili to suicide. In burial ceremonies at urban cemeteries in Nanjing, funeral practitioners often introduce the newly buried to their ‘neighbours’ in the hopes that the spirits next door will not act like ghosts. Urban economies are economies of strangers. In cities, we purchase goods and services from those we do not know and hope these strangers treat us fairly. Most crucially, funerals in Chinese cities are arranged and run by strangers. These strangers handle the bodies at the funeral homes and crematoria, and they work in hospital morgues and cemeteries (or at stalls outside, selling flowers and funerary paraphernalia). As in many places around the world, workers in this sector are stigmatised. They have trouble finding marital partners and often marry each other. They avoid shaking hands with their customers. They lie about their occupation to strangers and tell their children to do the same if anyone asks about their parents’ line of work. This stigmatisation of funerary sector workers is related to the fear of ghosts in cemeteries. A comparison with how sex work is viewed in China is illustrative: a woman who has sex with her husband is seen as an upstanding citizen, but a woman who has sex with strangers for money is seen as a polluted, polluting figure. 
Likewise, a person who helps with the funeral of a relative in a village is a moral person, but those in cities who help with the funerals of strangers for money are to be shunned. Burying ancestors is an act of filial duty; burying strangers and dealing with their ghostly yin energy exposes one to spiritual pollution. Because they are stigmatised, both sex work and funerary work can be relatively lucrative forms of employment. Both sectors cross uncomfortable lines between domains of the familial and the monetised economies of strangers. Related to this idea is the third factor behind the fear of ghosts in urban China: the idealisation of family. As China urbanises and modernises, not only does contact with strangers become more prevalent, but the size of families and households also shrinks. Rather than a person’s entire social world being composed of relatives of varying degrees of distance, the social universe of urbanites is composed of a few close relatives and a larger society of strangers and acquaintances. As families shrink, the contrast between kin and non-kin becomes more critical. Family becomes an idealised site of moral interaction; the world of strangers is where one might face exploitation, robbery and treachery. But if a family shrinks too much, a person might become completely isolated, and end up a ghost – like the old woman in the story. As China urbanises, its ideas about ghosts are transforming. Only in urban and urbanising China are ghosts equated with strangers. In traditional, rural Chinese society, ghosts were often thought of as relatives or kin who had been mistreated in life and not given a proper burial. The whole purpose of a funeral was to make sure that a dead relative became an ancestor instead of a ghost. When a person’s social universe is composed almost entirely of family, then both good and evil must be located within the family. In urban settings, these can be separated: family can be imagined as purely good, while evil is located in strangers. A century ago in rural China, dead infants, toddlers and young children were not given any funerals at all. Their bodies were tossed into ditches for animals to consume. They were thought of as evil spirits, ghosts of a sort, that had invaded a woman’s womb and would return again if given a proper burial. But in contemporary urban China, losing a child is one of the most painful things imaginable. Dead children can receive elaborate funerals and children’s tombs are often the most ornate of all. Dead children represent only the love of their families and are never associated with evil. Family is sacred; strangers (and their ghosts) are dangerous. The fourth factor related to the haunting of Chinese cities is the existence of abandoned buildings, neighbourhoods and factories. These places once brimmed with life, but because they are slated for urban renewal, residents and workers have been forced out. Empty and often derelict, they remind the people who are left behind (or who live nearby) of the loss of communities or ways of life. Areas targeted for renewal include rural areas but also previously urbanised locations, especially those not so intensely built up. After redevelopment, these areas become new districts that rise higher and are more densely populated. The communities affected by these projects may have protested, or attempted to protest, but in China such protests are often quickly suppressed.
Ghosts are not only strangers, but also someone or something that should not be remembered, at least in the eyes of an authority figure. Since memories of them are repressed, these spirits must actively haunt the living to receive recognition. The political repression of these memories, especially prevalent in China, makes them only more spectral. This leads to the final point: the ways that a fear of ghosts is connected to a broader politics of memory and fear. Projects of urban renewal are simply one of many occasions that could potentially lead to anti-government protests, and, in the eyes of the government, all such resistance must be suppressed. The current Communist Party regime in China imagines its spirit must live forever; all other spirits are ghostly enemies, strangers, to be banished. From this perspective, the ghosts from the party’s now-repudiated past – the Great Leap Forward, the Cultural Revolution or the Tiananmen Square massacre – must never be mentioned again. But I believe that the totalitarian impulse of the Communist Party regime to banish all spirits other than that of the party itself can only increase the haunting of urban China. We must learn to live with our ghosts rather than repress them. In urban China, a fear of ghosts is not a rapidly disappearing vestige of a traditional rural past. It is produced in the process of urbanisation itself and politically amplified. The separation of life from death, the rise of stranger sociality, the idealisation of family and its separation from the wider society, the constant disruption of communities of living and production, and the repression of their memory all contribute to the haunting of urban China. It is a haunting narrated through ghost stories of all kinds, which tell of spirits arising from familial abandonment, the destruction of urban areas in the process of urban renewal, and wrongful deaths caused by strangers, like that of Daili and the old woman. These narratives tell us both that the demise of extended families and communities increases the chances that we will die alone, and that, as we depend more and more upon strangers in all aspects of our lives, we become more vulnerable to harm. In China’s cities, cemeteries and funeral homes are visited only when necessary and dead bodies are rarely, if ever, seen. Yet death still forces its way into our personal space. Its sudden and unwelcome appearance makes it only more spectral. As our urban lives increasingly involve interactions with strangers, with people or beings whose comings and goings are complete mysteries, more and more ghosts haunt our cities. As urban neighbourhoods are razed and rebuilt again and again, as urban economies are restructured and disrupted over and over, as the pace of societal change increases and political repression continues, the memories that haunt us will only multiply.
Andrew Kipnis
https://aeon.co//essays/rapid-urbanisation-is-stoking-paranormal-anxieties-in-china
https://images.aeonmedia…y=75&format=auto
Love and friendship
You might have the unconditional love of family and friends and yet feel deep loneliness. Can philosophy explain why?
Although one of the loneliest moments of my life happened more than 15 years ago, I still remember its uniquely painful sting. I had just arrived back home from a study abroad semester in Italy. During my stay in Florence, my Italian had advanced to the point where I was dreaming in the language. I had also developed intellectual interests in Italian futurism, Dada, and Russian absurdism – interests not entirely deriving from a crush on the professor who taught a course on those topics – as well as the love sonnets of Dante and Petrarch (conceivably also related to that crush). I left my semester abroad feeling as many students likely do: transformed not only intellectually but emotionally. My picture of the world was complicated, my very experience of that world richer, more nuanced. After that semester, I returned home to a small working-class town in New Jersey. Home proper was my boyfriend’s parents’ home, which was in the process of foreclosure but not yet taken by the bank. Both parents had left to live elsewhere, and they graciously allowed me to stay there with my boyfriend, his sister and her boyfriend during college breaks. While on break from school, I spent most of my time with these de facto roommates and a handful of my dearest childhood friends. When I returned from Italy, there was so much I wanted to share with them. I wanted to talk to my boyfriend about how aesthetically interesting but intellectually dull I found Italian futurism; I wanted to communicate to my closest friends how deeply those Italian love sonnets moved me, how Bob Dylan so wonderfully captured their power. (‘And every one of them words rang true/and glowed like burning coal/Pouring off of every page/like it was written in my soul …’) In addition to a strongly felt need to share specific parts of my intellectual and emotional lives that had become so central to my self-understanding, I also experienced a dramatically increased need to engage intellectually, as well as an acute need for my emotional life in all its depth and richness – for my whole being, this new being – to be appreciated. When I returned home, I felt not only unable to engage with others in ways that met my newly developed needs, but also unrecognised for who I had become since I left. And I felt deeply, painfully lonely. This experience is not uncommon for study-abroad students. Even when one has a caring and supportive network of relationships, one will often experience ‘reverse culture shock’ – what the psychologist Kevin Gaw describes as a ‘process of readjusting, reacculturating, and reassimilating into one’s own home culture after living in a different culture for a significant period of time’ – and feelings of loneliness are characteristic for individuals in the throes of this process. 
But there are many other familiar life experiences that provoke feelings of loneliness, even if the individuals undergoing those experiences have loving friends and family: the student who comes home to his family and friends after a transformative first year at college; the adolescent who returns home to her loving but repressed parents after a sexual awakening at summer camp; the first-generation woman of colour in graduate school who feels cared for but also perpetually ‘in-between’ worlds, misunderstood and not fully seen either by her department members or her family and friends back home; the travel nurse who returns home to her partner and friends after an especially meaningful (or perhaps especially psychologically taxing) work assignment; the man who goes through a difficult breakup with a long-term, live-in partner; the woman who is the first in her group of friends to become a parent; the list goes on. Nor does it take a transformative life event to provoke feelings of loneliness. As time passes, it often happens that friends and family who used to understand us quite well eventually fail to understand us as they once did, failing to really see us as they used to before. This, too, will tend to lead to feelings of loneliness – though the loneliness may creep in more gradually, more surreptitiously. Loneliness, it seems, is an existential hazard, something to which human beings are always vulnerable – and not just when they are alone. In his recent book Life Is Hard (2022), the philosopher Kieran Setiya characterises loneliness as the ‘pain of social disconnection’. There, he argues for the importance of attending to the nature of loneliness – both why it hurts and what ‘that pain tell[s] us about how to live’ – especially given the contemporary prevalence of loneliness. He rightly notes that loneliness is not just a matter of being isolated from others entirely, since one can be lonely even in a room full of people. Additionally, he notes that, since the negative psychological and physiological effects of loneliness ‘seem to depend on the subjective experience of being lonely’, effectively combatting loneliness requires us to identify the origin of this subjective experience. Setiya’s proposal is that we are ‘social animals with social needs’ that crucially include needs to be loved and to have our basic worth recognised. When we fail to have these basic needs met, as we do when we are apart from our friends, we suffer loneliness. Without the presence of friends to assure us that we matter, we experience the painful ‘sensation of hollowness, of a hole in oneself that used to be filled and now is not’. This is loneliness in its most elemental form. (Setiya uses the term ‘friends’ broadly, to include close family and romantic partners, and I follow his usage here.) Imagine a woman who lands a job requiring a long-distance move to an area where she knows no one. Even if there are plenty of new neighbours and colleagues to greet her upon her arrival, Setiya’s claim is that she will tend to experience feelings of loneliness, since she does not yet have close, loving relationships with these people. In other words, she will tend to experience feelings of loneliness because she does not yet have friends whose love of her reflects back to her the basic value as a person that she has, friends who let her see that she matters. 
Only when she makes genuine friendships will she feel her unconditional value is acknowledged; only then will her basic social needs to be loved and recognised be met. Once she feels she truly matters to someone, in Setiya’s view, her loneliness will abate. Setiya is not alone in connecting feelings of loneliness to a lack of basic recognition. In The Origins of Totalitarianism (1951), for example, Hannah Arendt also defines loneliness as a feeling that results when one’s human dignity or unconditional worth as a person fails to be recognised and affirmed, a feeling that results when this, one of the ‘basic requirements of the human condition’, fails to be met. These accounts get a good deal about loneliness right. But they miss something as well. On these views, loving friendships allow us to avoid loneliness because the loving friend provides a form of recognition we require as social beings. Without loving friendships, or when we are apart from our friends, we are unable to secure this recognition. So we become lonely. But notice that the feature affirmed by the friend here – my unconditional value – is radically depersonalised. The property the friend recognises and affirms in me is the same property she recognises and affirms in her other friendships. Otherwise put, the recognition that allegedly mitigates loneliness in Setiya’s view is the friend’s recognition of an impersonal, abstract feature of oneself, a quality one shares with every other human being: her unconditional worth as a human being. (The recognition given by the loving friend is that I ‘[matter] … just like everyone else.’) Since my dignity or worth is disconnected from any particular feature of myself as an individual, however, my friend can recognise and affirm that worth without acknowledging or engaging my particular needs, specific values and so on. If Setiya is calling it right, then that friend can assuage my loneliness without engaging my individuality. Or can they? Accounts that tie loneliness to a failure of basic recognition (and the alleviation of loneliness to love and acknowledgement of one’s dignity) may be right about the origin of certain forms of loneliness. But it seems to me that this is far from the whole picture, and that accounts like these fail to explain a wide variety of familiar circumstances in which loneliness arises. When I came home from my study-abroad semester, I returned to a network of robust, loving friendships. I was surrounded daily by a steadfast group of people who persistently acknowledged and affirmed my unconditional value as a person, putting up with my obnoxious pretension (so it must have seemed) and accepting me even though I was alien in crucial ways to the friend they knew before. Yet I still suffered loneliness. In fact, while I had more close friendships than ever before – and was as close with friends and family members as I had ever been – I was lonelier than ever. And this is also true of the familiar scenarios from above: the first-year college student, the new parent, the travel nurse, and so on. All these scenarios are ripe for painful feelings of loneliness even though the individuals undergoing such experiences have a loving network of friends, family and colleagues who support them and recognise their unconditional value. So, there must be more to loneliness than Setiya’s account (and others like it) let on. 
Of course, if an individual’s worth goes unrecognised, she will feel awfully lonely. But just as one can feel lonely in a room full of strangers, one can feel lonely in a room full of friends. What plagues accounts that tie loneliness to an absence of basic recognition is that they fail to do justice to loneliness as a feeling that pops up not only when one lacks sufficiently loving, affirmative relationships, but also when one perceives that the relationships she has (including and perhaps especially loving relationships) lack sufficient quality (for example, lacking depth or a desired feeling of connection). And an individual will perceive such relationships as lacking sufficient quality when her friends and family are not meeting the specific needs she has, or recognising and affirming her as the particular individual that she is. We see this especially in the midst or aftermath of transitional and transformational life events, when greater-than-usual shifts occur. As the result of going through such experiences, we often develop new values, core needs and centrally motivating desires, losing other values, needs and desires in the process. In other words, after undergoing a particularly transformative experience, we become different people in key respects than we were before. If after such a personal transformation, our friends are unable to meet our newly developed core needs or recognise and affirm our new values and central desires – perhaps in large part because they cannot, because they do not (yet) recognise or understand who we have become – we will suffer loneliness. This is what happened to me after Italy. By the time I got back, I had developed new core needs – as one example, the need for a certain level and kind of intellectual engagement – which were unmet when I returned home. What’s more, I did not think it particularly fair to expect my friends to meet these needs. After all, they did not possess the conceptual frameworks for discussing Russian absurdism or 13th-century Italian love sonnets; these just weren’t things they had spent time thinking about. And I didn’t blame them; expecting them to develop or care about developing such a conceptual framework seemed to me ridiculous. Even so, without a shared framework, I felt unable to meet my need for intellectual engagement and communicate to my friends the fullness of my inner life, which was overtaken by quite specific aesthetic values, values that shaped how I saw the world. As a result, I felt lonely. In addition to developing new needs, I understood myself as having changed in other fundamental respects. While I knew my friends loved me and affirmed my unconditional value, I did not feel upon my return home that they were able to see and affirm my individuality. I was radically changed; in fact, I felt in certain respects totally unrecognisable even to those who knew me best. After Italy, I inhabited a different, more nuanced perspective on the world; beauty, creativity and intellectual growth had become core values of mine; I had become a serious lover of poetry; I understood myself as a burgeoning philosopher. At the time, my closest friends were not able to see and affirm these parts of me, parts of me with which even relative strangers in my college courses were acquainted (though, of course, those acquaintances neither knew me nor were equipped to meet other of my needs which my friends had long met). When I returned home, I no longer felt truly seen by my friends. 
One need not spend a semester abroad to experience this. For example, a nurse who initially chose her profession as a means to professional and financial stability might, after an especially meaningful experience with a patient, find herself newly and centrally motivated by a desire to make a difference in her patients’ lives. Along with the landscape of her desires, her core values may have changed: perhaps she develops a new core value of alleviating suffering whenever possible. And she may find certain features of her job – those that do not involve the alleviation of suffering, or involve the limited alleviation of suffering – not as fulfilling as they once were. In other words, she may have developed a new need for a certain form of meaningful difference-making – a need that, if not met, leaves her feeling flat and deeply dissatisfied. Changes like these – changes to what truly moves you, to what makes you feel deeply fulfilled – are profound ones. To be changed in these respects is to be utterly changed. Even if you have loving friendships, if your friends are unable to recognise and affirm these new features of you, you may fail to feel seen, fail to feel valued as who you really are. At that point, loneliness will ensue. Interestingly – and especially troublesome for Setiya’s account – feelings of loneliness will tend to be especially salient and painful when the people unable to meet these needs are those who already love us and affirm our unconditional value. So, even with loving friends, if we perceive ourselves as unable to be seen and affirmed as the particular people we are, or if certain of our core needs go unmet, we will feel lonely. Setiya is surely right that loneliness will result in the absence of love and recognition. But it can also result from the inability – and sometimes, failure – of those with whom we have loving relationships to share or affirm our values, to endorse desires that we understand as central to our lives, and to satisfy our needs. Another way to put it is that our social needs go far beyond the impersonal recognition of our unconditional worth as human beings. These needs can be as widespread as a need for reciprocal emotional attachment or as restricted as a need for a certain level of intellectual engagement or creative exchange. But even when the need in question is a restricted or uncommon one, if it is a deep need that requires another person to meet yet goes unmet, we will feel lonely. The fact that we suffer loneliness even when these quite specific needs are unmet shows that understanding and treating this feeling requires attending not just to whether my worth is affirmed, but to whether I am recognised and affirmed in my particularity and whether my particular, even idiosyncratic social needs are met by those around me. What’s more, since different people have different needs, the conditions that produce loneliness will vary. Those with a strong need for their uniqueness to be recognised may be more disposed to loneliness. Others with weaker needs for recognition or reciprocal emotional attachment may experience a good deal of social isolation without feeling lonely at all. Some people might alleviate loneliness by cultivating a wide circle of not-especially-close friends, each of whom meets a different need or appreciates a different side of them. 
Still others might remain lonely without deep and intimate friendships in which they feel more fully seen and appreciated in their complexity, in the fullness of their being. Yet, as ever-changing beings with friends and loved ones who are also ever-changing, we are always susceptible to loneliness and the pain of situations in which our needs are unmet. Most of us can recall a friend who once met certain of our core social needs, but who eventually – gradually, perhaps even imperceptibly – failed to do so. If such needs are not met by others in one’s life, one will come to feel profoundly, heartbreakingly lonely. In cases like these, new relationships can offer true succour and light. For example, a lonely new parent might have childless friends who are clueless about the needs and values she develops through the hugely complicated transition to parenthood; as a result, she might cultivate relationships with other new parents or caretakers, people who share her newly developed values and better understand the joys, pains and ambivalences of having a child. To the extent that these new relationships enable her needs to be met and allow her to feel genuinely seen, they will help to alleviate her loneliness. Through seeking relationships with others who might share one’s interests or be better situated to meet one’s specific needs, then, one can attempt to face one’s loneliness head on. But you don’t need to shed old relationships to cultivate the new. When old friends to whom we remain committed fail to meet our new needs, it’s helpful to ask how to salvage the situation and save the relationship. In some instances, we might choose to adopt a passive strategy, acknowledging the ebb and flow of relationships and the natural lag time between the development of needs and others’ abilities to meet them. You could ‘wait it out’. But given that it is much more difficult to have your needs met if you don’t articulate them, an active strategy seems more promising. To position your friend to better meet your needs, you might attempt to communicate those needs and articulate ways in which you don’t feel seen. Of course, such a strategy will be successful only if the unmet needs provoking one’s loneliness are needs one can identify and articulate. But we will so often – perhaps always – have needs, desires and values of which we are unaware or that we cannot articulate, even to ourselves. We are, to some extent, always opaque to ourselves. Given this opacity, some degree of loneliness may be an inevitable part of the human condition. What’s more, if we can’t even grasp or articulate the needs provoking our loneliness, then adopting a more passive strategy may be the only option we have. In cases like this, the only way to recognise your unmet needs or desires is to notice that your loneliness has started to lift once those needs and desires begin to be met by another.
Kaitlyn Creasy
https://aeon.co//essays/how-is-it-possible-to-be-loved-and-yet-to-feel-deeply-lonely
https://images.aeonmedia…y=75&format=auto
Language and linguistics
For First Nations people, health is not a matter of mechanical fitness of the body, but of language, identity and belonging
Roughly 250 kilometres northeast of Alice Springs in Australia’s Northern Territory is a place called Utopia. Composed of a loose collection of sparsely populated clan sites in the inland desert, the area is the traditional homeland of the Alyawarr and Anmatyerr peoples, roughly 500 of whom still live in Utopia today. The area wasn’t settled by white colonisers until the 1920s, when a group of German pastoralists – ‘demented by the ferocity of the heat and dust’, as the veteran Australian journalist John Pilger put it in an interview for the online magazine Truthout – arrived at a place where the rabbits were so unafraid of humans that they could be caught by hand. Beyond the women’s batik club, founded in Utopia in the 1970s, and responsible for producing some of the most prominent 20th-century artists in Australia, including Emily Kame Kngwarreye, who represented the country at the 1997 Venice Biennale, there is an important sense in which the territory seems to be living up to its idealistic name: a small body of relatively new scholarship has identified Utopia – where 88 per cent of the population speaks Alyawarr, and just 3.5 per cent report speaking exclusively English at home – as the site of an intriguing phenomenon, the link between the wellbeing of a language and the wellbeing of its speakers. ‘Language is medicine,’ state the authors who explore precisely this nexus in The Oxford Handbook of Endangered Languages (2018). Collectively, these authors are involved in documenting, teaching, researching and maintaining a diverse array of languages across what is now North America. Their striking observation, informed in many cases by scholarship in the authors’ own communities, crystallises the central claim of a small but growing body of research that insists that the declining health of a community’s language does not merely occur alongside sickness in a community but is itself the root of this sickness. If true, the opposite holds as well: namely, that strengthening the use of Indigenous languages offers a path towards physical and emotional healing for their speakers. As the language advocate X̱ʼunei Lance Twitchell put it in the opening to his Tlingit learners’ guide: ‘The Tlingit language is medicinal in its importance to Tlingit people.’ The title of his textbook puts matters more plainly: Haa Wsineix̱ Haa Yoo X̱ʼatángi – Our Language Saved Us (2016). At a time when minority languages around the world face continuing pressures from dominant cultures to assimilate – something we witnessed clearly during the COVID-19 pandemic, when vital medical information was simply unavailable across the United States’ big cities in numerous languages spoken by minority groups – what can these perspectives tell us about how we define wellness? What might they add to our understanding of where the tongue ends and the body (corporeal and politic) begins? Given that European settlers reached Utopia only last century, it was never the site of a religious mission, which would have converted Indigenous people to Christianity and then trained them to be manual labourers for white settlers. Nor did Utopia become a government-run reserve – its land designated for Aboriginal inhabitation, but where white managers often exerted extreme control. As a result, the experience of Utopia’s Alyawarr and Anmatyerr communities differs from that of other Native peoples across Australia, for whom the arrival of James Cook marked centuries of violence and dispossession. 
In a colonial coup de grâce, from 1905 until the 1970s, the Australian government pursued a policy of forcible child removal, with the goal of indoctrinating Indigenous children in white culture, severing their ties with their families, their traditions and their languages. This long history of trauma, internalised, embodied and generational, helps explain why to this day Aboriginal and Torres Strait Islander people across Australia bear a disproportionate burden of disease compared with their non-Indigenous counterparts – a disparity that is compounded by structural racism in hospitals and other settings. The latest data, compiled by the Australian Bureau of Statistics between 2015 and 2017, shows a roughly eight-year life-expectancy gap between Indigenous and non-Indigenous Australians (a gap the government has pledged to close by 2030, although the Australian Human Rights Commission predicts that, if current patterns hold, this goal will not be met). The data further reveals that Indigenous Australians are at greater risk of suffering from health problems across the board – particularly cardiovascular disease, which accounts for almost a quarter of all Indigenous deaths in the country. But in 2008 a decade-long study of health in Utopia turned up some intriguing results among the cohort of 296 local participants: there were significantly lower rates of hospitalisation and mortality from cardiovascular disease compared with other Aboriginal communities in the Northern Territory, even when controlling for factors such as education, income and access to housing. To be sure, the authors acknowledge that part of the explanation might be the healthier lifestyle of rural outstations: more exercise, better diet, little access to alcohol (from 2007 to 2022, alcohol was banned in all Indigenous homelands in the Northern Territory). And yet, the Utopia study ran counter to conventional public health wisdom relating to Aboriginal and Torres Strait Islander people, which suggests that those living in remote areas are more likely to bear the brunt of health disparities. What’s more, Utopia today possesses high levels of unemployment and poverty – the kinds of markers, one might assume, that would be associated with worse health outcomes, not better ones. The Utopia study’s authors conclude that the particular social environment fostered by Utopia and its historically high degree of autonomy – which has allowed for the kind of ‘connectedness to culture, family and land, and opportunities for self-determination’ exemplified by the rich artwork of Kngwarreye, Minnie Pwerle and other Utopian painters whose work is world renowned – likely played a key role in the health findings. After white pastoralists laid claim to the area, they hired Indigenous people to tend to their animals, a form of employment that allowed workers to stay ‘on country’ and among their people. As such, the Indigenous residents of Utopia were able to maintain place-based traditions like foraging for local foods, gathering medicine, and visiting and caring for sacred places, until their legal campaigning to regain title to the land was successful in 1980. 
Utopians were also more easily able to maintain the use of Indigenous languages, and it is the vitality of the region’s native tongues – which is particularly striking, given that colonialism has severely threatened if not extirpated so many other Aboriginal languages across the country – that has caught the eye of some academics and government officials as a protective factor worthy of exploring. The Australian Institute of Aboriginal and Torres Strait Islander Studies cited the Utopia study’s results in a 2014 report that recommended the government ‘include allocation of funding to language activities as part of health and justice programmes’ (emphasis mine). The Institute further encouraged regional health departments themselves to either fund or directly implement language-revitalisation measures. The implication here is pointed, in suggesting that linguistic marginalisation serves not merely as a barrier to accessing care (as, for instance, when public health messaging is available only in a dominant language), but that it is a risk factor for disease in and of itself – one that could be resolved by promoting (re)connection with or continued use of a heritage language. Policy proposals aside, the Utopia study has become an important point of reference for wider research into the connections between revitalising minority languages and improving the health and wellbeing of their speakers. A 2014 study of First Nation communities in Alberta, Canada – where the burden of diabetes among Indigenous adults was more than double that of the general population at the time of publication – identified a correlation between high rates of Indigenous language knowledge and lower rates of diabetes within communities, even when taking socioeconomic differences into account. Other researchers have examined the relationship between the state of a community’s language and its residents’ mental health, from self-esteem to an individual’s sense of belonging to depression. A 2007 study of First Nation communities in what is now British Columbia – which has one of Canada’s highest concentrations of Aboriginal languages at risk of disappearing – found that the erosion of traditional language knowledge was a strong predictor of elevated youth suicide rates, and to a greater degree than other cultural factors: First Nation communities where only a minority of members possessed conversational knowledge of an Indigenous language reported almost six times the number of youth suicides, compared with those communities where a majority of members were speakers. Among those who have borne the brunt of attempts to suppress their native tongues, individuals will sometimes speak of lasting negative bodily reactions produced by the trauma of language deprivation. At residential schools – state-funded institutions of enforced assimilation that operated across North America – violence was routinely meted out by teachers against students caught speaking Indigenous languages, often leaving lifelong psychic scars. One residential-school survivor, Annie Johnston, one of the co-authors of the Oxford chapter mentioned above, describes the visceral physical sensations that speaking her ancestral tongue Tlingit – a language native to coastal areas of the Pacific Northwest – still provokes in her as a result of internalising these punishments: ‘You get the lump in your throat. Your stomach starts churning.’ Language oppression and revitalisation, in other words, are not abstractions. 
They are tied directly to the health experienced, subjectively and bodily, by speakers of oppressed languages. In common parlance, the terms ‘endangered’ and ‘extinct’ are frequently used to talk about languages where the number of living fluent speakers is approaching, or has reached, zero. Cribbed from ecology, this framing inadvertently casts such disappearances as natural, even inevitable – the product of unfortunate but ultimately impersonal forces acting upon the world. A 2020 study of public attitudes in Australia towards Aboriginal and Torres Strait Islander language-revitalisation programmes found that more than 40 per cent of those canvassed viewed such work negatively, believing it to represent a vain attempt to prop up so-called ‘doomed’ languages, instead of focusing on the supposedly more practical matter of English education. Like Hittite or Latin, the thinking goes, some languages are merely fated to pass out of existence, and to resist such change is not only to deny modernity: it is to actively impede the forward march of history. But comparisons with long-dead ancient tongues or classical languages that have a written rather than spoken afterlife (Latin, for instance, persists in liturgical contexts and, indirectly, in the form of its many Romance offspring) obscure the real and ongoing colonial violence behind the precipitous decline of so many Indigenous languages in the centuries since Christopher Columbus. These languages are not suns that set nor moons that wane. They are not things that simply enter and exit the world, absent of outside influence. Rather, they are actively minoritised, marginalised and pushed out to the fringes. They are banished from classrooms and radio broadcasts and government announcements, muscled out of the market, out of the public eye and ear, and forced to retreat into little corners of life where the authorities cannot go: into kitchen conversations and dreams. Or perhaps not even there. The decision to adopt one tongue and stop using another is not neutral; it is tied, inevitably, to power and prestige. This is precisely why linguists and activists alike now reject the framing of ‘endangered’ and ‘extinct’ languages in favour of ‘oppressed languages’. This term sets the role that forces like colonialism play in coercing shifts in language use front and centre, and helps us see that the struggle against them is at bottom a political struggle. Language revitalisation is thus one part of a broader struggle for linguistic justice, which embraces the ability to thrive in one’s language of choosing, equitable access to information regardless of one’s native tongue, the redistribution of resources, and the fostering of esteem for all languages. Put another way, language is an easily legible banner of a people’s autonomy: the right to speak in the words you choose, with (and without) whom you choose, and to foster bonds of kinship and understanding not only with those around you, but with all past speakers of the same tongue, and those yet to be born. It can be said that the health of languages is deeply tied to the speakers’ feelings of rootedness (of being in, and in possession of, one’s homeland) and that languages themselves can become a kind of home, even in conditions of displacement; a safe refuge and site of belonging, where one can get to know and become one’s fullest self. 
The Nêhiyawi (Cree) Nation has a long history of forced displacement, from the horrors of the residential schools to the Canadian state’s deliberate redirection of floodwaters towards Native land in 2011 to avert damage to valuable white-owned property. Yet the province of Alberta, where ongoing wildfires have forced more than 1,000 First Nations people to evacuate, is also the site of a community-driven initiative to support Nêhiyawi maternal and neonatal health. The programme recruited Elders to act as mentors for expectant parents, sharing their experiences, providing a listening ear, and encouraging cultural practices, including using the Nêhiyawi language. ‘The girl that is the mother today is … very removed psychologically, emotionally, mentally, physically – in all aspects of her being, from her culture, her values, and her belief system,’ one Elder in the programme explained. Her evocation of distance captures at once the literal forced relocation to which the Nêhiyawi have been repeatedly subjected, but also the idea that re-immersion in traditional language and lifeways is a kind of homecoming. Indeed, the connection between language, land and the body finds resonance in the idea of health as conceived of by Nêhiyawi and other Indigenous peoples, which expands beyond the borderlines of a single body and single mind to encompass community, knowledge, and a rootedness in one’s identity and culture. In a 2010 study of how members of the Thunderchild First Nation conceptualise wellness, many respondents reported that they saw practising the Cree language as a way of maintaining their health, which they considered to encompass not just their bodily wellbeing, but their spiritual wellbeing as well. This idea finds an echo in the mno bmaadis – a concept shared by Cree and other Anishinabek peoples that emphasises the interconnectedness of physical, emotional, mental and spiritual states, as well as the cultivation of balanced relationships with the people and spirits around one. Similarly, Simon J Ortiz, a poet and an enrolled member of the Pueblo of Acoma (an Indigenous community native to what is now New Mexico), argues that language itself is what teaches us to have a healthy relationship with the world around us: ‘[I]n fact, language initiates and originates in the relationship we, as a human community, have with the land. … [W]ithout language, there is no verbal acknowledgment of relationships possible.’ These findings on language as a protective factor for Indigenous communities form one strain in a larger body of scholarship examining how ties to traditional culture and land foster the health and wellbeing of Native peoples. In 2012, Joseph P Gone, then a research psychologist at the University of Michigan, and Patrick E Calf Looking, director of a substance abuse treatment programme on the Pikuni (Blackfeet Nation) reservation in Montana, set out to provide proof of concept for the efficacy of Indigenous treatment methods – not just grafting traditional elements like sweat-lodge rites on to pre-existing Western models such as Alcoholics Anonymous (AA), but creating an entirely new treatment plan from the ground up. 
For the duration of their pilot programme, which took place not in a hospital setting but at a specially erected outdoor camp whose tepees were pitched near a creek on the reservation, a small group of clients immersed themselves in what Gone and Calf Looking called ‘culture as treatment’: foraging, hide-tanning, lodge-cover painting, visits to sacred sites, and other activities largely unfamiliar to participants prior to their enrolment in the trial. Guiding them through it all were representatives from the Crazy Dog society, whose members are experts in the preservation and transmission of Blackfeet lifeways. Unlike the rigid patient-doctor relationship typical of Western medicine, the Blackfeet programme cultivated a non-hierarchical dynamic between those in treatment and those providing it, creating an ambience of supportiveness, non-judgment and egalitarianism. In post-trial feedback, participants pointed to this atmosphere of collaboration as part of what made the treatment camp stand out from other substance abuse programmes, in which their dominant impression had been one of impotence and lack of control – particularly given the AA directive of surrender to a higher power. But no matter how effective they may prove to be, many ‘culture as treatment’ programmes struggle for funding. Those that depart starkly from accepted Western models of what medicine should look like are a hard sell for grant-disbursing bodies and peer reviewers of scholarly journals, especially when they are narrowly tailored to particular communities, which makes cherished norms of experimental design, such as random sampling, difficult to satisfy. Gone and Calf Looking report that, after more than two frustrating years of searching, they sourced funds sufficient to mount only an abbreviated version of the treatment camp as it was originally conceived – less than half the duration they’d hoped for. The bigger problem, as things stand, is that academic standards of proof often exclude Indigenous voices, while quantitative research on the relationship between language and health remains largely confined to analyses of correlation. But whether or not it is legible in a bar graph, there’s no short supply of feedback from Indigenous communities across the world that cultural continuity (including the vitality of Indigenous languages) is an indispensable part of health. As the authors of the 2007 study of First Nation communities put it: ‘The generic association between cultural collapse and the rise of public health problems is so uniform and so exceptionless as to be beyond serious doubt.’ As the Tututni linguist Jaeci Hall, one of the co-authors of the Oxford handbook, said about her own experience of language revitalisation and wellness: ‘Even if we’re only saying nouns here and there, it’s this endorphin rush, that we’re successful, we’re doing something that’s bigger than just us, that connects into healing the wounds of colonisation.’ It’s easy to see how the sense of rootedness that Hall invokes might speak to the links between language preservation and wellbeing. Fostering healing connections is often also what brings people to the work. Twitchell, professor of Alaska native languages at the University of Alaska Southeast, and the author of the aforementioned Our Language Saved Us, recently told Harvard International Review that he started picking up Tlingit as a way of spending time with his sick grandfather. 
Since then, he’s begun to lobby the state legislature to grant Indigenous languages official status, advocate for Tlingit-medium schooling, and study how successful revitalisation models in Hawai‘i and New Zealand might be transferred to Alaska. On his YouTube channel, Twitchell posts everything from beginner lessons to classes on oral history. On the flip side, it isn’t difficult to imagine how the alienation and deracination of which language loss is both synecdoche and symptom can create a harmful feedback loop. ‘Kids who don’t know who they are don’t understand their culture – they feel lost and are much more likely to fall prey to the type of problems that are plaguing a lot of Native communities,’ said Robert Elliott, interim director of the Northwest Indian Language Institute at the University of Oregon when we spoke over Zoom at the height of the COVID-19 quarantine. Elliott’s father, who is Diné (Navajo), was taken from his mother when he was two years old; Elliott himself, who originally trained as an ESL instructor, got involved with Indigenous language revitalisation only when he was asked to help film something at the Institute that he would later head. But he worried that those unfamiliar with the concrete impact of language preservation might see it as expendable. Organisations that do such work, he told me, are coming increasingly under the microscope, with greater competition for grant funding adding further financial pressure. ‘I’m concerned about more funding drying up,’ Elliott told me. ‘That this won’t be a priority during a crisis time, that language will be viewed as a luxury.’ Linguistic oppression has a measurable body count. The trouble is that, in a world that caters primarily or exclusively to dominant-language speakers, those outside of the linguistic majority often cannot access crucial, even life-saving medical information and care. Care that becomes all the more imperative once you recognise that those who are linguistically marginalised are often in greatest need of it. This was glaringly apparent during the pandemic. In 2020, the Endangered Language Alliance (ELA), a nonprofit organisation that documents and supports the linguistic diversity of New York City, published a map that overlaid the city’s coronavirus hotspots onto a cluster chart showing the population’s linguistic distribution. ‘Multilingual immigrant communities have been among the hardest hit,’ the ELA concluded, pointing to severely affected areas in Brooklyn, the Bronx and Queens where many speakers of minority languages reside. Part of the problem is the dearth of public health information that’s available outside more widely spoken languages – a problem that extends far beyond the city limits of New York. The Pakistani government’s official coronavirus response website, for example, was available exclusively in English, cutting off not only speakers of smaller languages, but those in the country’s larger linguistic communities as well. Official COVID-19 information provided by the government in Indonesia was mainly in Indonesian, but was heavily larded with English loanwords such as ‘lockdown’ that rendered it largely unintelligible to people in villages. Even organisations that are notionally in charge of safeguarding the wellbeing of Indigenous people fail at this task: FUNAI, Brazil’s department of Indigenous affairs, refused to disseminate any COVID-19 information in Indigenous languages. 
Such failures of states to recognise the potentially disastrous impact that linguistic oppression entails speak to the underlying and systemic marginalisation of minority groups in the very fabric of societies born of colonial hegemony. However, translating public health information word for word is not enough. As the linguist Alejandra Hermoza Cavero pointed out in her article for the site Language on the Move in 2020, the fact that the Peruvian government made COVID-19 information sheets available in languages like Quechua and Aymara is all well and good, until one realises that their advice about proper handwashing techniques was impossible to execute for the many rural Andean villagers who lack access to clean running water. (Indeed, when COVID-19 struck Utopia in January 2022, a combination of flooded rural roads and inadequate phone lines made the situation ‘absolutely dire’, according to local health clinic staff.) All of this points to the difficulty of untangling linguistic oppression from other forms of inequity. At what point does a discussion about language become a discussion about housing? About migration? About land rights and broken treaties? About health care? According to Ross Perlin, co-director of the ELA, the amplified impact of the pandemic on multilingual immigrant communities ‘gets to larger health disparities, which have existed for a long time’, and which are exacerbated by social problems; for example, ‘if they’re undocumented, whether or not people are insured, obviously income level, overcrowding’. In New York, the ELA supported mutual aid work – notably food distribution programmes that catered to Maya families from Guatemala as part of its pandemic response – alongside publishing audio diaries by speakers of languages like K’iche’ and Amdo. Linguists coming from outside minority-language communities often make abstract, heady appeals to diversity when seeking to build support for their work. They argue that a language’s death is a loss for all of humanity. In this context, as the author Michael Erard has said, writing in the online magazine Undark in 2016, the topic of language ‘extinction’ can easily become fetishised in media produced by and for the mainstream – that is, non-Indigenous speakers of majority languages – with white linguists cast as heroes, and language loss understood as both inevitable and irreversible. It’s a depoliticised narrative that neglects to take into account both the structural violence that drives language marginalisation in the first place, and the fact that achieving real linguistic justice requires not just dictionaries and word lists but dismantling colonial structures. It’s telling that such universalist ideas are less relevant to people on the ground seeking to revive their own heritage languages, and who are motivated less by some diffuse concept of universal heritage than by the more immediate and concrete desire to strengthen a personal sense of cultural identity, to foster community, and repair historic and ongoing harms. 
‘Language revitalisation is prefigurative,’ writes the linguist Gerald Roche in Language on the Move, ‘in that it restores languages to a community and the world before broad-scale transformation has taken place, as a model of how the world could and should be.’ Or perhaps, to invoke the original meaning of the term utopian – a model of the no-place that is the good place, the key that allows us to, in the words of the Acoma Pueblo poet Ortiz, ‘recognise the relationships I share with everything.’ In 2021, a Utopian teenager translated an English book for young children into the Alyawarr dialect spoken in the area so that children younger than him could practise reading in their language. Time will tell whether efforts like his will succeed – and what effect ‘success’ will have on the bodies of the speakers.
Erica X Eisen
https://aeon.co//essays/language-is-at-the-heart-of-indigenous-community-health
https://images.aeonmedia…y=75&format=auto
Religion
Postcolonial intellectuals and Iran’s rulers agree that secularism is just Western imperialism in disguise. They are wrong
The latest waves of uprisings in Iran following the movement in defence of Iranian women’s freedoms are among the most significant since the Islamic Republic was established after the overthrow of Mohammad Reza Pahlavi in 1979. The regime’s resulting crackdown has led to mass arrests and prison sentences, as well as a string of executions. These uprisings are symptomatic of prolonged and multifaceted discontent with the Islamic Republic’s perceived governance. One of the oft-cited causes is growing dissatisfaction with principles of government grounded in a religious worldview, and its subsequent patterns of civil liberty violations. The most visible of these violations, which has served as a focal point for resistance, is the law of mandatory hijab for women. Gathering reliable empirical data on religious belief in Iran is difficult – apostasy (at least from Islam) is illegal and punishable by death under the vaguely defined crime of Ifsad-e-filarz, or ‘corruption on Earth’. Nevertheless, some available evidence from 2020 suggests predominant opposition to mandatory hijab, to the extent that even some hijabi women have joined the protests to defend everyone’s equal right to liberty. More recent evidence from 2022 also suggests a significant favourable shift towards secularism broadly, with the majority in favour of a separation of religious and civil affairs. Some contemporary research has suggested that, ironically, Iranian theocracy has triggered these trends, which have naturally raised the question of the role of religion in Iranian society. Although the popular Iranian resistance chant ‘Zan, Zendegi, Azadi’ (‘Woman, Life, Freedom’) speaks to the potential promise of secular change, a recurring criticism of calls for a secular Iran emanates from a suspicion that secularism is a thinly veiled imperialist or colonialist tool for subversion, dressed up in the language of freedom and human rights. Antisecularism as a form of anticolonialism was a consistent and fundamental theme of revolutionary discourse among clerical factions in the lead-up to the 1979 Iranian revolution. It remains so to the present day, and is even repeated by allegedly Left-leaning non-Iranian factions in Europe and North America impressed with postcolonial theory. Our aim is to, first, clearly reconstruct the anti-imperialist argument against a secular Iran in an attempt to understand the professed motivation of its proponents. We then argue that, on the contrary, the argument is feeble, at least as it is commonly deployed: secularism’s inherent merits can be (and routinely are) divorced from any alleged use of it as a colonial imposition. Shortly after Ayatollah Khomeini gained power in 1979, a new constitution was instituted that sought to embody the religious principles derived from the Twelver Jaʿfarī school of Shia Islam. This constitution explicitly sets as its foundational principles ‘a system based on belief in … the One God … His exclusive sovereignty and right to legislate, and the necessity of submission to His commands’ and ‘Divine revelation and its fundamental role in setting forth the laws’ (Article 2). The constitution clearly expresses how ‘All civil, penal, financial, economic, administrative, cultural, military, political, and other laws and regulations must be based on Islamic criteria’ (Article 4), going on to proclaim the Twelver Jaʿfarī school of Islam as the official state religion (Article 12). 
Clearly, this theocratic framework of governance is fundamentally at odds with secular approaches to the political domain. Secularism is the view that participants in public political discourse should never be in a position to assume that their interlocutors share the same religious assumptions and, as a result, the state ought to be neutral in matters of religious belief when determining public policy. Contrary to some persisting views, this does not amount to ‘state-enforced atheism’, but rather a disfavouring of religious privilege in civil matters and a favouring of impartiality and pluralism in an attempt to guarantee equal opportunities and respect for citizens, regardless of their religious beliefs, or lack of them. The corollary principle for the practical implementation of this position is that any appeal to religious reasons in public political discourse is insufficient to justify laws that would coerce citizens into certain kinds of behaviours. One of the most influential modern justifications for secularism was offered by John Locke in his Letter on Toleration (1689): ‘I esteem it above all things necessary to distinguish exactly the Business of Civil Government from that of Religion, and to settle the just Bounds that lie between the one and the other. If this be not done, there can be no end put to the Controversies that will be always arising, between those that have, or at least pretend to have, on the one side, a Concernment for the Interest of Men’s Souls, and on the other side, a Care of the Commonwealth.’ Secularism seems reasonable because it is very rare for an entire nation to share belief in one source of law as an authority, let alone share the same interpretation of that law. Because there is no widespread informed agreement about which religion (if any) is the ‘correct’ one, our epistemic limitations dictate that it is prudent to avoid basing civil laws upon any of them, with an eye to protecting the civil rights of all citizens. In nations with significant religious diversity, this form of neutrality is all the more pressing. However, it might be argued that the type of ‘neutrality’ that secularism depends upon is a myth. The way we define and conceptualise neutrality is almost always rooted in the structure of the context we live in. The alleged implication is that ‘neutrality’ is not itself neutral. So secularism might be ‘neutral’ based upon one particular type of power structure (ie, the one dominant in the West) but not necessarily those prevalent elsewhere. The anthropologist Saba Mahmood, for example, argued that political secularism’s legal framework is not neutral because an intrinsic part of the nation-state’s structure is shaped by its unique historical norms and values. This is a fair point to make. The ideal version of pure neutrality does not exist anywhere. Human beings are all situated in particular contexts; hence our value systems for navigating the world are by default contextual. However, this does not entail that we cannot rise from particular contexts and imagine other value systems, nor that some level of neutrality is not achievable. Seeking this level of neutrality towards citizens’ diverse religious beliefs is important because, without it, oppression is an inevitable result. 
No doubt that secularism can itself become oppressive if it operates under the dubious assumption that the outcome of secular legislation – ie, its contents – is ‘sacred’, and so must be accepted without critical analysis. Under such circumstances, secularism would not respect impartiality and pluralism. Ironically, an example of this is the prohibition on wearing the hijab in public spaces in Iran under the Kashf-e hijab initiative, enforced during the early Pahlavi dynasty from 1936-1941. But, crucially, what explains why such policies ought to be condemned is precisely that they fail to protect beliefs of conscience – of which religious belief is merely one among others – in a tolerant society that achieves an appropriate degree of state neutrality. The tension between the principles of secularism and the principles of the Islamic Republic is quite deliberate. Properly understanding the function of religion in contemporary Iranian governance requires acknowledging how the notion of an ‘Islamic Republic’ was, and still is, championed as an explicit and allegedly superior alternative to secular governance. We noted earlier that one of the most pervasive objections to a secular Iran – made by both the current regime and various non-Iranian factions in the Western world – is anchored in an anti-imperialist and postcolonial framework. Some versions of the objection hold, additionally, that secularism is fundamentally antireligious in nature (and therefore anti-Islamic). Combined with a further claim that Islamic ideals are (or ought to be) at the fundamental kernel of Iranian cultural identity, secularism is considered to be anti-Iranian, and a means by which foreign powers have aimed to homogenise the interests and evaluative outlook of Iranians in a way that more closely aligns with their own, thus facilitating a greater sphere of influence and an easier extraction of resources. This objection has its origins at least partially in dissatisfaction with the rapid state-enforced modernisation instituted by the preceding Pahlavi dynasty (1925-79), where its secularism was concurrent with increased Anglo-American influence in Iranian state affairs and industry, including a CIA-backed coup to oust the nationalist prime minister Mohammad Mosaddegh and reinstall the Shah in 1953. But narratives of this kind are not unique to the Iranian sphere – they have found wide acceptance in the Muslim world more broadly. As it emerged in the European context, secularism was a product of widespread debate within those societies, provoked by socioeconomic changes and the concurrent challenges of guaranteeing civil obedience in light of increasingly fracturing religious authorities. But in the Muslim world, modern secularism was typically installed from the top down, first by the colonial powers and then the postcolonial state. As in the case of Tunisia, Algeria, Egypt, Syria, Iraq, Yemen, Turkey (arguably similar to Pahlavi Iran), these states were secular autocracies, often installed or heavily supported by Western governments, and they sought to ‘modernise’ their nations in ways felt by many to be too quick. Subsequently, as the contemporary scholar of Islamic studies Muhammad Khalid Masud has noted: ‘Muslim thinkers found it very difficult to understand new ideas like secularism in isolation from Christian (Western colonial) supremacy.’ It must be granted, then, that there is at least some historical association between secularism and imperialism, even if it is not a causal association. 
The pertinent issue, however, and one we wish to dispute, is whether this association is inherent or inevitable. The broader narrative of colonial exploitation was intellectualised in the wider Muslim world by Edward Said’s influential book Orientalism (1978), which sought to elucidate the ways in which ‘the West’ routinely depicts ‘the East’ in essentially simplistic and contemptuous ways. This in turn, Said argued, makes studies of, for example, Middle Eastern societies intrinsically political in nature and supportive of existing colonial power structures. In Iran specifically, the narrative was intellectualised by the likes of Jalāl Āl-e-Ahmad in his Occidentosis: A Plague from the West (1962), and Dariush Shayegan in his Asia v the West (1978). Āl-e-Ahmad deployed the now-notorious phrase ‘West-toxification’ or ‘West-struck-ness’ (in Persian, ‘Gharbzadegi’) to describe Iran’s unfortunate dependence on Western materials and conceptual apparatus that prohibits an ‘authentic’ Iranian identity. This philosophy – which (ironically) took strong influence from an eclectic mix of traditions in mostly European philosophy, particularly the ideas of Jean-Paul Sartre, Martin Heidegger, Frantz Fanon and Karl Marx – was embraced by many of the factions and figures driving the 1979 revolution. In 1971, for instance, an exiled Khomeini expressed explicit concerns about the pervasive influence of imperialist culture in Muslim communities, asserting that it overshadowed the teachings of the Quran and led the youth to serve foreign interests. One initial concern about this narrative surrounds the legitimacy of the sharp ‘East v West’ dichotomy central to it. The Islamic Republic thrives on this dichotomy. Indeed, it is its entire ideological foundation. One issue is that it is ambiguous who or what ‘the West’ is supposed to be in this context. It is evident that ‘the West’ is considered more than a mere geographical designation. But is it a specific socioeconomic system (ie, capitalism)? A level of development in science and technology? A confederacy of states with shared political interests? A moral framework? At times, Khomeini equated ‘the West’ with colonialism, but at other times he emphasised its essential nature as one of decadence or a lack of morality. This point is important because, without a credible definition of ‘the West’ (and ‘the East’, for that matter), the narrative threatens to make superficial any political analysis involving it. This is evident, for instance, in the fact that many Muslim-majority countries, such as Malaysia and Indonesia, and other countries with histories of colonial subjection, such as Japan, have moved beyond the dichotomy, adopting some typically ‘Western’ values without sacrificing their own cultural identity. The ‘East v West’ dichotomy leant upon by the Islamic Republic can also perpetuate the same mistake that Said diagnosed in colonial frameworks, namely: oversimplifying and essentialising entire cultures. The dichotomy is clumsy insofar as it postulates a fantasy of homogeneity, obscuring the wide range of political factions within Iran. The Islamic Republic is just as committed to this fantasy as any European orientalist. 
In the preamble of its constitution, praise is poured upon ‘the awakened conscience of the nation, under the leadership of Imam Khomeini’, which came to form a ‘united movement of the people’ towards a ‘genuinely Islamic and ideological line in its struggles’. This account, however, is historically revisionary insofar as it depends upon a fictitious narrative about the object of unification in pre-revolutionary Iranian society. While dissatisfaction with the policies of the Pahlavi dynasty was clearly widespread, ideas about what form of government was to replace it were not unified but fragmented, with diverse factions – communists, merchants, students, workers, educated women and secular nationalists – not necessarily in harmony with clerical aims. Moreover, it is not true that all factions responsible for the revolution supported the substantive policies of the theocracy that emerged. As early as 8 March 1979 – a matter of weeks after the conclusion of the revolution – tens of thousands of women marched in the streets of Tehran for six days to protest Khomeini’s announcement that women in Iran ought to adhere to religious dress code (ie, the hijab or chador). The history, even recent history, of religion in political affairs in Iran is more complicated, and pluralist, than the Iranian government admits. In this respect, Islamists and postcolonial scholars who champion the ‘anti-imperialist’ narrative are in many ways in agreement, and as such suffer from the same conceptual problems. The narrative presented so far functions as part of the justification for rejecting calls for a secular Iran. Assuming the challenges addressed above can be reasonably met, the argument can be charitably reconstructed as follows: first, imperialism ought to be resisted where it is found because it is intrinsically wrong; second, the idea of a secular Iran has its origins in, and continues to foster the cause of, Western imperialism; and so, third, the idea of a secular Iran ought to be resisted. The first claim may be justified on a number of grounds: perhaps, for example, imperialism is intrinsically wrong because it is a form of oppression, and necessarily undermines the value of national self-determination and causes individual or cultural harm, or perhaps because it is expressive of objectionable cultural chauvinism. We can grant this claim’s truth for our current purposes, for the second claim is especially vulnerable to a host of criticisms that ultimately render the argument implausible. It is crucial that the claim mentions not only that secular Iran is an idea with origins in Western imperialism, but also one that continues to propagate its aims. If it were merely the former, the argument would be patently invalid insofar as it would fallaciously assume that the current function and value of something can be determined solely by reference to what it originally emerged to do. Even if we grant that secularism in the Iranian context was originally a subversive tool of Western imperialism, this itself would not establish that secularism continues to be such, or that there aren’t independent reasons speaking in its favour now. The second part of the premise, which claims that calls for a secular Iran continue to foster the cause of Western imperialism, seems unfounded.
In order to avoid being mere speculation about the motives of secularists, it would have to be shown not only that (a) a secular Iran would be in the interests of imperialist powers; but, crucially, that (b) calls for a secular Iran are exclusively a causal product of imperialist powers. The fact is that, as we have noted earlier, there is a large proportion of Iranians within the country calling for a separation of state and religious authorities. To ignore these Iranians, or to implausibly brush them aside as products of false consciousness and brainwashing by Western media would be, ironically, to silence them in ways that ‘anti-imperialists’ typically find to be criterial of colonial subjection. Perhaps the anti-imperialist’s point is rather that since secularism emerged from, and developed within, a European context (ie, its specific socioeconomic, cultural and religious system), it is best suited to that context, and unsuitable or even harmful when implemented elsewhere. However, there are at least two fatal problems with this relativistic formulation of the argument. The first is that it is unacceptably ahistorical. There are many examples of secularism outside of ‘Western’ societies prior to colonisation – eg, in the philosophical milieu and political structures of India and in numerous Chinese dynasties – and secularism also has its own history within Islamic contexts. Thus, claiming that secularism is exclusively suited to Europe or ‘the West’ is false, and cannot rescue the argument. The second reason this relativism fails is that it is itself patronising, essentialist and even racist to propose that Iranians are inherently incapable of deliberating about their values, beliefs and practices outside of a religious framework. One of the most significant problems with the anti-imperialist argument under consideration is in how it obscures, and can even justify, the problems that secularism was designed to resolve. One of the specific problems of theocratic governance, identified by Locke, is its systematic failure to guarantee the civil liberties of a religiously diverse citizenry. This is historically apparent, but vividly clear in the Islamic Republic. Its constitution recognises only three religious minorities: Christians, Jews and Zoroastrians (Article 13). But this selection is morally arbitrary. Despite the Islamic Republic’s dubious official claim in 2011 that Iran is 99.4 per cent Muslim, Iran is a multi-religious society, which, in addition to the above, has significant adherents to the Baháʼí Faith, Mandaeism, Yarsanism and even to other branches of Islam, namely Sunni Islam. Tehran even has a very small community of Sikhs, as well as atheists, and deists associated with no religion. Members of these religious minorities face discrimination on a variety of fronts. Baháʼís are routinely denied university education, evicted from their homes, arbitrarily arrested and detained, and imprisoned, all on the basis of their religious beliefs. In 1991, a leaked government document on ‘the Baháʼí question’, signed by the supreme leader Ali Khamenei, postulated the benefits of eradicating this religious community in more subtle ways, such as: enrolling them in schools with especially strict Islamic ideology, destroying their cultural roots outside of Iran, denying them employment in influential positions, and so on.
The Islamic Republic’s discrimination of citizens based on their religious views also extends to other Muslim sects. As for the religious minorities that are recognised as genuine in the Islamic Republic’s constitution, they do not share the same civil rights and liberties as their Shia compatriots. Article 3 commits the government to the goal of ‘the expansion and strengthening of Islamic brotherhood’, and this manifests in ways inimical to the equal civil status of Christians, Zoroastrians and Jews. For example, senior government posts are exclusively reserved for Shia male Muslims, and members of all minority religious groups are barred from being elected president. Members of religious minorities are also required to abide by Islamic codes of conduct, for example in the wearing of hijab, and adherence to norms surrounding Islamic festivals such as Ramadan. As well as having independent reasons for thinking these forms of discrimination are unjust, these policies are also internally inconsistent with other goals allegedly championed by the Islamic Republic’s constitution. Article 3 states a commitment to ‘the participation of the entire people in determining their political, economic, social, and cultural destiny’. But this clearly is not (and cannot be) achieved if Islamic (or any religious) rules of governance are implemented in a multireligious society, where vast swathes of the population are effectively excluded from the public sphere. To claim that the plethora of forms of religious discrimination in Iran merely canvassed here are coincidental to the fact that Iran is currently a theocracy would be painfully naive. It is evident that far from being ‘anti-religious’, secularism is a requirement for the guarantee of religious freedom. It seems that the argument against Iranian secularism based upon the tired narrative of ‘anti-imperialism’ is weak, and embodies many of the same problems that genuine imperialism is (rightly) accused of, namely: ignoring Indigenous voices; oversimplifying and essentialising the Other; and the denial of fundamental civil liberties. Those who recite the narrative that the Islamic Republic is a beacon of heroically defiant resistance to Western imperialism not only ignore its own foreign policy, but ignore the plight of Indigenous communities in Iran, offering a shallow apology for rampant oppression. The transparent poverty of this anti-imperialist argument will also likely undermine wider attention to genuine concerns over the continued compromising of nations’ sovereignty in ways that are appropriately described as ‘imperialist’. Those trapped in the hypnotic pull of Gharbzadegi must shake off their deep paranoia about secularism, and recognise the juvenility of its lurking assumption that opposition to Western imperialism is a sufficient condition of legitimate governance.
Patrick Hassan & Hossein Dabbagh
https://aeon.co//essays/secularism-in-iran-is-not-just-a-form-of-western-imperialism
https://images.aeonmedia…y=75&format=auto
Cities
My dad grew up in Robert Moses’s New York City. His story is a testament to how urban planning shapes countless lives
My father rollerskated on the Cross-Bronx Expressway before it opened to car traffic. Born in 1953, he would have been seven or eight when New York City’s massive thoroughfare reached the peak of its construction, facilitated by the destruction of many tight-knit Bronx neighbourhoods. He didn’t live in East Tremont or Spuyten Duyvil, which were literally cut through by the highway, but he did live in between Fordham Heights and Kingsbridge Heights, about two miles north of the new road. He wasn’t a politician or an urban planner – he was a child, concerned with the size of his clip-on roller skates, and whether they’d fit over shoes large enough to support his lanky frame. He didn’t know that he was gliding above one of the city’s most contested planning projects, or what kind of impact it would have on his life. He was young and with friends, and so they laced up their skates. For ordinary people, this is how history happens.
Traffic interchange at the Cross-Bronx Expressway, May 1973. All photos courtesy of the National Archives
Going east on the Cross-Bronx Expressway, April 1973
The Highbridge Interchange in New York City, 1952. The Cross-Bronx Expressway is at the far right
Dad was brought up in Robert Moses’s New York – a city undergoing major infrastructural development to produce a sprawling highway network. In addition to his unelected political influence and scores of towering turnpikes, Moses was known for spearheading planning projects that splintered local communities. When my father recalled the Bronx that raised him, he described a place that was diverse and down-to-earth, sometimes veering toward mean, but one where people looked out for each other. They recognised one another. You could leave your house keys with your shopowner, whose brother would send condolences to your family when a loved one passed away. Moses was famous for blatantly overlooking this kind of social capital, and for celebrating rather than ignoring proposals that required entire neighbourhoods to be bulldozed. He was notorious for paraphrasing the adage, ‘you can’t make an omelette without breaking eggs.’ That quote has been mistakenly attributed to Stalin, but today it rings rather Trump-like, with a callousness so dumbfounding it’s almost comical. My dad spent his youth in perpetual sickness, one of countless Jewish kids in the Bronx whose skinny legs and bad asthma kept them out of school. Engulfed by construction and vehicle congestion, my father was nine when his health got so bad that his teachers finally decided to hold him back a year, separating him from classmates and friends. It was 1962, nearly a decade into the period described by the Bronx-born philosopher Marshall Berman as an era of dust and debris, when Moses’s highway was ‘pounded and blasted and smashed’ through the centre of their neighbourhood. For many families like my own, the Expressway symbolises a broader story of environmental injustice in the area, now dubbed ‘asthma alley’ due to its disproportionately high asthma rates. The incidence of childhood asthma in the Bronx is still 40 per cent higher than the New York City average, a fact attributed to elevated concentrations of particulate matter in the air. Last year, New York City’s mayor Eric Adams declared that the Cross-Bronx Expressway nurtured these inequalities by fragmenting largely Black and Latino working-class neighbourhoods while generating significant air pollution that has been statistically correlated with poor health outcomes for generations of residents.
Traffic emits a range of toxins, like nitrogen oxides, PM2.5 and dust from brakes and tyres; the side-effects of prolonged exposure to them include asthma, emphysema, cardiovascular disease and cancer.
The author’s father Leonard on a return trip to New York in the 1980s. ©Katie Mulkowsky
We lost my dad last year: the denouement in a courageously fought cancer battle that spanned more than two decades. I was 24 when he died – not as young as I could have been, but not old enough to negate a dull, almost-always-there sense of missing something. He was unpretentious, unfashionable, unfailingly reliable. He was corny and funny and sentimental. He was a rare combination of impossibly hard-working and deeply empathetic: a respiratory therapist for many years, he was an asthmatic who helped people breathe. We won’t ever be able to say for certain whether his lifelong lung issues, and lengthy scrimmage with the carcinomas, were caused by his exposure to harmful pollutants alone. But we’d be foolish to say that the environment he was raised in had no bearing on his wellbeing – or that of his dad, or brother, or niece and nephew, or those other 33.3 per cent of Bronx residents who die prematurely, a rate substantially higher than in New York City (26.2 per cent) or New York State (23.4 per cent). Beyond being a daughter, I’m now a practising urban planner, and was trained by mentors with a keen eye on the link between public space and public health. Thanks to a slew of writers, scholars and activists – like Robert D Bullard, author of Dumping in Dixie (1990), Julie Sze, author of Noxious New York (2006) and Gregg Mitman, author of Breathing Space (2008), particularly Chapter 4, ‘Choking Cities’ – it’s well documented that environmental issues have unequal human impacts. Certain populations, based on their location, demographic makeup, level of resources available and underlying political context, feel the effects of industrial pollution more than others. This often has to do with the fact that histories of social and economic disenfranchisement become mapped on to urban space through planning practices like redlining and zoning. Along with the South Bronx, neighbourhoods like Brooklyn’s Sunset Park and Manhattan’s West Harlem today have higher geographic concentrations of polluting infrastructure, such as major highways, power plants, incinerators and waste transfer stations, than their wealthier counterparts do – predisposing some of the city’s poorest and most diverse communities to the worst health outcomes. Knowing this, on a professional and a personal level, has compounded the magnitude of my grief with the exasperation of having seen something coming for a long time. I first encountered critical writing about the Cross-Bronx Expressway in Berman’s work, which was shown to me when I decided to follow in my dad’s footsteps and move from my native California to New York for my planning degree. A professor declared Berman’s All That Is Solid Melts into Air (1982) his favourite book, and I spent one Thanksgiving pretending to understand its Faustian references while a hometown friend came over and cooked. The text is about as radical as you’d expect from one whose title quotes the Communist Manifesto. Interested in the local impacts of capitalist accumulation, Berman wrote extensively about the Bronx’s complicated relationship to development while documenting the Expressway’s violent construction, a process that spanned decades.
To me, Berman’s work demonstrates the power of putting narrative in dialogue with fact, as it brings to life the visceral, human picture behind things like air-quality statistics.
The South Bronx in 1970. All images courtesy Camilo J Vergara/Library of Congress
Texaco Gas Station among the ruins: a view northwest from the Cross-Bronx Expressway by Park Avenue, South Bronx, 1980
South Bronx, 1970
Vyse Avenue at East 178th Street, South Bronx, 1982
He recalled, for instance, ‘the immense steam shovels and bulldozers and timber and steel beams, … the giant cranes reaching far above the Bronx’s tallest roofs, … the wild, jagged crags of rock newly torn, the vistas of devastation stretching … as far as the eye could see – and marvel to see our ordinary nice neighbourhood transformed into sublime, spectacular ruins.’ I find something both heart-wrenching and affirming in this excerpt, a sense of: So that was the air he grew up breathing. Born about a decade before my dad, Berman came of age in a similar chapter of Bronx history, and with a similar connection to Judaism. He was a professor at the City University of New York, an activist and a Marxist; Dad went to Lehman, the university’s local campus, and worked department-store day jobs to pay his way through class at night. These parallels made Berman, to me, at once totally inaccessible and already familiar. I recently revisited his book in an effort to make sense of some tapes I recorded with my dad a few summers ago, when his health started to decline. I won’t exactly call it a benefit, but maybe a side-effect of his lengthy illness was that we at least had time to prepare for the eventuality ahead. This was coupled, of course, with the particular pang of slow grief, and all of the pre-emptive losses that it wrought, but I coped with our situation by coaxing an intense urge toward documentation, an effort to outrun the moment in which he’d no longer be around to be asked questions. I lied and told him our chats were for my New York-based planning dissertation, but what we eventually produced together was an oral history of his life. Loosely structured, our conversations quickly turned toward his childhood neighbourhood – and therefore to the Cross-Bronx Expressway:
It was a major road, like the FDR Drive or the West Side Highway – not as wide as the I-5 here, not as many lanes – but it was a major road that goes from the George Washington Bridge on the far west side of the Bronx all the way to the east side of the Bronx, where you can connect to the Whitestone Bridge or the Throgs Neck Bridge to go to Queens or Long Island.
That’s my dad – or at least his voice – on record, playing geographer by comparing New York’s roads to the San Diego of my birth, where you can hear the hum of Interstate 5 traffic from the house my parents raised me in. Even in California, my dad carried the Bronx with him: in his faint accent, which came out in punches with certain words like ‘idea’ (always ‘idear’), and in his body, working tirelessly to stay with us.
The temporality of urban planning – the distance between a decision and its delivery, and then its real impacts – means that, especially when it comes to environmental health issues, the consequences of top-down land-use rulings might be seen in full only too late, by the generations that follow. For different reasons, men like Berman and Moses are remarkable figures in urban history. My dad, meanwhile, was mostly remarkable to me and the small few who knew our family. But the story of a place is equally contained within those people who live on the margins, whose names never make their way into headlines or books. As a planner, I believe that these are the people who should be listened to the closest – the ones interacting with city spaces on the ground, every day, as their ordinary lives play out. The ones most strongly impacted by major development projects, despite conventionally being the most voiceless in the process. There were some powerful parallels between what my dad said in our recordings and what Berman wrote to critical acclaim, which elevate his anecdotes from being vaguely touching to demonstrating the legitimate merits of everyday expertise. Take this, for example:
Dad: I was little, and we didn’t have a car – I didn’t think it [the Expressway] was going to change our lives. I just remember knowing that it was going to get really busy and really noisy with traffic.
Berman: [I]t seemed to come from another world. First of all, hardly any of us owned cars: the neighbourhood itself, and the subways leading downtown, defined the flow of our lives.
Dad: The area was already very commercial – lots of shops and street vendors – and very ‘ethnic’, I don’t know how else to say it. A lot of people spoke Italian or Yiddish as their first language.
Berman: Besides, even if the city needed the road … they surely couldn’t mean what the stories seemed to say: that the road would be blasted directly through a dozen solid, settled, densely populated neighbourhoods like our own; that something like 60,000 working- and lower-middle-class people, mostly Jews, but with many Italians, Irish and Blacks thrown in, would be thrown out of their homes.
Unlike Berman, my father never explicitly blamed this destruction and displacement on Moses, and I don’t know that any child would have been aware of the broader political and economic forces shaping his environment at the time. But he remembered details, like the kind of rollerskates he wore to play with his friends on the unopened parts of the Expressway while the rest of it was going up. ‘You probably never had a pair like this,’ he said to me. ‘You wear your regular shoes, your sneakers, and you have a key to tighten the little clamps in front of the skates, and you have a strap around the ankle, and you tighten the front so they don’t fall off…’ In the annals of history, it probably doesn’t matter enormously that these dorky young boys, who got mugged at Yankee Stadium and thrown down stairwells by school bullies, played together on the infamous highway. But my dad’s stories are reminders of childhood innocence and lightness, existing on the sidelines of the wider drama unfolding. They signpost agency and alternative modes of place-making: as much as formal planning decisions contour urban spaces, cities are also shaped by people, and given an identity through their relationships and memories. Of course, a more popular way to tell the story of the Expressway is to tell the story of Robert Moses.
Many have done this before, so he already has many monikers: most commonly ‘the power broker’, as in Robert Caro’s eponymous 1974 biography, though The Spectator also called him ‘the psychopath who wrecked New York’. Though Moses is less infamous in the UK than he is in the US, David Hare’s play Straight Line Crazy (2022) recently saw Ralph Fiennes play him at the Bridge Theatre in London in a series of dramatised contestations with anti-car activists seeking to protect their communities from his highway treatment. As a nod to the iconic battle between Moses and the journalist-turned-organiser Jane Jacobs – in which she successfully mobilised enough opposition to thwart his plan to thrust another expressway through quintessential parts of Lower Manhattan – Fiennes sputters on stage, as Moses once did in real life: ‘There is nobody against this: NOBODY, NOBODY, NOBODY but a bunch of… a bunch of MOTHERS!’ A mother herself, but also a thoughtful writer deft at community engagement, Jacobs had planning philosophies that were much more localised than Moses’s were. When it came to scale, she was interested in the level of the city block, and the networks of trust and safety that emerge from human activity in healthy neighbourhoods. Her ‘eyes on the street’ maxim was rooted in the basic principle that if people are around – frequenting local cafés, running after children, walking their dogs, walking at all – places become less dangerous, and more equipped to concern themselves with thriving. When I asked my dad what his childhood neighbourhood was like, his first response was: ‘People knew each other.’ The filmmaker Vivian Vázquez Irizarry, whose documentary Decade of Fire (2019) presents a counternarrative of the flames that swept through the South Bronx in the 1970s, said that people interviewed for her film remembered the area as somewhere you could genuinely ‘ask for sugar from your neighbour’. Most accounts of Moses indicate that it was some combination of money or power, or both, that motivated his alleged 17-hour working days. Caro’s exhaustive and exhausting 1,300-page biography of the planner is an intense character study that documents how Moses amassed more finance capital (his public expenditure ultimately totalled $27 billion, in 1968 US dollars) and political will than perhaps any other figure in New York history. Caro argues that it was corruption and manipulation that won him the role of parks commissioner (and construction coordinator, and then a seat on the planning commission), when Moses’s payoffs to key elected officials skewed land-use decisions in his favour. While those who pocketed his money fought for his projects, Moses mobilised urban renewal policies to appropriate large swathes of privately held land into public authority, a key legal measure to secure the right to actually build on them. By the end of Moses’s lifetime, the New York City region had been rendered an unrecognisable version of itself. The list of his creations is long: the expressways Major Deegan, Van Wyck, Sheridan, Bruckner, Gowanus, Prospect, Whitestone, Clearview, Throgs Neck, Staten Island, Long Island, Nassau and Brooklyn-Queens. Harlem River Drive and the West Side Highway. Then come the bridges: Triborough (now RFK), Verrazzano, Marine Parkway, Henry Hudson, Cross Bay and Bronx-Whitestone. He built Lincoln Center and he built Jones and Orchard beaches. Dams and power plants by Niagara Falls.
Even the park scheme for the World’s Fair. ‘[T]he list seemed to go on forever,’ Berman wrote. ‘But then, in the spring and fall of 1953,’ – Dad’s birth year – ‘Moses began to loom over my life in a new way: he proclaimed that he was about to ram an immense expressway, unprecedented in scale, expense and difficulty of construction, through our neighbourhood’s heart.’ According to Caro, Moses’s first proposals for the highway emerged in 1944, but their scale, cost and ambition meant they first had to gain political support. The plan spanned 113 streets; hundreds of sewerage, water and utility mains; a subway and three railroads; five rapid transit lines and seven other expressways that Moses himself was concurrently building. All critical infrastructure had to keep running during construction, but the residents of East Tremont and their housing stock were deemed acceptable collateral. Remarkably, a parallel route for the Expressway, which would have necessitated the destruction of only six residential buildings, instead of a whopping 54, had been considered – but ultimately avoided because it would have involved the bulldozing of a depot belonging to the Third Avenue Transit Company, a key Moses affiliate. Eviction notices were served en masse to some of the city’s most vulnerable tenants, an immediate human cost, while a fiscal expenditure of $10 million per mile was syphoned into polluting roadways instead of much-needed community development. One man’s ego and greed worked alongside the structural forces of poverty to further entrench existing inequalities. Broken eggs, indeed.
The author’s ancestors in the Bronx in the 1940s, featuring the same roller skates her father would use on the Cross-Bronx Expressway years later. ©Katie Mulkowsky
Meanwhile, my father’s stories animate the streetscape that Moses treated as a faceless canvas, and reveal one family’s sense of the local communities he discarded. Dad grew up in four different apartments around the same few streets, where the shopowners all knew his parents. The fourth apartment, just next door to the third, was declared superior because his mother could watch from the window as his sister crossed the street to school. It had formerly been occupied by an umbrella repairman who never told his clients that he moved, so for years they’d come and knock and have to be turned away, forced to look elsewhere to fix their umbrellas. Their final apartment sat at the corner of East Kingsbridge and Morris Avenue, about two miles north of the Expressway. Also named Morris, my grandfather died suddenly when my dad was just 16, making the cross-streets take on new meaning for him. Back in the innocence of youth, Dad called himself and his friends ‘street kids’ – in the summer, they’d leave the house by 9 am and walk one block south to St James Park, where they’d stay until dinnertime:
We didn’t go ‘out’ to play: when you live in an apartment building, you go ‘down’ to play. And when your mother wants you home for dinner, you don’t go ‘home’, you don’t go ‘in’, you go ‘up’ for dinner.
His memories were distinctly place-based, and the life he lived was hyper-local. Lehman College, where my father eventually earned a psychology degree, was a closer walk to his family’s apartment than his middle school was. In the end, by Caro’s count, the number of people in similarly quaint, happy neighbourhoods who were displaced as a result of Moses’s highway projects was more than 250,000.
Jacobs might have argued that the tally should be even higher, when considering the impact that the erection of a thoroughfare in the centre of a community has on its social dynamics and sense of place. She described the concept of a ‘border vacuum’, a lifeless area that ultimately becomes unsafe and falls into decline because of the lack of people passing through it. Beyond the immediate physical consequences of geographic fragmentation, there were the environmental impacts of exposure to two decades of construction, followed by the vehicle traffic that the Expressway produced when finished. The highway became one of the most congested in the US: contrary to the traffic alleviation that Moses originally promised, one consequence of the 416 miles of new road that he built was, logically, to encourage more driving. Today, I work not in New York but in London as a transport planner. There are many reasons why I fell into my field that didn’t feel purely emotional at the time. But when I consider the questions that I now deal with on a daily basis – how to nurture less car-dominated environments, how to foster ‘healthy streets’ where people of all backgrounds can safely travel by active or sustainable modes – I realise just how personally charged my professional engagements are. I see how they connect back to my dad. In the UK, ‘Ella’s Law’ is currently before the House of Commons. It’s named after Ella Roberta Adoo Kissi Debrah, who died in 2013 at just nine years old, following a severe asthma attack. Ella lived in the London borough of Lewisham near the heavily congested South Circular Road, where her mother Rosamund later learned that car traffic was producing illegal levels of nitrogen dioxide. Ella was the first person in the world for whom pollution was explicitly cited by a coroner as a contributing factor to her death. Her landmark case spurred a legal battle driven by her mother, in partnership with the Green Party peer Jenny Jones. Under Ella’s Law, or the Clean Air (Human Rights) Bill, they are fighting to mandate that air quality in every community be brought up to World Health Organization standards. It’s not lost on me that a grieving mother is the one powering this fight. Maybe it takes our own experiences of the extreme to metabolise sadness into something more like resolve. Rather than hardening him, my dad’s losses and own health issues made him empathetic; they equipped him to effectively treat others as a respiratory therapist. In his late 20s, he left New York behind, and finally learned to drive on the cross-country road trip that he took to settle anew in California. He met my mom, an ICU nurse, at work on the hospital rota. Before he became really sick, the life they lived was simple, powered by patient care on the clock, and love for each other off it. Legacy is a funny concept. Deeply American in some ways, it tasks us with the trope of ‘having an impact’ in our lifetime so that we’ll be remembered, for remembrance’s sake, when it’s over. With figures like Moses as cultural guideposts for the capacity – and danger – of unfettered social influence, I don’t think we ask ourselves enough what the nature of that legacy should be, or whether another individual’s infamy is really what our perplexing and tumultuous world currently needs. My father was human and, as such, imperfect. But he was unconcerned with ego. He was soft-spoken.
He wanted to take care of people, and for the hard-earned lessons of his first-hand experience to live on in his work. My greatest hope would be to honour him by doing the same. Urban planning is a strange field because you can essentially choose which side of history you want to be on: the one profiting off the master’s tools, or the one dismantling his proverbial house. It’s perhaps never been more important to opt for the latter. I can’t bring my dad back, but I can carry his teachings forward, a tremendous act of parenting, which parents on.
Katie Mulkowsky
https://aeon.co//essays/how-the-new-york-of-robert-moses-shaped-my-fathers-health
https://images.aeonmedia…y=75&format=auto
History of ideas
The discipline today finds itself precariously balanced between incomprehensible specialisation and cheap self-help
‘As long as there has been such a subject as philosophy, there have been people who hated and despised it,’ reads the opening line of Bernard Williams’s article ‘On Hating and Despising Philosophy’ (1996). Almost 30 years later, philosophy is not hated so much as it is viewed with a mixture of uncertainty and indifference. As Kieran Setiya recently put it in the London Review of Books, academic philosophy in particular is ‘in a state of some confusion’. There are many reasons for philosophy’s stagnation, though the dual influences of specialisation and commercialisation, in particular, have turned philosophy into something that scarcely resembles the discipline as it was practised by the likes of Aristotle, Spinoza or Nietzsche. Philosophers have always been concerned with the question of how best to philosophise. In ancient Greece, philosophy was frequently conducted outdoors, in public venues such as the Lyceum, while philosophical works were often written in a dialogue format. Augustine delivered his philosophy as confessions. Niccolò Machiavelli wrote philosophical treatises in the ‘mirrors for princes’ literary genre, while his most famous work, The Prince, was written as though it were an instruction for a ruler. Thomas More maintained the dialogue format that had been popular in ancient Greece when writing his famed philosophical novel Utopia (1516). By the late 1500s, Michel de Montaigne had popularised the essay, combining anecdote with autobiography. In the century that followed, Francis Bacon was distinctly aphoristic in his works, while Thomas Hobbes wrote Leviathan (1651) in a lecture-style format. Baruch Spinoza’s work was unusual in being modelled after Euclid’s geometry. The Enlightenment saw a divergent approach to philosophy regarding form and content. Many works maintained the narrative model that had been used by Machiavelli and More, as in Voltaire’s Candide (1759), while Jean-Jacques Rousseau re-popularised the confessional format of philosophical writing. Immanuel Kant, however, was far less accessible in his writings. His often-impenetrable style would become increasingly popular in philosophy, taken up most consequentially in the work of G W F Hegel. Despite the renowned complexity of their works, both philosophers would become enduringly influential in modern philosophy. In the 19th century, Friedrich Nietzsche, greatly influenced by Arthur Schopenhauer, wrote in an aphoristic style, expressing his ideas – often as they came to him – in bursts of energetic prose. There are very few philosophers who have managed to capture the importance and intellectual rigour of philosophy while being as impassioned and poetic as Nietzsche. Perhaps this accounts for his enduring appeal among readers, though it would also account for the scepticism he often faces in more analytical traditions, where Nietzsche is not always treated as a ‘serious’ philosopher. The 20th century proved to be a crucial turning point. While many great works were published, philosophy also became highly specialised. The rise of specialisation in academia diminished philosophy’s broader influence on artists and the general public. Philosophy became less involved with society more broadly and broke off into narrowly specialised fields, such as philosophy of mind, hermeneutics, semiotics, pragmatism and phenomenology. There are different opinions about why specialisation took such a hold on philosophy. 
According to Terrance MacMullan, the rise of specialisation began in the 1960s, when universities were becoming more radicalised. During this time, academics began to dismiss non-academics as ‘dupes’. The problem grew when academics began to emulate the jargon-laden styles of philosophers like Jacques Derrida, deciding to speak mostly to each other, rather than to the general public. As MacMullan writes in ‘Jon Stewart and the New Public Intellectual’ (2007):
It’s much easier and more comfortable to speak to someone who shares your assumptions and uses your terms than someone who might challenge your assumptions in unexpected ways or ask you to explain what you mean.
Adrian Moore, on the other hand, explains that specialisation is seen as a way to distinguish oneself:
Academics in general, and philosophers in particular, need to make their mark on their profession in order to progress, and the only realistic way that they have of doing this, at least at an early stage in their careers, is by writing about very specific issues to which they can make a genuinely distinctive contribution.
Moore nevertheless laments the rise in specialisation, noting that, while specialists might be necessary in some instances, ‘there’s a danger that [philosophy] will end up not being pursued at all, in any meaningfully integrated way.’ Indeed, while specialisation might help academics to distinguish themselves in their field, their concentrated focus also means that their work is less likely to have a broader impact. In favouring specialisation, academics have not only narrowed the scope of philosophy, but have also unwittingly excluded those who may have their own contributions to make from outside the academy. Expertise counts for much in today’s intellectual climate, and it makes sense that those educated and trained in specific fields would be given greater consideration than a dabbler. But it is those philosophers who wrote on a wide range of areas who left a profound mark on philosophy. Aristotle dedicated himself to a plethora of fields, including science, economics, political theory, art, dance, biology, zoology, botany, metaphysics, rhetoric and psychology. Today, any researcher who draws on different, ‘antagonistic’ fields would be accused of deviating from their specialisation. Consequently, monumental books that defied tradition – from Aristotle’s Nicomachean Ethics to Nietzsche’s Beyond Good and Evil (1886) – are few and far between. This is not to say, however, that there are no influential philosophers. Saul Kripke and Derek Parfit, both not long deceased, are perhaps the most significant philosophers in recent years, but their influence is primarily confined to academia. Martha Nussbaum, on the other hand, is one of the most important and prolific philosophers working today. Her contributions to ethics, law and emotion have been both highly regarded and far-reaching, and she is often lauded for her style and rigour, illustrating that not all philosophers are focused on narrow fields of specialisation.
But ‘the blight of specialisation’, as David Bloor calls it, remains stubbornly engrained in the practice of philosophy, and ‘represents an artificial barrier to the free traffic of ideas.’ John Potts, meanwhile, argues that an emphasis on specialisation has effectively discouraged any new icons from emerging:
A command of history, philosophy, theology, psychology, philology, literature and the Classics fostered German intellectuals of the calibre of Nietzsche and Weber, to name just two of the most influential universal scholars; such figures became much rarer in the 20th century, as academic research came to favour specialisation over generalisation.
When the significance of generalised thinking is demoted, the connective tissue that naturally exists between various disciplines is obscured. One is expected, instead, to abide by the methodologies inherent in one’s field. If, as Henri Bergson argued in The Creative Mind (1946), philosophy is supposed to ‘lead us to a completer perception of reality’, then this ongoing emphasis on specialisation compromises how much we can truly know about the world in any meaningful depth, and with it the task of philosophy itself. As Milan Kundera put it in The Art of the Novel (1988):
The rise of the sciences propelled man into the tunnels of the specialised disciplines. The more he advanced in knowledge, the less clearly could he see either the world as a whole or his own self, and he plunged further into what Husserl’s pupil Heidegger called, in a beautiful and almost magical phrase, ‘the forgetting of being’.
To narrow one’s approach to knowledge to any one field, any one area of specialisation, is to reduce one’s view of the world to the regulations of competing discourses, trivialising knowledge as something reducible to a methodology. Under such conditions, knowledge is merely a vessel, a code or a tool, something to be mastered and manipulated. By moving away from a more generalised focus, philosophy became increasingly detached from the more poetic style that nourished its spirit. James Miller, for instance, called pre-20th-century philosophy a ‘species of poetry’. Nietzsche’s own unique, poetic writing style can account for much of the renown his ideas continue to receive (and also much of the criticism levelled at him by other philosophers). Reading Nietzsche may at times be arduous and convoluted, but it is never dull. Indeed, Tamsin Shaw spoke of Nietzsche as less a philosopher and more a ‘philosopher-poet’. Jean-Paul Sartre called him ‘a poet who had the misfortune of having been taken for a philosopher’. While many sought to separate philosophy from other creative styles and pursuits, notably poetry and literature, Mary Midgley insisted that ‘poetry exists to express [our] visions directly, in concentrated form.’ Even Martin Heidegger, whose writing was far less poetic than Nietzsche’s, called for ‘a poet in a destitute time’, and saw poets as those who reach directly into the abyss during the ‘world’s night’. Of course, writing style alone cannot possibly account for philosophy’s floundering; Kant and Ludwig Wittgenstein proved incredibly influential despite their forbidding prose. Like Nietzsche and Heidegger, their works addressed monumental philosophical questions of being and of knowledge, altering the trajectory of philosophy itself.
But as philosophy became increasingly detached from the social world upon which its interests were focused, the question about whether it had any relevance to ‘real world’ concerns, anything meaningful to say about what it meant to be human, became more frequent, and was soon the prevailing criticism whenever the topic of philosophy arose. As Bernard Williams put it in 1996, there is the common accusation that ‘philosophy gets no answers, or no answers to any question that any grown-up person would worry about.’ Or, as David Hall argued, ‘it is the relevance of philosophy that is challenged first.’ Today, one can clearly see the effects of specialisation. Considered little more than a frivolous pastime in the 21st century, at best an elective, philosophy is seen by many to be ill suited to the vocation-oriented education system that is prioritised today. Universities provide courses that make students ‘job-ready’, while digital literacy is marketed as the benchmark of intellect and success. The infrastructure of education is almost unanimously in favour of quantified learning and STEM courses. In 2022, for instance, the Australian Research Council released its results for approved projects for 2023. Out of the 478 projects that were approved, 131 were for engineering, information and computing sciences; 117 for biological sciences and biotechnology; 98 for mathematics, physics, chemistry and Earth sciences; 93 for social, behavioural and economic sciences; and 39 for humanities and creative arts. Stephen Hawking was one of the most vocal critics of philosophy in recent history, declaring in 2010 that ‘philosophy is dead.’ For Hawking, philosophy lacked the empirical rigour of the sciences. This wasn’t a new accusation. In Power Failure (1987), Albert Borgmann claimed that science is superior to the humanities since ‘there is always by near unanimous consent a best current theory. There never is any such thing in the humanities.’ Einstein, he wrote, ‘superseded Newton in a way in which Arthur Miller has failed to supersede Shakespeare.’ Yet what Borgmann didn’t understand is that philosophical theories are not necessarily meant to be proven or disproven, and that philosophical ideas do not simply become obsolete as new ones take shape. As Hall put it: ‘the philosopher of culture is concerned primarily not with questions of the truth or falsity of this or that interpretation, but with the articulation of those important understandings that promote cultural self-consciousness.’ In response to the stifling impact of specialisation, certain writers and scholars have sought to rectify philosophy’s obscurity by attempting to make it more relevant to society. But in their efforts to broaden philosophy’s reach, many have simply turned philosophy into a corporate enterprise. Corporatisation – the most egregious mutation of neoliberal capitalism – has had a devastating impact on philosophy, to the extent that ideas and creativity are embraced only insofar as they are marketable and profitable. In an age dominated by self-help, the cult of Silicon Valley and the normalisation of excessive wealth, philosophers have been demoted, replaced with ‘thought leaders’ and think tanks, influencers and entrepreneurs.
Kiran Kodithala, in his article ‘Becoming the Übermensch’ (2019), even sees Nietzsche’s Übermensch as an entrepreneur, providing a particularly egregious interpretation of Nietzsche’s philosophy:
According to Nietzsche, becoming ubermensch is quite simple. His recipe is to believe in yourself and stop worrying about the world. The status quo will always resist change, the society will always call you crazy, some might even label you a narcissist, and a few might call you naive for coming up with radical ideas.
For Kodithala, Steve Jobs can be seen as one possible incarnation of Nietzsche’s elusive Übermensch, in large part due to his dogged pursuit of creativity against considerable hardships. Yet Nietzsche would have baulked at the implication, while admonishing society’s celebration of tech moguls like Jobs and Elon Musk, who have simply reinforced the status quo under the guise of entrepreneurship, rather than disrupting it. These were not the individuals that Nietzsche had in mind when he theorised the Übermensch, a concept that applied less to a specific individual than to an idea. Had Nietzsche intended for the Übermensch to apply to a specific person or persons, he would have reserved it for the greatest artists only. For Nietzsche, art exists as the purest form of self-expression, and he held in the highest esteem figures like Ralph Waldo Emerson, Goethe and Schopenhauer who, he felt, exhibited the intrinsic spirit of self-overcoming. In the 21st century, creativity has been co-opted by industries of capital, and the very idea of ‘greatness’ has lost its meaning, increasingly applied to those who, Nietzsche would have argued, do nothing but defile culture and tarnish the very idea of creativity. Creativity is not rewarded as an end in itself, but merely as a method to accrue capital. Or, as Jenny Odell puts it in How to Do Nothing (2019), art, philosophy and poetry struggle to survive ‘in a system that only values the bottom line’; such pursuits ‘cannot be tolerated because they cannot be used or appropriated, and provide no deliverables.’ To this end, great philosophical works have been replaced by pop philosophy books that are more closely associated with the self-help industry than with philosophy itself. Alain de Botton is one of the more familiar figures whose place in contemporary philosophy attests to this shift; his School of Life organisation (comprising a large production team) has turned philosophy into a business aimed at selling gimmicky merchandise under the guise of contemporary enlightenment. While his desire to bridge the gap between philosophy and the general public is certainly commendable, his efforts are at once a help and a hindrance to the nature of philosophy itself. On the one hand, his books attempt to ‘modernise’ philosophy for a broad readership that might otherwise be unfamiliar with such concepts or philosophers, while on the other his particular brand of modernising the field threatens to reduce philosophy and philosophical concepts to a gimmick for curing self-esteem issues. Titles such as How Proust Can Change Your Life (1997) and How to Think More About Sex (2012) share nothing with the great works of philosophy, while promoting the harmful notion that, if philosophy is to have any value now and beyond, it must base its worth on its practical use-value as an antidote to society’s psychological sickness.
De Botton is not alone in this treatment of philosophy as a self-help marketing device, as an alarming number of so-called ‘philosophy’ books sold today are merely self-help books masquerading as philosophical treatises. One such book blurbs: ‘How can Kant comfort you when you get dumped via text message? How can Aristotle cure your hangover? How can Heidegger make you feel better when your dog dies?’ Certainly, none of these philosophers ever intended their work to be used in such a way. In her scathing review of Colin McGinn’s poorly received The Meaning of Disgust (2011), Nina Strohminger called the book ‘an emblem of that most modern creation: the pop philosophy book. Actual content, thought, or insight is entirely optional. The only real requirement is that the pages stoke the reader’s ego, make him feel he is doing something highbrow for once.’ These books may, of course, prove useful to many people, but they also risk trivialising our expectations regarding what philosophical and critical thinking is supposed to feel like. As Christian Lorentzen put it in the London Review of Books in 2020: ‘Many people buy books that supply the illusion of thinking …’ These books can help introduce readers unfamiliar with philosophy to the thoughts and ideas of some of the great philosophers, but they stop short of demanding more critical engagement from readers. At most, they can make readers feel a bit better, not an unworthy goal, but not at all one with which philosophy itself is concerned. As the philosophical biographer Ray Monk has argued, these books ‘might have a purpose.’ ‘But,’ he adds: ‘that’s not philosophy.’ In his book The Nature and Future of Philosophy (2010), Michael Dummett asks: ‘Where, then, is philosophy likely to go in the near future?’ It is a question that many people, both philosophers and non-philosophers, often ask. In fact, as Kieran Setiya recently pointed out, it isn’t uncommon for people to lament the state of philosophy. He specifies that philosophers of a certain age tend to deplore the discipline’s lack of direction, or the lack of great, influential figures. But there is an overwhelming feeling among younger readers and practitioners that philosophy is in a particular stage of uncertainty or stasis, and that it is in want of a clearer identity or direction. Dummett recognised that specialisation, and the various opposing traditions that emanated from it, have had no small impact on philosophy’s future: ‘[T]he gravest obstacle to communal progress in philosophy has been the gulf that has opened between different traditions.’ Dummett puts forth the argument that the most fruitful path taken in philosophy has been the analytic tradition, whose chief interest has been language. Though he believes the analytic tradition has certain strengths over the continental focus on phenomenology, he also sees potential in a ‘reconciliation’ between these traditions, believing such a union could be best met through a mutual focus on the philosophy of mind. Both scientists and philosophers, he contends, have become obsessed with the idea of consciousness, an area that may, he reasons, see these divergent traditions meet each other halfway. Yet there is still the larger problem of the lack of understanding regarding philosophy’s identity. Pop philosophy has flooded the market, adding to the confusion about what philosophy actually is and what it does.
On Penguin Australia’s ‘pop philosophy’ site, the publisher promotes a list of books – by writers such as de Botton, A C Grayling and Marie Robert – that offer ‘some pearls of wisdom to help steer you through your day’. Promoting pop philosophy is one thing; one might expect that a separate search for ‘philosophy’ on Penguin’s website would at least yield more substantial results. Instead, one is met with an incongruous mix of works by Jordan Peterson, Marcus Aurelius, Stephen Fry and Seneca. It is perhaps no surprise that philosophy is in such a state of confusion, when classic philosophical works appear alongside lightweight self-help books, as if they were interchangeable. And while academic books might prove more substantial in their offerings, they are notoriously and often prohibitively expensive, meaning they are largely ignored, or read almost exclusively by other academics. There is a disconnect between philosophy as it was practised by the likes of Nietzsche, Heidegger and Kant, and what readers are being offered today. Corporatisation and commercialisation have not only dulled people’s tolerance for critical thinking but have warped their expectations about what it means to read philosophy, leading them to see it only as something that can make them happier. But as Monk reminds us: ‘Philosophy doesn’t make you happy and it shouldn’t. Why should philosophy be consoling?’ Nietzsche himself recognised that philosophy can be an unsettling endeavour. In his final book, Ecce Homo, he claimed that philosophy is ‘a voluntary retirement into regions of ice and mountain-peaks – the seeking-out of everything strange and questionable in existence.’ He wrote: ‘One must be built for it, otherwise the chances are that it will chill him.’ In 2005, two years before his death, Richard Rorty similarly noted that ‘philosophy is not something that human beings embark on out of an inborn sense of wonder …’ Instead, Rorty believed that philosophy is ‘something [people] are forced into when they have trouble reconciling the old and the new, the imagination of their ancestors with that of their more enterprising contemporaries.’ David Hall once argued that:
[I]t is the primary function of the practising philosopher to articulate cultural self-understanding. And if the philosopher fails to provide such an understanding, he fails in the task that is his very raison d’être.
Philosophy, of course, is not meant to be for everyone, and Nietzsche knew this. It is easy to see why Bertrand Russell felt that Nietzsche was elitist, when Nietzsche claimed: ‘These alone are my readers, my rightful readers, my predestined readers: what do the rest matter? – The rest are merely mankind.’ Yet, in many ways, Nietzsche’s works exemplify philosophy at its best. They were not academic in nature, but nor were they overtly commercial. They were impassioned works of tremendous literary force. Nietzsche did not see himself as a philosopher in the traditional sense, which helps to explain his unconventional place in philosophical history. But Nietzsche nevertheless saw himself as part of a collective. While Borgmann seemed to pit scientists against each other in an ongoing battle of one-upmanship, Nietzsche recognised that he was drawing on those who came before, and that his own readers would likewise draw on him.
In Daybreak (1881), one of his earliest and most underrated works, he writes: ‘All our great mentors and precursors have finally come to a stop, and it is hardly the noblest and most graceful of gestures with which fatigue comes to a stop: it will also happen to you and me! Of what concern, however, is that to you and me! Other birds will fly farther!’ Nietzsche has indeed influenced a slew of successive thinkers, though no other philosopher since has had such an enduring impact. Clearly, our century’s emphasis on quantified knowledge, specialisation and marketability has created an intellectual climate that not only devalues philosophical thought, but has turned philosophy itself into something it was never supposed to be.
Siobhan Lyons
https://aeon.co//essays/since-when-is-philosophy-a-branch-of-the-self-help-industry
https://images.aeonmedia…y=75&format=auto
Film and visual culture
It’s often said that a successful picture ‘captures the essence’ of a subject. But a great photograph does so much more
The preeminent Victorian portrait photographer Julia Margaret Cameron wrote that, when photographing important subjects, ‘my whole soul has endeavoured to do its duty towards them in recording the greatness of the inner as well as the features of the outer [subject].’ Consider this portrait of Cameron’s niece, Julia Jackson (many of Cameron’s most interesting portraits were of women). It is a powerful and distinctive image. Suppose you were asked to explain what makes it such. Having just read what Cameron said she was trying to achieve with her portraits, you might be tempted to say something along the lines of: ‘it captures the essence of the person being photographed.’ This is, in any case, the sort of thing one tends to hear said of good portrait photographs. Of course, a few qualifications might also be added, to head off criticisms: ‘I mean, it likely captures, at a particular moment, something significant of the essence of the person being photographed.’ Perhaps it will immediately be admitted that it is difficult to know whether or not this photograph does really capture something significant of Jackson’s essence, given that no one now alive knows what she was like. Julia Jackson (1867) by Julia Margaret Cameron. Courtesy the Art Institute of Chicago What should we make of the tendency to describe portrait photographs as capturing something of the essence of a person, at a certain moment? I would suggest that despite this great photographer’s endorsement of something like this thought, we should be wary of it. It can be misleading as a standard for describing good portraits, and it can be stultifying when it is treated as an ideal to be aimed at by portraitists. There are many different ways that portraits, or photographs of people more generally, can be aesthetically appealing and valuable, and the tendency to gravitate towards this ideal should generally be resisted, tempting as it may be. It should be resisted on both artistic and philosophical grounds. Let’s start by unpacking the qualified claim itself. To begin with one of the qualifications, there is the idea of capturing a person at a moment in time. In fact, early photographs, such as the one of Jackson above, typically had long exposure times. Sitters would need to put significant effort into staying very still, and even small movements could leave traces in the final photograph. Cameron actually thought of the softness produced in her portraits by movement blur, as well as by imprecise focusing, as a positive feature of them. Even after it becomes possible to use much faster shutter speeds, it remains the case to this day that light events are still recorded by cameras during a period of time, rather than at a moment in time. When we consider that it’s now common to take photographs at a shutter speed of one-100th of a second or faster, it’s easy to think of this as being more like taking a photograph at an exact moment in time, but the movement blur that can appear in photographs taken even at speeds as high as one-100th of a second puts the lie to this thought. 
Recordings of movement in photographs do not straightforwardly correspond to anything in normal human vision, and the possibility of using a long exposure time is still often treated as a creative option, with very different outcomes depending on whether we are talking about a ‘long’ exposure of one-fourth of a second – as in this photograph I took recently of pedestrians in Boston, as I waved my camera just so – or leaving the shutter open for seconds or minutes (the latter is common when it comes to landscape photography, where choppy water may appear smooth, for instance). In a rush © Daniel Star The creative use of shutter speeds is related to a second problem with the common thought that we are analysing. Photographs ‘capture’ scenes only in a highly attenuated sense. We do not see the world in the way we see scenes in photographs. Skilled photographers exercise a significant degree of control over the content and production of photographs in a large number of ways, both before the shutter closes and afterwards. In a great many cases, especially when particular prints are even just informally ‘certified’ by the photographer, it is best to follow Ansel Adams’s thought that photographers make photographs, rather than merely take them. So much for a naive way of thinking about what it means to ‘capture’ a scene or a person. We are still left with the idea that people have an essence that images can depict in a truthful fashion (this is sometimes thought of as an ideal with respect to portrait paintings, and not just portrait photographs, it might be noted). Do people have individual essences, and are they such that significant aspects of them might be revealed in depictions of their physical countenances? As a philosophy professor, I’m tempted to start picking apart the idea of an essence of this kind, but that would (sadly) be beside the point in the present context. It’s uncontroversial to say that individual people have particular physiological and psychological characteristics that they and others find significant, and it isn’t problematic to say that photographs can in some ways contain features that line up with some such features. This may be all that people have in mind when they think of photographs as capturing the essence of people. You may have a friend who is typically a happy and relaxed person, with a distinctive smile, and you take a photograph of her at a moment when she is looking happy and relaxed in a particular way that she tends to. Of course, it isn’t guaranteed that she will look happy and relaxed in the photograph (the muscles in our faces are constantly moving, so you may get unlucky using a fast shutter speed and be surprised to find that she appears to be worried), but if she does you might say you have captured something of her generally happy and relaxed nature. When successful in this way, photographs that present us with a typical pose for a particular person may, for people who know them at least, come to function as a synecdoche for their character. Portraitists who follow this approach need to know a fair amount about their subjects’ characters. But what about when we know very little about the subjects of a photograph? Thinking and resting © Daniel Star Consider two street photographs of people that I know are ‘candid’ or unposed because I happen to have taken/made them. 
The banner image at the top of this page shows a man appearing to suddenly come to a halt (close to an easy-to-miss reflection of the photographer, as it happens), in a manner that suggests he has temporarily forgotten what activity he was in the midst of doing, or what he is meant to do next. Interpreting the image immediately above, one might think it is of a tired man with his legs up (a sock just showing on one side) in the midst of thinking about something, while his loved one rests on his back. Knowing they are candid, one might wonder whether the descriptions I just provided of these photographs line up with the reality before the camera at the time. In the second case, I suspect (but do not know) the description I just provided is accurate in this sense, but in the first case I have no idea whether or not it is, even though I took the photograph. It’s quite possible that this man was doing something completely unrelated to my interpretation. As for the essence or character of these subjects, and whether the poses in question are typical or untypical of them, we remain in the dark. Does it really matter, when appreciating photographs as artworks, whether or not we know that an interpretation of people in the photograph that we favour lines up with reality (on the basis of an independent source of evidence)? In a great many cases, it does not. Street photography, which one might take to include candid street portraits, is one whole genre where it usually doesn’t matter. One might go further and say that part of what we enjoy when viewing many photographs, in this genre but also others, is that we are aware that our interpretation of the photograph we are looking at is one that we can only ever say may line up with reality. Ignorance of this specific kind can be aesthetically enjoyable. This type of tantalising ignorance will be lacking in our experience of a purely AI-generated image that we know to be such, even if it presents to us a scene featuring people in a way that means we might easily confuse it with a photograph. Consideration of such digital images, whose creation didn’t depend on particular light events in the right way, can help us understand what is distinctive of photographs, even on a fairly permissive ‘new theory of photography’ that would classify some hard-to-categorise images, such as Gerhard Richter’s ‘Betty’ (1988), as photographs. A different way in which literal truthfulness is sometimes clearly not a dominant concern comes to the fore when we consider photographs that purposefully present to us a fictional narrative, albeit a much more fragmentary and indeterminate narrative than a typical fictional movie or novel would. Fictional narrative structures can also be made less fragmentary than they otherwise would be when photographs exist as part of a curated series, and this is one of a number of reasons why it can be an error to focus too much on attempting to appreciate or analyse single photographs in isolation from the series they were made to fit into. Various projects of Cindy Sherman, Duane Michals, Alec Soth and Zanele Muholi – to name just a few important contemporary photographers who have focused on creating narrative portraits in a series – all bear this out. It is also important not to forget that the purposeful use of fictional narrative elements in photographs has a long history: many of Cameron’s photographs, for instance, were staged to represent mythical or biblical scenes. 
Despite this element of continuity with tradition, the best of Sherman’s self-portraits are novel and strike us as richer in subtle, often ambivalent, narrative details. Galina. Odessa, Ukraine 2018. ©Alec Soth/Magnum Photos The problem as I see it isn’t that photographs don’t have veridical features (features isomorphic to features of the world in front of the lens when the photograph was taken), to a lesser or greater extent. It would be a mistake to deny that they do, or indeed that part of what is involved in appreciating photographs involves knowing this. Rather, the problem is that there is a tendency to overvalue or even fetishise veridicality when we move from contexts where the evidential status of photographs rightly matters to us more than anything else – the courtroom, the newspaper and the historical archive – to contexts where the aesthetic features of photographs are what we are being asked to consider most significant. When photographs are art, whether of the amateur or professional kind, there are a range of aesthetic standards that it may or may not be appropriate to apply in any specific case (genre classifications, as well as more specific information about particular artistic projects, help us), and many of these are not concerned with presenting the world to the viewer in a veridical fashion. Some readers will no doubt think they would never have been tempted to use anything like that phrase I started with to describe good portrait photographs – they might avoid speaking of capturing the essence of people. I’m not claiming that only people who do use that sort of phrase make the mistake I have in mind. It’s perhaps more common among critics to complain of certain artistic photographs of people that they simply don’t look like the person who was photographed. Richard Avedon and Diane Arbus, for instance, have attracted this kind of criticism. The philosopher and art critic Arthur Danto complains, in a much-read paper on photography titled ‘The Naked Truth’ (1998), that Avedon’s photograph of Isaiah Berlin taken in 1993 (not one of Avedon’s better photographs, one might independently think) doesn’t look like Isaiah Berlin himself, due to the specific techniques that Avedon adopted for taking portrait photographs. We are told that Avedon is guilty of deception because of the ‘natural authority’ people now and in a hundred years hence will attach to photographs, thinking ‘photographs never lie’. This is very odd, given that Danto himself makes much of the fact that photographs can be non-veridical in a number of respects. Instead of accusing artistic photographers of deception, why not call for better education when it comes to understanding that artistic photographs are often going to be non-veridical in important respects? And, regardless of whether such education is available to people, why should we hold any art (or, indeed, a whole artform) hostage to public misconceptions? These are not questions Danto considers in his essay. We don’t think it’s a valid aesthetic criticism of Vincent van Gogh’s paintings, for instance, that even well-educated people were inclined to misunderstand what he was doing around the time he painted them, and it seems irrelevant that many may still do so; rather than suggest that Van Gogh should have instead employed different techniques, we might say that people should instead do a better job of trying to understand and appreciate his paintings. 
As for Avedon’s photography, a fairer assessment of it would need to include more and better examples. Student Nonviolent Coordinating Committee Headed by Julian Bond, Atlanta, Georgia by Richard Avedon, 23 March 1963. © The Richard Avedon Foundation Danto’s talk of how in a hundred years hence we will still think of photographs with the same misconceptions as we did in the 1990s (when his article was published) appears quite incautious now that we worry a good deal about the political use of deep fakes and wonder how we might be able to tell the difference between photographs and AI-generated images. For all the issues that we now rightly worry about, we might hope that a silver lining of living in the age of AI-generated images and various digital computing effects on smartphones (eg, filters, fake bokeh and the automatic combining of multiple exposures) may be a significant and widespread diminishing of naivety when it comes to taking photographs at face value. No portrait photographer has attracted misconceptions and divided critics as much as Arbus (and many find her work more interesting than Avedon’s). I obviously cannot hope to do justice here to either her work or the historical debate concerning its nature and value. Suffice to say, I agree with those admirers (among them the philosopher David Davies) who contend that, far from objectifying her subjects or aiming to have us view them as freakish or pitiful, Arbus produced a highly distinctive body of work that attests to the humanity of her subjects. Arbus’s empathetic approach is concerned with how human beings present themselves to others – their projections, rather than their essence – yet is stylistically all her own. If portrait photography simply invited us to look at people, there would be little room for recognising the distinctive stylistic virtues of Arbus’s work. The documentary aspect of photography is even more to the fore in Nan Goldin’s work, yet here too we see a particular ethical and artistic vision at work (one that is quite different from Arbus’s), as is made clear in Laura Poitras’s wonderful documentary All the Beauty and the Bloodshed (2022). Early in her career, Goldin would invite her friends and acquaintances to events in New York where she would present slideshows of photographs that she had taken of them, attuning herself to their feedback, and even allowing them to destroy photographs of themselves that they did not like. It appears she viewed her work at that time as providing a space in which her photographic subjects, often queer, could experiment with ways of representing themselves to each other, and to an often-hostile world; combatting hatred and bigoted stereotyping from mainstream America, and building solidarity. This early policy of allowing others to influence the curation of her photographs can be contrasted with her later more than justifiable decision to publish photographs of herself as a victim of partner abuse, and of her abuser, against his wishes. The focus on co-creation between subject and photographer that one sometimes finds in Goldin’s work makes for an attractive way of approaching the making of photographs of people. She doesn’t mention Goldin, but in her essay ‘Respecting Photographic Subjects’ (2019), the philosopher Macalester Bell, influenced by Danto, claims: ‘Ideally, an artist who respects his subjects, qua artistic subjects, will collaborate with them in creating the final image.’ I think that this is too strong. 
We might instead take the collaborative ideal that Danto and Bell valorise, attractive and productive as it is to some photographers, to be just one among a number of legitimate artistic ideals with respect to making great photographs of people. One might be tempted to suggest Goldin’s work provides an example of how our selves can be constructed in a narrative fashion through portraits, but the philosopher Cynthia Freeland argues convincingly in Portraits and Persons (2010) that this is more than any series of portraits can ever hope to provide. Still, we need not deny that some of the portrait photographs produced by Goldin (or by Dawoud Bey, to mention a different example) succeed in promoting empowering representations of particular marginalised people and their communities. When it comes to moral judgments of photographic artworks or the artists that produce them, there are many types of concerns we might have. It’s far from clear that collaboration between photographers and subjects is crucial. I cannot here discuss many of the moral issues that arise with respect to photography, but one particularly relevant issue has to do with our individual preferences that other people represent us in various ways and not others. Danto suggests that a photograph of a person ought to respect their own self-conception and will otherwise count as morally disrespectful. I agree with Bell that Danto’s idea can’t be right (although I also disagree with Bell’s own variation on it). Of course, we all wish to have ourselves represented to the world in some ways and not others, but the moral issues here are rather complex. For a start, our own representations of ourselves are often inaccurate, and either overly positive or overly negative. Being offered interpretations of ourselves by others can help us better understand who we are, and help us be more modest about our powers and rights to shape how others view us, as individuals (following Bell, I am here referring to individual characteristics, such as messy hair or lopsided grins, rather than characteristics that are selected or highlighted to reinforce stereotypes). I certainly do not wish to deny that it’s possible for photographs to be morally disrespectful. Clearly they can be, and sometimes are. But I think the problem with morally disrespectful photos is not usually best explained by pointing out that they fail to align with individual self-conceptions, and the creation of fictional narratives or art that is not veridical in this respect is not ipso facto disrespectful. I can’t here hope to provide a full account of what can make photographs disrespectful, but let me end by gesturing at a couple of things I take to be crucial in many cases. First, photographs can provide or reinforce harmful stereotypes with respect to particular groups of people that are oppressed or considered foreign or ‘other’ to an audience. Second, photographs can be dehumanising even independently of how they might rely on or reinforce stereotypes. Photographs of people who are suffering or are dead can be of this kind, no matter what social categories the photographic subjects belong to, although not all photographs that feature suffering or death are dehumanising. 
One way to understand what is involved with dehumanising photographs is to adopt a Kantian view according to which what is essential to our humanity, and what is denied in cases of dehumanisation, is our rational agency – our capacity to autonomously and intelligently set goals for ourselves as individuals and pursue them in a rational fashion. Danto’s suggestion that respect involves having control over how one’s self-conception is represented by others can be thought of as one version of this view. An alternative perspective, which I favour, would have us focus instead on a multitude of significant, rich features that human lives typically possess, from the enjoyment of simple pleasures and the avoidance of pain, to activities that involve learning about aspects of the world or practical achievements, to engagement in loving relationships (both familial and romantic), to participation in a variety of cultural rituals, including engagement with art (either ‘low’ or ‘high’). One significant aspect of our humanity is our rational agency, of course, but it may be a mistake to make this alone the centrepiece of ethics. Dehumanisation can take many forms, depending on what aspects of a person’s humanity are being denied or obscured. The idea that a paucity of stories can be dehumanising, given people’s many aspects, is proposed and defended by the Nigerian novelist Chimamanda Ngozi Adichie in her TED talk ‘The Danger of a Single Story’ (2009). If we return to the idea we started with, that photographs of people might capture their essence, we can now add that a key reason this idea is so problematic is because no individual can be captured in a single story, whether it is provided by a photograph or in some other way. In many cases, the fact that people cannot be easily or adequately captured in stories does not undermine the artistic value of particular photographs, but we should endeavour to keep in mind this fact concerning stories and not succumb to a very human tendency to forget it.
Daniel Star
https://aeon.co//essays/an-individual-cannot-be-captured-in-a-photograph
https://images.aeonmedia…y=75&format=auto
Work
Modern life subjects us to all-consuming demands. That’s why we should reflect on what it means to step away from it all
In a photographic exhibition titled ‘Removed’ (2015), Eric Pickersgill includes depictions of subjects in the company of other intimates, all of them captured in the act of staring blankly at their hands, where smartphones would normally be placed but have been withdrawn to create images of people alone together. Here is how the artist describes his inspiration for the project: The work began as I sat in a café one morning. This is what I wrote about my observation: Family sitting next to me at Illium café in Troy, NY is so disconnected from one another. Not much talking. Father and two daughters have their own phones out. Mom doesn’t have one or chooses to leave it put away. She stares out the window, sad and alone in the company of her closest family. Dad looks up every so often to announce some obscure piece of info he found online. Twice he goes on about a large fish that was caught. No one replies. I am saddened by the use of technology for interaction in exchange for not interacting. This has never happened before and I doubt we have scratched the surface of the social impact of this new experience. Mom has her phone out now. Pickersgill’s work is very much of a piece with broader social and economic concerns over the phenomenon of disengagement in (and from) our culture. Recently, the United States surgeon general issued a public health advisory detailing a growing epidemic of isolation and loneliness, one fuelled largely, though not exclusively, by our increasing use of digital technologies as a substitute for in-person engagement. This trend was taking shape long before the COVID-19 pandemic forced our mass removal from public space and from familiar patterns of shared life, and we are only beginning to understand that era’s long-term effects on our personal and public wellbeing. Meanwhile, the moral and managerial panic over the great resignation, quiet quitting and other (even benign) forms of labour disruption in the wake of the pandemic reveals the extent to which our voluntary practices of leave-taking have been pathologised. We are, it seems, obliged to show up – to be reliably present, available and legible – for our own good or for that of the collective. Rarely do we display much intellectual curiosity about what these practices of withdrawal might be doing for – not just to – us. Acts of disengagement are routinely met with scepticism, judgment and pushback in public discourse. What if we were to treat them instead as opportunities for open enquiry and ask what is to be gained by them? In that spirit, I propose an expanded lexicon that speaks to the benefits of escaping (even temporarily) the confines of waged work; of disconnecting from the enmeshments of a modern existence; and of seizing interludes for contemplation in a world that is chockablock with demands and distractions. The early period of the pandemic (which approximated in many respects a kind of general labour strike) gave some of us an intimation of what life lived largely off the clock can be like when much of what passes for work is suspended or slowed and we are afforded precious ‘little gaps of solitude and silence’, as the French philosopher Gilles Deleuze called them, to engage in worthy pursuits that elude us under normal circumstances. We found incomparable personal freedoms and new opportunities for enrichment and fulfilment in the cessation of many of our standard operating procedures. Then, as everyone recalls, we were summoned back to the office. 
But, once we had experienced this new way of being, the prospect of returning to the old order – submitting to the control, policing and surveillance of our former workaday lives – became almost unthinkable, especially for members of a chronically insecure workforce forced to endure low pay, lack of opportunity for advancement, inflexible schedules, and a multitude of everyday insults and indignities. Perhaps the chief insult to us all is the governing assumption that we must be collocated – or collated – to do our best work, despite having demonstrated our capacity for self-directed productivity from home (or other private quarters) under the most trying circumstances. Alas, the administrative impulse to fix our fluidity is nearly universal. Remote work is increasingly supervised and surveilled with keylogger software and other technologies to monitor employee time on task. Amazon, Google and Zoom are among a growing list of companies issuing return-to-work orders, ostensibly to capitalise on the sorts of serendipitous face-to-face interactions that might lead to profitable new ideas and innovation but also simply to pay the rent. In the US, recent survey data from the Government Accountability Office show that most federal agencies in GAO’s review recorded average headquarters weekly occupancy rates of 25 per cent or less during the first three months of 2023, prompting cries of concern from industry observers who cannot abide all that unused physical space. Across the globe, underutilised or vacant offices are being tracked with alarm. We belong back at work, we are told, which is another way of saying that we belong to work. ‘Sense of belonging’, in fact, has become a buzzword in human resources offices over the past decade or so. Nearly every setting where work is performed – from the college campus to the corporation – has taken up the rallying cry of cementing an attachment to place. The literature on sense of belonging is clear on the benefits to workers: enhanced performance, motivation and commitment, as well as heightened feelings of safety and security. In many respects, though, the ultimate beneficiary is the organisation itself, which has a vested interest in cultivating the ties – or tethers – that bind us. It is a small step from connectedness to capture, where the sense of belonging is an organisation’s sense that its subjects belong to it. Those of us who wish to contest our subjection can, of course, unite in coordinated campaigns of resistance and push for formal reforms. We can also embrace quieter, less visible tactics, including the direct action of flight. As Dimitris Papadopoulos, Niamh Stephenson and Vassilis Tsianos argue in their book Escape Routes: Control and Subversion in the 21st Century (2008), ‘escape functions not as a form of exile, nor as mere opposition or protest, but as an interval which interrupts policing.’ In The Scent of Time: A Philosophical Essay on the Art of Lingering (2009), Byung-Chul Han suggests that our experience of intervals is being ‘destroyed in order to produce total proximity and simultaneity’. When everything (and everyone) is within reach at all times, we lose a sense of what it means to be in – and even to savour – transitional states of in-betweenness. As an antidote, Papadopoulos, Stephenson and Tsianos recommend that we ‘tarry with time’ and ‘make spaces for the play of purposeless action’. 
We can, in other words, reappropriate some of the time and space being withdrawn from us. These can be reclaimed in the fugitive moments we thieve from the calendar, or they can be recovered in what the anarchist Hakim Bey in 1985 called ‘temporary autonomous zones’: undetectable underground enclaves that we carve out of the landscape of our everyday lives in order to find or free ourselves. Simultaneously, practices of disengagement might withdraw from organisations (workplaces primary among them) their extraordinary power to mediate – to dictate and direct – far too many aspects of our existence and experience. Opting to bypass certain workplace amenities and conveniences expertly designed to keep us at work – the cafeteria, the fitness centre, the dry cleaner, the onsite health clinic – might not seem like much of a tactic of rebellion, but it does its part to lessen our dependence on our employer as lifehack, helpmate or healer. On one level, disengagement can be understood as an effort to slip apparatuses of control wherever we encounter them. Think of this as a strike for worker empowerment. The bigger question, though, is whether we can ever really escape the toil and trouble for good. It often seems as if we are doomed to sustain the systems and structures that suspend us in cycles of weariness, overwhelm or misery. Respite is a natural adaptive response to a world that is too much with us, as the Romantic poet William Wordsworth put it, a world that might be improved by less (not fewer) of us in it. When people are exhausted, disengagement provides temporary relief from persistent feelings of overwork and a blessed release from our enervating entanglements. What society cannot countenance, it seems, is prolonged disengagement, which tends to be conflated with civic apathy or indifference. Seldom do we reckon with the costs of civic engagement or even frame that as a problem unto itself. Yet staying abreast of current events under the guise of doing our civic duty produces not just an informed (and occasionally misinformed or disinformed) citizenry but can also result in documented instances of information fatigue syndrome. In 2022, the Intergovernmental Panel on Climate Change found a high likelihood that extreme weather events, which increasingly dominate the international news cycles (as they properly should), have an adverse effect on mental health. The social media from which many of us reflexively gather our news and perspectives increasingly command our attention, virtually eroding opportunities for independent thought. In short, there is much to be anxious about, and moments of Thoreauvian withdrawal, in which we remove ourselves from the grid, might allow for some measure of self-preservation and sanity. But this isn’t just about seeking exit from the turmoil of a modern, connected existence, nor is it merely about recharging our batteries so that we can absorb yet more political polarisation, toxic social media or climate catastrophe. It is also about engaging in a social policy of degrowth as a corrective to practices that are taxing the planet, overheating our politics, and putting a strain on our individual and collective health. 
Writing in Big Issue magazine in 2020, the anthropologist and activist David Graeber put it pointedly: ‘If we want to save the world, we’re going to have to stop working.’ His observation echoes the philosopher Bertrand Russell, who asserted in the essay In Praise of Idleness (1932): ‘I think that there is far too much work done in the world, that immense harm is caused by the belief that work is virtuous, and that what needs to be preached in modern industrial countries is quite different from what always has been preached.’ Statements such as these acknowledge our own role in maintaining the schemes – hustle culture, time poverty, consumer capitalism, infinite growth – that threaten the very survival of our species or otherwise sabotage our prospects for a more satisfying life. It is often only in the interludes that we come to realise just how much our busy lives are an active conspiracy against the very things that supposedly give our existence a deeper sense of meaning and purpose. Little interruptions of the usual can be an invitation to pause and reflect, a rare opportunity for the deep noticing and heightened awareness that ritual and routine often obscure. An underappreciated interpretation of the disengagement phenomenon is that it is what we do – or where we go – to indulge what the literary critic A D Nuttall in Dead from the Waist Down (2003) called ‘the invisible life, the life of the mind’. That language is vanishing from our culture, where the patter and patterns of the corporation increasingly take root; observable outcomes, measurable impact and performative productivity are what count (and are counted), as everyone knows by now. Nevertheless, and much to the chagrin of those who envisage a world where human effort is always and clearly in the service of public priorities, there is no getting around the fact that much of intellectual labour is, at base, a private affair, one that is substantially located in the act (and in times and spaces) of withdrawal. Withdrawal has an almost universally negative connotation in public life, where it is treated as the ultimate transgression and disdained as retreat or defeat – the very opposite of engagement. However, to withdraw is also, crucially, to repair – both to go to a place and to mend. From this perspective, withdrawal is not merely a defeatist tack; rather, it is, or can be, direct action for a restoration of intellectual life – the kind that is free to ask (to fully engage with) impertinent questions – in settings that have practically banished it, made it inaccessible, or are attempting to monitor and monetise it according to terms not of our choosing. The life of the mind is lived in leisure, but not the version of leisure that looks like recreation or rest. As the political philosopher Sebastian de Grazia wrote in Of Time, Work and Leisure (1962): ‘Contemplation in the Greek sense is so close to leisure that in describing one and the other repetition is inevitable.’ He continued: ‘the classical ideal of leisure [had a] sense of freedom, superiority, and learning for its own sake.’ As Susan Neiman reminds us in Why Grow Up? (2014): ‘Plato and Aristotle believed that a life devoted to contemplation was the highest form of living.’ ‘Leisure … is outside of work and outside of inactivity,’ explains Byung-Chul Han. 
‘It is not a practice of “relaxation” or of “switching off”.’ Han continues: ‘Thus, St Augustine distinguishes leisure (otium) from passive inertia: “The attraction of a life of leisure ought not to be the prospect of a lazy inactivity, but the chance for the investigation and discovery of truth”.’ Among the questions some of us are investigating in our contemplative moments of disengagement, withdrawal, removal, retreat or escape – however we choose to designate those instances when we take our leave – are these: when, or to what extent, do our norms of organisational affiliation and attachment make us sick or otherwise compound the very problems such forms of connection are meant to solve? In what ways might our occasional absences improve our solitary and even our solidary experiences of work and of life more generally? Yes, there is a bit of fun to be had with the idea that our situation (and saturation) would be made considerably better without some of us in it. But there’s also a more serious proposition to consider. As the Tao Te Ching by the ancient Chinese philosopher Lao Tzu instructs: ‘It is because of its emptiness that the room is useful.’ Another way of saying this is that absence is an affordance – something is made available by it. We are conditioned in public life to view emptiness or absence as a defect(ion). That need not be the case. Crucially, moments of solitude – ‘the beguiling illicit love luring us away from the proper marriage of domestic demands and delights or the civic responsibilities of citizenship’, as Patricia Hampl puts it in The Art of the Wasted Day (2018) – permit us to see into the nature of things almost as if for the first time. Such insights are, of course, nearly always a threat to good order and must therefore be arrested or attenuated by the administrative classes before they form destabilising new habits among the rank and file. At certain intervals during the pandemic, when there could be glimpsed a tantalising end to our involuntary vacations from many public spaces, we heard friends, family and colleagues say that they would miss the different rhythm of life imposed by the shutdowns; they would miss the chance to slow down, to think, to pursue pastimes and passions, or to experience much of life that is concealed by our constant busyness. No one was suggesting a preference for death and disease or the loss of livelihood that were unwelcome features of the pandemic, but there was a palpable sense – often shared in hushed tones and with all due qualifications, as if not wanting to be misunderstood – that it had been, in many ways, a good interlude (a more human and humane way of being), one that we ought now to take new measures to preserve and protect. Many of us continue to experiment with ways to decouple ourselves from the deletions and depletions of workaday life. We certainly don’t require a mass casualty event to grant us time and space to live our lives more consciously or intentionally. The interludes we seize for ourselves – on whatever scale, from the episodic to the enduring – are precisely such occasions. It may require us to rip what Graeber, in Revolution in Reverse (2007), called ‘insurrectionary moments’ out of the warp and weft of our workaday lives. Getting there might take work, but there are alternatives to the way we live now, and interludes provide the distance we need to recognise them. A good interlude can alert us to much of what we’ve been missing. 
In Hampl’s phrasing: ‘What a surprise – to discover it’s all about leisure, apparently, this fugitive Real Life, abandoned all those years to “the limitless capacity for toil”.’
David J Siegel
https://aeon.co//essays/why-we-must-seize-leisurely-interludes-from-works-confines
https://images.aeonmedia…y=75&format=auto
Nature and landscape
In an era of 20th-century suburban sprawl, the great designer Russell Page infused soulful philosophy into his gardens
Russell Page (1906-85) was a brooding, chain-smoking loner who somehow became one of the most glamorous landscape designers of the 20th century. Born in Lincolnshire, England, he spent much of his career in Paris and worked all over the world, travelling to southern Europe, the Americas, Australia and the Middle East. Page designed grand gardens for aristocrats, industrialists and institutions, as well as many small-scale projects for his friends. His only book, The Education of a Gardener (1962) – a memoir blending historical and philosophical meditations with practical guidance – is still admired as a masterpiece of garden writing, reissued by New York Review Books (2007) and Vintage Classics (2023). For all his shimmering success, however, Page is something of an outsider, hard to place in the canon of landscape design. Though he was widely regarded as a great artist, he did not establish his own innovative signature style, and most of his gardens have been remodelled or destroyed. The significance of Page’s career seems curiously obscure. ‘Where does this man stand in the history of garden making?’ Fred Whitsey, the longtime gardening correspondent for The Daily Telegraph, asked after Page’s death. ‘He remains elusive.’ The question of Page’s legacy is partly a matter of style. Although he made many different kinds of gardens, he was most famous for highly formal designs in the classical French and Italian traditions. If you have ever looked at pictures of the gardens at Versailles, created by André Le Nôtre for Louis XIV, then you have seen the iconic example of formal design, with its rigid geometries and tightly controlled planting schemes. Most 20th-century English designers, by contrast, favoured looser, more informal landscapes. In recent decades, new kinds of naturalistic gardens, such as those of the Dutch master Piet Oudolf, have won prestigious commissions on both sides of the Atlantic, and now a new movement for ‘wild’ gardening is designing garden habitats for biodiversity and ecological repair. To contemporary eyes, classical gardens have come to appear rigid, even soulless, in their efforts to control nature. And so Page might simply be seen as an outmoded artist, the last of the great formal designers, left behind by modern times. A shady sycamore grove in Russell Page’s garden at Schloss Freudenberg, on Lake Zug in Switzerland. Photo by Michael Boys/Corbis/VCG/Getty As it turns out, though, Page’s thinking about gardens is too strange and too beautiful to follow such an oversimple script. A closer look at his writing, especially The Education of a Gardener, reveals how deeply he considered his own place in the development of garden-making. Like other midcentury designers, Page found himself dealing with distinctly modern conditions, trying to envision a future for home gardens in the era of suburban sprawl. Unlike many others, he responded to this problem in an unorthodox way – by adopting a quasi-religious mysticism. Far from soulless, Page devoted himself to an esoteric faith, influenced by the charismatic spiritual leader G I Gurdjieff and the Sufi teacher Idries Shah. After training as an artist in England and France, Page widened his perspective to study the design histories of many other, distant cultures. In The Education of a Gardener, he praised the forgotten landscape architects from ‘the Zen sect of Buddhism’ in 15th-century Japan and ‘the earlier Mogul gardens in India’. 
To see a path forward in the 20th century, Page proposed, Western garden-making needed to reconnect with the universal spirit that had guided ancient Eastern traditions. This, I think, was Page’s real contribution to design history – not so much a distinctive kind of planting scheme as a special role for the designer. He cultivated an image of himself as an artist-mystic, spiritually estranged from British imperialism and consumer capitalism which, he felt, tended to create ugly, life-stunting landscapes wherever they imposed themselves. Against the backdrop of mass-produced suburbia, he created therapeutic sanctuaries of unique harmony and tranquility. Reconsidered in this light, Page’s work speaks to our own moment in unsettling, surprising ways. While today’s naturalistic and wild styles might define themselves against his classical formality, they also unwittingly echo his spiritual critique of colonial and suburban landscapes. Page’s career shows how a vaguely orientalist ethos can be appropriated as a kind of high-end branding. At the same time, though, he offers unconventional theories and practices that anyone can use, on any scale, for making gardens in the ruins. Russell Page photographed in April 1966. Photo by Express/Hulton/Getty In his introduction to The Education of a Gardener, Page recalls how his professional career began in the early 1930s when, after art school and some years abroad in Paris, he took ‘a very subordinate job’ under Richard Sudell, a London-based landscape architect. The work, Page writes, involved ‘designing plantings for the endless new blocks of cheap flats then being built in the London suburbs.’ His tone is dreary, lightly snobbish. He needed the salary to get by but, from the start, suburban expansion weighed heavy on his spirits. Soon, though, Page’s first big commission took him out of London and, like a fairytale, into an enchanted past. The site was Longleat House, a distinguished but deteriorating Elizabethan estate. Page’s client, Lord Henry Bath, the heir to Longleat, showed him around, providing what turned out to be a crash course in garden history. In the 1600s, the marquesses of Bath had built grand gardens in a formal style influenced by continental models, with straight lines and tightly clipped hedges, to complement Longleat House, an impressive work of Elizabethan architecture. Such gardens were associated with Renaissance ideals of aristocratic power, ordered hierarchy and control over nature. In England, though, they eventually fell out of fashion. By the late 18th century, in the eyes of England’s ruling classes, formal gardens came to look too rigid, too abstract – too French. The legendary garden designer Lancelot ‘Capability’ Brown was hired to replace Longleat’s formal plantings with pastoral parklands. Instead of neat parterres and abstract shapes, demarcated from the landscape beyond, Brown favoured naturalistic forms – a clump of trees, the bend of a river – so that the garden seemed to roll toward the horizon. The effect was picturesque, though it required huge investments of work and money. Brown and his followers remodelled so many estates in this way that the historian Roy Strong would later dedicate his study The Renaissance Garden in England (1979) to the ‘memory of all those gardens destroyed by Capability Brown and his successors’. 
Longleat’s gardens were remodelled once again in the 19th century, this time by adding what Page called a ‘profusion of exotic trees and shrubs’ imported from around the empire. Such abundance displayed Britain’s global power, but Page didn’t like it. English design, he felt, had fallen victim to its own imperial excesses. Gardeners collected exotic plants and ornaments, like tourists buying souvenirs, then jumbled them together. There was too much material, not enough focused intention. Good design, in Page’s eyes, called for some restraint. ‘We walked the open parkland for days,’ Page remembered, analysing what ‘the composition of the landscape seemed to require.’ Longleat was a kind of palimpsest of historical styles: Renaissance formality, Romantic picturesque and Victorian extravagance. Page tried to peel away the most recent layer, to restore the parklands in the spirit of Capability Brown. He preserved clumps of 18th-century trees, especially beechwoods, while adding some limes and scarlet oaks to harmonise the views. Remodelling the site for its future owner, Page was also correcting the past errors that had spoiled its quiet elegance. After the prestigious commission at Longleat, Page’s fortunes were rising. He found a kindred spirit in Geoffrey Jellicoe, a well-connected landscape architect. The two shared interests both in classical design – Jellicoe wrote Italian Gardens of the Renaissance (1925) with John Shepherd – and in esoteric theories of the soul, including Gurdjieff’s. Page and Jellicoe started their own firm, taking on prestigious commissions. They also became involved with a new professional organisation, the Institute of Landscape Architects, and its quarterly, Landscape and Garden, edited by Sudell. Page’s beat for the magazine was international garden history. He travelled around Europe, taking photographs and writing short dispatches, exploring various traditions – 17th-century French classicism, Islamic influences in Portugal. Page published a two-page spread with a one-word title: ‘Urns’. But the point of learning history was not to imitate earlier fashions. ‘We have inherited relics of all these styles and travesties of style,’ Page wrote in The Education of a Gardener. Too often, the effect was a nostalgic pastiche rather than historical fidelity, kitsch instead of creativity. What older gardens did provide, however, were models of an ethos. In each of the traditions that Page studied, he saw gardeners responding authentically to the character of their sites, in accordance with the values of their local cultures and their times. The challenge for modern landscape architecture was to express the spirit of its era, just as earlier gardeners had expressed their own. Page’s excursions into the past were not time-travel fantasies; he was looking for ways to reorient himself in the present. He was beginning to find his balance – philosophically modern, yet historically informed. But then a world war disrupted everything. ‘Our practice ceased to exist,’ Page wrote, bluntly, ‘and all my modest accumulation of plans, photographs and 18th-century garden books went up in flames in the first London blitz.’ His firm was defunct, his archive in ashes, destroyed by the Luftwaffe. 
‘Gardening belonged, it seemed, only to the past; there was no future – only the pressing present.’ Page joined Britain’s War Department, going south and east to distant stations – Egypt, India, Sri Lanka. He never publicly discussed his wartime work, which may have been propaganda, but the travels expanded his education. Page was learning ‘that life can be otherwise experienced than from the European point of view.’ Visiting gardens abroad, he cultivated a cosmopolitan perspective in a time of violent nationalisms. His writing emphasised the moving spirituality of Eastern cultures. ‘There were all the marvellous mosques to explore,’ he remembered of his time in Cairo, where he heard a voice ‘chanting verses from the Quran – every sound vibrant with meaning and devotion.’ Even on the other side of the world, though, Page saw the grim signs of British imperial expansion. In one telling passage, he talked about arriving in Sri Lanka, ‘with its ancient cities of a former Buddhist civilisation buried in the tropical forests and the thin veneer of 19th-century British red brick architecture which has made Colombo as commonplace as Southsea or the Bronx.’ Colonisation had imposed a generic, industrial surface not just onto the land but also on to human cultures, everywhere. When peacetime came, Page found himself adrift. He had no money and no vocation. His mind was rattled. ‘I was totally at a loss in a world to which I had become unused,’ he confessed. Page turned to friends for help. Under the mentorship of the Austrian émigré painter Oskar Kokoschka, who had settled in London, Page retrained himself, starting with ‘the problem of really drawing’. Before he could compose again, he had to rehabilitate his eyes. ‘To study the nature and form of the object in front of me gave me back the possibility of another kind of vision and another kind of discipline, and little by little a vast weight of accumulated superficialities fell away.’ Page understood his discipline as a practice of looking through surfaces until he discerned a deeper reality, then making shapes in harmony with what he found. This blend of meditation and composition became his postwar therapy. ‘I felt refreshed and quiet because I knew again that there is a continuing reality behind the appearances and problems of everyday.’ Two French acquaintances, Stéphane Boudin and André de Vilmorin, helped Page set up a new practice in Paris. He started over, working on many scales, from grand chateaux to smaller urban plots. He also joined the circle of bohemian intellectuals and artists who gathered in Gurdjieff’s salon on the rue des Colonels Renard. In 1947, Page married Lida Gurdjieff, often referred to as the famous mystic’s daughter, though some reference sources identify her as his niece. Gurdjieff’s teaching appealed to educated Westerners who rejected mainstream Christianity but still craved some kind of transcendence – a spiritual vision beyond the confines of Church or nation. His ideas influenced Page’s former partner Jellicoe and the American architect Frank Lloyd Wright, as well as Page himself. Drawing from various world religions, Gurdjieff taught artists to align their work with universal principles of geometry and balance. Even sympathetic readers of The Education of a Gardener have struggled to assimilate this aspect of Page’s thought. 
The Oxford historian Robin Lane Fox, in his introduction to the New York Review Books edition, waves it away as ‘the purest baloney’. The English gardener and television host Alan Titchmarsh, introducing the new Vintage Classics version, does not mention spirituality at all. But it is hard to understand Page’s book, or to see his gardens from his own point of view, without acknowledging how mysticism shaped his art. Explaining his theory of composition, Page wrote that ‘every object emanates – sends out vibrations beyond its physical body which are specific to itself.’ The landscape designer Kevin Barton, who studied Page’s archives for a master’s thesis in garden history, shows how these comments echo Gurdjieff’s teachings. Page had learned to think about design as a way of recreating cosmic geometries. When the vibrations were in harmony, the composition worked; the garden attained its peculiar ‘magic’. While Page’s philosophy became unorthodox and orientalist, his practice became more traditional and, on its surface, classically Western. After two decades away from England, he acknowledged: ‘my approach to designing was modified by the greater formality of classical French planning and the more sculptural approach of the Italian tradition.’ Page’s work from this era is richly presented in two coffee-table books, both published after his death: Marina Schinz and Gabrielle van Zuylen’s The Gardens of Russell Page (1991) and the American Academy in Rome’s Russell Page: Ritratti di Giardini Italiani (1998). Both feature the same garden on their covers. Two stone sphinxes stand guard over the entrance to the garden, which descends a steep hillside in three wide, rectangular terraces. A straight path through the upper terrace establishes the main axis. It runs between two rectangular parterres and then, by way of a few steps, down to the middle terrace, which is almost entirely taken up by a shallow, stone-edged pool. On the lower level, a tightly clipped hornbeam hedge forms a maze-like arabesque pattern. This is Page’s garden at Villa Silvio Pellico, south of Turin, Italy, and it practically poses for the camera. The geometry is clear and orderly. No paths curve into darkness; no vines snake through the scene. Van Zuylen calls it: ‘proof of Page’s understanding of the classical Italian garden.’ Page continued to be appreciated as a formal designer, even when he worked in England and the United States, where more informal styles prevailed. In 1958, he won one of English gardening’s highest honours, a gold medal at the Royal Horticultural Society’s Chelsea Flower Show, for a kitchen garden with French details, its vegetable beds framed by boxwood hedges. Ernestine Carter of The Sunday Times praised the garden’s ‘delicate formality’. Later, for Manhattan’s Frick Collection, Page made a courtyard with neat flowerbeds, a few trees and a reflecting pool. A New York Times writer described it as an ‘exquisite little classical garden’, a museum piece in its own right. While these observers celebrated Page’s works, they tended to cast him as a conservative artist, defending classical formality in an informal age. But this is a misreading. Page dismissed the whole question of formality and informality as a shallow one of secondary significance. ‘The degree of formality you will use will depend on the character of the house and the idiom of the landscape,’ he advised. 
The formal clipping of a hedge or the informal curve of a flower bed were just ‘superficialities’. The real art of design was the deeper structure – form as shape, not affectations of formality. Or, as Page put it: ‘I like gardens with good bones.’ The garden at the Frick Collection in Manhattan. Photo courtesy of Wally Gobetz/Flickr He found his favourite kind of bones in southern Spain, near Malaga airport, when he went to see ‘a fine elaborate late 17th- or early 18th-century “Italian” garden of paved terraces, balustraded stairways, fountains and a quantity of statues.’ While Page appreciated these details, he ‘sensed that the site of both house and garden had been carefully chosen (as only the Moors knew how), and I set out to explore the less frequented areas of the garden.’ His hunch proved right, and ‘sure enough, I found an octagonal fountain of the 14th century falling to pieces in a cabbage patch and a long canal-like reservoir.’ The crucial thing was not the Italian ornamentation; it was the design below. Deeper than English informality or French formality, the Islamic tradition that had made its way into Europe from North Africa was Page’s greatest source of inspiration. In it, he found a mystical view of garden-making as the art of bringing each site into harmony with itself and its environment – and so with the universe. This philosophy, as Page understood it, did not require him to imitate the surface details of ‘Moorish’ gardens. It allowed him to be flexible about style, using formal or informal touches as the site required. It also allowed him to salvage, rather than demolish, the best materials from his sites. But there was a dark irony at the heart of Page’s career: very often, he worked for clients whose wealth came from the same mass-marketed consumer industries whose effects Page criticised so scornfully. His garden at Villa Silvio Pellico was funded by the Fiat automobile manufacturing fortune. His final project would be a sculpture garden for the world headquarters of PepsiCo. Page’s clients created and profited from the degraded suburban landscape; in his elegant sanctuaries, they enjoyed the privilege of retreating from it. What began as an ascetic reaction against consumer capitalism was now on the market as a luxury experience. In the long run, Page’s design ethos became his true product. Clients who hired him were persuaded that his talent went beyond technical competence or a fashionable style. They were dealing with a deep soul whose gardens offered therapeutic – maybe even sacred – atmospheres. Seen in this light, his legacy is not really elusive after all, nor is it confined to the classical tradition of formal design. We might detect Page’s influence in the orientalist minimalism of lifestyle magazines like Architectural Digest, with their glossy photographs of boutique meditation spas and Silicon Valley ‘Zen’ gardens. But we might also notice surprising resonances between Page’s ethos and the vital, progressive design movements that are actively reimagining gardens for social justice and ecological repair. In 1985, the year Page died, a young landscape designer named Chris Baines introduced his ‘wildlife garden’ at the Chelsea Flower Show. In the midst of a festival known for elaborate, expensive floral displays, Baines used common, native plants to draw in pollinators and songbirds. Baines had recreated the home garden as an oasis of nature within the sterile expanses of suburbia. 
It was the sign of a new era, and now, almost four decades later, wild is ‘the word of the moment’, as one magazine writer observed in February 2022. The idea of the wild garden, which once seemed paradoxical, is ‘currently dominating design’. The formal garden stands for abstract rationality, against the informal garden’s romance and spirituality Reckoning with climate change and habitat destruction, garden design has begun to focus on sustainability, and some landowners have abandoned conventional gardening altogether. In the UK, Lowther Castle and the Knepp Estate – the kinds of sites that used to hire Page – are working with environmentalists on ‘rewilding’ missions, turning cultivated lands back into biodiverse ecosystems. On a smaller scale, gardener-activists like John Little in England, Mary Reynolds in Ireland and Douglas Tallamy in the US are reimagining home gardens as little wildlife sanctuaries, with biodiversity as their first principle. Where does all this leave Russell Page? To some ecological gardeners, the classical style that characterises Page’s most famous works has come to represent the colonial, destructive arrogance of the modern West. Critics now describe classical gardens in almost monstrous terms. In Gardening in a Changing World: Plants, People and the Climate Crisis (2022), the English designer Darryl Moore calls them: ‘ostentatious displays of power and conspicuous wealth articulated through the medium of meticulously defined structural planting.’ From this point of view, the formal garden stands for abstract rationality, against the informal garden’s romance and spirituality. The formal garden represents imperial power and wasteful capitalism, as opposed to gentle sweetness. On the surface, the new experiments in ecological garden-making look nothing like Page’s formal compositions. At the same time, though, in calling for a new ethos, the new designers often echo Page’s spiritual critique of Western culture. In A New Garden Ethic: Cultivating Defiant Compassion for an Uncertain Future (2017), for example, the American garden writer Benjamin Vogt asks readers to ‘open our hearts and minds and rethink beauty – a deeper, functional beauty designed for species and environments other than our own.’ At one level, Page’s story provides a cautionary tale. His career shows how any design ethos, even a spiritual protest against empire and consumer capitalism, can become a kind of branding – distinguishing itself against the ugliness of Western, middle-class excesses only to elevate its own market prestige. To redress our social and ecological crises in a substantial way, garden-making would have to help create open, rather than private, sanctuaries for people and nature on every scale, from rewilded urban lots to sustainably managed state and national conservation areas. At another level, though, Page’s own work provides resources that might be put to use in new, progressive projects. In fact, one of Page’s own concerns was to help gardeners decide which valuable pieces should be kept, which useless ones discarded, when we are working with a mixed-up site. Page approached his work as a set of restoration projects on long-inhabited properties where, he felt, both ecology and culture had been degraded over time. In the ruins, he looked for salvageable forms, then turned them into new gardens whose futures he did not control. 
At the end of his life, in the early 1980s, Page returned to the informal, picturesque style that he had first used at Longleat half a century earlier. His commission was to remodel the sculpture gardens surrounding the PepsiCo offices in Westchester County, New York. When he died of cancer in January 1985, the project was incomplete, but his designs were eventually carried out and, unlike most of Page's other gardens, this one has been well preserved over time. On a bright, warm day in June, I went by myself to wander through the grounds. Russell Page's garden at the PepsiCo offices in Westchester County, New York. All photos by Peter Bond/Flickr PepsiCo is the global distributor of soft drinks, Taco Bell franchises and Frito-Lay snacks and, as I made my way into the property, I had some uneasy feelings. I was thinking about how Page protested the flattening effects of consumer capitalism, yet worked for this conglomerate that suburbanised the planet. I was wondering whether his design might feel as sterile as an ordinary office park. A small walkway, or 'golden path', led me into a garden of understated beauty. There was a pond in the centre of a green lawn, with a few mature weeping willow trees around its edges, swaying in the breeze. Tucked away in one corner of the larger park, I came upon Page's small, elegant formal garden, where water lilies floated on rectangular pools. The overall impression was serene, but it was not lifeless. I recognised the sensibility that I first encountered in Page's writing, and I thought again that there are some useful things to be taken from his work, despite its troubling complicities. At its best, Page's thinking leads beyond the oversimple debate between formality and informality, the artist's composition and the ecosystem's thrum. In one manuscript fragment, quoted in Barton's thesis, Page pictured gardens as 'ordered three-dimensional patterns fixed in time and space, through which flows nature, the vegetable world, proliferation, growing, dying, budding, flowering and seeding, impermanent undisciplined and usually the antithesis of order.' He never wanted to dominate the earth by imposing abstract order on its thorny funk. He loved both art and plants alike with what he called his 'verdant heart'. To a vision of the landscape's future, he offers up a rebel spirit and an artist's eye.
Caleb Smith
https://aeon.co//essays/a-rebel-spirit-and-an-artists-eye-russell-pages-landscape-design
https://images.aeonmedia…y=75&format=auto
Cognition and intelligence
Maybe not, but if that’s the threshold you use for creativity in your life, you are coming at the problem all wrong
I always wanted to be a creative writer. At different points in my life, I’ve tried fiction, sports writing, poetry, journalism and plays. But in my senior year in college, I decided that I wasn’t good enough. When I thought of what it meant to be ‘a writer’, I thought of long-dead legends like William Shakespeare, Jane Austen and Mark Twain or brilliant prizewinners like Margaret Atwood, T Coraghessan Boyle, Robert Olen Butler and Toni Morrison. I felt like I was a million miles away from that level. My plans to get my MFA in creative writing suddenly felt silly. After exploring other possibilities, I ended up pursuing my PhD in cognitive psychology. At first, I struggled to figure out what interested me most in that field – until I started to look back at my creative-writing aspirations from a new perspective. I spent a summer in my parents’ basement devouring articles and books on the scientific study of creativity. One of the first things I discovered was that there were two ways of thinking about creativity: ‘little-c’ and ‘Big-C’. Little-c was everyday creativity, the type of activities that the average person could do, such as building a bookcase or learning to play popular songs on the guitar. Big-C was reserved for geniuses. The dichotomy – which was first articulated by Mihaly Csikszentmihalyi – made sense to me. But it would not have been especially helpful for my situation. I wasn’t a genius, so as far as creative writing was concerned, that meant I was lumped in with everyone else – those engaged in little-c. If I wouldn’t be able to reach a level of consistently publishing my creative work, then it seemed to me that I had definitely made the right choice to give up my creative ambitions. I didn’t know it then, but I had been susceptible to a ‘genius bias’ – that is, I assumed that the only creativity of note was that of brilliant creators. I didn’t value my own creativity enough. My writing was clearly not at the Big-C level, but I would come to find that the category of little-c was too vast to sufficiently describe what the majority of people engage in. I would eventually tackle this problem as a researcher of creativity, emerging with a more nuanced (and continually developing) view of what creativity can be. I will explain that view further – but first, it’s worth examining some other common misconceptions about creativity that any of us can fall victim to, and how these incorrect beliefs can unconsciously shape and narrow our perspective on creativity. It is so easy to minimise, or simply not even recognise, your own creative potential, much as I did earlier in my life. You may be setting up barriers to creative thought and behaviour without even knowing you are doing so; one of my goals in this essay is to help you recognise the hidden creative strengths – your shadow creativity, so to speak – that wait to be explored. Many people assume that an individual’s artistic talents, such as drawing, writing, or playing a musical instrument, are the best (or only) way to determine if they are creative. This is what creativity researchers have called the ‘art bias’. Even people who realise that creativity takes other forms might still expand their consideration to creativity only in the realms of science or business. Yet you can show creativity in countless activities, from organising storage space to trimming shrubs to fixing a hole in your wall to training the local crows so they bring you shiny objects. 
I understand the art bias; when I need to give an illustration of creativity to an audience, I’m more likely to turn to examples of poetry or paintings than, say, creative workout routines or tax deductions. But these are all valid examples of creativity. That is, they can all involve original thinking that is relevant or appropriate to the task at hand. To move from the varied products of creativity to the creative process: what do you imagine when you think of people actively being creative? You might picture a group of folks brainstorming and shouting out ideas. Yet there are many different aspects of the creative process, from figuring out the best problem to solve to selecting the best potential solution. Some people hate brainstorming, or the idea-generation process in general, to the point that they feel anxious and unnecessarily discouraged about ‘being creative’. However, thinking divergently – coming up with many possible ideas or solutions – is but one of many parts of a creative process. Similarly, many people have a ‘novelty bias’ – they focus only on the aspect of creativity that requires originality, to the exclusion of its other elements. But creativity encompasses much more than just producing something new. For one, as already noted, a creative product should also be useful or task appropriate. If I served you a dish of scrambled lint with plastic shards and a fresh glass of otter juice, you might agree that it is a unique meal… but it’s not what most people would call creative. It is easy to think about creativity as nothing more than a burst of inspiration, but that’s not the case. Someone might have a terrific idea but have no clue how to execute it. Or they might slap their thoughts together and never revise or proofread, and end up with a garbled mess. Too much of a focus on the novelty-and-inspiration side of creativity can lead to ideas that are chaotic and unhelpful. It didn’t occur to me until later that I could also be creative as a scholar Finally, the ‘mad genius’ stereotype links creativity closely with mental illness. To truly dive into the association between creativity and mental illness requires its own essay. But, to make a very long story short, many of the key studies that people have cited on this topic have critical flaws, and the decent studies that do indicate some association between mental illness and creativity do not show that one leads to the other. If anything, it is more likely that creativity has enough positive benefits that those who suffer may be drawn to express (or distract) themselves with creativity. Unfortunately, the belief that mental illness and creativity are closely connected could have several negative consequences. One is that the regrettable stigma about mental illness may end up being linked with creative pursuits as well. Another is that some people who want to be creators may assume that it takes suffering or tragedy to succeed. (There is a similar risk with regard to beliefs about alcohol or drug abuse and creativity.) These biased ways of thinking about creativity might make it less likely that you, or any other person with creative potential (which is to say, anyone), will gravitate toward creativity in the first place. It’s also possible that these biases have prevented you from recognising creative work that you are already doing. I certainly believed in some of these misconceptions as a budding writer. 
It didn’t occur to me until later that I could also be creative as a scholar, or that there was no need to jeopardise my mental health in order to be inspired. As I began studying creativity in earnest, I kept returning to that conceptual split between Big-C (the work of geniuses) and little-c (everyone else). I didn’t feel like I had Big-C talent – and I still don’t. However, the broad concept of little-c didn’t seem adequate for describing the vast number of people who are creative but not recognised as geniuses in their field. It didn’t distinguish between, for example, fourth-grade me working on my first short story, the college me able to publish in tiny literary journals and zines, and the adult me who, outside of my career in psychology, ended up writing plays that have been put on in New York City and elsewhere. Nor did the Big-C/little-c dichotomy seem to capture how a person might progress from smaller to more impactful forms of creativity. I began talking about these issues with my colleague Ron Beghetto, and together we developed the Four C model, which aimed to expand the simple dichotomy and offer more gradations. Big-C largely stayed the same in our conception: true creative genius that would outlast the creator, continuing to influence people generations after their death. Where we focused was splitting up little-c. Beghetto was a former classroom teacher who saw how easy it was for student creativity to get ignored or brushed aside. By proposing a new category explicitly devoted to creators of all levels who were learning, playing with ideas and exploring possibilities, we hoped to acknowledge not just the creativity of students but also novices and everyone else who might fall through the cracks. So, the first thing we did was to propose ‘mini-c’. This category includes the personal insights that pop into your head and the spontaneous moments of improvisation that make you smile or reflect. Maybe you share them with other people; maybe you don’t. But this creativity has meaning (even a little bit) to you, and it still matters. It might be a small daydream, a flight of fancy, or a genuine ‘aha’ moment that could have larger implications. While creativity is commonly defined as both novel and task appropriate, mini-c adjusts that slightly to add ‘to you’. Mini-c should be new to you and should meet your needs, even if that is simply to distract yourself for a moment. A person may have dozens or more moments of mini-c a day. Some folks make up lyrics to popular melodies to describe their current actions (as in, ‘I’m leaving… in my Kia. Don’t think I’ll try to use the heat’ to the tune of ‘Leaving on a Jet Plane’). Others think of spontaneous puns that they note to an empty room, or figure out how to complete a recipe when they’re out of butter, or doodle elaborate cartoon renditions of their co-workers while on a Zoom call, or hum jazzy variations on a melody, or devise a makeshift way of keeping a wobbly chair from collapsing. Novices and students who are learning how to do a creative task will likely start here as they evolve their craft, producing ideas that are novel to them and meet their personal need. So, too, will people who start out by replicating existing work with their own personal take on the material. Think of someone sitting in an art museum and sketching a copy of a masterpiece. Or telling a joke that they heard from someone else, but using their own words and intonation in the retelling. 
All of this is mini-c, and it has a purpose, whether it serves as a temporary amusement, a spark for further creative thinking, or the first step of many on the road to becoming an expert creator. A plot twist might develop into a complete story. An improvised tool might become a basic prototype Unfortunately, many people undervalue mini-c – and may not even recognise it as being creative in the first place. Why does this matter? Well, if you see these everyday behaviours as creative, and you therefore believe that you can be creative and identify as a creative person, it will make you more likely to practise creativity. You can’t succeed if you don’t try. Everything starts with mini-c. If you don’t embrace those flashes of whimsy or trust those tiny insights, you may miss out on everything that follows. Every brilliant invention, groundbreaking scientific discovery or powerful work of art started as a small germ of an idea. This does not mean that all of our mini-c moments are going to blossom, of course. But they definitely will not blossom if they are suppressed or dismissed. Beghetto and I kept the term ‘little-c’ to represent everyday creativity. In our model, little-c is when you keep working at mini-c. You share that thought, that rough draft, that working idea. You can get advice and feedback and then keep working, revising and improving. A plot twist might develop into a complete story. An improvised tool might become a basic prototype. At little-c, other people start to appreciate and recognise your creativity. Little-c could be a meal you make for friends that’s your own twist on an old favourite, or a song that you record and share with a small group of listeners online. How do you know if you’ve gone from mini-c to little-c? There’s no checklist to fill out. Certainly, though, when you realise that you are not only sharing your efforts in a particular area with others, but that they are truly enjoying your work, you’re on your way. A lot of people stop at little-c in an area that interests them – and that is absolutely fine. My grandma enjoyed painting and took some classes. She painted for fun and to give to her loved ones. More than 20 years after her death, I have two paintings of colourful flowers on my living room walls. They do not have the precision or skill you might expect from a painting in an art museum. They are not blazingly original; they’re flowers. But they are pretty and they make me think of her, and there is creative worth in that. In other cases, people keep plugging away. Maybe their creative efforts are part of their job, or a personal passion that continues to inspire them. Whatever the reason, with extensive practice over time comes creative expertise, which is the level we call ‘Pro-c’. This is the point where the work begins to have an impact in some way not just among a small circle of friends or in the local community, but on a wider scale. Pro-c is when your efforts are published, produced, recorded or manufactured and distributed broadly to many people. We are living in a time with unprecedented access to the materials needed for this kind of creativity, and to audiences for it. A few generations ago, someone who wanted to make a widely viewed film would likely need the backing of a studio system to be able to afford the equipment and other resources. Today, most of us carry a video camera in our pockets and it is easy to upload creative work to the internet (where it may or may not go viral). 
There is no commemorative button that says ‘Welcome to the Pro-c club’, but there are signs. A Pro-c creator begins to slowly exert some influence in their chosen field or domain. Other creators are inspired to create responses, rebuttals or refinements to the original work. With enough impact, effort, talent and luck, there may be some Pro-c works that live on long after the creator has died – such as those of Shakespeare, Austen and Twain, the geniuses who so intimidated me when I was a young writer. This final level is what we call ‘Big-C’. But note that Pro-c creativity is the highest kind that any creator can have much certainty that they will reach. We can make good guesses about which living artists, scientists, inventors or leaders might be remembered for generations to come, but they are just that: guesses. Creativity at the foundational levels is easy to overlook or undervalue. Many of us don’t recognise our mini-c moments as ‘creative’ when we have them, and fail to appreciate the full worth of little-c creations. Pro-c work is often compared with Big-C and seen as wanting, rather than celebrated for what it is. And there are other reasons creativity can remain hidden or underappreciated – both in the minds of creative individuals and in society more generally. An increased awareness of these obstacles may help people to better appreciate and enjoy their own personal creativity and make decisions about how they apply creativity in their professional lives. My colleague Vlad Glăveanu and I have spent a lot of time thinking about the ways in which creativity remains in the shadows. Importantly, many of the reasons are not the creator’s fault. People who lack resources, status or a relevant background might have a harder time advancing their creative ideas. Perhaps someone does not know the vocabulary or jargon of a particular field, and it would be costly or overly time-consuming to learn it. Or a creator might have excellent ideas, but lack the needed resources, including money, to bring them to life. Although obstacles can sometimes provide fertile ground for sparking the imagination, they can also limit one’s ability to develop creative work. Plus, of course, there are many creators who haven’t been given a chance because of their ethnicity, sexuality, religion, gender, culture, socioeconomic status, caste or other characteristics. There are other obstacles, however, that are easier for a creator to address. For example, to successfully present creativity to others requires self-awareness. Being aware of what you know, broadly speaking, is called metacognition. Creative metacognition is a term that encompasses your understanding of your own creative strengths and weaknesses, including how your own talents match what you want to do. So, for example, you might have very poor visual understanding of light and shadow. If you decide to pursue drawing, then you run the risk of not being particularly talented. If you do not care and simply enjoy drawing, then there is no issue. But if you aspire to be an acclaimed artist, then this choice may not reflect strong metacognition. There are also people who have the reverse problem: they do not realise that their creative output is actually very impressive. Recognising fleeting thoughts as being creative can lay the foundation for higher levels of creativity There are different reasons why someone might lack awareness about their creativity. Some people may have a bit of an inflated sense of self, whereas others may need to be boosted up. 
Still others may be hampered by some of the biases I discussed earlier and assume that they must express their creativity via the arts, as opposed to in other domains, which may be where their strengths lie. One way to tackle the challenge of self-awareness is to ask for feedback from trusted others who have some expertise in the area you’re interested in – ideally, feedback that also suggests specific ways to improve your creative output. These individuals might be certain friends or family members with relevant creative experience, other creative people whom you encounter at events or meet-ups, co-workers who practise similar kinds of creativity or others. Creative self-awareness also involves recognising the right time and place to share your creativity. If you insist on bombarding people with your creative ideas when they are busy or not ready to hear them, you may get the same reception as the child who interrupts a parent’s important phone call to tell them a knock-knock joke. Identifying the best times and methods for sharing with a receptive audience can set you up for success. Once again, feedback and advice from people with experience in your creative domain can help improve your ability to ‘read the room’ and get the timing right. The more that you know – about your own creativity, about creativity itself, about the areas in which you want to be creative – the more likely you will be to climb the metaphorical ‘C’ ladder. Increasing your creative knowledge could also mean expanding your conception of what it means for you, personally, to ‘be creative’, so that it includes the smaller sorts of everyday improvisation or idea-generation that I’ve described – instances of creative thinking that could be worth pursuing further. One example of a mini-c insight leading to greater things is the story of how the Swiss engineer George de Mestral invented the Velcro fastener. After walking around outside with his dog, he noticed that the hooks from the burrs had stuck to his dog’s fur. This small observation could easily have been dismissed as irrelevant or unworthy of further exploration. However, de Mestral kept thinking about it and realised that tiny hooks and a suitable fabric could potentially be used for a novel fastening system (which, after years of research and trial and error, he was able to create). Recognising fleeting ideas and thoughts as being potentially creative – and then acting on them – can lay the foundation for higher levels of creativity. My guess is that, if you’ve read this far, you already want to be creative and you understand its value. But while most people won’t outright say ‘why bother being creative’, some will think it. There are a host of benefits to being creative that are not always obvious. It can bring self-insight and strengthen your sense of identity. It can help a person heal from trauma, benefit those with cognitive decline, and generally enhance wellbeing both in the moment and over time. It can help connect us with other people (and, indeed, it is very attractive to potential romantic partners). It can also improve motivation, give purpose and meaning to life, and ultimately even help us leave behind some kind of legacy for our loved ones when we eventually pass away. There’s no magic trick or silver bullet with creativity. Developing and sharing creative ideas takes hard work, steady effort, and plugging along. You might have an image of the ideal genius, perhaps the one who invented or created your favourite thing. 
Whoever that genius is, they didn’t get to snap their fingers and create a masterpiece. They were creative even on days when they felt blocked. Indeed, they likely created in the same habitual way that you might have an exercise or diet plan. You don’t lose weight by deciding to not eat for a month; it takes lots of small decisions and minor sacrifices on a regular basis. Being a creative person does not have to mean sacrificing everything to appease the demons in your head When I write a book, for example, the initial inspiration is hardly the most important component. For me, the creative process is also doing countless searches of the research literature. It’s forcing myself to write when I’m feeling blocked. It’s reaching out to colleagues to make sure that I’ve summarised their work correctly. It’s rereading and revising to try to make the book as readable as possible, while still being true to the original studies and theories. It means getting obsessed for a while – but then being able to turn it off so the book (or, in your case, maybe a painting or an experiment or a prototype or a website or a scarf) gets finished. The fact that creativity does not ride in on a sparkling thunderbolt can feel disappointing because it means you have to put in the work every day – but it also means that you are in control of your creativity and not waiting for Zeus to get bored and grant you a one-time vision. It is important, too, to remember that being a creative person does not have to mean sacrificing everything to appease the demons in your head. Creativity comes in all shapes and forms, and it doesn’t matter if you don’t fit a certain mental image of what you think a creative person should be. You can recognise and appreciate your own creativity even if it seems minor, even if others don’t like it, even if it feels like it’s just part of your job. Being creative can improve multiple dimensions of your life, regardless of what creativity looks like for you. If you want to find your hidden creativity, remember the myths and facts I’ve shared. Let yourself be creative in whatever area calls to you. Let yourself be creative even if you think you aren’t very good. And if developing your creative efforts further is important to you, try to get feedback and practise as much as possible. But most of all, let yourself have fun with your creativity and don’t be overly bound by what other people say or think. It is true that very few of us will become the renowned creators whose work is remembered generations from now, but that does not mean our creativity is not worthwhile. We can each make a positive impact with our creativity – on ourselves, our loved ones, our communities and, who knows, maybe even the world.
James C Kaufman
https://aeon.co//essays/you-can-be-truly-creative-if-you-let-go-of-your-assumptions
https://images.aeonmedia…y=75&format=auto
Space exploration
Earthbound exploration was plagued by colonialism, exploitation and extraction. Can we hope to make space any different?
When he rode to the edge of space on board Jeff Bezos’s reusable New Shepard rocket, William Shatner found the experience was not quite as he’d imagined. The Canadian actor famous for his phlegmatic captaincy of the starship Enterprise said on his return to Earth that ‘when I looked … into space, there was no mystery, no majestic awe to behold … all I saw was death.’ ‘Everything I had expected to see was wrong,’ he went on. ‘The contrast between the vicious coldness of space and the warm nurturing of Earth below filled me with overwhelming sadness.’ It was an unusually astute and honest perspective on human spaceflight – but hardly the one Bezos, whose space-exploration company Blue Origin operates New Shepard, must have hoped to hear. Most of those who venture into space aren’t, like Shatner, taking up an offer out of sheer curiosity, but have already decided that this is indeed where humans should be heading. They are predisposed to relate the awe, splendour and adventure, but perhaps less inclined to question the whole enterprise more deeply. Shatner’s brief voyage to the final frontier shared nothing of Star Trek’s vision of a united humankind, but was made possible by the eye-watering private profits of capitalism. When the US president John F Kennedy offered his rationale in 1962, at the peak of the Space Race – ‘We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard’ – no one was under any illusion that the real motivation was Cold War rivalry. (There are, after all, plenty of other hard things one could do, but the Soviets had already beaten the United States into Earth orbit.) Yet still, it was deemed expedient for the US project to claim, when the lunar module of Apollo 11 touched down seven years later, that ‘We came in peace for all mankind.’ While today’s commercial spaceflight initiatives, such as Blue Origin, Elon Musk’s SpaceX and Richard Branson’s Virgin Galactic, still mobilise that utopian universalism, they are building a business. Others hope to profit from mineral resources mined in space. ‘For all mankind’ won’t cut it any longer; it is time to mothball the inherited rhetoric of the first space age, and to look honestly at the reasons human spaceflight is being pursued and at the ethical issues raised by both the current practices and the potential future goals. For many in the space industry it is not obvious that there is any real ethics to discuss. In 2016, the astrophysicist Erika Nesvold asked the CEO of a (now-defunct) California space-mining company how he planned to address the danger that his proposed lunar mining equipment might contaminate the moon in ways detrimental to its scientific study. He told her: ‘We’ll worry about that later.’ Nesvold discovered that others in the private space industry had a similar response to dilemmas their plans raised. How will workers in space be protected from exploitation in such a vulnerable setting? How will interpersonal conflicts of people living in space be mediated and settled? Should there be property rights at all in space? If so, how would they be decided – and enforced? What obligations do we have to the space environment? What are the best structures for space governance, whether of activities in near-Earth orbit, planetary settlements or commercial activities? Who gets to go? 
Rather than think about such issues, Nesvold wrote in her book Off-Earth (2023), many in the industry ‘seemed to be focused exclusively on technical challenges like reusable rocket designs, economic strategies for making space activities financially feasible, and legal structures that would invigorate rather than inhibit their industry’. Take, for example, how everyone from advocates of space science to satellite companies to visionaries of the human colonisation of the galaxy looks admiringly at SpaceX’s efforts to make a reusable giant rocket, the Starship, the huge payload of which might benefit all those enterprises. So what if the project is led by a person increasingly drawn to far-Right conspiracy theories, who reportedly bullies staff or sacks them on a whim, whose commitment to free speech does not extend to the right of gender self-identification? It will all be fine. Give us the cool tech! We’ll worry about the ethics later. But maybe let’s not. Maybe let’s worry now about the motives and the conduct of human space exploration in the modern commercial age. Let’s interrogate it openly and frankly, without the fuzzy slogans of inclusivity, before the Moon is being strip-mined and others discover, like Shatner but to their own cost, that life off-Earth is not quite what it’s been cracked up to be. The only existing international agreement on conduct and obligations in space is the 1967 Outer Space Treaty, of which all the major spacefaring nations are signatories. The treaty declares that ‘the exploration and use of outer space shall be carried out for the benefit and in the interests of all countries and shall be the province of all mankind’, and that ‘outer space is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means.’ The treaty was never drawn up with private space companies in mind, intent on turning space into another branch of the tourist industry or on exploiting its resources for personal gain. In the late 1960s, only governmental agencies were deemed capable of space exploration, and the treaty aimed to prevent nation-states from making extraterrestrial land grabs. But there are now several private businesses exploring technologies for mining water and minerals from asteroids, while NASA’s Artemis mission to return humans to the Moon, a first step towards a ‘long-term presence’, was developed in collaboration with commercial partners. Despite such plans, the legal status of private property in space ‘remains murky and untested’, wrote Nesvold. In 2015, the US revised its position on the matter with the Commercial Space Launch Competitiveness Act, which states that US citizens who set out to extract resources from space ‘shall be entitled to any asteroid resource or space resource obtained, including to possess, own, transport, use, and sell [it] in accordance with applicable law’. The Act sought to evade possible conflict with the Outer Space Treaty by saying that such resource rights of companies would not imply any national ‘sovereignty … over, or the ownership of, any celestial body’. China and Russia, and possibly other nations with space programmes, are now drawing up their own legislation on private rights of ownership or extraction. Asteroid Psyche is thought to be mineral rich. Courtesy NASA These rights are being granted without any public discussion; we have no idea if this is what the public wants. Nor does that permission seem to be accompanied by codified duties of care. 
‘Will future generations of humans in space struggle to live in a scarred, toxic landscape after years of unregulated mining, manufacturing, and waste disposal?’ asked Nesvold. The popular narrative that space is a bottomless reservoir of resources does not fit the facts. The Harvard astrophysicist Martin Elvis estimates that only about 10 near-Earth asteroids are rich enough in valuable rare-earth metals to be worth the immense cost of mining. That may not be enough to satisfy the greed and ambitions of space-mining companies – and as a result, as Elvis told Nesvold, ‘we’ll have piracy and rustling and claim-jumping and espionage, all going on [in space]’. ‘Cosmocentrism’ asserts an intrinsic value even to the lifeless rocky landscapes of the Moon and Mars Amid such competition, can there be any guarantee that these worlds won’t be ruined for scientific study? ‘There is a strong argument that the planets and their moons should be treated as sacrosanct and off limits, with one or two higher levels of protection than the US national parks,’ the planetary scientist Carolyn Porco tells me. Porco led the imaging science team on the Cassini (robotic) mission to Saturn from 2004 to 2017. ‘They are too scientifically important to be left open to commercial, political or colonialist purposes.’ Porco is currently involved with a future mission to Saturn’s moon Enceladus, considered by some to be the most likely extraterrestrial environment in the solar system to host life. ‘Would I trust a commercial outfit to take the necessary steps to ensure planetary protection on a moon like Enceladus?’ she asks. ‘Hell no!’ She would like to see ‘a stringent set of international regulations to prevent commercial interests [in space] from creating a “tragedy of the commons” as has happened with so many resources we have on Earth’. An ability to study pristine outer space has benefits back home. ‘Our understanding of the greenhouse effect comes partly from work on the atmosphere of Venus,’ says the philosopher Tony Milligan of King’s College London. Venus has a runaway greenhouse effect, where the planet has become superheated by all its volatile substances evaporating into the dense atmosphere. ‘Planetary science just works better when you are studying more than one planet,’ Milligan tells me. ‘The science that shapes the climate-change response [thereby] improves – a lot.’ But some, eager to promote human spacefaring, oppose the idea that science must come first. ‘It’s not just a matter of who gave the Moon to astrobiologists, but also of who gave the Universe to professional scientists,’ wrote Robert Zubrin, president of the Mars Society, which advocates for Mars settlement. ‘Humans do not exist to serve scientific research. Scientific research exists to serve humanity.’ Others question whether the value of space environments should be assessed on utilitarian grounds in any event, whether that’s for the resources or the scientific insights they might offer. This perspective, called ‘cosmic preservationism’ or ‘cosmocentrism’, asserts an intrinsic value even to the (as far as we know) lifeless rocky landscapes of the Moon and Mars. It collides head on with the argument that there is, on the contrary, an ethical imperative to make a lifeless place like Mars fit for human habitation by ‘terraforming’ its atmosphere: a planetary equivalent of Israel’s first prime minister David Ben-Gurion’s mission to ‘make the desert bloom’. That was hardly an uncontroversial sentiment; neither is this. 
Space mining and other commercial activities raise the question of workers' rights. At face value, the situation might seem comparable to that in which any workers face remote and potentially dangerous conditions: in terrestrial mining or on oil rigs, say. But the dangers, the remoteness, the isolation – and hence the vulnerabilities of labourers – will be all the greater. However, Milligan warns against imagining that space industries will conform to the sci-fi imagery of savage off-world worker colonies. 'Only very highly skilled people are likely to be sent there to do any work, not a mass of dispossessed "belters" hewing at asteroids,' he says. Most of the manual graft would be done by robots. All the same, it seems likely that some workers will be needed and, as Nesvold warns: 'History has shown us that the combination of potential profit and a vulnerable workforce can easily lead to labour exploitation, especially in a remote environment that's difficult, if not impossible, to monitor for abuse.' Although larger-scale planetary settlements remain highly hypothetical, rather little thought has been given to how they would be governed. The common assumption seems to be that they will be utopian democracies, but there is no reason why that must be so. Historical settlements in remote frontier locations are not exactly noted for their egalitarianism and tolerance. For example, any such communities with populations large enough to approach self-sufficiency will experience crime; with resources so scarce, would miscreants be deemed to deserve any of them at all? While it might be far-fetched at this stage to imagine 'billionauts' like Musk and Bezos setting up settlements independently from any governmental support, it's worth looking at the track records of their terrestrial enterprises. In 2017, SpaceX settled a lawsuit brought by employees who claimed to be given inadequate mealtimes and rest breaks; Musk's Tesla electric-car factories have also been accused of dangerous and stressful working conditions, and have faced charges of union-busting and rampant workplace racism. Unsafe and oppressive working conditions have been reported in Bezos's Amazon warehouses. It seems rather optimistic to imagine that the respect for human rights from private entrepreneurs would be better in the unforgiving environments of outer space. Musk has even floated the idea of offering prospective Mars settlers the opportunity to defray the enormous cost of their tickets by repaying the debt in labour on the Red Planet. Many advocates of space settlement see regulation as a hindrance to innovation and entrepreneurship. Nesvold says they tend to be neoliberal techno-utopians who believe that the very process of moving communities out from under the thumb of terrestrial oversight and governments into an environment with plenty of valuable, untouched resources will naturally improve society. By the very nature of these ideologies, one shouldn't spend too much time thinking or planning for how to protect human rights or basic needs, because overplanning and over-regulation are Bad For Society. Prospective pioneer settlers might want to consider how they feel about that. The more immediate question is who gets to go in the first place. So far, space tourism has been largely confined to the rich and famous, while publicly funded astronaut programmes have not been beacons of diversity: no Apollo astronaut was other than white and male.
Space agencies seem keen now to change that, but only in 2022 did Jessica Watkins become the first Black woman to serve on a long-duration mission on the International Space Station. But things are improving, says Milligan. ‘Inclusion within space programmes, including the command structure, is a lot better than in the older industries or in politics. We really have moved on from “the right stuff”.’ The Artemis mission proclaims its intention to ‘land the first woman and first person of colour on the Moon’. And the second commercial spaceflight of Branson’s Virgin Galactic on 10 August 2023 carried the Caribbean mother and daughter Keisha Schahaff and Anastatia Mayers, who won their seats in a draw. Milligan points out that some of the ground-level training of First Nations people in NASA’s programme has happened on those individuals’ Indigenous lands ‘so that there is no sense of people being sent away from their own lands to be inculcated with somebody else’s way of seeing the world’. Part of the problem is that, by its very nature, human spaceflight tends to draw from a limited demographic – if not in terms of race or gender, then certainly of mental and physical attributes. Some physical disabilities, such as might require a wheelchair on Earth, could be rendered moot in zero gravity, but could face serious obstacles on another planet. Mindful of this ableist history, in 2021 the European Space Agency launched its parastronaut feasibility project, calling for applicants with a specified range of physical disabilities in order to explore the possibilities of adapting spaceflight hardware (such as spacesuits) to such physiques. Six months on the ISS exposes astronauts to about 100 times more radiation than a worker on a nuclear facility ‘Right now we are at step zero,’ the project announced. ‘The door is closed to persons living with disabilities. With this pilot project we have the ambition to open this door and make a leap, to go from zero to one.’ The first person accepted onto the programme, chosen in November 2022, is a British Paralympian named John McFall, who lost a leg in his late teens. The philosopher J S Johnson-Schwartz of Wichita State University in Kansas calls the programme ‘a step in the right direction’. Then there is the question of what spaceflight demands mentally from its participants. The psychological attributes that might make for a good astronaut narrow the pool of neurotypes from which they are typically drawn. It is striking how Shatner – probably not, in real life, quite the ideal personality for astronautics – had a very different response to space from that of most candidates for NASA missions. In his classic study of the early Apollo missions Of a Fire on the Moon (1970), the US author Norman Mailer was unnerved by the dispassionate and clinical way in which Neil Armstrong and crew, and indeed the entire organisation, undertook an endeavour so imbued (as Mailer saw it) with spiritual dimensions. ‘Let us try to comprehend,’ he wrote, ‘how men can be so bold yet inhabit such insulations of cliché.’ Schwartz worries that, so far, the emphasis has been on mental and physical endurance. ‘We are doing what we can to learn whether any human bodies can survive at all in space,’ she tells me. ‘But our performance degrades when we don’t have what we need, when we’re confined in hostile environments. 
[So] we need to think about not eliminating human vulnerability in space, but acknowledging and accepting it.’ A resilient crew or settlement is going to need flexibility of thought, since there will be no manual with solutions to all the challenges they will face. ‘You need a lot of cognitive diversity in order to tackle new problems,’ says Schwartz: the old ‘test pilot’ mentality won’t suffice. ‘There’s virtually nothing we really know about how to make a human happy in space, how to sustain meaningful human lives in space,’ she adds. Yet no matter how much we might strive to make space more welcoming, there’s no escaping the fact that the space environment is intensely hazardous. It is, you might say, always trying to kill us. This makes the popular comparisons with the sea voyages of Columbus and Magellan (which were of course in fact the beginning of the oppressive colonialist era) not just misleading but perhaps immorally so. ‘The more you learn about space, the more you realise how deeply uninhabitable it is compared to Earth,’ says Nesvold. Far from the madding crowd: the astronaut Robert L Stewart using the nitrogen-propelled Manned Maneuvering Unit. He is floating without tethers attaching him to the space shuttle. Courtesy NASA There’s obviously the extreme cold and the vacuum (or, on Mars, the lack of breathable air). But among the worst dangers is radiation: the constant bombardment from the high-energy particles of cosmic rays and the solar wind, which wreak molecular havoc in our biomolecules with potentially carcinogenic consequences. There are currently no good solutions. Spacecraft can’t carry heavy shielding, and weak shielding might be worse than none because it creates a potential source of cascading showers of secondary ionising particles. Six months on the International Space Station exposes astronauts to about 100 times more radiation than the expected dose for a worker on a nuclear facility, and around twice the permitted annual limit for those working with radioactive material. These doses are cumulative. Given such health threats, Porco thinks that permanent space settlements are a fantasy. ‘When I review all the medical issues that have been found in astronauts thus far, I seriously doubt humans will be able to survive on Mars long term,’ she says. Is it ethical even to pretend otherwise while no solution is known? Crewed missions to Mars would be prohibited by existing astronaut radiation exposure limits. NASA seems likely to give astronauts the option of waiving those limits if they choose to participate in such missions. But is it acceptable to permit individuals to expose themselves to such risk? Some will argue that acceptance of danger is the essence of all exploration, and that, in any event, the other risks of such a mission are likely to be at least as big. But, as the implosion of the Titanic submersible Titan has reminded us, the ethical calculus of risk is not merely a question of the freedom of individuals to enter hazardous environments. Not least, such decisions depend in part on the motivations offered to participants. If, say, an astronaut believes that they are taking a risk for the sake of the future welfare of all of humanity, we are obliged to ask whether that belief has been formed with access to all the relevant information. This brings us to the central ethical consideration: why go into space at all? 
When the Russian space visionary Konstantin Tsiolkovsky wrote in 1911 that ‘Earth is the cradle of humanity, but one cannot remain in the cradle forever’, he set the tone for the artful rhetoric of spaceflight by neglecting to add that outside the cradle is nothing but Shatner’s ‘vicious coldness of space’. Seen in that light, perhaps it is not so childish to stay put. Maybe ‘cradle’ is then the wrong metaphor. There is only one answer that really stands up, though it is seldom expressed openly: we go not because it is hard, but because we think it would be cool. I share that feeling. To see a person walk on Mars would be extraordinary. It isn’t at all obvious that it would be worth all the risk and expense but, that aside, it would quicken my pulse. ‘I think that if most of us were being completely honest, this is the first and strongest reason we have for working on space,’ agrees Nesvold. ‘Emotion is such a strong driver, but when the stakes in money and in lives are this high, we need to recognise that we’re being driven emotionally and think about whether our actions are worth the cost.’ Instead – and this is where the matter becomes more of an ethical issue about honesty – the impulse is often defended with sophistry. We are often told that we need to colonise the stars or die out. We’d have to assume that those who point to the death of our own Sun in 5 billion years simply don’t understand evolution, but even the commonly invoked threat of existential risk from a meteorite impact is disingenuous. Since an impact that large is expected only once every several million years or so, the argument that we can’t possibly wait to see how we’re technologically fixed to find an escape route in a couple of centuries is feeble. (And, in any event, a Martian settlement obviously faces the same threat.) What about the climate crisis? Over-population? Nuclear war? Resource exhaustion? But on the timescale on which we can meaningfully consider how these dangers might play out – say, a century or so – it is barely conceivable that there could be an autonomous, self-sustaining settlement on another world that could keep humanity alive. The old saw that ‘there is no Planet B’ is sadly apt. This is not a debate distinguished for its rationality. ‘I’ve found that people arguing from the “we have to prevent the extinction of humanity” side can get extremely aggressive and toxic, because they believe their cause is literally the existence of the species, so any means is worth protecting those ends,’ says Nesvold. Of those who imply that getting into a rocket will save the human race, Schwartz points out that ‘the other people that say those things are generally cult leaders.’ ‘Spaceflight is such a religion to some people that they don’t question the propaganda they hear about it,’ she says, ‘especially when it comes from scientists.’ If we enlist science as a cover for other motives, then we are not making an ethical case at all Ah, science. That argument will not take us very far – literally. A crewed lunar observatory maintained with flights to Earth could be a nice addition to the many automated space-based telescopes already in operation. But the pace of advance in robotics and AI means it is even now highly questionable whether the additional cost and danger of getting fragile human planetary scientists to Mars in (a generous estimate) 30 years’ time would be a sound investment. 
Venturing any further afield is currently fantasy as far as human space science is concerned, whereas the return on investment for robotic missions has been phenomenal. For many, ‘Why go?’ comes down to a human urge to explore. But is that really a universal urge? Nesvold points out that a tendency towards risk-seeking behaviour has been associated with a particular genetic disposition, specifically a variant of a gene called DRD4. But one of the most robust associations of this gene variant is in fact with the neurological condition ADHD, which Branson says he has. If this translates into an urge to go into space, that is surely not right or wrong in itself – but neither can it be claimed as some universal tendency. ‘Humans are diverse, we have lots of different tendencies, and it’s a fallacy to argue that just because something is “natural” [to some], it’s correct,’ says Nesvold. Perhaps, then, the fundamental ethical issue of human spaceflight is about honesty. It is hard to consider ethical an enterprise that puts people at risk at immense cost and use of resources, if we do so under false pretences. If we pretend that space is like the oceans crossed by intrepid explorers, a promise of lush virgin territory over the horizon; if we mobilise that colonialist narrative for motivational purposes while suppressing the immense harm it caused; if we ignore what history tells us about human exploitation in outposts; if we enlist science as a cover for other motives, then we are not making an ethical case at all. Has much changed in space ethics since Nesvold found it being shrugged off in 2016? ‘We [ethicists] are not playing a big role in the shaping of what goes on at the highest level,’ Milligan admits – but he adds that this isn’t unique to work on space. ‘Ethicists generally have a low-key role,’ he says – the same might be said of biotechnology, for example. That’s not an easy position to sustain. ‘It’s disheartening that the message doesn’t get through,’ says Schwartz. ‘There’s a certain kind of hopelessness that researchers like me sometimes feel about this. We get dragged down by it.’ But Nesvold holds out hope for a richer discussion. ‘I was recently invited to attend an ethics workshop at NASA HQ, organised by an employee who wanted to bring in social scientists and ethicists to talk to others about ethical concerns with the Artemis programme,’ she says. ‘There was a bit of pushback during the conversations, but the fact that the workshop was held at all seemed like progress to me.’ A recent article in Science magazine by a group of bioethicists and legal scholars, exploring the wider ethical questions of commercial spaceflight such as dangers to the health of crew and passengers, and questions about inclusivity and use of natural resources, was another indication of growing recognition that the issues should be widely discussed. ‘The most important thing,’ she adds, ‘is that we have these conversations now, well before it’s too late.’
Philip Ball
https://aeon.co//essays/as-space-gets-more-commercial-how-can-it-be-governed-ethically
https://images.aeonmedia…y=75&format=auto
Consciousness and altered states
Have you been here before? The eerie sensation is the shadow of your mind searching inward for clues to its own survival
Déjà vu, the eerie sense that something new has been experienced before, has confounded us for hundreds of years. Along with the public, philosophers, physicians, intellectuals and, more recently, scientists have tried to get to the bottom of the phenomenon. Potential explanations have ranged from double perception (the idea that an initial glance at something was only partially taken in, leading to déjà vu upon a second, fuller glance) to dissolution of perceptual boundaries (a brief blurring of boundaries between the self and the environment) to seizure activity to memory-based explanations (the idea that déjà vu results from a buried memory). Now, research emerging from my lab and others suggests that déjà vu is not just a spooky experience, but a possible mechanism for focusing attention – perhaps an adaptive mechanism for survival shaped by evolution itself. I first became interested in the topic after reading the paper ‘A Review of the Déjà Vu Experience’ (2003) by the psychologist Alan S Brown – probably the first treatment ever to appear in a mainstream psychology journal. Writing in the Psychological Bulletin, Brown described survey studies, case reports and theoretical ideas culled from more than a century’s worth of writings on déjà vu. Much of the available literature on déjà vu at the time came from non-mainstream sources (and some were even of a paranormal flavour). Still, from this largely fragmented literature, Brown managed to winnow some important clues and presented them in a language that cognitive scientists could work with and act upon: data and theory. The data from the survey studies provided useful empirical starting points, and the very old theories of déjà vu that Brown reviewed provided a scaffolding for devising highly specified hypotheses that could be tested in a lab. From the large collection of surveys conducted over the years, Brown determined that roughly two-thirds of people experience déjà vu at some point in their lives. He also reported that the likelihood of experiencing déjà vu decreases with age, and that physical settings (or places) are the most common trigger. The finding that déjà vu is most commonly elicited by scenes (as opposed to just speech or objects) was a particularly useful clue for scientists: a new theoretical approach to autobiographical and event memory emphasises a role of scenes in the ability to recollect past life events. Partly based on newer understandings that brain areas critical for first-person navigation through places may also underlie recollective memory ability, the idea is that the first-person perspective within a scene is a crucial facet of human memory. Consider the last dinner that you ate at a restaurant. What is this memory like? Can you ‘see’, in your mind’s eye, where everyone else is sitting relative to you at the table? This illustrates how our ability to process, navigate through and mentally reconstruct our place within past scenes may be central to our recollective memory ability. The critical role of our place within scenes in memory may also be why the centuries-old memorisation technique known as the Method of Loci (also called the Memory Palace) is very effective and used by competitive memorisers; it involves envisioning your to-be-remembered information within particular scenes along a route that you regularly take, or within a building that you know well. 
For example, to remember his talking points in their correct order for his TED talk ‘Feats of Memory Anyone Can Do’ (2012), the science writer Joshua Foer created a visualisation of different points throughout his house, each with a visual-image cue attached to it so that, when he did a mental walk-through of his house starting at a mental image of the front door, he would ‘see’ in his mind’s eye an image cuing him for the next talking point. In the foyer of his house, Foer had imagined Cookie Monster (the Muppet) on top of Mister Ed (the horse) as his cue to introduce his friend Ed Cooke at that point in the talk. Foer continued moving through various places within his image of his house to access his cues for the next talking points in the order in which he needed to raise them. For example, later on, when arriving at the kitchen in his mental walk-through of his home, he had imagined the characters from The Wizard of Oz along a Yellow Brick Road; this was his cue to describe how he had embarked on a journey and the many friends he met along the way. As Ulric Neisser, often considered the father of cognitive psychology, suggested decades ago, ‘a sense of where you are’ may provide a basis for recollective memory. Although déjà vu is more of a contentless sensation of memory than a recollection of autobiographical experience, the fact that it tends to be elicited by scenes hints at the possibility that it, too, emerges from the same basic scene-processing mechanisms that enable this ‘sense of where you are’. Dovetailing with this useful clue about déjà vu, Brown’s 2003 review also mentioned the ‘Gestalt familiarity hypothesis’ – the theory that déjà vu results from a familiar Gestalt, a German word for the arrangement of elements within a space – such as when a new acquaintance’s living room happens to have the same spatial layout as a previously visited space that fails to come to mind. Brown linked this untested hypothesis of déjà vu to an ongoing approach for studying memory known as the ‘source-monitoring framework’, in which a person can recognise a situation as having been experienced before without pinpointing the source of the familiarity. In what seemed to be an invitation for cognitive scientists, Brown suggested that it would be straightforward to test such hypotheses in the lab. At the time, I had been studying a phenomenon known as ‘recognition without identification’ and its sister phenomenon, ‘recognition without recall’. Both are thought to reflect the ability to sense that something was experienced before, even when no specific past instance comes to mind. A common example is having a sense of recognition for a person’s face without being able to pinpoint just how you know the person. I immediately saw a connection between my own work and what Brown presented in his review, and I set out to test his ideas. One of my methods seemed particularly applicable. This was the recognition without recall method. In my original work, participants might receive a cue like ‘POTCHBORK’ that resembles an earlier viewed word, ‘PITCHFORK’. Although a person can successfully use the cue to recall the word it resembles, sometimes recall fails. Recognition without recall is the finding that people give higher familiarity ratings to cues that resemble unrecalled studied words than to cues that do not. 
Applying this insight to déjà vu, my students and I developed a variant of the task using black-and-white line drawings. Each test image potentially shared an overall ‘Gestalt’, or arrangement of elements, with an image that had been studied before. When presented with an image on the test, participants attempted to recall a previously viewed image having a similar arrangement of elements. They also rated how familiar the test image seemed and whether or not it provoked a sense of déjà vu. Images that fell into the déjà vu category tended to indeed match arrangements found in prior images, establishing evidence for the Gestalt familiarity hypothesis and setting the stage for what I would later facetiously refer to, in a TEDx talk, as a ‘déjà vu generator’ – an implementation of the Gestalt familiarity idea in virtual reality (VR). Wearing a VR headset, participants would be sequentially immersed in different sets of visual surroundings throughout a study phase. In a later test phase, they would be immersed in new scenes, some of which share a spatial layout (ie, arrangement of elements) with scenes from the study phase. Here, the familiar Gestalt would be one’s visual surroundings within the VR environment, as might mimic real-life situations in which déjà vu occurs, and as might involve a sense of where you are. I met Alan Brown in the summer of 2007 at the American Psychological Association annual convention in San Francisco after inviting him to give a talk on déjà vu. I told him how his 2003 review paper and later book, The Déjà Vu Experience (1st ed, 2004), inspired me to pick up the study of déjà vu myself. This formed the start of a long collaboration. Later, over dinner at the 2007 annual meeting of the Psychonomic Society that November in Long Beach, California, we marvelled at how neat it would be to be able to test the Gestalt familiarity hypothesis in VR for an immersive experience in which the spatial layout is one’s visual surroundings within the VR environment. To the extent that VR allows for a simulation of life-like immersion within scenes, this approach might approximate the way in which an arrangement of elements in space (such as where a table, couch, floor lamp and artwork are placed relative to one another within a living room scene) might produce déjà vu in real life. It seemed like a castles-in-the-air idea. But then in 2008, there I was with a group of students, down in a windowless cinderblock room in the basement of the old Clark Building on the Colorado State University campus, wrestling with a VR headset. Fifteen years ago, VR systems were quite crude. They lacked a user-friendly interface or any form of tech support, and required a lot of improvising in the form of makeshift workarounds. We were working with a set of eMagin Z800 VR goggles and were attempting to get The Sims 2 – a 2004 life-simulation game – to display within the goggles for an immersive experience with the game. This was not trivial. Fortunately, Ben Sawyer was among the tinkerers down in that basement. An undergraduate at the time, with a lot of technical savvy, Sawyer was a legend among the Clark A-wing basement-dwellers for having taken apart and reassembled the always-malfunctioning driving simulator, completely reprogramming it for functional operation in research. 
The Sims 2 video game involves creating indoor and outdoor spaces by placing elements onto a grid from a bird’s eye perspective, and then zooming down into the scene from a first-person perspective to make adjustments and tour the scene. This provided a means by which a large set of scenes, each having an identically configured but otherwise distinct counterpart scene, could be created for viewing from a first-person perspective. For example, a clothing-store scene might have the same arrangement of elements on a grid (eg, the placement of hanging wall displays of clothing relative to a table with folded shirts) as a bedroom scene (eg, the placement of windows and end tables relative to a bed). So while Sawyer worked for months on getting the Sims 2 game engine to output in 3D to the Z800 goggles, I used a pad of graph paper to sketch out a bird’s eye view of dozens of pairs of distinct but identically configured scenes to then manually create within The Sims 2 game, soliciting scene ideas from other team members along the way and keeping a running list (eg, a clothing store configured the same way as a bedroom, a bowling alley configured the same way as a subway station, a museum configured the same way as a courtyard, etc). After many months of creating Sims scenes, and many remarkable improvisations that included having the machine output in 3D to the monitor or any attached display device, and creating short-cut keys to enable teleportation from one scene to the next within The Sims 2 structure (and with Sawyer at one point taking apart then soldering together a pair of non-functioning Z800 goggles that Brown had shipped to us), we eventually got the experiment to work in VR. Figure 1a: bedroom. Figure 1b: clothing store. Figure 2a: warehouse. Figure 2b: nightclub. Figures 1a and 1b have the same spatial layout (configuration of elements) but are otherwise distinct scenes; the same applies to figures 2a and 2b. All were created in The Sims 2. From within the goggles, which felt a bit like thick, heavy ski goggles edged with foam, a given cartoon-like Sims scene could be viewed through a square, straight ahead. The depth perception was comparable to that of a 3D movie viewed with 3D glasses, and turning your head enabled viewing differing aspects of the scene, such as looking up at the ceiling or down at the floor, or left or right. The first VR experiment to examine the Gestalt familiarity hypothesis involved 24 college students. A short-cut key zapped the participant from a particular vantage point within one scene to the next and, from each pre-established vantage point, the participant was free to look around the scene by turning their head. After the first 16 scenes, the person viewed a new set of 32 test scenes, half having an identical spatial layout to one of the first 16. While viewing a test scene, the student rated how familiar the scene seemed, indicated if the current scene prompted any recollection of one of the earlier 16 scenes (and if so, which one), and indicated if déjà vu had occurred. 
After the participant cycled through the 32 test scenes in this manner, the process started over with a new set of 16 study scenes followed by another 32 test scenes. Recognition without recall occurred in the form of higher familiarity ratings among VR test scenes that shared a spatial layout with an earlier viewed but unrecalled scene, and during recall failure, participants reported déjà vu 27 per cent of the time, compared with a baseline of 17 per cent of the time when there was no spatial layout resemblance to an earlier scene. Although this study demonstrated interesting findings and represented a remarkable technological feat for its era, it was repeatedly rejected from journals before finally finding a home at Consciousness and Cognition in 2012. The topic of déjà vu was, and still is, a tough sell in the world of science. Still, the publication generated a great deal of media attention and public interest, and with that came a number of enquiries to me from the general public about the research, by phone, email and sometimes mailed letters. In many of these enquiries, people were reaching out to tell me that they thought the idea that déjà vu was grounded in memory could not be correct, or could not be complete, because, to them, déjà vu included a sense of knowing what will happen next. Some people even used the term ‘precognition’ to describe this. At first, I was not only sceptical, but also wary of venturing into what seemed like more of a topic for paranormal literature than mainstream scientific research, especially when déjà vu was already a tough sell as a topic within science. But the line of questioning kept happening, even in academic settings and, eventually, I started looking into it. Was there a logical, scientific explanation for the sensation these people expressed? Perhaps if a situation was experienced before but failed to be recalled, the sense of how a similar situation would unfold might seem like a prediction? To test this with our spatial layout paradigm, we needed scenes to dynamically unfold over time. The Sims 2 platform was well suited to this because it was set up to easily create videos of virtual tours to publicise one’s Sims creations on YouTube. From this idea, the ‘virtual tour’ paradigm was born. Participants viewed video tours of the Sims scenes that had been used in the previous VR study, each taking a particular path with turns through the scene from a first-person perspective. In the test phase, the tours through scenes with identical spatial layouts also followed the same path as in the earlier-viewed counterpart scene, but only up to a point – the tour stopped short of a turn that happened in the earlier counterpart scene, and participants had to determine the direction of the next turn. If our hypothesis was correct, we thought, then we would find that, when participants experience déjà vu while viewing a tour of a scene with an identical layout to an earlier viewed but unrecalled scene, they should be more likely to successfully predict the next turn. However, that was not what we found. Our new hypothesis was not supported and, deeming the study a failure, I let it sit for a couple of years. But the enquiries continued to come. One that stands out was when my office phone rang and it was a somewhat shaken man calling from Alaska. He’d had a strange déjà vu experience and was looking for answers. 
He found some of my research on déjà vu in an internet search. He had recently experienced a strong sense of déjà vu while on a hunting trip and was quite shaken by the fact that, during his déjà vu, he knew exactly what would happen next. ‘I am not a superstitious person,’ he said, ‘so I just don’t understand how this could be possible. I’m hoping maybe you have some answers that can explain this.’ He was distraught, looking for an explanation. I had no good explanation to offer. Conversations like this continued to eat at me. Then one day it occurred to me that perhaps the feeling of déjà vu is associated with an illusory sense of prediction. Digging back through old literature, there were some hints at this idea. For example, in a very old neurology case report from 1959, Sean Mullan and Wilder Penfield reported on a patient for whom electrical stimulation during awake brain surgery induced déjà vu. The patient reported feeling like everything the doctor was saying was something she had heard before, but also like she knew what the doctor was going to say next. Since the déjà vu was induced artificially through electrical stimulation to the brain, the accompanying sense of prediction must have been illusory in that case, rather than memory based. So, I dusted off the old experiment from a couple of years earlier and ran it again with an additional prompt following the pause during each tour of a test scene: rate the feeling of being able to predict the direction of the next turn. And lo and behold, people felt pretty strongly that they knew the direction of the next turn when experiencing déjà vu, even though that was not the case. This finding persisted across many subsequent experiments, including in the original multi-experiment study that was the first to show it in 2018 and in the studies that followed it. But this research still didn’t address the question of why people like the Alaska caller feel like they really did predict what was going to happen during déjà vu. So, we did a follow-up study, which suggested that not only is there a predictive bias associated with déjà vu, but a ‘postdictive’ bias (a feeling of having known all along how the situation was going to unfold) too. As to what all of this means, it may be that déjà vu produces the feeling of being on the verge of retrieving a past experience from memory, leading to the belief that you can identify what will happen next (because it feels like how the situation unfolds is about to come to mind at any moment); then, as the situation does unfold in a certain way, its continued familiarity tricks the mind into believing that it knew it all along. Although these research findings represent major steps toward understanding déjà vu, it wasn’t until I was able to experience déjà vu myself within the ‘déjà vu generator’ that I had what may be my most critical insight. It took a recreation of the VR déjà vu paradigm by someone else for me to have the experience myself. Because I had personally created most of the scenes in our previous work, and because I knew every scene and its counterpart, I could never experience déjà vu myself within our system. The scenes were just too familiar to me. That changed when I donned an HTC Vive VR headset to personally run through a brand-new variant of the VR paradigm created by Noah Okada, then a computer science student at Emory University in Georgia. I met Okada in 2019 while on a visit to Emory during my sabbatical. 
He was working with the neuroscientist Daniel Drane and the neurologist Nigel Pedersen – whom I was visiting – to create VR scenes for use in research. Pedersen and Drane work with people who have epilepsy. Our collaboration had formed a year earlier through Joe Neisser, a philosopher at Grinnell College in Iowa (who, somewhat serendipitously, happens to be the son of Ulric Neisser). Joe Neisser met Pedersen during his own sabbatical at Emory while attending a talk. Like most neurologists specialising in epileptology, Pedersen was familiar with seizure-related déjà vu, as neurologists have been writing about it for more than a century. Joe Neisser and I had met in Savannah, Georgia in 2012 during a symposium he moderated at the Southern Society for Philosophy and Psychology, where I gave a talk on our recent VR study of déjà vu. When Pedersen and Joe Neisser got to talking about seizure-related déjà vu, Neisser described the VR paradigm to Pedersen and suggested that we should all collaborate. So there I was in 2019 on my own sabbatical, visiting Pedersen’s group at Emory to help get the video-based virtual tour experiment running on a portable computer that could be wheeled on a cart into a patient’s room. Patients with pharmacologically intractable seizures sometimes undergo pre-operative evaluation for surgical candidacy through the use of implanted electrodes with continuous monitoring. While hospitalised for the lengthy monitoring period, interested patients can participate in computerised cognitive tasks while their brain activity is being measured through the electrodes to better understand the function of different brain circuits. As many of the common seizure foci (and thus commonly implanted regions of the brain) happen to be implicated in seizure-related déjà vu, measuring neural activity while a patient completes the virtual tour task might shed light on the mechanisms involved in both familiarity-detection and déjà vu. While I was there helping to prepare the computer cart for the task, I had a long conversation with Okada about it. An impressive and intellectually curious student, he had already read my publications on déjà vu, and already had several great ideas for extending the research using modern-day VR. He got to work re-creating the virtual tour task for use with present-day VR systems. Using the gaming engine Unity, he created new scenes with new layouts and programmed a VR implementation of the virtual tour paradigm for the Vive headset. The viewer is pulled on rails through a highly realistic scene on a particular path as if on a ride (similar to the It’s a Small World ride at Disney World). In a later identically configured but otherwise novel scene, that precise path is taken through the identical layout of that new scene. It happened as I sat there in 2022 looking through the Vive headset, in a VR lab room in the Behavioral Sciences Building at Colorado State, testing out Okada’s VR virtual tour program for the first time. I had been exploring the various scenes he had created, looking around while ‘riding’ through them and admiring the detail of the textures and the cleverness of the placement of various realistic-seeming objects within each scene. Then, as I was being pulled through a scene of straw huts along a boardwalk in an oceanside resort, I was suddenly overcome with an intense sense of familiarity. 
The sensation grabbed hold of my attention and I found myself no longer looking around and taking in the details of the scene but instead intensely focused on trying to figure out why it felt so familiar. It was déjà vu. Environments created by Unity game-engine software for the HTC Vive headset: a shopping mall and a barn with the same spatial layout. The author wearing the HTC Vive headset. At first, I could not figure out exactly why I was experiencing it. That is, I could not identify a specific scene from earlier that might be responsible for the feeling. But my attention had now been fully devoted to trying to figure it out. So, as I continued to be pulled through the scene, I kept going through possibilities in my mind for what might be the reason behind the déjà vu. Eventually, by the time the tour of that scene came to an end and the prompts started appearing, I figured it out. It was the campground from earlier. The campground had an arrangement of tents along a dirt pathway and was identically configured to the layout of the huts along the oceanside boardwalk at the resort. And it happened several more times in several more test scenes as I continued through the program. What I noticed during these experiences was that, while thoroughly enjoying looking around a highly realistic, detailed scene that I had never seen before, I would be hit with a strong feeling of familiarity and would feel certain that the scene was reminding me of something I hadn’t quite placed yet. It felt like my mind signalling to me to pause exploring the novel and interesting scene and instead turn my attention inward to look for something in my memory. Then I would spend a lot of time going through possibilities in my mind. In many such instances, I would eventually figure it out: I would identify the previously viewed scene responsible for the familiarity. This made me realise that there may be a component to déjà vu that we had been overlooking: it may prompt a flip of attention from outward to inward, to search one’s memory for potentially relevant information. For me, the déjà vu sensation in the VR environment was often a step along the way to eventual recall success, and this facet of the experience might be getting completely missed in our usual research approach of separating instances of recall success and recall failure. Instances of recall success may sometimes be preceded by a feeling of déjà vu – but our studies had not been set up to examine how the memory experience unfolds over time. Perhaps déjà vu grabs attention and pulls it inward toward a search of memory for potentially relevant information? My students and I began to sift through some of our existing data sets in search of evidence that we might have previously missed. And we found some. For one thing, as reported in a recent article led by my former student Katherine McNeely-White, participants seem to guess more at earlier experienced scenarios when experiencing déjà vu than when not. That is, when experiencing déjà vu, they tend to type inaccurate information into the recall prompt rather than just leaving the recall prompt blank (leaving it blank more often when not experiencing déjà vu). This is consistent with the idea that, during déjà vu, people expend more effort searching their memory trying to conjure potentially relevant information, even if what they generate from the search is incorrect. 
For another thing, even when participants did leave the recall prompt blank during instances of déjà vu, they spent more time at the prompt before hitting Enter to move on, compared with when they were not experiencing déjà vu. This greater time spent at the recall prompt suggests that participants were likely trying a bit harder to recall an earlier scene when déjà vu was experienced than when it was not. Finally, participants were also more curious to discover whether a studied scene (and if so, which one) might map on to the current scene when experiencing déjà vu than when not. There are other hints that déjà vu relates to attention. When it accompanies seizure activity, its pull on attention is so powerful that it may provoke some patients to confabulate memories – to invent recollections that help explain away the sensation of reliving something from the past. Much like the active search of memory I myself experienced, this kind of ‘recollective confabulation’ could represent an inward-directed accounting, and the information pulled up, real or not, could be a means of trying to provide oneself relief from the forced, prolonged, inward-directed attention that may ensue during seizure-related déjà vu. Déjà vu may be an eerie shadow of the mind at work, and a window into the mind’s evolutionary past. Most of the time, our cognitive processing takes place smoothly and effortlessly – we just process the world around us and retrieve relevant information rapidly, without introspective access to how that occurs. It just does. Déjà vu occurs when there is a hiccup in the system, and we notice the pull on our attention; it grabs hold of our focus, allowing us to catch a quick glimpse of our memory’s operation occurring in slow motion. What would ordinarily take place quickly beneath the surface – the unfolding process of familiarity-detection followed by inward-directed attention and retrieval search effort leading to retrieval of relevant information – suddenly has a light shining on the spot where the halt occurred, where the retrieval piece was not successful, and we find ourselves in a heightened state of searching our memory, trying to find out why the situation feels so familiar. But rather than being an odd quirk of memory, this cognitive mechanism could be forcing us to retrieve the very memories we need to survive – and could be evolution’s way of forcing the mind inward, when it needs that insight most.
Anne Cleary
https://aeon.co//essays/deja-vu-a-window-on-the-past-and-a-key-to-human-survival
https://images.aeonmedia…y=75&format=auto
Ethics
Bernard Williams argued that one’s ethics is shaped by culture and history. But that doesn’t mean that everyone is right
Travel and history can both inspire a sense of moral relativism, as they did for the Greek historian and traveller Herodotus in the 5th century BCE. What should one make of the fact that what counts as adultery, for example, differs around the world? In Lust in Translation (2007), the contemporary writer Pamela Druckerman chronicles how the rules of infidelity vary ‘from Tokyo to Tennessee’. It can be tempting to conclude that the correct answer to moral questions is ultimately settled by convention, perhaps like matters of etiquette such as how to eat your food. For Herodotus, the recognition of cultural difference led him to declare, echoing the words of the Greek poet Pindar, that ‘custom is king of all.’ The acclaimed British philosopher Bernard Williams, writing in the 1970s, showed that a common way of arguing for moral relativism is confused and contradictory. Nonetheless, he went on to defend a philosophical worldview that incorporated some of relativism’s underlying ideas. There is much to learn, when we think about the ongoing culture wars over moral values, from the encounters with relativism that recur throughout Williams’s work. First, however, it’s useful to understand why a prevalent feature of the culture wars, arguing over which words to use, itself quickly leads to arguments over relativism. Bernard Williams in 2002. Photo by Eamonn McCabe/Popperfoto/Getty Images. Consider the following memorable scene in Sally Rooney’s novel Conversations with Friends (2017). The central character, Frances, who is sleeping with Bobbi, rejects her friend Philip’s insistence that ‘in basic vocabulary she is your girlfriend.’ Frances is right to resist Philip’s attempt to put a familiar label on things: she is trying to live in a way for which there aren’t words yet. Elsewhere in the book, Frances questions not only the word ‘couple’ but even the term ‘relationship’ to depict her life with Bobbi. If she isn’t sure how to describe her complicated situation, it’s in part because it doesn’t easily fit into the grids of conventional thought. She wants, to use an image from James Joyce, to ‘fly by’ the nets of language. The words your society uses, as Frances is highly aware, shape the self you can become. Language is loaded with ethical expectations. If you agree that you are in a ‘couple’ with someone, for instance, then that commonly (though not always) carries with it the expectation that you will not be in bed with anyone else. That norm can be challenged, and has been, by those who are in open relationships. However, if you are trying to live in a way that is new, and doesn’t fit into accustomed categories, then it’s likely that you will be misunderstood and deprived of social recognition. Even so, as the American philosopher Judith Butler has argued in Undoing Gender (2004), there are situations where it’s better to be unintelligible than to force oneself into the existing menu of social options. If everyday language can sometimes feel oppressive, it’s perhaps because it is inescapably descriptive and evaluative: it tells you not just how things are, but how they should be. If you are someone’s ‘girlfriend’, for instance, then a vast number of beliefs kick into action about how you should behave. This is why Frances is so wary about accepting the label. Perhaps the clearest example of how language can be at once descriptive and value-loaded is in the case of what philosophers have come to call thick ethical concepts. 
Think of words such as ‘friendly’, ‘mean’, ‘aggressive’, ‘rude’, ‘impatient’, ‘brutal’ and so on, and notice how these terms evaluate behaviour positively or negatively at the same time as they describe it. Thick ethical concepts are named by contrast with thin ethical concepts such as ‘right’ and ‘should’ and ‘ought’. These highly abstract terms are almost purely evaluative and don’t seem to describe any specific actions. Rather, as the American philosopher Christine Korsgaard has put it in The Sources of Normativity (1996), they seem like those gold stars used at school that can be stuck upon anything. The culture wars that take place over controversial moral questions are, in part, battles over which ethically loaded concepts should win out within a society. Should sexuality be conceptualised in terms connected with sexual purity and restraint (‘sanctity’, ‘chastity’ and so on) or in terms of sexual self-expression and experimentation (‘liberation’, ‘kink’ and so on)? This brings home the fact that ethical words and concepts are not just abstract ideas: they are the product and expression of different ways of living. Seen this way, the political intensity surrounding what is sometimes disparaged as ‘arguments over words’ makes total sense. The culture wars are concept wars over how best to live. We all use ethical concepts in the broad sense I have introduced. People who think that they can live without values are failing to think through what that would really mean. But if we all, inevitably, evaluate our experience, we don’t all do so in the same way. In a recent podcast on the lessons from the Roman Empire, the historian Tom Holland stressed the dramatic contrast between the sexual mores of ancient Rome and those of the modern West. This is just one, perhaps already familiar, example of the commonplace fact that ethical norms vary across, as well as within, cultures. Moreover, even ethical concepts that are superficially shared can be understood in deeply different ways. Consider how respect is shown in a nod of the head: it can symbolise respect as a form of mutual recognition, or respect as deference to another’s superior strength. The fact of moral diversity therefore raises the issue of moral relativism. This, too, has become a part of the culture wars, especially as these debates have played out in the United States. Many moral traditions are based on the idea that there are universal values, perhaps rooted in human nature. Perhaps you yourself were raised with the universalist idea that there is a single true morality that applies to everyone, everywhere. But if living many different ethical ways of life is natural to human beings, then this encourages the idea that humans create multiple ethical worlds, and that ethical truth is relative to the world in question. Moral truth, like the truth about etiquette, simply varies from place to place. So far, so bad, for universalism. When battles over moral relativism have featured in the culture wars, they tend to be framed in the following way. One side of the argument celebrates cultural diversity and unites this with an emphasis on the socially constructed nature of values. This is the outlook popularly associated with postmodernism, identity politics, and the rejection of universalist tradition. However, this seemingly ‘relativistic’ destination is precisely what alarms the moral conservative. 
Hence the other side of the culture wars: if there is no common human standard upon which to ground moral universalism, then something beyond the human is needed. This is the side of the culture wars associated with the need to return to religion, and a morally reactionary response to social diversity. These debates about the sources of morality have become part of mainstream culture. The old-school secular humanist, faced with the difficulty of finding a universal basis for a human-centred morality, is presented with a dilemma: either choose a culture-centred ethics, or return to a God-centred one. Call it the anti-Humanist Fork: relativism or religion? Rowan Williams, the former Archbishop of Canterbury in the United Kingdom, recently stated in the New Statesman magazine that ‘The modern humanist is likely to be a far more passionate defender of cultural variety than their predecessors.’ What he didn’t dwell upon is the following irony: that proper recognition of moral diversity has tended to undermine the universalism upon which humanism is typically founded. It’s important to note that diversity of belief doesn’t by itself entail relativism. After all, different cultures have held different beliefs about the shape of the Earth. Does it follow that there is no non-relative fact of the matter and that all we can say is that the Earth is truly round relative to one culture, and truly flat relative to another? If your friend said the Earth was flat, you would perhaps show them the photo known as ‘Blue Marble’, taken as the Apollo 17 crew made its way to the Moon in 1972. If you are wealthy and extravagant enough, you might book them on a trip to space. You are unlikely to ‘go relativist’. Being a non-relativist about the shape of the Earth, however, doesn’t require you to be a non-relativist about everything. Moral relativism remains an option. As we have already seen, if you combine the idea that Human beings construct ethical reality with the claim that How humans construct ethical reality varies between cultures, then moral relativism becomes hard to avoid. Indeed, those who are quick to move from observing the diversity of moral beliefs to embracing moral relativism are perhaps already inclined to think that morality is a cultural construct whereas the shape of the Earth is not. Others are drawn to relativism about morality because they think it a wiser, more tolerant outlook. As someone might say: ‘They have their way, we have ours, and that’s all there is to be said.’ Bernard Williams (no relation to Rowan) argued incisively against what he called ‘vulgar relativism’ in his first book, Morality (1972). A leading figure in English-language philosophy, he later popularised the term ‘thick concepts’ that I introduced earlier (he was the first to use the term in print, in 1985). Williams had a deep sense of the cultural and historical variety of ethical life. But he also saw that the typical way that moral relativism was taken to support toleration, notably by some anthropologists at the time, was fundamentally incoherent. The vulgar relativist, Williams says, thinks that ‘morally right’ means ‘right for a given society’. As a result, to discuss whether, say, sex with multiple partners is morally right, you must first ask: right for whom? There is no universal answer: polyamory will be permitted, indeed celebrated, in some times and places, and morally denounced in others. 
This is the insight that is supposed to lead to a tolerant outlook. Indeed, the vulgar relativist, as described by Williams, holds that, because morality is tied to a way of life, ‘it is wrong for people in one society to condemn, interfere with, etc, the values of another society.’ The problem for vulgar relativism, as Williams goes on to show, is with the status of the principle of toleration. If it’s right to be tolerant, and ‘right’ is relative, then we must ask: right for whom? After all, if an aggressive warrior society is debating whether it should interfere with its neighbours, then according to its values the answer might be a definite ‘Yes, we should interfere.’ Perhaps, at least for a violent society, war is the answer. The point, as Williams makes clear, is that you can’t coherently say that All moral truth is relative to a culture and espouse a non-relative moral rule that all cultures should respect one another. The vulgar relativist is putting forward toleration as a universal moral principle, but this is flat-out inconsistent with moral relativism itself. Vulgar relativism is ‘absurd’, Williams concluded, but this can give a misleading impression: he took seriously many of the ideas that underpin moral relativism. In fact, he agrees with the moral relativist that ethical reality is a human construction, and, like the relativist, he emphasises the variety of moral outlooks. Some moral and religious traditions hold that moral reality is as objective and universal as facts about the shape of the Earth. Williams certainly didn’t think this and went so far as to call his own moral position ‘nonobjectivist’. Perhaps Williams’s respect for the moral relativist’s motivations emerges most strikingly in the following passage from his middle-period book Ethics and the Limits of Philosophy (1985): If you are conscious of nonobjectivity, should that not properly affect the way in which you see the application or extent of your ethical outlook? … If we become conscious of ethical variation and of the kinds of explanation it may receive, it is incredible that this consciousness should just leave everything where it was and not affect our ethical thought itself. We can go on, no doubt, simply saying that we are right and everyone else is wrong (that is to say, on the nonobjectivist view, affirming our values and rejecting theirs), but if we have arrived at this stage of reflection, it seems a remarkably inadequate response. Williams argued for appropriate recognition of the cultural and historical location of one’s ethics and combined this with a shrewd sense of when moral assessment has a point and when it doesn’t. This took him close to the spirit of relativism – in fact, he even espoused what he called a ‘relativism of distance’. The belief at the heart of Williams’s relativism of distance is that it doesn’t make sense to assert the truth of one’s moral outlook across the entire span of human history. He would have supported the Universal Declaration of Human Rights, for example, but at the same time questioned the value and wisdom of mentally applying it to warrior cultures thousands of years in the past. 
There was no need, Williams urged, for a ‘relativistic vow of silence about the past’ but on the other hand, ‘comments about it are not obligatory, either.’ Writing in The New York Review in 1998, Williams gave memorable expression to these ideas and sentiments: Must I think of myself as visiting in judgment all the reaches of history? Of course, one can imagine oneself as Kant at the Court of King Arthur, disapproving of its injustices, but exactly what grip does this get on one’s ethical thought? Immanuel Kant, the 18th-century moral philosopher, believed that everyone knew the same universal moral law, so that it was always intelligible to appeal to its presence. Williams, for the most part, thinks that what makes ethical sense is more culturally limited. When we look inside, what we find is not the moral law, but our historically formed identity. The danger with an acute feel for history is that you can end up trapped in a relativist bubble. But if Williams shared the relativist’s sense of the culturally rooted nature of ethical life, he also wanted to incorporate into his moral philosophy the kind of critical tools that mean you don’t have to accept the worst things associated with moral relativism: either that ‘anything goes’, or that societies can’t assess and evaluate each other, or that you must accept the status quo in your own society. Williams’s great late work Truth and Truthfulness (2002) celebrated the virtues associated with the pursuit of truth. There is no objective and universal morality, according to Williams, but moral philosophy could still draw on the fact that some truths, like the shape of the planet, are objective and universal. If a moral outlook depends on blatant falsehoods, then it can be undermined by revealing the truth. To reject the claims of climate-change denial, for instance, you don’t have to debate whether there is an objective truth about morality. It’s enough to know that there is an objective truth about the effects of carbon dioxide in the atmosphere, what has happened to global annual temperature since the Industrial Revolution, and so on. Williams had little time for the idea, associated with postmodernism, that all of reality is a cultural construction. Humans have dramatically reshaped the Earth but they didn’t create the planet they live on. Ethical reality is constructed via interaction with ‘an already existing physical world’ that is not a cultural product. He tussled on numerous occasions with the American philosopher Richard Rorty, who, in the latter decades of the 20th century, became a kind of cultural figurehead for postmodernism in the academy. In fact, when I was a doctoral student at Johns Hopkins University in Baltimore, I spoke to Rorty about the contrast between his ideas and those of Williams. ‘Yes,’ Rorty said, Williams’s view chimed more with common sense but, as Rorty unforgettably concluded, ‘I want to change common sense!’ Like Rorty, however, Williams did emphasise the culturally constructed nature of ethical life. Influenced by the 19th-century thinker Friedrich Nietzsche, Williams became particularly interested in conceptual genealogy as a method in philosophy. What this means, in a nutshell, is that you can trace the origin and development of a concept or idea – liberty, for instance – to see whether the resulting narrative encourages use of the concept in question or whether it debunks it. 
Think about this in relation to culture wars debates over love and sexuality. Not everyone will want to avoid, like Rooney’s character Frances, traditional concepts connected to romance. But conceptual genealogy invites you to reflect on the history of a word or concept such as ‘girlfriend’ and decide whether you want to continue to employ it. You might come to decide that, as Oscar Wilde in 1895 said about blasphemy, it ‘is not a word of mine.’ Many ideas associated with love, in particular marriage, have historically had very little to do with romance. As Stephanie Coontz’s work Marriage: A History (2005) illustrates, ‘most societies around the world saw marriage as far too vital an economic and political institution’ to be based on love. That’s a much more recent idea. Understanding the history of a concept helps you understand whether you want to be part of the way of life – call it the conceptual tribe – that uses it. Sometimes, joining an institution involves modifying its concepts for the better, as in the case of gay and lesbian marriage. Truthfulness can be bracing, especially when focused on abuse of power. Williams drew on the tradition of philosophy known as critical theory, which stresses the examination and criticism of social structures. He writes: [I]f one comes to know that the sole reason one accepts some moral claim is that somebody’s power has brought it about that one accepts it [and it is] in their interest that one should accept it [then] one will have no reason to go on accepting it. No doubt one of Williams’s most admirable and enduring qualities was his desire to make philosophical room for inconvenient truths and the potentially startling clarity of speaking truth to power. Williams argued that all human societies have a need for basic notions of accuracy and sincerity: the traits that combine to form the virtue of truthfulness. This introduced an element of universalism into his worldview. However, while the need for truthfulness is universal, Williams again made clear that different cultures have and will build differently on the need. He ends Truth and Truthfulness with the hope that the more ‘courageous, intransigent, and socially effective forms’ of the virtues associated with truth will live on. It’s fair to say, strange as it sounds, that Williams’s defence of truth and truthfulness was an unfashionable undertaking in the humanities at the time. He was prescient, writing at the end of his life and at the turn of the millennium, about the various forms of truth denial that would emerge (or re-emerge) in the 21st century. Think of how the age of the internet, of which he saw only the beginning, would make Holocaust denial common again. Indeed, in a passage now widely shared online, he wrote about how the internet ‘makes it easy for large numbers of previously isolated extremists to find each other and talk only among themselves.’ Moral criticism must often take the form of making the plain truth widely known. But what if some arguments do ultimately come down to disagreements over values? Perhaps disputes over climate change, for example, go much deeper than familiarity with the relevant science can remedy. Williams says little about rational argument over values themselves, perhaps limited by his worldview according to which principles ‘do not admit of any ultimate justification’ (as Korsgaard puts it). 
Williams also expressed a worldly scepticism about what moral arguments can be expected to achieve. ‘What will the professor’s justification do,’ he wrote, ‘when they break down the door, smash his spectacles, take him away?’ Williams’s work manifested the tension that one sees in the larger culture wars over values: between the desire to acknowledge what seem like universal and indisputable evils, and the desire to leave behind the legacy of universalism. He did, for instance in a book chapter titled ‘Human Rights and Relativism’, suggest that there are some very basic moral wrongs that almost all human beings recognise, even if elsewhere in his work he adamantly rejected the idea of a universal Moral Law. Compare his outlook with that of the moral philosopher Derek Parfit, his longtime Oxford colleague. Parfit really did believe that ethical facts are as objective and universal as facts about the shape of the Earth, and searched for moral arguments that would convince everyone. In Shame and Necessity (1993), Williams argued, in contrast, that it makes more sense to pursue ‘social and political honesty’ than a ‘rationalistic metaphysics of morality’. If Williams had little time for Rorty’s postmodernism writ large, he also did not share Parfit’s hope (now associated with the Effective Altruism movement) that the study of ethics could become transformed into a science of morality, which would then be applied to solve the world’s problems. Truthfulness, conceptual genealogy, comparative ethical study: these ingredients give Williams’s philosophy of value its critical bite. There are many resources left for ethical and political criticism after moral philosophy fully emerges from what Williams called ‘the shadow of universalism’ – or so he endeavoured to show. His aim was to hold on to the vital distinction between what is and what ought to be while maintaining that norms about what ought to be are themselves ultimately cultural creations. His position, in this respect, is akin to the view that human beings create the norms about what counts as good and bad art rather than discover mind-independent and timeless truths about beauty. Williams never thought that moral philosophy could make ethical life any easier than it is. Nonetheless, he offers a vision of how philosophy, allied with other disciplines such as history, can provide both criticism and support for one’s ethical orientation in the world. And in his engagement with moral relativism, he doesn’t just point to a middle way between his contemporaries Richard Rorty and Derek Parfit. He offers an example of how to make one’s way through the culture wars.
Daniel Callcut
https://aeon.co//essays/bernard-williams-moral-relativism-and-the-culture-wars
https://images.aeonmedia…y=75&format=auto
Education
Keju, China’s incredibly difficult civil service test, strengthened the state at the cost of freedom and creativity
On 7 and 8 June 2023, close to 13 million high-school students in China sat for the world’s most gruelling college entrance exam. ‘Imagine,’ wrote a Singapore journalist, ‘the SAT, ACT, and all of your AP tests rolled into two days. That’s Gao Kao, or “higher education exam”.’ In 2023, almost 2.6 million applied to sit China’s civil service exam to compete for only 37,100 slots. Gao Kao and China’s civil service exam trace their origin to, and are modelled on, an ancient Chinese institution, Keju, the imperial civil service exam established by the Sui Dynasty (581-618). It can be translated as ‘subject recommendation’. Toward the end of its reign, the Qing dynasty (1644-1911) abolished it in 1905 as part of its effort to reform and modernise the Chinese system. Until then, Keju had been the principal recruitment route for imperial bureaucracy. Keju reached its apex during the Ming dynasty (1368-1644). All the prime ministers but one came through the Keju route and many of them were ranked at the very top in their exam cohort. Keju was sheer memorisation. Testing was based primarily on the Confucian classics. And there was a lot to memorise. There were some 400,000 characters and phrases in the Confucian classics, according to Benjamin Elman’s book A Cultural History of Civil Examinations in Late Imperial China (2000). Preparation for the Keju began early. Boys aged as young as three to five began to practise their memorisation drills. After the immediate environs of their families, Keju was their first exposure to the world. Keju, which was open only to the male gender, was fiercely competitive. Using figures provided by Elman, during the Ming dynasty, 1 million regularly took the qualifying tests and, of these, eventually about 400 would make it to the final Jinshi round. Passing the first tier of Keju, known as the provincial exam, was a lot easier – working out to be 4 per cent on average during the Ming. Still, this was more cut-throat than getting into Harvard in most years. The prestige of Keju was such that even an emperor coveted its bona fides. According to a legend, an emperor in the late Tang dynasty (618-907) hung on the wall of an imperial palace a wooden tablet proudly displaying his Keju degree – only it was fake. The emperor had it made for himself. This credentialism pervades officialdom today. Many Chinese government officials claim PhD degrees – earned or otherwise – on their résumés. Much of the academic literature focuses on the meritocracy of Keju. The path-breaking book in this genre is Ping-ti Ho’s The Ladder of Success in Imperial China (1962). One of his observations is eye catching: more than half of those who obtained the Juren degree were first generation: ie, none of their ancestors had ever attained a Juren status. (Juren was, at the time, the first degree granted in the three-tiered hierarchy of Keju.) More recent literature demonstrates the political effects of Keju. In 1905, the Qing dynasty abolished Keju, dashing the aspirations of millions and sparking regional rebellions that eventually toppled China’s last imperial regime in 1911. The political dimension of Keju goes far beyond its meritocracy and its connection to the 1911 republican revolution. 
For an institution that had such deep penetration, both cross-sectionally in society and across time in history, Keju was all encompassing, laying claims to the time, effort and cognitive investment of a significant swathe of the male Chinese population. It was a state institution designed to augment the state’s own power and capabilities. Directly, the state monopolised the very best human capital; indirectly, the state deprived society of access to talent and pre-empted organised religion, commerce and the intelligentsia. Keju anchored Chinese autocracy.
Candidates queue for the national civil service examination on 27 March 2021 in Taiyuan, Shanxi province, China. Photo by Wu Junjie/China News Service via Getty
The impact of Keju is still felt today, not only in the form and practice of Gao Kao and the civil service exam but also because Keju incubated values and work ethics. Today, Chinese minds still bear its imprint. For one, Keju elevated the value of education and we see this effect today. A 2020 study shows that, for every doubling of successful Keju candidates per 10,000 of the population in the Ming-Qing period, there was a 6.9 per cent increase in years of schooling in 2010. The Keju exams loom as part of China’s human capital formation today, but they also cultivated and imposed the values of deference to authority and collectivism that the Chinese Communist Party has reaped richly for its rule and legitimacy. But isn’t it the case that the West – Prussia, then the United Kingdom and the United States – all had their own civil service exams? How is it possible that a strong bureaucracy complemented rather than supplanted political and religious pluralisms in the West? China and the West bureaucratised under an entirely different sequential order and under different contextual conditions, and these differences entail substantial implications for the subsequent political development. The civil service in the West was not a single-platform institution in the way that Keju was. There was a military civil service, a civil service for foreign affairs, for forestry, etc, etc. Multiple platforms of bureaucratic recruitment competed with one another and, collectively, they competed with other channels of mobility, such as the political parties and commerce. In the US, the Pendleton Act of 1883 removed the power of Congress and the political parties to control civil service appointments. Before the 1883 Act, federal appointees returned a portion of their salaries to the party that had appointed them. Civil service never replaced Congress or political parties in toto, as witnessed by the fact that Congress today wields enormous power over the bureaucracy, including the power of the purse that funds its operation. Another difference – and this is a big one – is timing. In the 19th century, the US introduced bureaucracy when ‘[t]he two institutions of constraint, the rule of law and accountability, were the most highly developed,’ as Francis Fukuyama writes in Political Order and Political Decay (2014). The state in the US and the UK was already ‘a Shackled Leviathan’, to use the words of Daron Acemoglu and James A Robinson in their influential book, The Narrow Corridor (2020). The sequential order ran from politics to bureaucracy, not as in China from bureaucracy to politics. In the West, society was vibrant long before the state ramped up its administrative capacity. 
The rule of law, the principle of accountability, and the powers of the legislature and the political parties were already firmly entrenched. Yes, the Leviathan was shackled by society, but different parts of the Leviathan shackled each other. Bureaucracy in the US formed and gained power only under a myriad of constraints and contending forces, rather than the socioeconomic tabula rasa that greeted the arrival of Chinese bureaucracy. The civil service in the UK and the US was ensconced in pluralistic societies that enjoyed a degree of religious freedom and a modicum of emergent electoral democracy. A world of competing forces and constraints attended the arrival of bureaucracy, even helped to create it. Government bureaucracy competed in some situations or complemented in others with church, universities, commerce and other social groups for human capital, legitimacy and resources. For political development, birth order really matters. In his book Strong Societies and Weak States (1988), Joel S Migdal identifies a common problem in the developing world – the struggle of the state to acquire autonomy and capabilities. China, through history and today, is exactly the opposite. The state dominates society. Vladimir Putin’s Russia is autocratic but his autocracy pales in comparison with that of China’s president Xi Jinping. Harassed and targeted by the state, opposition parties are still legal and tenuously legitimate in Russia and some of Putin’s critics command a sizeable following. Even the power to commit violence – war fighting – was outsourced to a private force, the mercenaries led by Yevgeny Prigozhin, an arrangement not even remotely conceivable in China.
Last-minute revision before the 2010 civil service examination in Hefei, Anhui province, China. Photo AFP/Getty
Since 2013, against the increasingly dictatorial Xi, there have been two prominent critics of the president and both were dispensed with summarily. Unlike Putin who has to rely on extra-legal means to silence his critics, suggesting some formal constraints on him, Xi directed the full apparatus of the Chinese state after his critics. The Chinese court sentenced the businessman Ren Zhiqiang to 18 years in prison, and Tsinghua University promptly fired Xu Zhangrun, a law professor who wrote an open letter criticising Xi. Standing forlornly by themselves, neither Ren nor Xu commanded any formal political organisations behind them. In 2022, the Chinese regime put almost 400 million people under some sort of COVID-19 lockdown, a feat that is unimaginable in any other country. An ultimate autocracy is one that reigns without society. Society shackles the state in many ways. One is ex ante: it checks and balances the actions of the state. The other is ex post. A strong society provides an outside option to those inside the state. Sometimes, this is derisively described as ‘a revolving door’, but it may also have the positive function of checking the power of the state. State functionaries can object to state actions by voting with their feet, as many US civil servants did during the Donald Trump administration, and thereby drain the state of the valuable human capital it needs to function and operate. A strong society raises the opportunity costs for the state to recruit human capital but such a receptor function of society has never existed at scale in imperial China nor today, thanks – in large part, I would argue – to Keju. 
Keju was so precocious that it pre-empted and displaced an emergent society. Meritocracy empowered the Chinese state at a time when society was still at an embryonic stage. Massive resources and administrative manpower were poured into Keju such that it completely eclipsed all other channels of upward mobility that could have emerged. In that sense, the celebration by many of Keju’s meritocracy misses the bigger picture of Chinese history. It is a view of a tree rather than of a forest. The crowding-out effect of Keju is captured succinctly in a book from the late 19th century: ‘Since the introduction of the examination system … scholars have forsaken their studies, peasants their ploughs, artisans their crafts, and merchants their trades; all have turned their attention to but one thing – government office. This is because the official has all the combined advantages of the four without requiring their necessary toil …’ This is the larger impact of Keju. Its impressive bureaucratic mobility demolished all other mobility channels and possibilities. Keju was an anti-mobility mobility channel. It packed all the upward mobility within one channel – that of the state. Society was crowded out, and over time, due to its deficient access to quality human capital, it atrophied. This, I would argue, is a historical development unique to China, and it is the root of the awesome power of Chinese autocracy. Take intellectuals as an example. Keju inculcated literacy and helped create a vibrant book readership. Book ownership was widespread as early as the Ming dynasty. ‘More books were available,’ writes Timothy Brook in The Troubled Empire (2010), ‘and more people read and owned more books, in the late Ming than at any earlier time in history, anywhere in the world.’ Brook sums up the impressions of Jesuits visiting China: ‘More surprising, perhaps, is that complete illiterates may well have been a minority in the late Ming.’ But a striking fact is that no organised intelligentsia of any significant size and visibility ever emerged in imperial China. There were no Chinese equivalents of the Royal Society in Britain or the many learned societies in France. One that left a mark is the Donglin Academy, a private discussion forum founded in 1111 by intellectuals of the Song dynasty (960-1279). The academy lasted as long as its founders’ lifespan and vanished into obscurity after their expiry. It was revived in 1604 during the reign of the Wanli emperor (1573-1620), but it operated as a political rather than an intellectual force. The scholar-officials formed a Donglin Faction, later brutally put down by the powerful eunuchs of the Ming court. The grand total of the second life of the Donglin Academy is 21 years, from 1604 to 1625. The term ‘scholar official’ is of Chinese coinage and it is evocative of China’s lacuna of intellectuals as an institutionalised establishment. Compare that situation with Tsarist Russia, another autocracy. Russians coined the term ‘intelligentsia’ – intellectuals as a class – and Russian intellectuals have a long tradition of standing apart from the state and defining their identity as separate from it. China has legions of intellectuals, but it is bereft of an intelligentsia. Prior to Keju and even during the early centuries of Keju, China had a plurality of upward mobility. 
Within bureaucracy, officials were appointed through nepotism, family ties, heredity and recommendations. Commerce, while always curtailed, was a nascent force, promising to burst forward. The Song dynasty experienced a vibrant development of commerce and a market economy. Although Confucianism was always the first among equals, other ideologies, such as Legalism, Daoism and Buddhism, cohabitated with Confucianism and vied with one another for the Chinese population’s attention and adherence. But these societal forces were too nascent and too embryonic by the time Keju arrived and matured. They had yet to acquire their own unique identity, significant organisation and autonomous agency. In imperial China, there never was a level playing field between state and society, and over nearly 1,500 years, Keju further deprived the congenitally deficient society of its oxygen – human capital. Fukuyama is right to assert that the Chinese state was precocious, but it was precocious in a particular fashion: its precocity contrasted sharply with the immaturity of Chinese society. The most direct way Keju decimated Chinese society is through talent monopoly but there were others. Keju also monopolised the time and mental energy of its candidates. Keju was not a one-shot deal. A candidate could take the test multiple times. In a dataset that has information on the 11,706 Keju candidates during the Ming dynasty, the average age passing the final stage of Keju was 32, approaching middle age at a time when average life expectancy was much lower than today. The oldest in the dataset was probably Gui Youguang (1506-1571). Before passing the provincial examination in 1540 at the youngish age of 34, Gui had already failed it on six occasions. He then proceeded to toil for more than 24 years of his life and finally attained his Jinshi degree in 1565, although ranking near the bottom of his class and at the ripe age of 59. Unfortunately, he did not bask in his exalted status for long, as he died aged 65. For him, and many others, Keju was a life-long endeavour.
View of the examination cells in Canton: a man peers from a long line of cells constructed to ensure privacy during examinations. Library of Congress
Examination hall in Canton: a central path flanked by rows of examination cells. Library of Congress
Jiangnan imperial examination centre, Nanjing, c1913: hundreds of examination cells spread before the eye in this aerial view. Courtesy Historical Photos of China
The Keju curriculum was formidable and required memorising close to 400,000 characters. Was there any residual energy, capacity and curiosity left to pursue other mentally taxing activities, such as ideation of new thoughts, new politics, and discoveries of natural phenomena? In my book The Rise and Fall of the EAST (2023), I show that Chinese technology began to stagnate as Keju gained dominance. The brain power that ended up in the state did not flow to Chinese society, the economy or human creativity. Mental energy aside, the values drilled deeply into Keju candidates were pro-autocracy and authoritarian. Keju legitimates statism. Boys as young as three or four began to practise writing characters that were meant to instil admiration of, and devotion to, the ideas and teachings of the master – Confucius – which would eventually be tested on Keju. 
By the Ming dynasty, the initial plurality of the Keju subjects gave way to one subject only, Confucianism – ‘knowledge of classics, stereotyped theories of administration, and literary attainments’. Imagine repeated exposures to the statist values at that tender age, producing what psychologists call ‘an imprinting effect’. The autocratic values were incubated in substance but also by the format of Keju; this was standardised testing par excellence. When Keju was first established, candidates were tested on a wide range of subject matters but, after the Song dynasty, the Keju curriculum became progressively stratified and exceedingly narrow. Candidates were required to fill in the blanks with missing words or phrases in excerpted texts from the Confucian classics. The Yuan dynasty (1271-1368) narrowed the Keju curriculum further. Only a streamlined version of annotations of Confucian classics was allowed, the so-called Neo-Confucianism, which was the brainchild of the great Confucian scholar Zhu Xi (1130-1200) of the Song dynasty. Neo-Confucianism is a pared-down version of classical Confucianism, and it strips away some of the moral veneer of its classical predecessor. Summarising a common view among historians, Peter K Bol observes in Neo-Confucianism in History (2010) that this version of Confucianism ‘provided a justification for seeking external authority in the ruler’ and stipulated the responsibility for transforming the world as that of the emperor alone. The Neo-Confucianist Keju curriculum was rigid, narrow and absolutist, and was single-minded in its advocacy of a hierarchical order – subordination to the ruler, to the elderly, and to the male gender. No scope for scepticism and ambiguity was allowed. Autocracy and Keju thus became ever more intimately intertwined. There was, however, a massive operational advantage to the Neo-Confucianist curriculum: it standardised everything. Standardisation abhors nuance and the evaluations became more straightforward as the baseline comparison was more clearly delineated. There was objectivity, even if the objectivity was a manufactured artefact. The Chinese invented the modern state and meritocracy, but above all the Chinese invented specialised standardised testing – the memorisation, cognitive inclination and frame of reference of an exceedingly narrow ideology. The Ming dynasty standardised Keju further: it enforced a highly scripted essay format, known as the ‘eight-legged essay’, or baguwen in Chinese (八股文), to which every Keju candidate had to adhere. A ‘leg’ here refers to each section of an essay, with a Keju essay requiring eight sections: 1) breaking open the topic; 2) receiving the topic; 3) beginning the discussion; 4) the initial leg; 5) the transition leg; 6) the middle leg; 7) the later leg; and 8) conclusion. The eight-legged essay fixed more than the aggregate structure of exposition. The specifications were granular and detailed. For example, the number of phrases was specified in each of the sections and the entire essay required expressions in paired sentences – a minimum of six paired sentences, up to a maximum of 12. The key contribution of the eight-legged essay is that it packed information into a pre-set presentational format. Standardisation was designed to scale the Keju system and it succeeded brilliantly in that regard, but it had a devastating effect on expositional freedom and human creativity. All elements of subjectivity and judgment were taken out. 
In his book Traditional Government in Imperial China (1982), the historian Ch’ien Mu describes the ‘eight-legged essay’ as ‘the greatest destroyer of human talent’. A bane to human creativity was a boon to autocracy. Standardised testing was conducive to authoritarianism. In his book Who’s Afraid of the Big Bad Dragon? (2014), Yong Zhao, professor at the School of Education of the University of Kansas, notes a natural compatibility between authoritarianism and standardised testing. Authoritarianism, he writes, ‘sees education as a way to instil in all students the same knowledge and skills deemed valuable by the authority.’ The standardised tests appeal to an authoritative body for correct answers; as Zhao said in an interview for the US National Education Policy Center, the tests ‘force students to comply with the answers or the way of thinking that the authority wants.’ The direction of deference is automatically established: ‘Then you hold the students, the teachers and, to a lesser extent, the parents accountable for being able to get the answers that the authority wants and to show that they have mastered the skills and the knowledge and possibly even the beliefs that the authority wants.’ In his book The WEIRDest People in the World (2020), Joseph Henrich posited that the West prospered because of its early lead in literacy. Yet the substantial Keju literacy produced none of the liberalising effects on Chinese ideas, economy or society. The literacy that Henrich had in mind was a particular kind of literacy – Protestant literacy – and the contrast with Keju literacy could not have been sharper. Keju literacy was drilled and practised in classical and highly stratified Chinese, the language of the imperial court rather than the language of the masses, in sharp contrast to Protestant literacy. Protestant literacy empowered personal agency by embracing and spreading vernaculars of the masses. Henrich’s liberalising ‘WEIRD’ effect – Western, educated, industrialised, rich and democratic – was a byproduct of Protestant literacy. It is no accident that Keju literacy produced an opposite effect. Why was there such a close affinity between Keju and Confucianism? The answer is not obvious. Ancient China boasted other great ideologies and traditions, such as Daoism, Mohism and Legalism, but they were completely absent in the Keju curriculum. This ideological single-mindedness of Keju is puzzling, and it is all the more puzzling considering the following: in my book, I document that several emperors who played an instrumental role in inventing and developing Keju were not Confucianists themselves. The answer may lie in an operational imperative of Keju. Standardised testing is necessary when you want to scale the evaluation. Subjective evaluations, such as relying on reputation, recommendations and interviews, are feasible when the number of candidates under evaluation is small. For example, the Big Three colleges in the US – Harvard, Yale and Princeton – began to embrace the SAT (the standardised test for college admissions) when they started recruiting beyond their traditional, narrow socioeconomic group – the white Anglo-Saxon Protestants (WASPs) in the elite private schools of the east coast. The Chinese emperors made the same decision when they expanded bureaucratic recruitment beyond the nobility and wealthy elites. 
Standardising and constricting the Keju curriculum were not an optional luxury; they were a necessity if Keju was to scale. Confucianism offered an operational advantage. It is textually rich; the verbiage is massive, and the pontifications are incredibly involved, not unlike the verbal portion of the SAT. As noted before, there are approximately 400,000 characters and phrases in the Confucian classics. Using a website, Chinese Text Project, ‘an online open-access digital library that makes pre-modern Chinese texts available to readers and researchers all around the world’, I found that among the classical texts created before the Han dynasty (206 BCE-220 CE) Confucianism is paragraphically the richest, with 11,184 paragraphs. No other ideologies come remotely close. Legalism has 1,783 paragraphs; Daoism has 1,161 paragraphs, and Mohism has 915 paragraphs. Confucianism, thus, functioned as an equivalent of the abstruse and arcane vocabulary of the SAT, and it was most suited for screening and selecting the desired human capital from a large pool of candidates. Is it at all possible that Keju successfully anchored and shaped the nature of the Chinese autocracy because of this accidental feature of Confucianism and on account of an operational technicality? Let’s pause, savour and ponder for a moment the momentous implications of this proposition. This essay is adapted from the book The Rise and Fall of the EAST: How Exam, Autocracy, Stability and Technology Brought China Success, and Why They Might Lead to its Decline (2023) by Yasheng Huang.
Yasheng Huang
https://aeon.co//essays/why-chinese-minds-still-bear-the-long-shadow-of-keju
https://images.aeonmedia…y=75&format=auto
Astronomy
It’s possible that frozen worlds with subterranean oceans are incubators of organic life. But then how did life get here?
Some months ago, I went through a bunch of (virtually) piled-up research papers from my computer’s ‘important s*%t’ folder. (Please, authors of those papers, do not feel offended. I tend to name my folders in specific ways, and I guess I am not alone in this habit.) I needed some free space, and we are talking about more than some gigabytes of data. Out of curiosity, I scanned the long list of studies I’d been saving for later reading for approximately a decade now, promising myself many times that I would undoubtedly read them when I had a little break in my research and teaching duties. Doing so, I accidentally ran into a paper from 1973 that changed the direction of my research and the way I think about our origin and existence. The study asked the same question I asked my younger self decades ago: what if life evolved on other planets first and had been somehow directed to seed early Earth? Finding that study made me think about the topic again, but now, with additional knowledge compared with my younger self, I try to resolve at least some components of the dilemma. Was there a ‘ground zero’ planet (or planets) where life evolved and later swarmed out to the Universe – and had it arrived by natural means, or had it been sent by a civilisation more advanced and ancient than our own? The idea goes all the way back to the Greek philosopher Anaxagoras, who used the term ‘panspermia’ in the 5th century BCE to invoke the concept of travelling between planets as seeds. More recently, the Swedish scientist Svante Arrhenius, one of the founders of physical chemistry, suggested the idea of microscopic spores’ transportation through interplanetary space in his book Worlds in the Making: The Evolution of the Universe (1906). But the modern form of the theory originates from one of the pioneers of DNA, the Nobel prize laureate Francis H C Crick, and Leslie E Orgel, a chemist famous for building theories about the origin of life. Their idea, called ‘directed panspermia’, suggests that life could have been deliberately transferred to Earth by intelligent beings from elsewhere in the Universe. First presented at a scientific conference and then published in the scientific journal Icarus in July 1973, it soon became grist for science fiction, too. Along with Crick and Orgel, there have been many scientists influenced by the idea of panspermia. Among the most prominent, the astronomers Fred Hoyle and Chandra Wickramasinghe became central figures for the revival of panspermia around the late 20th century. Since this revival, many aspects of the theory have been investigated by the most up-to-date scientific methods available, attracting the next generation of scientists to the fold. The same year that Crick and Orgel published their concept in Icarus, Big Ear – the radio telescope of Ohio State University Radio Observatory – first turned toward the sky for SETI, the scientific search for extraterrestrial intelligence, a programme originally funded by NASA. A few years later, on 15 August 1977 to be exact, Big Ear was eavesdropping on the ‘water hole’ – not so much a location as a band of the electromagnetic spectrum in interstellar space, where emissions from natural hydrogen (H) and hydroxyl radical (OH) gases combine to form water. 
According to Project Cyclops, one of the first handbooks for detecting extraterrestrial life, the water hole is a relatively quiet, noise-free channel that would be perfect for interstellar and intergalactic communication between intelligent species, should they want to engage. At 22:16 Eastern Standard Time on that now-famous date, the silence of the water hole broke for 72 long seconds. The signal detected by the radio telescope peaked at 30 times the level of the static radio noise always present. One person in particular was thrilled. Jerry Ehman, a local physicist who volunteered at Big Ear out of interest and enthusiasm, discovered the sequence of letters and numbers output by the computer – 6EQUJ5, a code that indicated the signal’s strength and changing intensity over time. The starting number of the series, 6, shows us that something stronger than the regular noise arrived at our radio telescope. We might think that it is just some error, but by translating the language of early radio telescopes, we already know that the series of letters E, Q and U indicates that the signal got stronger and stronger. After this uniquely strong peak, the signal then fades away into the well-accustomed monotony of radio static, shown by J and 5, the closing members of the presumptive ‘code’. Ehman immediately understood its implication, and in those excited, elevated moments, he quickly scribbled a side note on the paper and unintentionally named this strong, unique and not-yet-repeated mysterious radio signal ‘Wow!’ Could the signal that broke the monotony of the water hole’s radio static have originated from intelligent beings on another world? And if so, why? From novels such as Arthur C Clarke’s 2001: A Space Odyssey (1968) to Carl Sagan’s Contact (1985), the question of why such beings would reach out over the vastness of the cosmos has arisen again and again. It could be that such a signal is a torchlight, a signpost telling the ones who accidentally received it: ‘We are here, ready to make contact,’ similar to the two identical copies of the Golden Record, a greeting from Earth carried by the Voyager spacecraft into the void. In short, the insight of Ehman and the Big Ear scientific team resonated with the hypothesis of directed panspermia, introduced by Crick and Orgel just four years before. Suppose life was planted on Earth by an intelligent extraterrestrial civilisation. In that case, such a society must be capable of targeting, and sending life to, our planet, for example, in the form of complex organic molecules or microbes implanted in some object that may survive travel through the galaxy. A civilisation sophisticated enough to send a radio message to Earth would be an excellent candidate to shoot an asteroid filled with the bricks of life toward the early Earth. I bet I am not alone in thinking that finding even just a microscopic sign of life in any relic from space would be a giant leap for humankind. That is most likely what scientists in the mid-1990s felt when examining a Martian meteorite named Allan Hills (ALH) 84001. A nanometres-sized chain of magnetic minerals appeared on the screen connected to a scanning electron microscope, a special instrument capable of showing objects the size of dust particles floating around us. Such minerals looked very similar to the so-called magnetosomes, magnetic iron-bearing nanoparticles that function as tiny compasses in the cells of a particular type of bacteria. 
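For readers curious how the famous 6EQUJ5 string maps back to numbers, here is a minimal sketch. It assumes the commonly cited Big Ear convention: each printed character covers one integration of roughly 12 seconds, and the digits 1-9 followed by the letters A-Z mark successive integer steps of signal strength above the background noise. The helper function below, decode_big_ear, is purely illustrative and not part of any observatory software.

```python
# Minimal sketch: decode the alphanumeric intensity scale on the 'Wow!' printout,
# assuming '0'-'9' then 'A'-'Z' mark successive integer steps above the noise.

def decode_big_ear(sequence):
    """Return the integer intensity step for each character of the printout."""
    values = []
    for ch in sequence:
        if ch.isdigit():
            values.append(int(ch))                   # '0'-'9' map to 0-9
        elif "A" <= ch <= "Z":
            values.append(10 + ord(ch) - ord("A"))   # 'A'-'Z' continue from 10
        else:
            raise ValueError(f"unexpected character: {ch!r}")
    return values

if __name__ == "__main__":
    wow = "6EQUJ5"
    print(list(zip(wow, decode_big_ear(wow))))
    # [('6', 6), ('E', 14), ('Q', 26), ('U', 30), ('J', 19), ('5', 5)]
    # The climb to 'U' and fall back to '5' traces the rise-and-fall shape
    # the essay describes as the fixed beam swept across a point on the sky.
```

Read this way, the six characters are simply a tiny light curve: under the stated assumption, the signal climbed to roughly 30 times the background and faded again over the 72 seconds the essay mentions.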
The meteorite Allan Hills (ALH) 84001. Courtesy Smithsonian National Museum of Natural History
From the moment they saw those mineral chains, the group of scientists involved in the ALH enigma divided into two. Some believed in the biological nature of the mineral chain and defined it as an extraterrestrial magnetofossil from the past of Mars. Others denied the biological origin of the relic and explained its formation in alternative ways, such as abiotic organic synthesis and artificial laboratory artefacts. Even today, disputes repeatedly flare up around the organic or inorganic origin of such microscopic crystal chains. Despite all the back and forth about evidence for microscopic forms of life on other worlds, panspermia theory itself was pushed back from the centre of interest for a decade – until the Cassini spacecraft arrived at Saturn in 2005 and sent the first close-up images of Enceladus, one of the gas giant’s icy moons, back to Earth. Enceladus revealed its secret, which had been kept hidden for a billion years: flying close over its surface, Cassini captured cracks – nicknamed ‘Tiger Stripes’, because they resemble the animal’s marks – around the moon’s south pole and, more importantly, its geysers spraying out from the fissures, illuminated by the dim light of the distant Sun. These were water plumes of ice and vapour, mixed with essential compounds of more complex organic materials. Such water jets, studied by the Cassini spacecraft, may be linked to underground water reservoirs originating from an ocean hidden under Enceladus’s frozen surface, an ocean containing key chemical elements of many biological processes.
Cassini took this photograph of Saturn’s moon Enceladus from within 25 km of the moon’s surface on 9 October 2008. NASA/JPL/Space Science Institute
With the finding, panspermia struck back, and the theory again grabbed our attention. The news broke and spread like a streak of lightning, not limited to the scientific community: icy worlds could allow biological evolution and harbour life in their hidden oceans below the surface. I wondered whether such worlds were actually incubators for primordial life, poised to spread across the cosmos. Were icy moons points of origin for life hopscotching from world to world, stored for posterity under the surface but ready to travel aboard a meteor or some other celestial object to a place like Earth? If so, was this clever system an accident of the Universe, or by design? Were such moons and other icy bodies the ultimate source of life? Why not? Panspermia is believed to be universal. Whether life was spread intentionally or not, the real question is where and when the first living entities had a chance to evolve. Did life actually evolve anew on Earth, or are icy worlds, distant and nearer, really ‘ground zero’ for the origin of life? What if microbial life swarms out from under the icy shell of ancient moons? Based on our commonly accepted, Earth-based, geocentric knowledge, water is one of the essential requirements for the first steps of biological evolution. Down at the ocean bottom, fissures open and warm up the water with the heat of the magma lying deep below. Along those vents, active hydrothermal processes in the re-emerging hot water provide a key mineral-, compound- and element-rich environment for prebiotic synthesis, in which the building blocks of the first cells are formed. 
Water might have already been abundant in the epoch cosmologists call ‘first metal enrichment’, when, astronomically speaking, many elements heavier than hydrogen and helium formed by nuclear fusions in the earliest existing stars. Ranging between 10 and 1,000 times our Sun, these extremely massive, ginormous hydrogen and helium-built stars, referred to as Population III (or Pop III) by astronomers, took seriously the injunction to ‘live fast, die young’ and ejected their material into space at the last moments of their short, few-hundred-million-years life cycle. While they exhausted their primordial, hydrogen- and helium-containing gas fuel, they produced elements heavier than the two gases, which elements are referred to as ‘metal’ by astronomers. However, many of those ‘metals’, such as carbon and oxygen, are not considered as metals in common sense. Theoretically, Pop III stars’ short but very productive life enriched the cosmos with elements, creating the abundance of water in the interstellar medium at the time when the Universe was only approximately 300-400 million years old. At least one study says that the habitable cosmological epoch, in which life could evolve on ocean-bearing rocky planets, emerged when our 13.8 billion-years-old Universe was only 10-17 million years old. Real-life observations provide additional support for such theoretical calculations. One of the oldest observed planets, PSR B1620-26 b (sounds like a droid from Star Wars), often called Planet Methuselah, is approximately 12.7 billion years old. Another great old one, WASP-183 b, is thought to be at least 13 billion years old. Both ancient worlds are gas giants like our far planetary neighbour, Jupiter, known for its great red spot and its four Galilean moons: Io, the home of volcanoes, and three with an icy shell and a putative subterranean ocean perhaps harbouring life: Europa, Ganymede and Callisto. What if the earliest planets in the Universe, those Jupiter-like gas giants, have their own icy moons with subterranean oceans that could also evolve and harbour life? What if microbial life swarms out from under the icy shell of those ancient moons, catches a ride on a meteor, and fertilises our entire Universe? The ‘small’ WASP-183 b, half the size of Jupiter, is orbiting around a G-type star, a host like our Sun. Unfortunately, the similarities end there. The distance of the ancient planet from its Sun-like host star is a fraction of the distance between the Sun and Mercury. Even if we wish for a Europa- or Ganymede-like icy moon around the gas giant as a potential life-harbouring place, water – a possible key compound of ocean formation and life – cannot stay in the liquid or solid phase in the scorching heat at such close distance to a host star. This led me to Methuselah, more than twice as big as Jupiter and living in a binary star system of pulsar and white dwarf. Despite the size difference and the binary suns, similarities could abound – especially if we look back in cosmological time. Originally, Methuselah was born out of a protoplanetary disk, swirling around a Sun-like progenitor star in a similar setting to the Sun and Jupiter. Later, the star system was captured by a neutron star and its companion. During such an exchange, the couple broke up, and the companion of the neutron star was left behind. At the start of a new chapter in the life of the neutron star, a fresh relationship was formed when the host of Methuselah became the latest member of the binary system. 
Time passed, and the Sun-like star turned into a red giant, feeding the neutron star (later, a pulsar) with its material, and ending up as a white dwarf. Yet Methuselah witnessed this transformation at a safe distance and continued its orbit undisturbed around the binary star system that served as its new host. A theoretical icy satellite of Methuselah, similar to the moons of Jupiter, is still a candidate for the origin of life. Could such icy satellites be the ground-zero locations of life in the Universe? There needs to be more than just a subsurface ocean to keep the potential lifeforms alive, of course. The environment must be oxygenated, like the oceans on Earth. And geological renewal processes like tectonism and volcanism, which promote communication between the subsurface ocean and the icy surface, would be needed to convey the oxidants and other materials needed for life below. To see how this works, look no further than the surface of Europa with its labyrinthine pattern of linear features: long, curvy lineaments run through half of the moon, accompanied by thin lines, barely visible even on the highest-resolution spacecraft images. There are troughs and ridges evolving into more complex patterns, including double ridges and ridged bands. The chaotic pattern of those lineaments shows how the surface repeatedly renews itself by exchanging material with the ocean below. The presence of the subsurface ocean and the possibility of extraterrestrial life have made the Jovian moons the primary targets of ongoing outer solar system explorations. The Jupiter Icy Moons Explorer (JUICE) is already on its eight-year journey to the Jovian moons. Europa Clipper is in the last moments of its preparation phase, with a planned launch in autumn 2024. During close flybys over Europa’s frozen surface, the spacecraft has one overall objective: to determine if Europa can support life in the subterranean ocean below its icy shell. Once those missions have collected their data by flying around the icy satellites, a new challenge awaits. Europa Lander, still in the concept stage, would land on Europa’s surface to directly sample it and search for potential traces of life, so-called biosignatures, 10 centimetres below the icy surface. Stored at such depth, traces of life with complex chemistry would be protected from the damaging high-energy electron bombardment in the radiation belts around Jupiter. Once we can land on the surface of an icy satellite, we could design an instrument capable of penetrating the crust and reaching the ocean tens of kilometres below. On Earth we execute such exploratory ventures by building drilling rigs to an enormous depth, then adjusting the drill pipe’s length to the hole’s growing depth. Unfortunately, the mass and volume of these instruments would be prohibitive aboard spacecraft bound for icy moons. In addition, you’d still need a human to operate those instruments. Imagine the challenge of transporting elements and building a 10-storey construction capable of drilling through the icy crust of Europa. We definitely and desperately need an alternative to succeed, and that is where cryobots, which can travel aboard spacecraft, come in. 
The concept of cryobots to penetrate the ice shell and investigate Europa’s hidden ocean has been developed through NASA’s Scientific Exploration Subsurface Access Mechanism for Europa (SESAME), which aims to open the treasure chest of life hidden under the satellite’s icy surface. The cryobots under consideration are robot probes with an array of telling names: SLUSH (from Honeybee Robotics), VERNE (from Georgia Tech), and the late VALKYRIE, now SPINDLE (from Stone Aerospace). These small explorers, working together, will melt the icy crust and dig through the surface using a variety of thermal sources along with drilling and cutting instruments and waterjet assembly. Make way! Following on the heels of the cryobots, a swarm of wedge-shaped mini-robots called SWIM (Sensing With Independent Micro-swimmers), a dozen centimetres in length, will patiently wait to swim through the subsurface ocean and explore. Meanwhile, BRUIE (Buoyant Rover for Under-Ice Exploration), another type of device, will rise to the undersurface of the ice crust, wandering ‘upside down’ to search for life at the boundary of water and ice. When these systems are finally deployed, they should be able to provide highly anticipated direct evidence of extraterrestrial life, at last. Now, it’s time to connect the dots. The panspermia theory suggests that life on Earth might originate from an extraterrestrial source. At least for me, panspermia is strongly connected to our neverending search for the origin of humankind. We want to know where we came from. We want to feel how deeply our roots penetrate across space and in time. Evolutionary scientists search for human roots in geologic time, often looking to the Miocene, a period marked by the disappearing remains of an ancient ocean called Tethys. It was an epoch where the peaks of the transcontinental Alpine-Himalayan mountain chain rose and, toward the end, in a continuously cooling global climate, where the Antarctic ice sheet loomed larger. If we look closer, we may find a creature following a hidden path in the trees of a tropical-subtropical deciduous forest, heading toward a more open area with the hope of fruit- and insect-containing lunch on the way. Some say it looked like an ancient ape, and others thought it was more like a modern gorilla or chimpanzee. Still, both agree that its appearance around 8 to 6 million years ago marks a critical moment in biological evolution: the dawn of humankind. The creature is called the Last Common Ancestor or Homo-Pan, indicating the spot on the evolutionary tree of life where the branches of the hominin and the pan (ancestors of the chimpanzee and the bonobo) lineages diverged. Going back in time, all converging branches of the phylogenetic tree led to one common ancestor of all cells – referred to as Last Universal Common Ancestor (LUCA). Even though some scientific studies separate the first living cell and LUCA, the latter is often referred to as the evolutionary intermediate connecting early Earth’s abiotic world with the period when microbial life, preserved in rocks as fossils, appeared on the surface of Earth around 3.5 billion years ago. In the quest to find our most ancient roots, this milestone is the one we can say out loud, without being labelled a heretic or a joke. 
But did LUCA emerge from the molecules formed by more and more complex networks of chemical reactions in the abiotic world here on Earth – or did LUCA arrive from space? It’s possible that molecules and compounds resulting from chemical evolution, the ones responsible for creating the building blocks of life, may appear not only on Earth but on Mars and the icy moons. According to recent experiments aboard the International Space Station, such molecules can survive the vicissitudes of space. Microbes, along with complex organic molecules (the precursors of the first living cells), would be able to hitch a ride on an asteroid and travel through the vast nothingness of space to arrive somewhere else intact. With any luck, those newcomers would survive more than a bumpy landing on a new planet; arriving at their new home, they would embark upon biological evolution anew. What if some of those asteroids are artificial, made by an intelligent civilisation and sent through interstellar space to fertilise other worlds? From UFO buffs to SETI enthusiasts to some scientists, the notion of directed panspermia is still very much in play. One potential source could be the oblong-shaped asteroid ‘Oumuamua (Hawaiian for ‘scout’), an interstellar visitor that the Harvard physicist Avi Loeb contends might have been propelled to our solar system like a light sail. (Most of his colleagues disagree.) Loeb and his team do not stop at ‘Oumuamua. Their search for an interstellar visitor also points to CNEOS 20140108, an interstellar meteor that fell into the South Pacific and is now referred to by some folks as Interstellar Meteor 1. As recently as this summer, Loeb’s team used a powerful magnet to dredge the bottom of the ocean around what they think is the impact location, analysing some 50 microscopic iron spherules they suspect could be the meteor’s remains. If the spherules turn out to be artificial (of course, a big if), it would support the directed panspermia theory and revolutionise our knowledge of the Universe.
Tiny meteoritic spherules from the most likely path of Interstellar Meteor 1. Courtesy Avi Loeb, Harvard University/Galileo Project
But you don’t need an alien to send life through space, perhaps from a theoretical icy moon of Methuselah. If we can prove the incubation of life on the icy moons of Jupiter, that would offer a natural means of seeding LUCA through space. Given evidence that such forms could survive this kind of deep-space journey, the idea of a Last Intergalactic Universal Common Ancestor may not be far-fetched. This Essay was made possible through the support of a grant to Aeon+Psyche from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Foundation. Funders to Aeon+Psyche are not involved in editorial decision-making.
Balazs Bradak
https://aeon.co//essays/panspermia-theory-dives-under-other-worlds-ocean-ice-crust
https://images.aeonmedia…y=75&format=auto
Religion
The Sunni movement of Salafism was born at the beginning of the 20th century, with the goal of modelling life on the 7th century
There are tens of millions of Salafis today, male and female, and their ranks stretch from the Middle East and South Asia to western Europe and the United States. Members of this Sunni Islamic movement are bound by shared principles, including a literalist theological approach regarding God’s nature and a commitment to deriving all law from Islam’s holy text (the Quran) and the authoritative record of the Prophet Muhammad’s life (the Sunna). These shared principles manifest not merely in a comprehensive project of piety that defines these men and women’s lives, but also in a set of daily practices that visually distinguish Salafis not only from non-Muslims but also from fellow Muslims. If one knows what to look for, one can identify a Salafi by sight. Salafi facial hair comprises a trimmed moustache and a beard that is a fist long, at minimum; Salafis dress in pants or robes that are shortened to the ankle; and Salafi social spaces are defined by separation of men and women. If one were to venture into a mosque, one could identify (some) Salafis by the distinctive practice of praying in shoes, which stands in contrast to the practice of the vast majority of Muslims, who pray barefoot (and have done so for centuries). Collectively, these practices enable Salafis to create a cultural boundary between themselves and other Muslims and non-Muslims with whom they disagree on core issues. It is easy to dismiss bodily practices such as these as secondary or unimportant. But for Salafis, they are not merely powerful symbols of belonging but a key means of orienting themselves to God in daily life. Salafis, inspired by the theological vision of Ibn Taymiyya, a 13th-century scholar from Damascus in Syria, seek to live lives in which every action is oriented towards worship of God. The power of this vision derives in part from its challenge: to even aspire to live life in such a manner, one must be constantly on guard, ever vigilant not merely to influences from outside the Salafi community but also to one’s material and bodily desires. For some Salafis, this theological approach can also create practical problems when living under secular states. For example, does payment of taxes to a state that doesn’t rule by Islamic law – whether in the Middle East and South Asia or the West – constitute a rejection of God’s sovereignty over material wealth? Can men serve in the armed forces of a country that requires a particular dress code and no facial hair? And what should one do in the case that conscription necessitates service in the armed forces? These are but a few of the questions that Salafis face in seeking to live their theological commitments. This description is likely not what you expected from an article on Salafism. When the term ‘Salafi’ is used, many in the US think of Osama bin Laden, the founder and late leader of al-Qaeda. And it is quite possible that many of those who think of Bin Laden when they hear this term also know a significant amount regarding his goals (global Jihad), his methods (attacks on civilians and military alike), his disdain for local Arab regimes (particularly Saudi Arabia) and even his death in May 2011 in a compound in Abbottabad, Pakistan. And, as not only al-Qaeda but also ISIS have challenged the international order over the past two decades, the common answer to explain this trajectory has been that these groups model themselves after the first three generations of the Muslim community, known as the ‘Pious Ancestors’ (al-Salaf al-Salih). 
These associations are not wrong but they are incomplete. Bin Laden was certainly a prominent representative of Salafi-Jihadism, but Salafi-Jihadism is a minority within the broader Salafi movement. Similarly, it is correct that Salafis take the first three generations of the Muslim community as a model yet, in doing so, they are joined by billions of Muslims across the world of vastly different views. And while it would be accurate to note that Salafis pattern themselves after the Prophet Muhammad’s journey – first his preaching in Mecca and then a full-blown project fusing religion and politics in Medina – an aspiration to reproduce this model is not the same as reproducing it. Put differently, while Salafis take inspiration from the 7th century, they emerged in the 20th century. Instead of assuming that Salafism is a reproduction of the 7th century and a violent one at that, we might begin in the late 19th and early 20th centuries. In the shadow of French and British colonial occupation, Muslims from Cairo to Calcutta debated a basic question: who in the Muslim community should have authority? For a millennium, the answer to that question lay with the traditional schools of Islamic jurisprudence, known as the madhhabs. Scholars within these schools – which were not brick-and-mortar buildings but rather intellectual and social networks that knit together scholars across vast geographic distances – had long served as crucial mediators. They not only bridged the gap between lay Muslims and divine revelation and the Prophetic model, but also between the ruler and ruled. In the face of unprecedented political, social and economic challenges introduced by colonialism, however, Muslim reformers challenged the centrality of madhhab scholars. These reformers were otherwise a highly diverse group who had little in common except shared opposition to the madhhab system. They came from Iran (Jamal al-Din al-Afghani), Egypt (Muhammad ʿAbduh), Syria (Jamal al-Din al-Qasimi and Muhammad Rashid Rida), and even Crimea (Ismaʿil bey Gaspıralı). Some such as ʿAbduh, Gaspıralı and Al-Qasimi sought change through education, while others such as Al-Afghani and Rida turned to Islam as a powerful source of solidarity to mobilise Muslims against colonial occupation. All, however, embraced the power of the written word to speak to Muslim audiences within and beyond their country of origin and, in doing so, modelled an understanding of a global Islamic community that depended on scholars and laymen alike. All were also concerned with an interlocking set of questions: what did it mean to preserve the Islamic tradition in the shadow of modernity? How could Muslims compete with Europe’s intellectual, political and economic might that manifested itself in colonial rule over Muslim-majority lands? Could the Ottoman Caliphate, which had stood for 500 years, be saved and, in doing so, offer a counterweight to colonial interference? To answer these questions, all proposed returning to Islam’s two ‘pure’ sources: the Quran and the Sunna. The anti-madhhab reformers of the turn of the 20th century were hardly the first Muslim thinkers to reckon with the perceived drifting of Muslims from their foundational model of 7th-century Medina. Most notably, Muhammad Ibn ʿAbd al-Wahhab and Muhammad al-Shawkani, two leading reformers of 18th- and 19th-century Arabia, sought to return Muslims to their two core sources. But Ibn ʿAbd al-Wahhab and Al-Shawkani undertook reform without need to consider the challenge of European colonialism. 
Instead, they placed themselves within a longer history of revival and reform. These two reformers, however, modelled a powerful rejection of the status quo through religious purity from which Salafis would draw inspiration. Reformers from the turn of the 20th century, such as ʿAbduh and Rida, were leading religious authorities of their day, but they were not Salafis. They were united by an objection to the centuries-old system of Islamic law by which scholars needed to claim affiliation with a given madhhab and follow the prior authoritative rulings of that school. According to these reformers, the problem with the madhhab system was not simply that it was intellectually stagnant but that it was unsuited to handle the political, economic and cultural challenges faced by Muslims at this time. Their rejection of the madhhab system’s intellectual methods was also a political manoeuvre: Islamic reformists such as ʿAbduh, Rida and others sought to undermine their competitors’ base of political and social influence so that they could take their place. In the 1920s and ’30s, Salafism emerged as part of the Islamic reformist camp. As they did so, Salafis brought together two previously independent positions: a literalist approach to God’s nature and attributes preserved in the Hanbali school, one of the four Sunni schools of law; and the rejection of the madhhabs’ authoritative positions in favour of direct recourse to the Quran and the Sunna. While theology might appear to be an unlikely source of distinction for a world-making project, Salafis believe that their theological approach is the only valid way to worship God. The Salafi theological and legal position was not merely a view of theological or legal truth but also a claim to authority and authenticity. In the shadow of the radical changes of the 19th and 20th centuries – a period during which Muslims had lost control over their lands, in which the Ottoman Caliphate had fallen, and in which European colonial penetration had reshaped the political and economic backbone of the Middle East and South Asia – they argued that Muslims had lost their connection to Islam’s founding model. While such an accusation highlighted the rise of secular nationalism in Muslim-majority lands, it also applied to other pious Muslims who prized Islam’s golden period, such as the premier Islamist movement of the time, the Muslim Brotherhood. In short, the Salafi claim was (and is) that their commitment to modelling themselves after the early Muslim community of 7th-century Arabia makes them the most authentic and thus legitimate claimants to Islamic leadership. Just as importantly, the repeated claim that Salafis were subject only to God’s authority in countries whose leaders and populations were overwhelmingly Muslim was a clear rebuke of the authority of secular nationalist states that had arisen following the end of colonial rule. It is easy to think of Salafism as a throwback to the 7th century. And indeed, this is a core aspect of Salafism’s appeal: a return to the basics, pure and uncorrupted. But aspiring to return to Islam’s founding moment is very different from actually doing so. And, as Salafis sought to replicate this ideal model, they were deeply shaped by the challenges of European rule and Western cultural influence, on the one hand, and the competing ambitions of fellow Muslims to chart a new direction, on the other.
As Salafism emerged in the 1930s, paths to political power were blocked and the battle was to be found in society, with secular-nationalists and Islamists competing to offer distinct visions of the past and future alike. Bodily practices were important because they were a key site at which Muslim men and women laid claim to politics in daily life. In my recent book, In the Shade of the Sunna: Salafi Piety in the 20th-Century Middle East (2022), I charted the history of Salafi efforts to reshape society with a particular focus on Egypt. Between roughly 1940 and 1950, Egyptian Salafis came to focus on a largely forgotten practice: praying in shoes. While Muslims had once prayed in shoes – and the record of the Sunna suggests that the Prophet Muhammad and his Companions did so – this practice had declined with Islam’s emergence as an urban civilisation (and the establishment of ornate mosques with plush carpets). Here, for Salafis, was an opportunity: a practice with clear precedent in Islam’s core sources that would distinguish Salafis from other Muslims in mosques. The fall of praying in shoes, however, was as rapid as its rise. In the 1950s, the costs of religious distinctiveness grew under the secular authoritarianism of Egypt’s president Gamal Abdel Nasser and the latter’s crackdown on the Salafis’ Islamist competitor, the Muslim Brotherhood. Instead of insisting dogmatically on this project irrespective of the costs, Salafis quickly marginalised it and did not return to discussing this style of prayer for another four decades. The emergence of Salafi facial hair in the 1980s was no more straightforward. Muslim men had long worn substantial beards as statements of religious piety and masculinity alike. What Salafis needed, then, was a model of facial hair that could distinguish them from other Muslims and make a plausible claim to be derived from the Quran and the Sunna. The eventual result could certainly cite these sources, particularly the Prophet Muhammad’s command to Muslim men to grow a beard and trim the moustache. Just as importantly, Salafis needed a practice that would differentiate them from competing Islamic institutions and movements – many of whose members wore beards – as well as from their secular-nationalist competitors who paired a shaved face with a moustache. This project pivoted on a seemingly secondary source: a record of the actions of the Prophet Muhammad’s companion, ʿAbd Allah ibn ʿUmar, the son of the second Caliph, ʿUmar ibn al-Khattab. What made the Salafi choice to cite Ibn ʿUmar’s trimming his beard to a minimum length of a fist so striking, however, was that this practice was performed in a specific context: engagement in the Hajj or ʿUmra pilgrimages to Mecca. And, just as strikingly, the Muslim Brotherhood had beaten the Salafis to the punch on this hadith report: in the early 1940s, a leading figure in the Muslim Brotherhood, Sayyid Sabiq, had cited the Ibn ʿUmar report to justify the closely trimmed beard sported by the organisation’s founder, Hassan al-Banna. Put differently, this was a case in which the same hadith report was cited to support radically different positions. Just as importantly, a core aspect of Salafism’s appeal is that the Quran and the Sunna are essentially self-explanatory and that Salafis can avoid being corrupted by un-Islamic influences through exclusive reliance on these sources.
The fact that the history of the emergence of distinctly Salafi facial hair stretched across four decades suggests that this model was far from self-explanatory. At other times, the practice in question was decidedly novel. While Salafis cited the Islamic society of 7th-century Medina as a model, they could not point to any proof texts from the Quran or the Sunna that prohibited gender mixing or required gender segregation. Previously, when it came to the question of gender relations, scholars of Islamic law had been primarily concerned with preventing extramarital sex. Indeed, to the extent that gender segregation existed in Islamic history, it was difficult to point to examples beyond the mosque, where men and women had long prayed in separate sections. Thus, to make the case, Salafis in the 1970s had to discover a new interpretation for an old source, specifically the Quranic prohibition against women acting in a manner designed to draw sexual attention (known as flaunting themselves, or tabarruj). The prohibition against flaunting was clear and derived from the 33rd verse of the 33rd chapter of the Quran, which commanded women, in part, ‘do not flaunt yourselves’ (wa la tabarrajna). Yet, to justify gender segregation, a leading Saudi Salafi scholar, ʿAbd al-ʿAziz ibn Baz (Bin Baz for short), argued that the prohibition against women flaunting was actually a prohibition against men and women mixing. Suddenly, flaunting was not a prohibition against immodest female conduct but rather a thoroughgoing call to gender segregation. Here, too, political competition beckoned: at this time, the Muslim Brotherhood, along with its allies in the Islamic student movement, had begun to offer gender-segregated seating on selected public buses and had lobbied for gender segregation in several Egyptian universities. In response to their competitors’ attempt to seize the mantle of public piety, Salafis argued for a stricter and more expansive vision of gender relations. It is easy to dismiss such bodily practices as ‘secondary’ matters but to do so is to miss how and why Salafis have emerged as powerful shapers of the societies in which they live. In the absence of political power, Salafis seek to shape the societies from which they have emerged and to do so by visibly modelling a commitment to the Prophet Muhammad’s example. Indeed, from Egypt to Syria to Yemen to Saudi Arabia to the Indian subcontinent to Europe and the US, one can find Salafis today adhering to these theological and legal approaches and bodily practices. While Salafis can be found across the Middle East, South Asia, western Europe and the US, it is difficult to come by firm numbers of adherents. This is partially a function of how Salafis understand themselves: most reject the formation of political parties, let alone transnational Jihadist groups, and, unlike their Islamist counterparts, do not generally pledge allegiance to a particular organisation. Evidence of popular support for Salafism, however, can sometimes be found when Islamist or so-called Politico Salafis run for office. In Egypt, three Salafi groups, most prominently the Nour party, formed a bloc in the 2011-12 parliamentary elections, receiving more than 7.5 million votes, which represented 27.8 per cent of all votes. Generally, though, the primary community for Salafi men and women is local and informal: the scholars and teachers, male and female, with whom they study at local mosques.
Yet, Salafism is not exclusively local: particular Salafi scholars, including deceased figures such as Bin Baz, Muhammad ibn Salih al-ʿUthaymin and Muhammad Nasir al-Din al-Albani, exercised influence on adherents across the world and continue to do so through their writings. The contemporary Salafi scene is defined by three main contingents: Quietists, Islamists (aka Politicos) and Jihadis. The Quietists believe in obedience to the existing ruler and shy away from any public statements that could be interpreted as criticism. Instead, they offer advice (nasiha) to the ruler in private while pursuing grassroots reform of Muslims’ theological beliefs and ritual life, a focus that Al-Albani termed ‘Purification and Education’. To the extent that Quietist scholars comment on the status quo, they do so exclusively from a ‘religious’ perspective, avoiding any indictments of the political elite. The approach of Quietist scholars to politics, however, should not be understood as apolitical, but rather as a principled view of the dangers of political disorder drawn from the Sunni political tradition. Quietist Salafis avoid political competition and criticism not merely because it is unwise but also because it exaggerates the capacity of the state to rear pious Muslims while necessitating compromise with non-Salafi Muslims such as Islamists of the Muslim Brotherhood. It is only through uncompromising and principled reform of society that a properly Islamic state can arise in the future, and shortcuts to such a state will inevitably be plagued by corruption that renders this project defective. By contrast, those Salafis who belong to the Islamist or Politico camp meld a commitment to Salafi theology, law and social practice with a vision of religiopolitical change through explicit critique of the status quo and electoral competition. While they are aware of the Quietist concern with compromise, they reject it in favour of the opportunities offered by state power and an urgent desire to change the status quo. This understanding of religiopolitical change, in turn, is a legacy of the Muslim Brotherhood, particularly the ideas of its founder, Al-Banna. In Saudi Arabia, such Salafis emerged in the 1960s and ’70s under what is known as the ‘Awakening’ (Sahwa) movement, though the Awakening’s political prospects have been severely limited by the restrictions of the Saudi political system. In Egypt, on the other hand, this camp arose in earnest post-2011 to take advantage of the opportunities offered by the transition that followed the fall of president Hosni Mubarak. Most prominently, a leading Salafi preaching organisation in the coastal city of Alexandria, the Salafi Call (al-Daʿwa al-Salafiyya), formed the Nour political party, which contested parliamentary seats and captured the second largest bloc of seats after the Muslim Brotherhood’s Freedom and Justice Party. Nour thus retains Salafi views of theology and law, but shares the Muslim Brotherhood’s goal of establishing an Islamic state. Finally, the Jihadi camp is most prominently represented by groups such as Al-Qaeda and ISIS. Such groups meld Salafi theological and legal positions with a set of political concepts inspired by Sayyid Qutb in the 20th century on the one hand, and the 18th-century Arabian reformer Ibn ʿAbd al-Wahhab on the other.
In particular, this movement is distinguished by an emphasis on God’s sovereignty (hakimiyya) and on declaring other Muslims to be infidels (takfir). Unlike the mainstream Sunni position that only acts of ‘flagrant disbelief’ (kufr bawwah), such as questioning God’s essential oneness, justify such excommunication, Salafi-Jihadis take the view that the mere act of living under the authority of a secular state and paying taxes to it renders a professing Muslim an infidel. The Jihadi movement, however, was not always Salafi: in the 1980s, the vast majority of the fighters (mujahidin) in Afghanistan hailed from varied theological and legal approaches, a situation that would begin to change only in the early 1990s. Put differently, not only are the vast majority of Salafis not Jihadis, but the Jihadi movement also emerged independently of the Salafi movement. By the early 1990s, however, the Jihadi movement had melded Salafism’s theological and legal approaches with an Islamist-inspired revolutionary political ideology and the purist inspiration of Ibn ʿAbd al-Wahhab. Over the past two decades, Salafism has emerged and persisted as a key question of US foreign policy. First with Al-Qaeda’s attacks on the US on 11 September 2001 and then with ISIS’s announcement of a Caliphate in parts of Iraq and Syria in 2014, Salafi-Jihadis became the most visible representatives of this movement. That Salafi-Jihadis would stake this claim is unsurprising, as they benefit from Salafism’s claim to authenticity. Islamists within the Salafi camp, too, have emerged as prominent players in the post-2011 Middle East. Yet, as a matter of both past and present, these two segments of the Salafi movement are a distinct minority compared with the millions of Salafi men and women who adhere to Quietism and live not only in the Middle East and South Asia but also in the US and western Europe. Given this history, how should we understand Salafism today? Opponents of Salafism of varied political persuasions often frame the movement as ‘backwards’ or as seeking a ‘return to the 7th century’. While the former description partakes in a longstanding tradition of purist (often non-Western) religious movements being assessed negatively along a teleological vision of progress, the latter reproduces the Salafis’ own claim to authenticity, albeit in pejorative fashion. Salafis cannot, any more than any other 20th- or 21st-century movement, return to the past. Instead, like any other movement, they are firmly rooted in the questions and concerns of the present, and engage selectively with past traditions in search of answers to these questions and concerns. In this regard, Salafis are fundamentally similar to both religious and non-religious movements today. Like their theologically traditionalist and socially conservative counterparts in Judaism and Christianity, they seek to preserve an imagined past in which religion defined the goalposts of social life, in which their religious tradition was dominant, and in which the knowledge about the world that mattered most was that of the scholars. Salafis, however, also have a great deal in common with their secular nationalist competitors, who themselves appeal to a mythological national history, shorn of complication, ambiguity and division, and defined by a unifying commitment to a given purpose. In conclusion, to be a Salafi is to seek to live, in the 21st century, a model that first emerged in the 7th century.
Yet, in doing so, Salafis are aspiring to recreate a golden past, an idealistic aspiration shared among religious and non-religious movements alike. Salafi claims to fundamental difference have persisted as long as they have because the movement’s proponents and opponents share a basic ahistorical view: that Salafism replicates the 7th century. It is only by acknowledging the impossibility of such a claim that we can begin to have a serious public conversation about Salafism and public life in both East and West today.
Aaron Rock-Singer
https://aeon.co//essays/a-history-of-the-modern-islamic-movement-that-is-salafism
https://images.aeonmedia…y=75&format=auto
Mental health
Training individuals to support one another through difficult times is a profound step forward in our mental health crisis
Diksha hasn’t been feeling like herself lately. For three weeks, she’s been unable to follow her daily routine, hasn’t felt like eating or playing with her children, and no longer sits at her bangle shop in the village market. The violence she endures at home has become more frequent, and Diksha wonders if that’s the reason for her low mood. Sensing something is wrong, her neighbour Radha asks Diksha if everything is all right. Diksha shares that she is dealing with domestic abuse as well as financial difficulties affecting her shop. Radha empathises and says she’s been through a similar situation in the past. She also explains her role as a village ‘champion’, trained to provide emotional support, and offers to talk to Diksha over a few sessions, at a time and place of her convenience. After completing these sessions, Diksha feels so much better and is assured that she is not alone. This hopeful vignette is set in a village in Mehsana, an economically disadvantaged district in the western state of Gujarat in India where, as per national estimates, approximately 4 to 8 per cent of the residents are dealing with mental health problems such as depression and anxiety. As in all rural districts in India, mental health services and professionals are inadequate. Diksha’s story is fictional, but it’s one that draws upon several real-life stories we have encountered in our work researching and delivering a vital mental health intervention known as peer support. In the example above, Diksha is clearly in a distressed state of mind. Thankfully, her neighbour Radha, who is trained as a peer supporter (or champion, in the local context), identifies that something is amiss and reaches out. Diksha is familiar with Radha, trusts her, and hence feels comfortable in expressing her feelings to her. During their sessions together, Radha uses lay counselling skills, such as active listening (a form of reflective listening used to empathise with a person experiencing distress), to understand Diksha’s thoughts and feelings. Radha also uses lay counselling techniques, such as problem-solving, which have proven to be effective in helping individuals find solutions to cope with their distress. Diksha is not alone in facing stressors of this nature. Every one of us will face difficult situations at some point in our lives. A few of us might wonder if what we’re going through is serious enough to reach out to a professional therapist or counsellor. Others may feel reluctant to open up about their feelings due to the stigma and shame associated with expressing one’s emotional vulnerability. It’s also possible that, the last time some of us reached out to a loved one, their advice was not helpful for our situation or they weren’t equipped with the skills to emotionally support us in our most distressed moments. Finally, those of us who want to approach a mental health professional such as a psychiatrist or a therapist may be confronted by other challenges, such as finding a professional who is available and affordable, and who can speak in a language that resonates with us and makes us feel comfortable. These hurdles are particularly prevalent in areas where there is a lack of mental health professionals or in societies where reaching out for emotional support is shamed as a sign of weakness (in India, where we work, between 70 and 92 per cent of people with a diagnosable mental health condition don’t have access to any form of mental health care).
In such contexts, a person might not reach out for support at all, or instead rely on networks of family and friends rather than look for a mental health professional. In our work, we address these realities by developing evidence-based and innovative mental health and suicide-prevention programmes based on peer support and lay counselling. These programmes are usually delivered by community members, persons with lived experience, caregivers, youth, health professionals or other groups using skills that they’ve developed through the training and mentorship our programmes provide. In the context of mental health, peer support is a process through which people who share similar lived experiences or social backgrounds support others experiencing mental health problems or emotional distress. One of the earliest recorded instances within psychiatric settings can be traced to the late 18th century at the Bicêtre Hospital in Paris, where recovered mental health patients were employed as staff members to take care of patients who were in treatment. Other early manifestations of peer support emerged as self-help groups providing informal support in the community. For instance, in 1845, a group of men who had experienced treatment violations while in an asylum set up the Alleged Lunatics’ Friend Society in England, which fought for protection from ‘unjust confinement … [and] from cruel and improper treatment’. Another early example was Alcoholics Anonymous (AA), set up in 1935 in Ohio in the United States to support people struggling with alcohol addiction. AA is now present across the world and is widely regarded as one of the most successful peer-support groups specifically for addiction recovery. However, it wasn’t until the 1970s that peer support as a formalised and systemic approach emerged out of the service-user/survivor movement, which, along with the anti-psychiatry movement, challenged mainstream and formal mental health services driven by the psychiatric model. This was alongside other social movements for civil rights, women’s rights, LGBTQ+ rights and disability rights during the 1960s and ’70s, which also influenced the development of peer support in the context of emerging discourses on human rights and resistance against oppressive systems. In particular, peer support began to embed itself in the ‘recovery movement’, which foregrounded the voices and agency of persons with lived experiences and service-users to shape and pursue their own idea of recovery, hope and functioning, without reducing their identities to their diagnostic labels and symptoms. To understand the basic principles of peer support, you might refer to experiences in your own life. Most of us, at some point, have provided emotional support to a family member, loved one or acquaintance in a familiar setting. In such moments, we might have found ourselves drawing upon our own lived experiences, and sharing insights on how we coped in those moments – both of which are key elements of peer support. A couple of years ago, one of us (AK) received, via an LGBTQ+ network, an urgent phone call from a young man we’ll call Ajay. Ajay identified as part of the LGBTQ+ community; he was unemployed and hence dependent on his family, but said his family members would subject him to physical and emotional abuse.
On that particular day, traumatised by their violence, Ajay was confronted by intense thoughts of ending his life, and immediately reached out for help since he was unable to cope with the distress by himself. In his role as a peer supporter, AK immediately provided Ajay with emotional support by creating space for him to share his distressed feelings and by acknowledging the violence he had experienced, while also reinforcing that he was not responsible for his family’s violent actions. After assessing the intensity of Ajay’s suicidal thoughts, AK provided him with reassurance and hope by helping him identify his internal strengths and potential solutions to deal with his situation. AK also gave Ajay details of referral contacts for legal and financial support to help reduce his dependence on his family and prevent any further instances of violence. AK continued to provide emotional support and followed up with Ajay regularly until he was no longer at risk of suicide and in a better state of mind. In this way, Ajay overcame his suicidal thoughts and was motivated to find ways to overcome his difficult situation. Similar instances of peer support abound in our everyday lives. Take, for example, a student volunteer providing psychosocial support to a classmate who belongs to an oppressed caste community and is confronting institutional caste-based discrimination; a member of an LGBTQ+ collective providing affirmative support to a person coming to terms with their sexuality; a survivor of domestic violence providing crisis support to a young woman who has been physically assaulted by her husband; a volunteer providing referral resources over a chat-based app to an adolescent having thoughts of ending their life; a peer supporter helping a person with schizophrenia admitted to hospital to develop their own recovery plan; or a group of individuals listening to and sharing each other’s journeys of recovering from substance use addiction. These are examples of the myriad ways in which peer support can be provided across settings ranging from community spaces, educational institutions, psychiatric hospitals and rehabilitation homes to online support groups, phone-based helplines or even chat-based apps. Peer support also comes in many different forms, ranging from emotional support, problem-solving and crisis support to providing information resources, advice and specific mental health services within hospital settings. The term ‘peer’ is used to describe individuals providing such support to others. Peers are most often laypersons who belong to the same age groups, specific communities or identities, or share lived experiences of distress, mental health problems or oppression. Peers can also be workplace or academic colleagues, community volunteers, or individuals recovering from psychiatric disorders in hospital or community settings. Peer support draws upon a fundamental human instinct to relate and connect with the other’s condition. As human beings, we are imbued with an innate potential to not only listen to others’ stories but, through such engaged listening, support them in coping with and overcoming emotional and psychological distress. When someone listens to us in this way, with empathy and respect, we feel recognised. A sense of recognition not only validates and affirms our emotional pain, but also helps foster hope and resilience to cope with adversity.
For instance, going back to Ajay’s story, we find that the peer supporter’s act of recognising the wrong being done to Ajay by his family, and also acknowledging his internal strengths to overcome his difficult situation, provided him with the motivation not only to resist acting on his suicidal thoughts, but also to recognise his own self-worth and imagine a better future for himself, independent of his family. As in the case of Ajay, reaching out to a peer supporter who has a similar identity, lived experience or sensitivity to understand the other’s reality can not only make it easier for the distressed person to overcome stigma to seek support, but also enable the peer supporter to emotionally ‘hold’ the person by making them feel recognised and acknowledged. Thus, peer support embodies the transformative potential of human relationships to support, empower and heal individuals experiencing distress by mutually sharing lived experiences, empathetic listening or simply validating the other’s emotional pain. In cultures and societies where reaching out for support or providing emotional care is often seen as an act of weakness, or as devalued emotionality, peer support is a radical possibility, as it blurs the boundary between self and other through a process of attuning one’s own lived experiences to the simultaneous struggle of the other. Through this alchemy, peer support facilitates a space wherein two individuals, familiar in their shared reality yet strangers to each other’s lives, can nurture a bond, however transient, to transform the stigma that envelops them and their lived experiences of distress into an emergence of mutual hope, connectedness and recovery. A question that is often posed is whether peer support is comparable to professional mental health services. Are the two approaches effectively the same, substitutes of each other, complementary or completely opposed? An answer can be found in the seminal paper ‘Peer Support: A Theoretical Perspective’ (2001) by Shery Mead, David Hilton and Laurie Curtis, where they define peer support as ‘a system of giving and receiving help founded on key principles of respect, shared responsibility, and mutual agreement of what is helpful.’ According to the authors, one of peer support’s defining features is that it is not based on ‘psychiatric models and diagnostic criteria’. Instead, peer support is about ‘understanding another’s situation empathically through the shared experience of emotional and psychological pain.’ In other words, the peer support model’s defining feature is that it is conceptually and practically separate from formal mental health care, which is conventionally provided by qualified clinical professionals. Peer support departs from the traditional relationship of the professional and the patient. This relationship is inherently mediated by an institutionalised power dynamic that determines what kind of support is provided to a person in distress. In this dynamic, the professional as an expert is presumed to ‘know better’, while the patient is expected to be a ‘passive recipient’ of the professional’s assessment of what is in the ‘best interests’ of the patient. Peer support inverts this very dynamic and replaces it with a relationship of two equally positioned individuals, founded on mutual respect, reciprocity and attunement of their lived experiences. Peer support draws especially from the power of sharing lived experiences.
It offers a relatable and lived exemplar of the unique experience of living and coping with multiple sources of stress in one’s life – something that formal mental health professionals embedded in the expert-patient dyad are often unable to provide therapeutically. In addition, peer support provides a cathartic space for refuge that transcends the constraints of expert-delivered formal services in favour of a more equitable relationship wherein, through the mutual sharing of one’s pain or life journeys, one can identify and feel connected with the other’s experiences. The relational nature of emotional adversity requires reparative relationships to mend the psychological damage caused by those very fractured relationships (personal and social) in the past and present. Thus, for the person receiving support, a peer may also serve as an inspirational figure; to identify with their journey can be an antidote to the loneliness and isolation one experiences while in distress. To be told by someone in whose narrative we identify parts of ourselves, ‘I understand what you’re going through. Things can get better; they did for me’, can be deeply affirming and liberating. A peer’s ‘experiential knowledge’ can therefore serve as an inspirational model for living and coping with adversity, and a boosting reassurance that ‘this is possible for me also’. This sentiment can be particularly therapeutic for individuals and communities that are experiencing identity-based discrimination. We’ve seen this in our own work co-designing peer-support programmes with marginalised youth experiencing discrimination based on gender, sexuality or caste. We’ve also seen the power of reciprocity and resonance in our peer-support work within formal psychiatric hospital settings. Within this context, to meet a peer supporter who has also received institutional care in the past, but who now, with the right resources and support network, is able to live an independent life on their own terms, provides hope and reassurance by modelling through example. For instance, in one of the hospitals where we’ve worked in the past, the story of a peer supporter (with a severe mental health condition) whose family invested in setting up a photocopying and printing workshop to help him earn a livelihood is often quoted to other patients as an example of how, in one’s recovery journey, despite ups and downs, there is hope to take back control over one’s life, even with severe mental health issues. By learning from this experiential knowledge, one can find new ways to understand oneself or reimagine one’s narrative, identify solutions for one’s problems, reaffirm faith and hope for oneself, or instil an enduring sense of self-confidence. For the peer supporter too, providing this support can be a satisfying and fulfilling experience, leading to an enhanced self-identity, and a deeper sense of meaning or purpose, especially as a witness to the impact of their support on the other. For peer supporters with mental health problems or psychiatric disorders, providing support can also bolster their own recovery journeys, build vital life skills and help forge an identity beyond their status as a ‘person with mental illness’. Peer support is also embedded in the philosophy of ‘recovery’, which challenges the expert-driven, biomedical notion of ‘cure’ most often imposed within psychiatric settings.
Instead, the recovery approach centres the person’s own agency to develop pathways to live a meaningful life and achieve their full potential, irrespective of their symptoms and without waiting for an enduring cure. The emerging scientific evidence on the benefits of peer support is promising. Studies have demonstrated benefits for both peer supporters and persons receiving peer support. In a 2013 systematic review of 11 randomised controlled trials involving close to 3,000 people in the US, the UK and Australia, researchers found that patients showed equivalent outcomes, in terms of quality of life, mental health symptoms, satisfaction and use of mental health services, when their care needs or group therapy were managed by a peer supporter, compared with a mental health professional. Furthermore, in 2011 another review of studies into peer support in professional mental health services suggested that peer support can help reduce admission rates and re-hospitalisations; increase the sense of empowerment and independence for the peer and the service user; and improve social functioning among service users. Although most published research on peer support has been conducted in the context of high-income Western countries, there are promising results emerging from low- and middle-income countries too. Take Atmiyata, for example – an innovative, rural community-led intervention in the aforementioned Mehsana district in Gujarat that is currently delivered by the organisation where we are based. Atmiyata focuses on identifying community volunteers at the village level and providing them with between 12 and 14 days of training, comprising role plays, group discussions and input sessions, to build their capacities as lay mental health care providers who deliver evidence-informed counselling to people in distress or with common mental disorders, such as depression and anxiety. These lay counsellors, or champions, reach out to their village community members, using their training to identify symptoms of mental health conditions as well as social circumstances that might be distressing (such as a newly married woman moving to a new village, recent unemployment, domestic violence or financial distress), and they use Atmiyata films to facilitate a conversation on mental health. In their first few interactions, the champions make an assessment of the person’s level of distress and subsequently use lay counselling skills and techniques over four to six interactions to provide emotional support and assist them in identifying and reaching their goals. In addition, the champions also facilitate access to social entitlements and provide practical support, such as sharing employment opportunities and making referrals to legal aid, shelter services and helplines they might need to access. At present, there are 1,000 such volunteers across 1,200 villages who are actively providing peer support through this intervention. A majority of the volunteers joined the intervention given their lived experience of distress and mental health problems. Using the unique position of the village-based volunteers, services can be provided to those in need in an acceptable manner, free of cost, at people’s doorsteps.
Our results show that people who interacted with the peers were twice as likely to recover from their symptoms of common mental disorders, as compared with a control group, and also showed sustained benefits eight months later. The World Health Organization has listed Atmiyata as one of the 25 good practices for community outreach mental health services around the world. Another promising example of peer support in a different setting is Outlive – a youth suicide-prevention programme – for which we are pilot-testing a youth peer-support programme that trains youth volunteers aged 18 to 24 from universities and community-based organisations to provide chat-based emotional support to young people in distress and having thoughts of ending their life. The youth volunteers are provided with a 30-hour online training in basic gatekeeping such as identifying warning signs of suicide, assessing suicide risk, providing emotional support, and making referrals to other helplines and organisations. This support is provided via text-based chat on an online app that we co-designed with young people and which maintains the anonymity of both the peer supporter and the young support-seeker throughout. Peer-support programmes for suicide prevention such as Outlive are premised on the principle that one doesn’t have to be a professional or expert to support an individual with suicidal thoughts and feelings. Rather, shared lived experience as community members or peers, complemented with lay skills to provide emotional support and manage suicide risk, can help prevent suicides, especially in the absence of immediate formal support. Despite these promising interventions, there are many challenges ahead – including a need to convince policymakers to look beyond conventional, professional-led models of mental health support. Unfortunately, there are currently very few peer-support programmes that are actually integrated in the delivery of public mental health services at scale covering large, diverse populations across different geographical areas. Nonetheless, we are hopeful. Peer support shows that change is possible, that mental health is not the exclusive domain of mental health ‘experts’, such as psychiatrists and psychologists; rather, mental health is universal and relational – we all have varied experiences of mental health, which are shaped by our relationships with each other and the world. Peer support has the potential to bring a paradigm shift: to reclaim the expertise of persons with lived experience to shape mental health care pathways and control their own recovery journeys.
Arjun Kapoor & Jasmine Kalha
https://aeon.co//essays/how-peer-support-can-help-with-the-mental-health-care-crisis
https://images.aeonmedia…y=75&format=auto
Thinkers and theories
Written for laymen, read by women and kings, Christian Wolff’s mathematical method made him a key Enlightenment philosopher
Writing primarily during the first half of the 18th century, Christian Wolff (1679-1754) dominated the intellectual landscape with his philosophical system, ‘Wolffianism’, to such an extent that during his own lifetime he became one of the most influential philosophers in all of Europe. He made substantial contributions to virtually every sub-field of philosophy (as well as to mathematics and natural science), shaped the way philosophy was practised in the German-speaking lands of Europe and beyond for decades if not centuries to come, and even had an influence on the German language itself. And yet, in the present day, Wolff is not a staple figure of the Western philosophical tradition. This is a tragedy, because Wolffianism had such an impact that a large and important piece of German philosophy’s history remains obscure unless we can come to better appreciate Wolff’s philosophy and the ideas to which it gave rise. Just how influential was Wolff’s philosophy? And how did his philosophical system eventually come to be pushed into the background? Wolff was born in Breslau, at the time the capital of the historical region of Silesia, now Wrocław in Poland. Raised in a devout Lutheran household, Wolff was profoundly shaped by his early educational environment: at Gymnasium, he was introduced to Cartesian philosophy and mathematics and, having been exposed to disputes between Lutherans and Catholics during his youth, he saw in mathematics the promise of ending these disagreements once and for all. As he writes in his autobiography: ‘I was eager to learn the mathematical method in order to endeavour to make theology incontrovertibly certain.’ The mathematical method became a cornerstone of Wolff’s philosophy, which he characterised as following three general principles: 1) beginning with clearly defined terms, and consistently using the same term for the same idea throughout one’s writings; 2) arguing strictly according to conceptual analysis and deductive inference; and 3) never using as a major premise in an argument a principle that had not been previously proven. This method embodies a faith in reason that became characteristic of the Enlightenment: by presenting his arguments in steps that ideally anyone could follow, Wolff’s hope was that all people who applied their minds to the same topic would come to agree, leading to less intellectual – and political – conflict and thus more human happiness. Wolff’s first publication of note, a thesis from 1703 that granted him permission to teach at the university level, sought to prove the fruitfulness of the mathematical method as applied to practical philosophy in particular. Entitled Universal Practical Philosophy, Written According to the Mathematical Method, this text founded the discipline of ‘universal practical philosophy’, a general, abstract branch of knowledge that treated foundational concepts in practical philosophy, such as obligation, the nature of ‘law’ and the highest good, which were presupposed and utilised in the more particular practical disciplines of ethics, politics and economics. It was Wolff’s first success: the examiner in Leipzig, Otto Mencke, was so impressed that he sent a copy to the German polymath Gottfried Wilhelm Leibniz, which led to a correspondence and productive friendship between the two figures that lasted until Leibniz’s death in 1716.
Mencke also hired Wolff to work for the Acta eruditorum, the first scientific journal for German-speaking intellectuals, for which Wolff went on to write approximately 40 papers and 485 book reviews; Wolff even learned foreign languages such as English to review works like John Locke’s Of the Conduct of the Understanding (1706). With Leibniz’s help, Wolff attained his first position as a professor of mathematics in 1707 at the newly founded Friedrichs University of Halle. In line with this role, Wolff initially lectured on mathematics, and his first subsequent publications were mathematical textbooks that grew out of his lecturing activity. He soon taught philosophy and physics as well, and it was the publications resulting from these lectures that brought him almost immediate fame both at home and abroad. He wrote a series of textbooks outlining his philosophical system in virtually every sub-discipline of philosophy, organised systematically in line with the mathematical method. After a foundational volume on logic, Wolff added volumes on metaphysics, ethics, politics, physics, teleology and physiology. The success of these works was astonishing. Wolff’s textbook on logic, for instance, was given an initial print run of 8,000 copies, a number unheard of at the time (Kant’s Critique of Practical Reason, for example, received an initial run of only 1,000 copies), and went through 14 editions in his lifetime. Similarly, Wolff’s textbook on metaphysics, Rational Thoughts on God, the World, and the Soul of Human Beings, as Well as All Things in General, went through 12 editions. On the basis of these texts, Wolff was offered chairs at several other German universities and Czar Peter the Great offered him a position at the St Petersburg Academy (all of which he turned down); he was also granted memberships in virtually every important academic society at the time, including the Royal Society of London, the Berlin Academy, the St Petersburg Academy of Sciences, and the Academy of Sciences in Paris. A main reason for the success of this first series of writings was that Wolff chose to publish them in the German vernacular rather than in Latin. In this respect, Wolff was following in the footsteps of his new colleague in Halle, Christian Thomasius, who had been the first to publicly announce that he was going to deliver his lectures in German. To be sure, philosophical treatises had been published in German before Wolff, including by Thomasius, but philosophical German was in a poor state at the end of the 17th century. Leibniz, for instance, complained in two treatises that the German language was unfit for scholarly works, in part because it incorporated so many foreign expressions. Wolff forever changed philosophical German by being the first to employ a stable and consistent philosophical vocabulary. His efforts here were in part a result of his commitment to the mathematical method: because definition was central to his attempt to achieve certainty, he went to great lengths to define every technical term he used and to use each consistently to refer to a single idea throughout his works. Wolff even attached indexes to the end of his works to make it clear to his readers which Latin terms corresponded to the German terms he was in some cases inventing.
To cite just one important example, whereas the word Begriff had a variety of meanings in Middle High German, it was Wolff who fixed it as the accepted philosophical term for ‘idea’, ‘concept’, or ‘notion’. His efforts with respect to lending the German language stability were connected to his broader Enlightenment aspirations: by improving the precision of language, Wolff believed he could improve how common people thought. Indeed, by writing his textbooks in German, Wolff was not only attempting to save his students the time spent taking notes, so that they might focus more on the ideas he was presenting; he also wanted his writings to be accessible to the uneducated layperson. This was Thomasius’s motivation for writing in German, too, namely for philosophy to be useful to all, regardless of sex and social standing. Wolff followed suit and, to make his philosophy less offensive to the uneducated, he refrained from ‘dressing’ his arguments in ‘mathematical clothing’, that is, from explicitly outlining his system in terms of definitions and axioms, and instead opted to preserve this method beneath the surface. Wolff’s philosophy was soon taught at every major German university, including Jena, Göttingen, Tübingen, Leipzig and Königsberg. And his attempt to appeal to laypersons in particular was such a success that Wolffianism became a topic in broader culture: there soon emerged a number of satirical works, such as a treatise about a shoemaker who used the mathematical method and the principle of sufficient reason to advance his trade, a text about the use of pre-established harmony in marriage, and a parody of Wolff seducing a young woman using scientific arguments. An important aspect of Wolff’s impact on broader culture was the reception of his thought by female readers. In fact, his philosophy was so popular among women at the time that one of his contemporaries remarked that an ‘actual lycanthropie’ had broken out among the female sex. Numerous efforts were therefore made to popularise Wolff’s thought for a specifically female readership. Samuel Formey, for instance, published a six-volume philosophical novel in French entitled La Belle Wolffienne (1741), meant to provide an overview of Wolff’s thought. Wolff himself even began to write a version of his philosophy specifically intended for women, in the form of a series of letters between himself and a young noblewoman. Somewhat curiously, he never completed this project, even though the Queen of Prussia herself expressed interest in it. One of the most interesting texts written in this connection was by Johanna Charlotte Unzer, who has been called the first female German philosopher. Unzer’s Outline of a Philosophy for Women (1751) is the first metaphysical treatise intended exclusively for a female audience, and it consists in a popularisation of Wolffian logic and metaphysics. That a specialised book of this nature received a second edition is a testament to its success. Wolff’s early popularity and success brought him turmoil as well: at the University of Halle, tension quickly grew between him and members of the theology faculty, especially Joachim Lange, a Pietist Lutheran who was not only jealous of Wolff’s success as an instructor, but was also offended by his claim that philosophy, rather than theology, is the foundation of all the other sciences. The conflict peaked in 1721 when Wolff, on handing over the office of pro-rector to Lange, delivered a speech entitled ‘Oration on the Practical Philosophy of the Chinese’.
In this speech, Wolff argued that the ancient Chinese developed a perfectly plausible theory of morality based on reason alone, and thus without the help of divine revelation. One of his more fascinating claims is that Confucius accepted the principle of sufficient reason, according to which everything has a cause or reason why it exists in the way that it does. Wolff therefore saw in Confucius a version of his own theory, according to which grasping the reasons underlying reality could alter our behaviour and make us more rational. Wolff added that Confucius served as proof that one does not need to be a Christian to lead a moral life. The theology faculty in Halle was so enraged by the speech that its members argued to officials in Berlin that Wolff’s writings were blasphemous, hoping to get his teaching restricted to mathematics. The officials did more than this, however, and told the ‘Soldier King’ Friedrich Wilhelm I that Wolff’s compatibilist conception of freedom implied that soldiers could not be held responsible for deserting the battlefield; indeed, that Wolff’s philosophy might even encourage desertion. The king was convinced and issued a decree, written in his own hand, stripping Wolff of his professorship and ordering him to leave Prussia within 48 hours on penalty of death. Wolff complied. His expulsion from Prussia had the opposite effect from what his critics intended: Wolff became known throughout Europe as a martyr of reason and the Enlightenment, thereby only increasing his fame. The Crown Prince of Prussia, Friedrich II (later Frederick the Great), commissioned a French translation of Wolff’s so-called ‘German Metaphysics’ in 1736, and rumour has it that he read it so often that his pet monkey Mimi threw it into the fire out of jealousy. This translation was significant, for it found its way into the hands of Émilie du Châtelet, who went on to summarise the foundations of Wolff’s philosophy in the first chapter of her work The Foundations of Physics (1740). Wolff’s expulsion caught the attention of Voltaire too, who came up with the motto ‘Wolfio docente, Rege Philosopho regnante, Germania applaudente Athenas invisi’ (‘I visited Athens with Wolff teaching, Philosopher King reigning, Germany applauding’). Wolff’s works were also both a source and an example for the creation of Denis Diderot and Jean d’Alembert’s Encyclopédie: not only are a number of the entries near-literal translations from his works, but the systematic nature of his philosophy served as an example of the systematic arrangement of all knowledge that the Encyclopédie attempted to embody. The influence of Wolff’s own philosophy reached its high point in Europe in the 1730s. In 1740, one of Frederick the Great’s first official orders as the new king was to invite Wolff back to Halle, an invitation that Wolff accepted. Ironically, and somewhat tragically, Wolffianism was already in decline while Wolff was in the middle of the massive undertaking of rewriting his entire philosophical system in Latin in order to reach a wider audience. Wolff devoted the rest of his life to this task and, although he left it unfinished, he wrote an astonishing amount. The Wolff scholar Clemens Schwaiger has claimed that Wolff could be considered the most productive philosophical writer of all time. By the time he died in 1754, at the age of 75, he had published more than 50,000 pages.
Wolffianism nonetheless continued to be present in the intellectual landscape until the end of the century by means of the modified versions of Wolff’s system invented by his many students and followers. To mention just one very important example, Alexander Gottlieb Baumgarten, the founder of the discipline of aesthetics, wrote a series of Latin textbooks that explicitly acknowledge their debt to Wolff. In his Elements of First Practical Philosophy (1760), for instance, one of the texts that Kant read from in his lectures on moral philosophy, Baumgarten states that his aim is ‘abridging’ and ‘explaining’ Wolff’s universal practical philosophy. And even once Kant published the first works of his ‘critical philosophy’, one of the major waves of critical reactions was waged by defenders of Wolff’s philosophy, such as Moses Mendelssohn, who speaks of Kant as the ‘all-destroying’ critic of metaphysics. By the end of the century, Wolff’s philosophy had been taught across Europe and Wolffianism was even represented in such faraway places as the Athonite Academy on Mount Athos in Greece, as well as in Turkey and South America. Both Wolff’s own writings and others written by his disciples made their way to North America too: George Washington checked out a copy of Emmerich de Vattel’s The Law of Nations (1758) from the New York Society Library – an important text in the history of international law and a popularisation of Wolff’s thought on the subject. Thomas Jefferson owned a 1772 dual (French and Latin) edition of Wolff’s Institutes of the Laws of Nature and Nations (1750), in which he underlined numerous passages related to the right of civil war and neutrality. It has even been claimed that Wolff is the source of the idea, present in the US Declaration of Independence, that freedom is an inalienable right – an idea not to be found in other possible sources, such as Locke. So while the influence of Wolffianism did not end with Wolff’s death, its legacy changed dramatically by the beginning of the 19th century, and Wolff was quickly pushed into the background. Consider the way in which his thought is treated in a German-language history of philosophy published at the beginning of the 19th century, Wilhelm Gottlieb Tennemann’s 11-volume History of Philosophy. In the work’s final volume on modern philosophy, covering the period from Locke to David Hume, Wolff’s name is mentioned fewer than 10 times, and only in passing. Tennemann’s history of philosophy is extremely important because it went on to influence other English and French histories of philosophy of the period, so Wolff’s meagre portrayal there likely shaped how he was perceived abroad. But the important question is: how could this happen? How could a philosopher who dominated the philosophical landscape just a few decades earlier come to be a mere footnote? There are a few reasons why this happened. The first concerns the development of late 18th- and early 19th-century German philosophy. Although the uptake of Kant’s critical philosophy was somewhat slow at first, it eventually came to dominate philosophical discussion by the final decade of the 18th century. And with the subsequent rise of German Idealism and Romanticism, intellectual fashion had shifted such that the philosophical outlooks of the previous century, which seemed outdated and scholastic by comparison, necessarily fell into the background.
The damage done by this to Wolff’s fate in particular was exacerbated by what the philosophical giants on the horizon had to say about him. For instance, although Kant describes Wolff as ‘the greatest among all dogmatic philosophers’ in the Critique of Pure Reason (1781), he goes on to level a damning critique of dogmatism, which he understands as at least in part involving a commitment to Wolff’s method, subsequently concluding that only the ‘critical path’ remains open. Hegel too praises Wolff in his lectures on the history of philosophy ‘for raising Germany to a culture of the understanding’, which Wolff undoubtedly did by means of his improvement of the German language, but he accuses Wolff elsewhere of carrying out his method ‘to the height of pedantry’. The impact of these figures’ own philosophies, paired with their negative opinion of Wolff, was therefore doubly disadvantageous and could not but dampen interest in Wolff’s philosophy. The most important reason for Wolff’s decline, however, has to do with the fact that, from very early on, he was considered by many to have merely presented a version of Leibniz’s philosophy. One of Wolff’s early followers, Georg Bernhard Bilfinger, commended Wolff for founding the ‘Leibnizian-Wolffian’ school of philosophy. Though intended as praise, the term was quickly misappropriated by Wolff’s critics, and Wolff came to be seen as a mere follower and imitator of Leibniz. Voltaire accused Wolff of lacking originality and merely presenting what Leibniz had already discovered. Hegel claimed that Wolff was a mere systematiser of Leibniz’s philosophy, and Schelling accused Wolff of having ‘appropriated’ Leibnizian ideas. This opinion has persisted to the present day, with many general histories of philosophy and even more specialised histories of ethics referring to Wolff as a mere follower of Leibniz, if his name is mentioned at all. The label ‘Leibnizian-Wolffian’ philosophy is misleading, however, and the identification of their philosophies is problematic, not least because Wolff himself explicitly denied that his aim was ever to expand or explain Leibniz’s philosophy. It is also doubtful that Wolff could have known enough about Leibniz’s philosophy to have been able to appropriate it: although it is true that the two were acquaintances and engaged in a correspondence, their conversations on topics in philosophy were largely limited to issues in ethics and philosophical theology. Furthermore, Leibniz published very little during his lifetime, and some of his most famous texts, such as The Principles of Nature and Grace, The Monadology and the New Essays Concerning Human Understanding, were not only published posthumously, but only after Wolff had written his major German works on metaphysics and ethics. Even more significantly, Leibniz himself says that Wolff likely knew little of his philosophy, or at least no more than what anyone else with access to his published writings could have known: ‘Mr. Wolff has adopted some of my opinions, but since he is very busy with teaching, especially in mathematics, and we have not had much correspondence together on philosophy, he can know very little about my opinions beyond those which I have published.’ To be sure, some aspects, though certainly not all, of Wolff’s philosophy show the influence of Leibniz, and a limited number of them can be regarded as advancing a particular interpretation of Leibniz’s thought.
And in retrospect, Wolff published his works at an opportune time: owing both to the general interest in Leibniz’s philosophy and to the fact that there was no extensive presentation of it, the learned public were eager to digest the ‘Leibnizian’ system they considered Wolff and his disciples to be presenting. The label ‘Leibnizian-Wolffian’ philosophy therefore both helped attract attention to Wolff’s philosophy and sealed Wolff’s fate as a mere ‘systematiser’. Once a first collection of Leibniz’s writings appeared in the mid-18th century, and more expansive editions of his German and philosophical writings followed in the mid-19th century, interest in Leibniz’s philosophy was repeatedly renewed. And once the opinion that Wolff was a mere appropriator of Leibniz was solidified in public consciousness, many felt no need to read Wolff himself. Neglect of Wolff is to be regretted, however. It leaves a large and important piece of the history of German philosophy obscure. Aside from the fact that understanding him as a mere Leibnizian discourages noticing what is original about his thought, without an accurate understanding of Wolffianism we cannot grasp a number of crucial texts written by and for women intellectuals of the period, nor can we properly appreciate a rare instance of a Western figure promoting Eastern thinking. Rediscovering his philosophy is therefore a gateway into an important period of the history of German philosophy that we ignore to our detriment. Not only this, but his philosophy itself serves as an example of an ambitious attempt to help readers from all walks of life not only to think more clearly, but also to put their knowledge into practice and live happier lives as a result.
Michael Walschots
https://aeon.co//essays/why-we-should-recover-the-philosophy-of-christian-wolff
https://images.aeonmedia…y=75&format=auto
Medicine
Eager for medical breakthroughs, some doctors take enormous risks experimenting on themselves. Should we celebrate them?
A century ago, invasion of a beating heart was off limits to doctors. To pass through its pulsating walls, with scalpel or otherwise, was seen as an uncharted expedition where the risk of death for the patient was extraordinarily high. Theodor Billroth, an early innovator of modern surgery, allegedly remarked: ‘No surgeon who wished to preserve the respect of his colleagues would ever attempt to suture a cardiac wound.’ And so, early attempts at surgery on a living heart were conducted in situations where recklessness was deemed appropriate. In 1896, the first successful suture of the heart was performed on a stabbing victim who would have died if his surgeon, Ludwig Rehn, had cared more about gaining the respect of his colleagues. While this early success inspired confidence in the pioneers of heart surgery, progress in cardiac surgery was dismally slow. Bypass procedures – where new pathways are surgically created when arteries are blocked – were not trialled until the mid-20th century. Astonishingly, cardiothoracic surgery, in which the lungs and chest are also involved, is now seen as one of the most technically advanced fields of medicine, even though the majority of its advances were made over the relatively brief span of half a century. That such caution was exercised around the heart reflects more than just the physiological difficulty of operating on the powerful muscle responsible for circulating our most vital fluid. It also represents deference to our passions and desires. When a lover betrays us, we are ‘heartbroken’. Fits of rage and anger are marked by our hearts ‘pounding in our chest’. When a potential mate flirts with us, we experience a ‘flutter in our breast’. Such psychological and sentimental attributions to the heart raised its significance beyond its circulatory function, and established a taboo that was cultural, in addition to anatomical. Adding to this taboo is the emphasis placed on the heart by religious images and texts. The first depiction of Jesus that I can recall is from my devotedly Catholic grandmother’s house. In it, the bearded figure stood with a glowing heart protruding from the centre of his chest. This ‘sacred heart’ is said to represent, for some, the greatest love imaginable – the love that Jesus had for the whole of humanity. Most interpretations of the Hebrew Bible understand the heart to be the seat of one’s emotive and intellectual states, both of which are necessary to establish a connection to God and to one’s community. One can find similar interpretations in the Quran, in which the heart is described as the centre both of one’s emotions and of one’s ability to uncover truths. Citta, sometimes translated as ‘heart-mind’, is a central part of certain Buddhist meditative practices. Hanuman, a Hindu god of strength, courage and self-discipline, is often depicted with his chest torn open, revealing the goddess Sita and the god Rama presiding within his heart – a symbol of his loving devotion to them. Against this immense medical, cultural and religious backdrop, we can turn our attention to the early 20th-century German physician Werner Forssmann. Perhaps the best summation of Forssmann’s legacy is the title of his profile in the journal Clinical Cardiology – ‘Werner Forssmann: A German Problem with the Nobel Prize’ (1992). Forssmann’s problematic nature can be understood through his proclivity towards self-experimentation, which seems to have begun early in his career.
His doctoral thesis, which examined the effect of liver consumption on human blood, consisted of studying the blood chemistry of those who drank a litre of concentrated liver broth. Forssmann recruited a small group of students to help with his experiment; they would drink the liver broth every day for a fixed period, and then have their blood chemistry examined. However, Forssmann, who had a penchant for testing things on himself, decided to also take the liver broth and measure the effects that it had on his own blood chemistry. While such a low-risk exercise in scientific curiosity is not a cause for major ethical concern or worry, it foreshadowed a much more ethically worrisome self-experiment. One summer afternoon in 1929, during the customary siesta hour of a small regional hospital, Forssmann carried out another self-experiment that had been on his mind for quite some time. The inspiration behind this experiment came from a 19th-century medical text by the French physiologists Jean-Baptiste Chauveau and Étienne-Jules Marey. The book contained an illustrated diagram depicting a man holding a tube that had been inserted into the neck of a horse. This tube had been led down the horse’s jugular vein and was illustrated as being situated in the right ventricle of its heart. This image piqued Forssmann’s curiosity, and he began to wonder if a similar procedure could be successfully performed on a living human being – albeit with some adjustments.
[Image: An X-ray image documenting cardiac catheterisation, showing the catheter extending into the right atrium. From ‘Die Sondierung des Rechten Herzens’ (‘Probing of the Right Heart’) by Werner Forssmann, Klinische Wochenschrift, November 1929.]
Namely, the jugular vein in the neck would be an inappropriate route for accessing a human heart because it carried the risk of developing an air embolism – a condition in which a pocket of air builds in a vein or artery and prevents blood flow, potentially a prelude to a fatal heart attack or stroke. Instead of the jugular, Forssmann thought it more appropriate to use the veins located in the pit of the elbow; these veins were already safely used for intravenous injections. So, Forssmann developed the following procedure: a patient would lift their arm so that it was parallel to the floor, the vein in their elbow would then be accessed via a small incision and a catheter line would slowly be led into the vein, and then, using the guidance of an X-ray fluoroscope, the line would be carefully led into the right ventricle of the patient’s heart. Forssmann did not originally plan to conduct this experiment on himself. He had pitched the idea of performing this experiment on a patient to his supervisor, Richard Schneider. However, despite Forssmann’s enthusiasm, Schneider was unmoved by the proposal. The reason for Schneider’s rejection was quite practical in nature: he could not permit novel research to be conducted in a hospital whose facilities could not accommodate it. Though he imparted to Forssmann that such research would have to take place in a larger, more established facility, he did not wish to entirely chill Forssmann’s scientific curiosity. Schneider therefore suggested that Forssmann conduct his experiment on animals. However, this suggestion did not satisfy Forssmann’s drive for innovation.
After all, the experiment had already been performed on a horse; there would be no novelty in repeating it on another animal. Irrespective of Schneider’s judgment on the matter, Forssmann’s experimental urge remained undeterred, and he began to gather the necessary resources. He determined that he needed the cooperation of at least one other person: Gerda Ditzen, the department’s operating-room nurse, who was in charge of sterilising the surgical equipment. Forssmann won Ditzen over to his plan by persuading her of the groundbreaking medical progress such an experiment could unlock. As a pump, the heart functions by using pressure to draw in and expel blood. If clinicians were able to measure the pressure that the heart uses, they would be able to assess how well a patient’s heart was functioning. At the time of Forssmann’s experiment, relatively little was known about the physiological causes of heart failure or other cardiac disorders. Assessing functioning by measuring and recording the various pressures of the heart and surrounding veins and arteries could be helpful in diagnosing or discovering new cardiac disorders. Establishing access to the ventricles of the heart, however, would be a necessary first step in making these discoveries. If Forssmann’s experiment worked, clinicians could be granted access to the inner workings of a beating heart with a minimally invasive procedure. The thrill of being part of this important medical discovery allowed Ditzen to see past any scruples she might have had regarding the experiment. She agreed to help Forssmann with the procedure and would provide him with the sterilised equipment. However, Ditzen was operating under the seemingly logical assumption that Forssmann would perform the experiment on her, not on himself. And so, on that summer afternoon, the pieces of Forssmann’s plan fell into place. The equipment was sterilised, the rest of the hospital staff were taking their siesta, and Forssmann and Ditzen found themselves alone together in the operating room. Forssmann explained to Ditzen that, due to the possibility of her fainting after the administration of Novocain, she would need to be supine for the procedure. So, Ditzen lay down on the operating table and offered up her arm to Forssmann. He then told Ditzen that he would also need to strap her to the table in order to prevent her from rolling off in case she were to lose consciousness. Ditzen agreed with Forssmann’s reasoning and allowed him to restrain her to the table. In truth, such preparations simply ensured that Ditzen would be able to neither see nor prevent Forssmann’s actions. Once out of Ditzen’s view, Forssmann began to anaesthetise the pit of his own left elbow with Novocain. He then went to Ditzen’s side and pretended to iodise her arm in order to sterilise it for the procedure. Such play-acting bought Forssmann the time necessary for his own anaesthetic to take effect. Once he determined that his arm was sufficiently numbed, he began his self-experiment by making an incision that allowed access to a vein located in the pit of his left elbow. Much like the highways leading from the suburbs and into the heart of the city, our veins are the route that our blood takes back to our hearts. Like highways, veins vary in size and structure.
Some veins take a complex and winding path to the heart, whereas others are more or less straight shots. For instance, the antecubital vein, which Forssmann had just accessed, leads straight up the arm, bends slightly at the shoulder, and then flows down directly into the heart. Since this vein leads straight up the arm, Forssmann was reasonably certain that he could safely lead a catheter – a small rubber tube with a metal tip – up to his shoulder, right before the vein bent towards the heart. Once the catheter’s tip reached his shoulder, he would need the visual assistance of an X-ray fluoroscope to guide him the rest of the way to the heart. The fluoroscope, a machine that provides a real-time X-ray image of the body, would allow him to determine if his experiment was headed for disaster; it would show if the catheter’s tip had internally punctured his vein – which would require immediate medical attention – or if the catheter had gotten stuck, meaning it would have to be pulled from his arm. To reach the machine, which was in the hospital’s basement, Forssmann decided that he would have to free Ditzen from the table for her assistance. Releasing her arm straps and loosening her leg restraints, he tersely stated: ‘There we are, it’s ready now. Please call the X-ray nurse.’ Ditzen eventually rose from the table and out of her confusion, realising that Forssmann had essentially tied her up while he performed the procedure on himself. Understandably upset, she began to yell at him. Nevertheless, she soon grasped the gravity of the situation: Forssmann had a ureteral catheter hanging out of the pit of his elbow and could be in grave danger. Such a concern provided the impetus necessary for Ditzen to further comply with Forssmann’s requests. She phoned the X-ray department to get them ready for Forssmann to come down. After all, it was still the siesta hour, and so the department would be unprepared to accommodate an emergency fluoroscopy without some notice in advance. After the call, Ditzen seemingly remained concerned for Forssmann’s wellbeing, or perhaps she simply had the curiosity to see if the experiment would succeed, and so she accompanied him downstairs to the X-ray department. When they arrived, the X-ray nurse, named Eva, placed Forssmann behind the fluoroscope and began to position it so that they could locate the catheter’s tip. Just then, adding to the absurdity of the scene, Forssmann’s colleague, an internist named Peter Romeis, burst into the X-ray room and shouted: ‘You idiot, what the hell are you doing?’ He then tried to yank the catheter from Forssmann’s arm, fearing that Forssmann would gravely injure himself if he proceeded with the experiment. Forssmann was able to fend off Romeis with ‘a few kicks on the shin’. Once things settled down, Forssmann ordered that a mirror be placed in front of the fluoroscope screen so that he could see where the catheter was in his vein. He observed that the catheter’s tip sat at the top of his shoulder. So far, physiologically speaking, the experiment was going as planned. With the mirror positioned in front of him, Forssmann used the fluoroscope to witness the catheter’s slow advance as he carefully pushed it up his arm. After sending 30 more centimetres of catheter line into his arm, the fluoroscope showed that the catheter’s tip sat inside his heart’s right ventricle. He had done it. He had entered his own beating heart.
The significance of this moment is two-fold. Physiologically, Forssmann had shown that the living heart could now be explored via a catheter without physiological repercussions or the need for advanced surgical techniques. The procedure was minimally invasive, requiring only a small incision to access a vein. What’s more, Forssmann was able to access his heart using only a local anaesthetic to numb the site of the incision, which meant that no heavy sedation or general anaesthetic was required. This is important because general anaesthetic often cannot be tolerated by patients with conditions that reduce their heart function. In addition to this physiological significance, Forssmann had taken an important step towards dispelling the cultural taboos surrounding the heart. While modern medicine was already on a path to demystify the heart by showing that its primary function was to circulate blood through rhythmic, electrically triggered contractions, popular attitudes towards the heart still cast it as a psychologically significant organ responsible for passionate desires. But Forssmann’s experiment showed that medicine could access the heart in a manner that posed no risk to the organ held in such high regard. At first, Forssmann’s supervisor, Schneider, was astonished by both the recklessness and the results. But soon more practical concerns, including how to publish, entered their minds. While Forssmann’s experiment was scientifically compelling, in that it uncovered the possibility of reaching a still-beating human heart, Schneider believed that scientific curiosity alone would not help journal editors to see past the ethical concerns. He hoped that, if only Forssmann could show some immediate therapeutic benefit, an editor would overlook the questionable ethics involved. Eventually, Forssmann would find a therapeutic use for the procedure: by establishing a catheter line to the heart, physicians could administer certain drugs that proved more effective when administered via the line. With this clinical use established, Forssmann wrote up a manuscript and sent it to one of the most esteemed German medical journals. To his surprise, it was accepted, and even contained an X-ray image of Forssmann’s own heart with the catheter tip resting inside his right ventricle. Yet the true clinical relevance of Forssmann’s experiment would not be realised until a decade later, when his article came to the attention of the physician André Cournand. Generally speaking, the functions of the heart can be split into right and left: the right side is responsible for drawing in blood that is low in oxygen and propelling this blood into the lungs where it is then oxygenated. This oxygenated blood is then drawn back into the heart and propelled to the rest of the body by the left side of the heart. This drawing in and propelling out of blood – the heart’s essential function – can be assessed by measuring the various pressures within the heart and the surrounding veins and arteries. And, it goes without saying, without the cardiac catheterisation pioneered by Forssmann and utilised by Cournand and his colleague Dickinson Richards, a majority of cardiac conditions would go undiagnosed and untreated today. Currently, cardiac catheterisation is one of the most commonly performed medical procedures. In the United States alone, more than 1 million cardiac catheterisations are performed each year.
Cardiac catheterisation is also used for angioplasty, the most frequently used procedure in the event of a blocked coronary artery. During the procedure, a catheter is inserted into an artery located in either the arm, leg or groin. Then, with the assistance of a real-time X-ray (ie, a modern-day fluoroscope), the catheter is led up the artery until it reaches an area close to the suspected blockage. A contrast dye is then sent through the catheter and into the artery to identify the exact location of the blockage. The catheter is then led to the blockage and a small balloon is inflated to stretch the artery, which relieves the blockage. In some cases, a small wire mesh – a cardiac stent – is placed in the artery to keep it unblocked. Coronary angioplasty is crucial in preventing or treating heart attacks, one of the leading causes of death. Given all this, it can be said that the experiment Forssmann conducted on himself spurred developments in cardiology that would go on to save and extend countless lives. It’s only fitting that, along with Cournand and Richards, he was awarded the Nobel Prize in Physiology or Medicine in 1956. His colleagues held him in high regard. In Clinical Cardiology, Forssmann is remembered as ‘a great character, gifted surgeon, and pioneer in cardiology’. In the European Heart Journal, his experiment is characterised as having ‘immense significance in paving the way for a monumental leap forward in cardiovascular care’ and ‘an outstanding act of selfless courage’. In the preface to Forssmann’s aptly titled memoir Experiments on Myself (1974), his fellow Nobel awardee Cournand describes Forssmann as a man ‘full of resources and will power, and endowed with physical courage’. Such a warm reception for self-experimentation among clinicians is not surprising. At the turn of the 20th century, the US Army physician Walter Reed was sent to the newly occupied US colony of Cuba. Reed’s task was to study the tropical diseases that afflicted soldiers during the recently concluded Spanish-American War. In particular, he was to investigate how yellow fever spread. Over the course of the three-month war, fewer than 400 US soldiers died in combat and more than 2,000 soldiers died from yellow fever infections. While the prevailing theory was that small particles, called fomites, were responsible for the spread of yellow fever, another theory proposed that mosquitoes carrying a particular pathogen were responsible for the disease. Reed’s task was to determine, once and for all, what exactly caused yellow fever.
[Images: Jesse William Lazear with his one-year-old son (Johns Hopkins University/Getty); the experimental hut at Camp Lazear, Cuba (Wellcome Images); Walter Reed, c1900 (courtesy the Department of Defense).]
Along with his colleagues Jesse Lazear and James Carroll, Reed had tried to infect various animals with yellow fever, but these attempts were unsuccessful. Lacking an animal analogue on which they could conduct experiments, Reed, Lazear and Carroll determined that the only possible way to truly test the hypothesis that yellow fever was spread by mosquitoes was to use human subjects. The proposed experiment they came up with was quite rudimentary by today’s standards: a human subject would be bitten by a mosquito that was thought to be a potential carrier of yellow fever.
If the bitten subject developed yellow fever, it would be evident that the mosquitoes were a vector for the disease. If not, it would be more likely that fomites were the cause of the disease. Reed had gone to great lengths to calm the ethical worries that persisted around exposing human subjects to potentially deadly virulent matter. He drafted what is perhaps the first instance of a written consent document explaining the risks of participating in the proposed research. Nevertheless, the research board decided that, due to the dangerous nature of the experiment, they themselves should be among the first volunteers. That way, if the experiment did prove fatal, they would bear the burden of this risk. In August 1900, Lazear, himself a member of the research board, began to raise a batch of mosquitoes from their larval stage, ensuring that he had fresh, uncontaminated mosquitoes. He then had this batch fed on a person with an active yellow fever infection. To test whether this batch, or any mosquito, could pass on the infection, Lazear allowed himself to be bitten by one of his mosquitoes and waited to see if the bite would result in an infection. When no signs of infection came, Lazear assumed it safe to allow other volunteers to be bitten by the mosquitoes. None of the volunteers developed an infection, and so the other research board member, Carroll, allowed himself to be bitten. At this point, the evidence had pointed to mosquitoes not being a vector for the disease. So, it came as a surprise when, a few days after being bitten, Carroll developed symptoms of yellow fever. Dismayed at the possibility that mosquitoes could be the vector for the disease, Lazear hastily subjected a soldier, Private William H Dean, to a bite from one of his yellow fever-fed mosquitoes, to determine if they truly did carry the pathogen causing the disease. The soldier came down with the disease, which all but confirmed that mosquitoes were the vector. After this discovery, Lazear was subjected to another bite from an infected mosquito. While it is unclear whether Lazear allowed himself to be bitten or whether it was accidental, this bite would have fatal consequences. He developed yellow fever, as was to be expected, and died a week later from the disease. Like Forssmann’s, Lazear’s self-experiment is viewed with reverence. He is enshrined in the ‘Sacrifice for Freedom’ stained-glass window at the War Memorial Chapel of the Washington National Cathedral, which displays his self-experiment alongside a depiction of Jesus’ crucifixion. Interestingly, Lazear’s legacy of self-sacrifice influenced the course of modern research ethics and its stance towards self-experimentation.
The fifth article of the Nuremberg Code (1947) – a document that many regard as foundational to modern research ethics – states that: ‘No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.’ This allowance of self-experimentation stems from an earlier working draft of the Code, which states that: ‘[I]t is ethically permissible for an experimenter to perform experiments involving significant risks … if he considers the solution of the problem important enough to risk his own life … such as was done in the case of Walter Reed’s yellow fever experiments.’ And so, it appears that modern research ethics has carved out a unique allowance for self-experimentation. It is condoned, perhaps even endorsed, in cases where there is an enormous risk to one’s wellbeing but also an enormous benefit to society. This ethical allowance, even reverence, towards self-experimentation continues into the present day. In 2005, the Australian physician Barry Marshall and his colleague J Robin Warren were awarded the Nobel Prize in Physiology or Medicine for research examining the relationship between Helicobacter pylori (H pylori) and the development of peptic (ie, stomach) ulcers. These ulcers occur within the lining of the stomach and can cause a sharp, intense pain along with internal bleeding resulting in bloody stool or vomit. Through years of research, Marshall and Warren were able to show, definitively, that H pylori infections could cause peptic ulcers. But, initially, Marshall and Warren’s hypothesis received a lukewarm reception within the medical community, which held that ulcers were caused by stomach acid due to psychological stress. Finally, to get a fair hearing in the face of disbelief, Marshall took matters into his own hands. In 1984, he had an endoscopy to confirm that he was not infected with H pylori. Then he drank a solution that contained two pure cultures of H pylori: essentially a megadose of the bacteria. Marshall knew that H pylori was slow-growing and so he suspected that he would have to wait for months, maybe even years before he developed a peptic ulcer from the bacteria. It came as a surprise, then, that Marshall came down with a case of severe gastritis – a precursor condition to peptic ulcers – five days after consuming his bacteria-rich concoction. This made it all but certain that the bacterium was responsible for his gastritis. After completing a course of antibiotics to treat his infection, Marshall and Warren published the results of Marshall’s self-experiment. This publication started to solidify the paradigm shift regarding the cause and treatment of peptic ulcers; the acid hypothesis slowly faded into the background as studies began to confirm that H pylori was the culprit for most cases of peptic ulcers. Marshall’s self-experiment, like Forssmann’s or Lazear’s, is looked upon with praise. The award ceremony speech for Marshall and Warren’s Nobel Prize highlights Marshall’s self-experiment, depicting it as a necessary step for their research. Such praise also extends into the realm of medical ethics: the American Medical Association Journal of Ethics presented Marshall and Warren with a Virtual Mentor Award for being ‘exemplary role models in medicine’.
Many profile articles, both in scholarly and popular media, have been written about Marshall, often highlighting his choice to conduct a self-experiment so crucial to shifting the paradigm regarding the underlying cause of peptic ulcers. Logically speaking, it’s hardly surprising that our modern research-ethics environment has failed to find fault with self-experiments. Many of the current debates within research ethics centre on preserving the autonomy of research participants. For instance, a constant point of debate concerns how much information is necessary for research participants to be able to make an informed, autonomous decision regarding whether they want to participate in a research project. Some argue that providing only a minimal amount of information – such as risks, benefits and information about a research procedure through a written document – suffices to inform the potential participant about the research. Others say that a more detailed consent process, possibly involving a one-on-one, in-depth discussion that covers the research’s purpose, risks, benefits and procedure with each potential research participant, is required. Such debates become non-issues when the researcher and research subject are one and the same. After all, the researcher who designs the experiment knows why the experiment is being done, what benefits it may have, and the potential risks that might come from undergoing such an experiment. If anyone is able to make a truly informed decision about whether they want to participate in the research, it is the researcher who designed the experiment. Another ethical concern seemingly side-stepped by self-experimentation is that of establishing a favourable risk-benefit ratio. The Belmont Report (1979), understood to be the central ethical doctrine for research conducted in the US, highlights that ‘benefits and risks must be “balanced” and shown to be “in a favourable ratio”.’ Generally, this means that research must be designed in a manner that assures participants face only the minimal necessary risks and that even these are aligned with an equal or greater benefit. In the case of self-experimentation, such concerns seem to be more a matter of personal prudence than an ethical consideration of another’s wellbeing. In fact, using the principle of autonomy that dominates our present research ethics environment, it could be argued that it is unethical to encroach upon those who wish to conduct a medical experiment on themselves. What right would anyone have to prevent someone from doing an experiment upon themselves that progressed science or bettered humanity? But does celebration of the self-experimenter overlook requisite concern for them? Have we become so excited by the bravery displayed in sacrificing one’s wellbeing for the advancement of a humanistic endeavour such as medicine that we have overlooked a basic ethical responsibility to do oneself no harm? In Forssmann’s case, the ethical concerns start with his treatment of Ditzen. There are many ways Forssmann disrespected Ditzen as a person. First, he used her; he stoked her curiosity and excitement regarding groundbreaking research only so that she would agree to sterilise a set of instruments for him to use.
Second, he deceived her; she was under the impression that the experiment would be conducted on her, and Forssmann allowed her to carry on with this impression despite him having no intention of performing the experiment on her. Third, and perhaps the most blatant form of disrespect, he restrained her; he strapped her to the operating table, meaning that she was essentially being tied up and restrained against her will. However, despite these instances of disrespect, it should be noted that at least Forssmann was unwilling to perform the potentially dangerous experiment on Ditzen. The reasons for this decision could be numerous, as Forssmann offers us no explanation as to why he decided not to experiment on Ditzen despite her willingness. Yet, a parsimonious explanation for Forssmann’s decision is that he had respect for Ditzen’s wellbeing and personhood. That is, Forssmann recognised that the experiment was not free of risk or adverse consequences: such an experiment had never been conducted in humans and thus could come with numerous unforeseen dangers. What’s more, the experiment was being conducted during what amounted to the lunch hour at a small regional hospital; if things did go awry, it was uncertain if help could arrive in time. Needless to say, subjecting someone to a risky and hastily planned first-in-human experiment amounts to disrespecting their personhood. The second formulation of Immanuel Kant’s categorical imperative – sometimes referred to as the ‘humanity formula’ – serves as the moral basis for the notion that research needs to consist of respect for persons. In its entirety, the formula reads: Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means, but always at the same time as an end. Generally, this is taken to mean that a researcher should not treat human subjects as merely instruments to obtain some research goal, but rather recognise that they are persons who are able to make their own rational choices to achieve their own unique goals. More often than not, this interpretation of the humanity formula is simplified to the following claim: research participants have the right to consent to and voluntarily withdraw from a research project. While this interpretation of the formula is perhaps sufficient in respecting a participant’s autonomy, it does not capture Kant’s important claim that human beings are particularly unique in that they possess an inherent moral value. Kant’s humanity formula distinguishes between objective and subjective ends. Subjective ends, which Kant sometimes refers to as material ends, are things that have value only within the unique material contexts of a person’s life – a new car, say, or a job promotion. Objective ends, on the other hand, have value irrespective of someone’s unique desires or individual contexts. For Kant, the prime example of an objective end is a person, since they retain their dignity and worth regardless of another’s estimation of them. Importantly, subjective ends are obtained using means that are particular to each individual’s unique contexts. For example, the means I would take to obtain a promotion within academia differ widely from the means a person working in the financial sector might undertake to obtain a promotion. 
Also, subjective ends are often, themselves, merely a means to a further end: I desire a new car because it will improve my morning commute, which I desire because it will improve my work performance, which I desire because it will better my chances at a promotion, and so on. Objective ends, on the other hand, are obtained by following duties that we perform irrespective of our individual contexts. To give an example from medical ethics, the surgeon performs life-saving surgery on the serial killer not because they have a desire to save a serial killer’s life but rather because they have a duty as a surgeon to preserve human life. Additionally, objective ends are what Kant calls ends in themselves – they require no further value beyond the value contained in themselves. Writing in 1785, Kant says that humans can grasp inherent value through philosophical self-reflection in a way that other beings cannot – and that chief among the things vested with inherent value is the human being (including the experimenter himself). Here is where Forssmann went wrong. Though it cannot be denied that he was successful in advancing scientific and medical progress, he disregarded the moral duty he had to respect himself by not treating himself merely as a means to his subjective ends. Werner Forssmann touched his own heart, shattering taboos that surrounded this vital organ and spurring a renaissance in cardiology that continues into the present day. Yet, the ethical coda to his experiment reiterates the importance of self-respect and portrays the moral limits of our scientific curiosity. This coda is timelier than ever because gene-editing technologies such as CRISPR have made medical experimentation a ‘do-it-yourself’-friendly endeavour. While it is unlikely that medical self-experimentation will ever cease, we can at least be hopeful that, in the headlong rush to prove a theory or advance others’ health, those who conduct such experiments watch out for their own safety and wellbeing with more care than their predecessors.
Tom Doyle
https://aeon.co//essays/should-we-celebrate-the-doctors-who-experiment-on-themselves
https://images.aeonmedia…y=75&format=auto
Economics
Finance fraud is not a deviation from an essentially rational system but a window onto the reality-distortion of markets
When the German banking giant Wirecard collapsed in June 2020 amid a roaring fraud scandal, public opinion was shocked. The company, praised as the country’s innovative answer to the fintech industry of Silicon Valley, had been widely seen as a ‘German miracle’ during the recovery from the 2008 financial crisis. Its bankruptcy triggered a massive state prosecution that sent shockwaves through world markets. To the astonishment of German and international observers, Wirecard executives were found to be involved in all manner of deception: direct falsification of accounts, fake cash-flows, re-routing of payments through non-existent shell companies, ghost subsidiaries. While forging profits, they had obscured a mammoth debt of €3.5 billion. This, of course, is not an unfamiliar tale. The explosive growth of finance as a percentage of the ‘real’ economy in recent decades has been matched by an equally dazzling scale of financial fraud, from the Enron scandal to Bernie Madoff’s pyramid scheme (the largest recorded fraud in world history) in the 2000s, to more recent scams in cryptocurrency markets such as FTX. Denizens of finance – both system insiders (Madoff was a former chairman of the Nasdaq exchange) and ‘maverick’ outsiders (Sam Bankman-Fried of FTX had been seen as a challenger of mainstream banking elites) – have displayed a unique capacity for alchemy: whipping up distorted realities in which false truth and true fact become indistinguishable. Their plotting is often aided by regulatory bodies, rating agencies and consultancies that firm up such distorted realities through either action or inaction. A recent Netflix documentary follows the Financial Times reporter Dan McCrum in his quest to reveal Wirecard’s own big con. The ‘aha’ moment comes when McCrum and his FT colleagues show up at the Singapore address of one of the company’s supposed subsidiaries only to find an unassuming farmhouse. Behind Wirecard’s opaque structure lay simply nothing: no accounts, no offices, no cash. Much of the bank’s alleged business had been conjured out of thin air. Evocatively subtitled ‘a fight for the truth’, McCrum’s bestselling book Money Men (2022), on which the Netflix show is based, offers an animated account not only of Wirecard’s fraudsters, but also of their victims – those led to believe the company bosses and their outlandish myths of stratospheric growth, despite ominous clouds of deceit hovering overhead. What made a fake story so readily believable to so many? How is it possible that a DAX 30-listed bank (with the backing of Germany’s former chancellor, Angela Merkel, herself) turned out to be a giant Ponzi scheme? Today’s financial fraud is part of a bigger story unravelling outside of trading floors and corporate boardrooms: a growing preoccupation with the nature of reality itself. On one level this preoccupation is fuelled by Big Tech, which has been pumping financial value through innovations ostensibly geared towards tackling ‘existential’ future threats via the production of simulated realities. Facebook’s launch of the Metaverse, an avatar world promising to revolutionise work and everyday life, has been critiqued as a ploy aimed at distracting from the company’s legal troubles, and the more recent release of programs such as ChatGPT and DALL-E by OpenAI intensified concerns about the use of artificial intelligence in Silicon Valley to address fabricated, rather than real, problems.
While some artists, teachers and writers grew uneasy about the increasing ‘realness’ of such AI outputs, others saw new opportunities in incorporating these chatbots into their work and daily tasks – even though OpenAI has admitted that its ‘large language model’ suffers from so-called hallucination problems: a propensity to cheat by weaving fictitious facts into its answers to user prompts. This fuzzy line between authenticity and fakeness is also reflected in social media debates around, for instance, Twitter’s use of ‘blue ticks’, originally introduced for identity verification and recently turned into a monetisation tool that made legitimate and feigned accounts harder to tell apart. Among popular social media platforms, newcomers such as BeReal strive to capture more spontaneous ‘authentic images’, responding to a growing demand for less staged (yet, still, curated) content among younger users – a feature now incorporated by Instagram and Snapchat. The latest trend on these platforms is ‘dupes’: user forums around fake products mimicking luxury items, which explicitly challenge the distinction between bootleg and original goods (with both being often manufactured in the same supply chains to identical blueprints). Meanwhile, concerns about ‘fake news’ breed new political conflicts. As a disparate alliance of conspiracy ‘truth-seekers’, New Age entrepreneurs and populist demagogues attack time-honoured certainties and scientific facts, nervous advocates of democratic capitalism strive to expose disinformation and repair our hollowed trust in liberal values. Out of these battles we are often told that a new ‘post-truth’ era emerges, in which material struggles give way to a relentless ‘epistemological crisis’ – as the former US president Barack Obama put it in the wake of the 2020 election: the dismantling of the means by which we seek and recognise truth. As this state of doubt and confusion takes hold over everyday life, our capacity to tell fact from fiction weakens. Everything appears ‘almost true’ and nothing seems ‘entirely false’. Contemporary finance has become emblematic of this state of affairs. How true is the reality of mark-to-market accounting practices (the ‘marking’ of fictional future values as ‘present’) used recently by Wirecard and by Enron before that? How real is the wildly fluctuating value of non-fungible tokens (NFTs), the blockchain technologies used to certify authenticity of digital or physical assets traded in exchanges like FTX? Last February, it transpired that more than 80 per cent of NFTs minted on OpenSea, the most popular marketplace for such tokens, were wash trades (simultaneous sells and buys of the same NFTs creating a false impression of market activity) or straight spam: fake and plagiarised works. Because blockchain asset markets are inherently opaque and built around a belief system that defies ‘real’ valuation – the consensus reality set by central banks, ratings agencies and forex trade – they are especially fertile ground for fraud. They often become the stage where traditional forms of scam (such as phone impersonations) are combined with AI technologies (such as those underpinning ChatGPT) to deceive unsuspecting lay investors. In social media, fake celebrity endorsements abound, and ‘pump-and-dump’ schemes artificially inflate the price of crypto-assets before selling them to retail investors.
Despite this, financial wizards have been among the first to take off their gloves and defend their own versions of market reality and truth. Fraud schemes now routinely deploy the well-rehearsed populist rhetoric of ‘fake news’ to respond to allegations of corruption. In the months and weeks before its collapse, Wirecard’s defence line (adopted fully by the German Chancellery and the country’s financial authorities) was that the FT investigation was rigged by short-sellers spreading misinformation for profit. Turning the tables, the company’s bosses pointed the finger at finance itself, blaming their meddling with reality on the ruthless games of greedy speculators. Earlier this year, the Indian commodity trading giant Adani shrugged off similar allegations of market manipulation as fake news sown by market opportunists, who were distorting market reality with bad data – what is commonly referred to in trading as ‘noise’. Unravelling the deceptiveness of these worlds appears less straightforward than calling out the lies of fabulists such as Donald Trump or the congressman George Santos. Fraud technologies themselves tend to be spectacularly ‘lo-fi’ – fake trading at Madoff involved practices as ‘sophisticated’ as manually cobbling together accounts on spreadsheets, keeping cash in office safes, or even hiding it in grocery bags. But finance’s big con hides in plain sight. As financialised culture proliferates and ordinary digital life becomes gamified, the impact of finance on our everyday reality becomes insidious. For those coming of age in today’s middle-class United States, speculation’s augmented reality is only a few scrolls or swipes away from the worlds of gaming, dating, wellness or even the realms of digital astrology and the occult. The deeper we immerse ourselves in the simulated worlds of finance, the more difficult it becomes to explain its alchemy. One way to make sense of it is to ask how alchemy is imagined by financiers themselves. The leading liberal philanthropist George Soros first took centre-stage during the currency speculation wars of the 1990s, when he gambled against the Bank of England to make an alleged $1 billion profit. In so doing, he became a symbol of greed during a period of unhinged expansion of financial markets. Soros had given important hints of the mindset driving his sensational wagers in his book The Alchemy of Finance: Reading the Mind of the Market (1987). One year before the publication of Paulo Coelho’s hit novel The Alchemist, the master speculator sought to draw the outlines of financial alchemy. Soros understood it as the capacity to control the fakeness of markets by becoming immersed in it. He challenged the proclaimed association of financial forecasting with ‘hard science’ and quashed mainstream economists’ assumptions regarding the ‘underlying truths’ of market prognostication. Insofar as no financial theory can ever be ‘verified’, he argued, all modelling of price movements can be based only on ritual and incantation – a belief later spurring his infamous ‘discretionary macro’ strategy, which came to be known as a sort of market sorcery. Instead of trying to decipher the unassailable truth of market prices and wrestle it apart from the ‘noise’ of human bias and irrational behaviour, Soros suggested that a far wiser move would be to accept the distorted reality of finance. He has not been alone in this gambit.
Bankman-Fried, the crypto swindler at the helm of the collapsed exchange FTX, allegedly played the popular League of Legends video game while negotiating capital investments. A passionate gamer, he treated the worlds of cryptocurrency markets and action role-playing games with the same knack for plotting. Jan Marsalek, the now-fugitive former boss of Wirecard, had a reputation for evading ‘finer details’ when negotiating trades, often shifting the conversation to diverting stories about Cold War secrets and spies. His zest for casual forgery was not all that different from the tales and rumours greasing the reality of contemporary venture capital. However, rather than simply warping economic ‘facts’, Bankman-Fried and Marsalek also strove to control the forces moulding those facts. Like Soros, they did not aspire merely to interpret ‘the mind of the market’. They sought to re-shape its material reality, too. The figure of the market alchemist long predates such contemporary villains of finance. The most adventurous confidence tricksters were always to be found in markets: in the fin-de-siècle US, stock touts and tipsters dominated news headlines, stirring fierce debates on the legitimacy and morality of speculation that defined the history of modern finance. Memorable works of fiction during that era at once mirrored and fuelled the public’s fascination with rogue financiers. From Anthony Trollope’s sensational account of Victorian England’s corrupt traders in The Way We Live Now (1875), to Frank Norris’s epic of greed and wheat speculation at the Chicago Board of Trade in The Pit (1903), and from Theodore Dreiser’s fictionalisation of the notorious tycoon Charles Yerkes in The Financier (1912), to Edwin Lefèvre’s celebrated Reminiscences of a Stock Operator (1923), financial fiction invoked an underworld of greed and deception in which ruthless con men reigned supreme. Real figures like Charles Ponzi – the most infamous of all market fraudsters – could have jumped right out of the pages of these novels. Still, Ponzi and others like him were seen as deviant in ‘efficient markets’ whose hallmark principle of instrumental rationality epitomised the spirit of scientific modernity. The late 19th century was a time when mathematical forecasting took off in earnest in the major stock exchanges in the US. The trading floor became a testing ground for methods of scientific prognostication writ large, including in the fields of meteorology and climate-related prediction. The rise of statistical techniques such as time-series analysis and the Gaussian ‘bell curve’ distribution revolutionised the study of market price movements, further unmooring it from the material reality of assets. At the dawn of the 20th century, a conviction settled in among traders that stock prices were in some fundamental sense right, their fluctuation conveying a godlike truth. But while faith in the power of statistical prognostication was growing in the pits, it was not the only method of deciphering finance’s inner truths. The turn of the century saw a renaissance of mystical foreknowledge seeping right into the heart of markets, spawning influential financial practices including ‘technical analysis’ (gaining immense popularity under the aegis of Charles Dow, a co-founder of Dow Jones) and popular trading manuals expounding the virtues of ‘gnostic reason’.
Contrary to accounts of markets as cardinal sites of a disenchanted, scientific modernity, fin-de-siècle finance was the stage of a lavish spectacle that swept economic and political life alike. Rather than augurs of a stifling rationality, stockbrokers became the shamans and magicians of a ‘pecuniary enchantment’ – in the words of the historian Eugene McCarraher. Their alchemy, however, did not merely aspire to departures from the material reality of economic doings. Guided by a growing conviction in both material-scientific and spiritual practices, they sought to transmute the base materials of finance (capital, labour) and create a gold-coated reality in the image of markets. In it, their hermetic quests were undergirded by an unwavering belief in market rationality. The great sweep of the traders’ gospel was strengthened through significant developments in US finance over the ensuing decades, most notably the establishment of a national market for financial securities through a widespread distribution of stocks and bonds championed by the government. The promise of more democratic markets encouraged larger swathes of society to reap the benefits of ‘market wisdom’ alongside the professional financiers. However, while the predecessors of today’s securities analysts adeptly reaped the rewards of financial alchemy, those at the bottom of finance’s pecking order were proving more vulnerable to its evils. Economic and political observers of the fin-de-siècle era became gripped by the spectre of ‘irrational crowds’: mobs of market dwellers purportedly marred by manias, panics and delusions, and thus prone to manipulation and deception. This negative view of the world of lay finance was bookended by Charles Mackay’s 1841 account of ‘manias’ in Victorian-era markets and the historian Richard Hofstadter’s damning 1960s treatise on the Populist Movement as a ‘paranoid style’ of politics: collective action taken by exuberant publics who were led astray by misinformation, gossip and hearsay. If professional financiers were seen as competent helmsmen in turbulent speculative markets, amateur bettors were cast as deluded crowds threatening market stability. Their ‘noise’ was seen as a distortion of market reality, an inflection of the fundamental truths summoned by the signals of stock prices. The early period of market ‘democratisation’ had let the genie out of the alchemist’s bottle, and the fever of speculative finance was spreading to thousands of ‘bucket shops’ in the far corners of the US. But the trading of commodities did not merely excite publics by fuelling their speculative longings. It also invited them into a ‘marketplace of ideas’ that bolstered the vision of liberal democracy that came to define 20th-century politics. The term itself is often traced back to Justice Oliver Wendell Holmes and his 1919 dissent in the Abrams v United States Supreme Court case, in which he asserted the superiority of the truth defined by market competition. This avowal was not incidental – it reflected the broader convergence around the ideas of democracy and speculative finance that were congealing in US capitalism.
A few years earlier, Holmes had been a key figure behind a lesser-known – yet just as influential – Supreme Court ruling: the 1905 Chicago Board of Trade v Christie Grain & Stock Co, which declared futures trading (the most speculative kind of financial activity) legal and desirable because it enables ‘the self-adjustment of society to the probable’ (thus distinguishing professional exchanges from the lay trading in bucket shops, which he regarded as pure gambling). Sanctioning the enchanting world of markets while asserting its vast power inequity conferred on financial alchemy an enduring force that is still with us today. Our time’s market alchemists, like their forebears in the postbellum stock exchanges, are typically seen through the binary of fraud: the flip side of institutional norms assumed to be constant, fair and tending towards equilibrium. Popular depictions of high-octane finance continue to focus on stories of smoke and mirrors woven around lies and greed – and they do so for a good reason. But by singling out the ‘excess’ of a few fraudsters, they ultimately distract us from the messier reality of finance, where alchemy is at the core, not an outlier. The ways in which Madoff and Bankman-Fried steered their multibillion scams through global markets were not as much a deviation from that reality as a window into it. Because markets are worlds where noise and signal are impossible to distinguish, the boundaries between real and fake are much more porous than what is assumed in mainstream accounts of fraud. This, as I hope to have shown, has been the case throughout the history of a modern finance capitalism powered by alchemy. But it has become especially pronounced in our time, because contemporary forms of (computational, quantified) finance thrive in the uncertain space of big data and correlation, where noise reigns supreme. Fraud becomes both more insidious and harder to parse out in this context. Financial alchemy, in that sense, is more akin to distortion than to deception. Rather than a neat movement from facts to fibs, it represents an ambivalent coexistence of truths and falsehoods, which – as is often the case in today’s gamified markets – can even embrace fakeness. From J P Morgan’s avowed passion for astrology during the Gilded Age, to contemporary bankers’ enthusiastic endorsement of memetic NFTs, the history of finance brims with distortions that make no totalising claims of truthfulness. Financiers have long understood themselves as performers of alchemy, often being entirely transparent about their own gimmicks. Today, paradoxically, it is this open admission – the ‘exposure of the trick’ – that makes financial alchemy even more effective. Its mass appeal emanates from being rooted into our volatile social and cultural worlds. In them, opacity and spectacle so often become accepted features of everyday reality. Far from dupes in the grip of ‘collective hallucinations’, modern financial subjects have been entwined with the forces of alchemy in much more dynamic, imaginative – and, often, wilful – ways. Their collective expression has produced a politics rich in myth, stretching today from outlandish conspiracy movements like QAnon, to TikTok communities of ‘vibes’, and gaming and crypto-trading subcultures. It is for this reason that calls to break the spell of financialisation in everyday life offer insufficient answers to our so-called ‘post-truth’ moment. 
The ghosts of mob psychology and irrational exuberance have continued to haunt our perceptions of fraud and financial deception. But the present ‘reality crisis’ demands greater sensitivity towards the capacity of market distortion to create absorbing other-worlds. Distortion has been a critical force across fields as diverse as scientific and cultural production, from data science to music. Interpreting signals has been closely entangled with studying the generative possibilities of ‘noise’: looking for an ally in the glitches, dupes and ‘bad data’ that inhere within all forms of life and permeate our technologies of representing truth. At stake in the ‘fake worlds’ of financial alchemy is not merely resisting their will to deceive us but understanding their capacity to condition our struggles for other, more democratic realities.
Aris Komporozos-Athanasiou
https://aeon.co//essays/finance-fraud-is-not-a-deviation-from-the-norm-but-a-reflection-of-it
https://images.aeonmedia…y=75&format=auto
Space exploration
The detection of alien life won’t be obvious. It’ll be partial and inconclusive: a perfect task for the scientific method
The first images beamed back from the James Webb Space Telescope (JWST) were filled with jewels and fire. That’s what the galaxies look like, tiny and distant, resplendent in false-colour contrast: red, gold, and white-blue. Some stretched like gummy candies from gravitational lensing. Some radiated a six-point star, the signature artefact of JWST’s hexagonal mirrors. In one image, four large galaxies held for a breath in their cosmic dance, a moment in a long gravitational embrace that will end with their eventual merging. Under JWST’s gaze, the rusty crags of the Carina Nebula were translucent to the countless stars being born within. In the telescope’s infrared vision, dust is transparent. Nascent stars were unveiled; the most distant galaxies ever seen, recorded The JWST image of the interacting galaxies known as the ‘Stephan’s Quintet’ group. Courtesy NASA, ESA, CSA and STScI These images were chosen to impress: a stellar nursery, a galactic dance, the Universe’s first aeons, the death shroud of an exploded star. The fifth image is a graph of a wiggly blue line, studded with white data points. Four peaks of the blue wiggle are labelled ‘Water H2O’, marking the wavelengths of light absorbed by water molecules in an atmosphere. It is an image so unremarkable that NASA presented it over an artist’s impression of an exoplanet and star. Writing in The New York Times in July 2022, Joshua Sokol described the secretive process behind selecting these first images, full of visual splendour and scientific promise – the ‘early highlight reel’ that would, as the US president Joe Biden put it, ‘remind the world that America can do big things’. But it was that plain graph I was most excited to see. Not because it was beautiful, but because of what it meant: it was the portrait of an atmosphere of a planet 1,120 light years away. The first of many to come. I didn’t care about this particular gas giant. But I knew that other planets JWST could ‘see’ might be possible homes to life. Composition of the atmosphere of exoplanet WASP-96b. Courtesy NASA, ESA, CSA, and STScI For centuries, we’ve seemed to be on the verge of finding life beyond Earth – from Galileo’s first observations of Venus through a telescope and his realisation that planets were other worlds, to Percival Lowell’s observations of ‘canals’ on Mars, to the hopes of every NASA rover and SETI search. Yet the more we’ve learned about the solar system, especially in the past century, the rarer life has started to seem to be. Mars has no vegetation, let alone canals. Venus’s clouds don’t shroud a humid jungle but a surface so greenhouse-hot it can melt lead. There are thin hopes of finding traces of life from Mars’s warmer, wetter past. A slim chance that the subsurface oceans of outer-solar system moons might teem with exotic microbes. But there are only eight planets in our solar system. With the discovery of the first planets orbiting other stars in the 1990s, hopes for abundance re-emerged: so far, we’ve found more than 5,000 exoplanets, and scientists now believe that, if you point to any star in the sky, odds are a planet circles it. All those possible homes for life, and that’s just in our galaxy. Zoom out to JWST’s deep field, the galaxies scattered like jewels on black velvet – each comprises hundreds of billions of stars, and perhaps hundreds of billions of planets. 
They’re much too far away for us to ever know who or what might be living there but, as Carl Sagan liked to say, if there isn’t life out there, wouldn’t it be a waste of space? We think we know how discovery might go, because we’ve been raised on so many versions of this story. Ellie Arroway in Sagan’s novel Contact (1985) catches a radio signal beeping prime numbers coming from the star Vega in which is encoded a message of welcome and the gift of shortcuts to technological advancement. In Michael Crichton’s The Andromeda Strain (1969), alien microbes hitch a ride to Earth on a satellite, wreaking havoc. In the movie Arrival (2016) – and a thousand other cinematic sagas of conquest or visitation – alien ships come to Earth. That one, at least, we can set aside as a likely model. But even when it’s a signal, even when it’s a microbe, we’ll likely never know if it’s aliens. Not just because of the vast distances involved or because of the wild possibilities presented by chemistry and biology, but because science seldom works that way. Discoveries almost never arrive as we think they will, as lightning bolt eurekas. They are slow, gradual, communal. Alien life may not be something we ever ‘find’, but instead inch towards, ever closer, like a curve approaching its asymptote. For all our desire to know who’s out there, that may have to be enough. ‘No single effect, experiment, or paper provides definitive evidence about its claims. Innovation identifies possibilities. Verification interrogates credibility. Progress depends on both.’ So opens the ‘Community Report from the Biosignatures Standards of Evidence Workshop’ (2022), quoting an earlier paper. The effects in question are not astrobiological: the quoted paper is from cancer research. But this is how all science works. Cumulatively, in small steps, on the shoulders of giants, and in fits and starts. It’s a process the public seldom sees. Nor does it match the story we’re usually told with its litany of heroes – Newton, Copernicus, Darwin, Einstein: men who saw beyond their era’s paradigm to glimpse a revolutionary new worldview, bringing it to humanity like Prometheus (but without the punitive price tag). Our discomfort with the fallible process of science was made urgent during the COVID-19 pandemic. ‘Follow the science’ was a rallying cry, construing science as something other than it is; as a monolith rather than a process. When the stakes felt urgent, and personal, the slow sausage-making machine of science was not just unseemly, but a threat. Why was the best guidance changing? How could science not know! The search for life may be less urgent and less threatening, even if existentially higher-stakes, but we still think of scientists as plucking knowledge from the ether, finding truth and telling us. In reality, science isn’t about finding facts: it’s about creating knowledge. A 2018 paper in the journal Astrobiology introduced a tool called ‘The Ladder of Life Detection’ that synthesises our understanding of life and our ways of detecting it into a framework for determining what combinations of evidence could be sufficient to ‘preclude any abiotic interpretation’; that is, to say not It is life but rather It couldn’t be anything else. The authors organised their criteria not by the potential to definitively prove life but ‘to convince a majority of the scientific community’. After all, life exists or it doesn’t – whether observed by humanity or not. What science changes is our collective knowledge. The threshold is consensus. 
Astrobiologists are trying to answer one of humanity’s biggest questions with something like a shadow show It seems like it should be simple. You have no trouble telling what’s alive from what’s not. A cat versus a rock, a tree versus water. You can recognise what might be called ‘technosignatures’ too, proof of intelligent life’s material manipulations: a car exhaust, a cell phone, a city grid. But astrobiologists say things like: ‘The detection of extraterrestrial life in our solar system and beyond will likely be neither instantaneous nor unambiguous.’ Or they write: ‘Evidence of life may be subtle or unfamiliar, and reveal itself only in stages, as one observing campaign informs the next.’ Aren’t they supposed to be smarter than us? They’re not exactly rocket scientists, but they’re in the department next door. The problem is, you and I and the scientists are good at recognising Earth life. And big life, too. But scoop up a cup of seawater – or a slice of Antarctic ice – and it becomes much harder to determine what’s living. Even our intuition for habitable environments has been bested by microbial ingenuity, by extremophiles revealing the provinciality of our instincts. Then there’s viruses. Are they alive or not? You see how it gets tricky. Add to the mix trying to figure out all this from afar: with a remote-control robot on Mars, or a snapshot of a spectral reading of an exoplanet’s atmosphere, and you see that astrobiologists are trying to answer one of humanity’s biggest questions with something like a shadow show. Before we could read atmospheric spectra, all that could be known about an exoplanet was its mass, density, and how much energy it gets from its star. Plenty can be extrapolated from that: whether it’s rocky or gaseous, how much atmosphere it might hold, whether it might have liquid water on its surface. We know, generally, from how planets form, that the smaller ones are rocky and the bigger ones gaseous, like Jupiter and Saturn, while the smallest can’t hold onto an atmosphere at all. The same elements abound throughout the Universe, so distant planets are generally made of the same sorts of things as the worlds that orbit our Sun. So then what happens to all that geology and chemistry? Does some of it, as it did here, cross the ineffable line into life? We can learn a great deal from a planet’s atmosphere. Earth’s oxygenated atmosphere, which makes so much of life possible, got that way only because of life, the advent of photosynthesis that uses carbon and starlight to make energy, giving off oxygen in the process. There’s water vapour, too, showing the planet is habitable and inhabited. The atmosphere also holds telltale evidence of technological activity, like chlorofluorocarbons. It’s a rich text to read if you have the technology to do so. But we don’t quite have that yet. Nothing close to a confirmation of an alien biosphere will be coming from JWST JWST, NASA’s most powerful space telescope, observes in the infrared spectrum – good for seeing through interstellar dust, but not for detecting some of the most important potential biosignatures, like water. The planetary scientist Maggie Thompson, at ETH Zurich, studies the viability of methane as a biosignature. ‘Methane,’ she told me, ‘is one of the best biosignatures we could be able to detect with something like JWST,’ but it’s hardly a best-case scenario. In fact, JWST wasn’t designed to look for biosignatures at all, the astrobiologist David Catling told me. 
Its instrument suite was decided when exoplanet discoveries had just begun to trickle in. Catling recalled giving a talk about how to find biosignatures with JWST, when a scientist who studies Earth’s atmosphere asked: Why aren’t you just looking for oxygen? Catling sighed to me as he related this. Oxygen’s spectral signature is in the range of visible light, which JWST can’t see. An atmospheric biosignature is a hint of a hint of a hint, which we’ll likely never be able to confirm by direct sampling, given how far away exoplanets are. But the hints are nonetheless crucial. The astrobiologist Eddie Schwieterman told me: ‘Astrobiology is the study of the origin, evolution, distribution and future of life in the Universe.’ The solar system is just a tiny fraction of our own galaxy. And while the Copernican principle reminds us not to assume that we’re special, we can’t assume we’re average, either. For Thompson, an ideal scenario might be JWST detecting methane and CO2 in an exoplanet’s atmosphere. ‘If you saw something where there was a lot of methane, some decent amount of carbon dioxide, but very little or no carbon monoxide, that would be very interesting and worth exploring more.’ Still, the best you can hope for is to identify interesting targets for further study. Nothing close to a confirmation of an alien biosphere will be coming from JWST. Every astronomer I asked about using JWST to look for biosignatures said something along the lines of The real good stuff will come from the Habitable Worlds Observatory. And when will that come online? Oh, in 20 years or so. What will its capabilities be? No one knows, because it hasn’t been designed yet. Or funded. But the goal is for a space telescope that can observe in infrared, optical, and ultraviolet light, equipped to search for habitable exoplanets and detect signs of life there. An important component will be a coronagraph, which blocks a star’s light so the faint planets around it can be directly observed. But everyone involved is acutely aware that JWST was designed before scientists knew the questions they’d want to ask about exoplanets today; who knows what new questions we’ll learn to ask as Habitable Worlds finds its way into the sky? Even if all the (metaphorical) stars align, though, Habitable Worlds isn’t going to bring us certainty, either. It’s likely that nothing ever will. ‘We have to be somewhat comfortable with ambiguity,’ Schwieterman said. ‘The first indication we get that a planet may have life is not going to be a certain one, it’s not going to be: we point a telescope at a planet, slam dunk, there’s life on it.’ He foresees years of debate and investigation, chasing down every possible explanation for what we’ve seen other than life. ‘That’s the scientific method,’ he said. ‘And that is great. We’re talking about a civilisational goal here. We want to be sure that we’re right. And we have to have patience.’ In 2018, Catling published a paper in Astrobiology proposing a framework for assessing exoplanet biosignatures. It uses Bayesian statistics to arrive at a probability that any collection of observations is indicative of life, in the context of a range of observations and analyses. In other words, it’s a lot more than Do we observe a given biosignature? Catling told me of the response he got to the work: ‘One person said, What you’ve described here, that’s a research programme over 50 years! I said, Well, sure.’ Better it takes 50 years than gets done sooner and poorly.
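To make the logic of such a framework concrete, here is a minimal sketch of a Bayesian biosignature assessment; the notation and the illustrative numbers are assumptions of my own, not values from Catling’s paper. Writing D for a set of observations (say, an atmosphere holding both oxygen and methane) and L for the hypothesis that the planet hosts life, Bayes’ rule gives
P(L \mid D) = \frac{P(D \mid L)\,P(L)}{P(D \mid L)\,P(L) + P(D \mid \neg L)\,P(\neg L)}
If, hypothetically, the observations were ten times more likely on a living planet than on a dead one, but the prior probability of life were only 1 per cent, the posterior would be (10 \times 0.01)/(10 \times 0.01 + 1 \times 0.99) \approx 0.09: a planet well worth re-observing, and nowhere near a discovery.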
We have a historical tendency to jump to wishful conclusions. The Mariner 4 image from the Martian flyby of 1965 showing the surface of the planet. Courtesy NASA ‘The history of solar system exploration shows that people have tended to want to claim the presence of life when, in fact, there was no life,’ says Catling. Before Mariner 4 made its Martian flyby in 1965, the mainstream explanation for some Martian surface features, dimly glimpsed from Earth, was vegetation; abiotic (and correct) explanations like dust storms were discounted, because conventional wisdom, and wishful thinking, pointed to life. ‘That little story makes me want to express scepticism when someone’s claiming that some phenomenon that doesn’t have to be life is life.’ We want to find life, and what scientist wouldn’t want to be the one to find it? Yet even when claims turn out to be ‘suspect in retrospect’, as Catling carefully put it, ‘there is some benefit to it, because it does push people to make new measurements, and to think up new ideas.’ Catling paraphrased Sagan: ‘Keep an open mind, but not so open your brains fall out.’ To recognise alien life, we need a more expansive, holistic understanding of what we’re looking for Ambiguity is not a flaw – it is how science works. You take your evidence, and you don’t overreach. You look at it for what it can tell you. You hypothesise in terms of probability. But all of that rests on prior knowledge – our sense of what we already know, or think we do. ‘Up until now,’ says the researcher Heather Graham, ‘the way we’ve searched for life … off of our planet has been very focused on features that we know we have in common with all the organisms on Earth’ – and this approach needs to change. Graham, a research physical scientist at NASA Goddard, comes to this work with a training in organic geochemistry and palaeoecology, and immersion in astrobiology. They see a promising path in the search for agnostic biosignatures, biosignatures that have nothing to do with Earth life. In order to be open-minded enough to recognise alien life, we need a more expansive, holistic understanding of what we’re looking for. ‘A really strong biosignature we can think about is disequilibrium of gases that would normally react together,’ Graham says. If the gases persist, they’re coming from somewhere. ‘That’s a sufficient signal to say, there might be an energetic input here. It could be a volcano, or it could be a critter.’ Essentially, this means looking for something other than life. ‘We can’t just steam out into the outer solar system and look for a cell or something like that,’ Graham says – astrobiologists need to be asking (and asking colleagues like geochemists and biologists): What are the energy sources? What are the nutrient sources? What are the physical conditions that this organism would have to contend with? Only armed with that holistic context can you reasonably start asking who might live there and what their signatures might be. Graham echoes Catling’s embrace of uncertainty. ‘This is a spectrum of understanding. And there are sources of uncertainty that need to be grappled with at every stage.’ Instead of seeking certainty, seek greater confidence. The gradual process of accumulating knowledge and understanding, nudging the slider step by step slightly higher up the confidence scale, along with the interdisciplinary nature of the work, means there’s no room here for the great-men model of discovery. Which is probably a fallacy, anyway. 
The historian of science Steven J Dick warns: ‘There is no such thing as immediate discovery in astronomy, or, I would venture to say, in all of science.’ Breaking down ‘discovery’ into three discrete phases – detection, interpretation, and understanding – Dick proposes stretching out discovery even farther. He sees in the history of scientific discoveries a common ‘pre-discovery phase’, when ‘the true nature of an object, signal or phenomenon goes unrecognised or unreported, or during which only theory indicates the phenomenon should exist.’ By this model, astrobiology would be in the pre-prediscovery phase, squarely in the realm of This should exist. Driven not by theory, though, but by desire and hope. Atmospheric biosignatures may never give us a definitive answer, but they’re not the only signal to search for. Life does more than metabolise and excrete; it makes things, too: technology. Enter SETI, the Search for Extraterrestrial Intelligence, or more accurately the search for alien technology, because technology is what we’d be able to detect. (Apologies to any aliens focusing their energies on art and philosophy instead of transmitting radio waves or making use of ever more energy from their stars.) Technology is an appealing target because it may be more definitive than chemical proof of life. Life makes methane, but so do volcanoes. No natural process can make a satellite. Of course, the challenges of remote detection remain: recall the interstellar asteroid ʻOumuamua, about which we could not know or see quite enough to squash a fringe insistence that it wasn’t a comet but an alien craft. But, for the most part, things could be clearer this way. Sofia Sheikh is a radio astronomer and astrobiologist, currently a postdoctoral fellow at the SETI Institute. She told me that a SETI search mainly looks for narrowband signals, radio emissions at a particular frequency, which, as far as we know, or can imagine, could be made only by technology. (The narrowest-band known emissions from natural phenomena, MASERs, span 1,000 or 500 hertz: Sheikh targets just one.) Technological signals must logically be designed to be distinct from natural radio emissions, because without even worrying about alien astronomers (us) trying to eavesdrop, good communication methods aren’t easily interfered with by the radio rumblings of the cosmos. The challenge, then, is differentiating an extraterrestrial signal from the abundant noise emitted by technology here on Earth. In 2020, news leaked of an intriguing signal that had been found by Sheikh’s colleagues in the Berkeley SETI Research Center. Dubbed BLC-1 – the first Breakthrough Listen Candidate – the signal was detected in spring 2019, seeming to come from the direction of Proxima Centauri, the closest star to the Sun. Sheikh led the investigation; the leak came halfway through what would be a four-month process. ‘It was really in the middle of the analysis,’ she told me. ‘We were pretty sure we didn’t have a true astrophysical signal. But we weren’t quite ready to say that, because we weren’t done.’ BLC-1 was always clearly the product of technology – the question was whether it was alien. ‘In all these cases, we go back, look at the same target and frequency, and you don’t see it’ If the image you have of a SETI detection is Jodie Foster in the film Contact (1997), headphone pressed to her ear, running around cranking dials so the signal blares through the lab, I’m sorry to say that the director Robert Zemeckis (and even Sagan himself) led us astray.
Part of the problem is the word ‘radio’ – radio signals are a chunk of the electromagnetic spectrum, the same kind of energy as visible light but with much longer wavelengths. But we hear ‘radio’ and think ‘sound’, which is not helped by astronomers talking about ‘listening for signals’ or ‘eavesdropping’ or a filmmaker chasing a high-impact scene. The signal isn’t a sound. And it’s rarely heard – sorry, observed! – in real time, either. BLC-1, for example, was unearthed from a pile of candidates culled by a machine-learning algorithm scanning for curiosities, singled out for further attention by human eyes (those of Shane Smith, an undergraduate research intern). So if you’re not listening in real time to a sequence of prime numbers, as in Contact, what would it take to know that a signal was alien? First, point your telescope back at the source (‘get on-sky’, in the technical parlance) and see if the signal is still there. ‘So far,’ Sheikh told me, ‘we have never had an instance where we have a signal of interest and observe [the source] again and it’s still there.’ Not with BLC-1, not with the so-called ‘Wow! signal’ of 1977 – a radio blip that was never explained away, but also never explained because the recording at the time was so information-poor and the signal never recurred. ‘In all of these cases, we go back, we look at the same target and same frequency, and you don’t see it.’ If you did see it again, Sheikh said, it wouldn’t be a slam dunk, but it would still be a momentous first. The Wow! signal of 1977. Courtesy Wikipedia After that, confirmation would include tracking the signal’s drift against Earth’s rotation, to confirm that it’s actually coming from a great distance. Then you’d get more telescopes on the target: if additional instruments see it, then you know it’s really out in the sky, and can more easily rule out a hoax. ‘And then I think a great debate and verification process starts happening, where if it’s truly in the sky… can we prove it’s technology? Does the signal contain information?’ The slide from detection to confirmation to trying to understand. Sheikh doesn’t value a repeat detection for its being able to answer Is it aliens? with a Yes, but for the further questions it allows us to pursue. ‘A repeatably measurable signal [is] the only time you can do science on it. If it’s not repeatable, it can’t be analysed in the scientific framework.’ A real-life off-world signal wouldn’t be the end of the quest, but another beginning. In his book The Impact of Discovering Life Beyond Earth (2006), Dick points to at least six times in the past 200 years that we’ve thought we’ve found alien life, and I’d say we’ve had a seventh since his writing, with the announcement of phosphine discovered on Venus. Some of the six were hoaxes or misunderstandings, like the War of the Worlds broadcast, but others, like the discovery of pulsars in 1967, gave scientists pause. And one of them, 1996’s purported discovery of microfossils in a Martian meteorite, got a presidential press conference when Bill Clinton stood on the South Lawn of the White House on 7 August, and said: Like all discoveries, this one will and should continue to be reviewed, examined and scrutinised. It must be confirmed by other scientists. He went on to say that even if rock 84001 ‘promises answers to some of our oldest questions, it poses still others even more fundamental.’ The promised answers never came: the rock turned out to hold no fossils, after all.
President Clinton rightly claimed that the search for life ‘is as old as humanity itself’. But while our questions are ancient, the science that seeks answers is extraordinarily young. We have only just scratched the surface – or, in the case of exoplanet biosignatures, not even that. We have so little data, and so many questions. Sheikh told me: ‘It’s the challenge with astronomy … how far apart things are, how long it takes for things to happen.’ Astrobiology, she said, grounds us in the familiar and makes the incomprehensible more concrete. Imagining the cosmos as a home for life brings it back to a human scale. We see a spread of distant galaxies and think immediately of the potential for life. We see a stellar nursery and think of just a few more steps needed for biology to emerge. If the Universe is full of life, we on Earth may not be special, but we are central, part of the point. We just have to find a way to settle into our place in the cosmos without ever knowing.
Jaime Green
https://aeon.co//essays/alien-life-might-not-be-something-science-can-ever-discover
https://images.aeonmedia…y=75&format=auto
Nations and empires
Displacing and destroying peoples by colonisation is not just a historical Western evil but a global and contemporary one
In 1931, Japan invaded northeast China and established a client state called Manchukuo (Manchuria). To secure control over Manchuria, over the next 14 years, the Japanese government lured 270,000 settlers there by offering free land to ordinary Japanese households. Japanese propaganda stressed, importantly, that this colonisation scheme was not inconsistent with Japan’s commitment to racial equality. Japanese farmers would bring new agricultural techniques to Manchuria and ‘improve’ the lives of native Manchus, Mongols and Chinese by way of example. Japan’s settlement of Manchuria represents a case of settler colonialism, a concept that was initially developed in the humanities to explain the violent history of nation-building in North America and Australasia. Unlike traditional colonies such as India or Nigeria, as Patrick Wolfe explained, settler colonies do not exploit native populations but instead seek to replace them. The key resource in settler colonies is land. Where Indigenous land is more valuable than Indigenous labour – often because Indigenous peoples are mobile and cannot be easily taxed – native peoples are killed, displaced or forcibly assimilated by settlers who want their land for farming. Settlers and their descendants then justify these land grabs through discourses that both naturalise the disappearance of Indigenous peoples (it was disease!) and stress the benefits of the civilisation the settlers brought with them. A series of postcards promoting Manchukuo. Courtesy the Manchukuo Collection, Harvard-Yenching Library Although settler colonialism has become a valuable framework for explaining the history of Western countries like the United States and Australia, the dynamics that it describes are clearly quite general. Japan’s leaders in the 1930s, for instance, similarly salivated at the seemingly empty plains of Manchuria, which appeared to offer a solution to the food needs of Japan’s rapidly growing empire. And just like policymakers in the US, Japan had a variety of self-serving justifications for settling this new frontier. Its claim that Japanese farmers would contribute to ‘co-prosperity’ and ‘racial harmony’ in Manchuria and Korea bore little resemblance to the forced assimilation, discrimination and dispossession experienced by subject peoples there. As such, in popular and academic writing today, there is no resistance to naming Japanese colonialism and imperialism in East Asia or to placing Japanese settler colonialism in conversation with Western settler colonial projects. What is odd, however, is that, while Japan’s colonisation of Manchuria was unfolding in the 1930s, Western scholars otherwise committed to the abolition of racism and imperialism were far more reluctant to condemn Japan. This confusion helps illuminate why other, very similar settler colonial projects currently unfolding in the Global South have received relatively little attention or condemnation today. In 1936, the noted US scholar W E B Du Bois visited Manchuria, China and Japan as part of a world tour. Japan’s rise had long been a source of inspiration for Du Bois, who claimed in The Souls of Black Folk (1903) that ‘the problem of the 20th century is the problem of the colour line’. Japan’s wartime victory over Russia in 1905 seemed to Du Bois to augur the long-awaited rise of coloured peoples around the world. And Japan was a rhetorical champion of racial equality in the interwar period.
It had tried (but failed) to enshrine racial equality as a founding principle of the League of Nations in 1919, and its diplomats proved vocal critics of Jim Crow in the US South. W E B Du Bois with Japanese professors in Tokyo, 1936. W E B Du Bois Papers (MS 312). Courtesy Special Collections and University Archives, University of Massachusetts Amherst Libraries It is in this context that Du Bois visited Manchuria in 1936. He would subsequently report that what Japan had accomplished in Manchuria was ‘nothing less than marvellous’. Du Bois gaped at Manchuria’s absence of unemployment, sparkling new infrastructure and ‘happy’ people. The absence of an explicit racial hierarchy or segregation between different ethnic groups in Manchuria, with schools divided only by language, seemed to make Japanese rule there qualitatively different from European colonialism. Japan, to Du Bois, was ‘above all a country of coloured people run by coloured people for coloured people.’ Du Bois’s credulous defence of Japan in the interwar period is a major analytical blind spot The trouble for Du Bois, of course, was that his Chinese friends felt very differently about the whole matter of Japanese rule in Manchuria. Du Bois struggled to understand the enmity between China and Japan: two ‘coloured’ nations who should ostensibly be political allies. Shortly after leaving Manchuria, he provocatively questioned an audience in Shanghai: ‘Why is it that you hate Japan more than Europe when you have suffered more from England, France and Germany, than from Japan?’ A year later, in the wake of the undeniable atrocities committed by Japan against hundreds of thousands of Chinese civilians in Nanjing, Du Bois doubled down on his defence of Japan. He wrote that ‘Japan fought China to save China from Europe’ and that, even if it had committed violence in China, Japan was simply following Europe’s playbook. The Japanese had also not invented the practice of ‘killing the unarmed and innocent in order to reach the guilty’, he emphasised, highlighting similar European counterinsurgency practices in South Africa and the Punjab. A 10th-anniversary poster for the Manchuria Airline Company, c1941. Courtesy the MFA, Boston Du Bois’s credulous defence of Japan in the interwar period is acknowledged by even his most sympathetic interlocutors as a major analytical blind spot. The point of highlighting these errors is not to undermine Du Bois’s critique of the operation of race in the Western world. Rather, highlighting how Du Bois became a surprisingly vocal defender of Japanese colonialism points out how even otherwise insightful political observers can spectacularly miss the mark with respect to understanding how race and power operate in ‘coloured nations’. Du Bois’s errors, in other words, have much to teach us about why scholars continue to fail to understand settler colonial projects in the Global South today. In the early 1960s, Indonesia annexed the western half of the island of New Guinea or ‘West Papua’, claiming to liberate the people there from Dutch colonial rule. In response to a series of uprisings from Indigenous Papuans in the 1970s and ’80s, Indonesia resettled 300,000 farmers from its core islands to West Papua in just two decades. Much like Japan in Manchuria, Indonesia lured large numbers of ordinary Indonesians to West Papua by promising them free transport and land there. 
And much like Japan in Manchuria, Indonesia justified this resettlement or ‘transmigration’ scheme to external observers by stressing two things. First, transmigrants would bring agricultural development to West Papua and thereby improve the living standards of what officials called ‘primitive’ Papuans. And second, transmigration was not inconsistent with the state’s commitment to ethnic and racial equality. Quite the opposite, in fact. Mixing ethnic groups together would produce social cohesion. As Martono, Indonesia’s minister for transmigration, put it: ‘[T]he transmigration programme highlights social integration so that racial differences and differences between ethnic groups will no longer exist. There is no such thing as one ethnic group colonising another [in Indonesia].’ The disappearance of West Papuans as a distinct group, in other words, would be the natural result of ethnic mixing. These justifications were accepted by Western donors in the World Bank who ultimately funded the transmigration scheme. Indonesia ethnically cleansed and settled the most resource-rich areas of West Papua West Papuan activists argued that these official rationales were red herrings; the real purpose of transmigration was not to foster economic development but to prevent West Papua’s secession by flooding the island with settlers. As Benny Wenda, a leading West Papuan activist, put it in a statement in 2014: ‘The Indonesian government is deliberately trying to keep our population low [and] flood the country with Indonesians. This is not what we Papuans need and it is not what we are asking for.’ Indonesian transmigration in West Papua indeed turned Indigenous Papuans into a minority in much of the island, making an independent West Papua much more difficult to achieve in the future. For a long time, these competing claims about the purpose of Indonesian transmigration in West Papua were difficult to parse. Some observers stressed the settler colonial nature of Indonesian rule over West Papua, whereas others stressed the benign effects of transmigration. But in a recent project, I collected highly sensitive internal government data capturing precisely where and when the Indonesian government displaced Indigenous Papuans and settled their lands over the late 20th century. These data clearly support the claims made by Indigenous activists. Indonesia ethnically cleansed and settled border areas in response to cross-border insurgent attacks from Papua New Guinea. Indonesia also cleansed and settled the most resource-rich areas of West Papua. In other words, the Indonesian government’s own data undermine its claim that resettling hundreds of thousands of people to West Papua was a benevolent strategy for economic development. Transmigrants were sent to colonise areas unsuited to intensive agriculture but that had great geostrategic value. Indonesian transmigration in West Papua, much like Japan’s settlements in Manchuria, was a tool for coercively locking a contested frontier and its rich resources into the state. It was and is colonisation. Indonesian settler colonialism in West Papua was not particularly unusual in the late 20th century. If we define settler colonialism as the coercive displacement of Indigenous peoples by settlers, then a wide range of cases fit this bill. 
To list just a few in Asia: China settled millions of Han Chinese to Xinjiang and Tibet in the 1960s and ’70s; Sri Lanka resettled hundreds of thousands of Sinhalese to formerly Tamil areas in the 1960s and ’70s; Thailand resettled more than 100,000 Buddhists to its southern Malay areas in the 1960s and ’70s; Bangladesh settled 400,000 Bengalis to the Chittagong Hills in the 1970s and ’80s; and Iraq resettled tens of thousands of Arabs to Kurdish areas in the 1980s and ’90s. More recently, in 2018 Myanmar began to attract Buddhists to formerly Muslim Rohingya areas, and in 2019 India controversially made it much easier for Hindus to emigrate to Kashmir. Between these different resettlement schemes, we can usually discern a common underlying logic. European settler colonialism in the 18th and 19th centuries generally involved a large degree of agency on the part of settlers who moved into areas where state authority was previously nonexistent. The state followed the settlers. Settler colonialism in the Global South, on the other hand, generally takes place within internationally accepted borders and is ‘state-led’, meaning that bureaucrats select settlers, demarcate frontier farms, and fund settler relocation. Settlers follow the state. Colonised peoples in the Global South have experienced a double erasure: by settlers, and by settler colonial studies State-led colonisation, whether for Japan in Manchuria, Indonesia in West Papua, or Iraq in Kurdistan, often escalates in response to insurgency and the fear of secession in ethnic minority areas. Unable to distinguish between who is an insurgent and who is not, states displace ethnic minorities who are actively engaged in rebellion and settle their lands with more stereotypically loyal ethnic groups who can prevent cross-border incursions. As one advocate for Manchurian settlement put it in 1934, the ideal Japanese settler is not just a productive farmer but also someone who is ‘ready to draw his gun and risk his life fighting for his country should bandits invade.’ Or, as one Burmese official more recently emphasised, settlers usefully create a ‘human fence’ along contested borders. Yet, settler colonialism in the Global South fails to attract international attention. Maps circulating online depicting where settler colonialism is ‘still a reality’, for instance, almost exclusively depict areas settled by Europeans. Colonised peoples in the Global South have experienced a double erasure: first by settlers and second by settler colonial studies. We have not seen Boycott, Divestment, Sanctions movements on China, Indonesia, Morocco or Bangladesh rocking Western campuses. We have not seen the burgeoning field of settler colonial studies attempt to seriously theorise settler colonialism as an ongoing practice in the Global South. And we have not seen the UN Human Rights Council or General Assembly condemn these states for coercively settling the lands of minority groups, which jars, considering the attention paid to Israel in these forums. Why? The case of Manchuria is instructive because the mistakes that Du Bois made there shed light on similar mistakes made by Western Leftists today who are otherwise vocal critics of Israeli settler colonialism in Palestine. Du Bois made two errors in his analysis of Manchuria in the 1930s, which together led him down the path of justifying Japanese colonisation. The first was to presume that a state officially committed to racial and ethnic equality cannot be a violent, exploitative coloniser. 
The second was to presume that the colour line, the central political division in the US, is a master key that explains political conflict elsewhere in the world. Let us take each of these mistakes in turn. The first mistake that Du Bois made in Manchuria was presuming that a commitment to racial supremacy is a necessary aspect of colonisation. It is understandable why Du Bois made this mistake. White settlers in the Americas, sub-Saharan Africa and Australasia justified their monopolisation of land through white supremacy. Racist ideas like ‘terra nullius’, for instance, meant that all the land in Australia was treated legally as unowned and unoccupied before British colonisation. European settlers created rigid legal racial hierarchies in colonised areas, reserving certain areas for whites only. Indigenous peoples were subjects, not citizens, and were often forcibly put into reservations. Settler colonialism in the Global South is not usually accompanied by these explicitly racist qualities. Indeed, what is characteristic about settler colonialism in the Global South is that it is generally accompanied by a perverse rhetoric of racial equality. Anticolonial leaders across the Global South enshrined ethnic equality as a foundational commitment of their nation-states in the 20th century, in explicit contrast to the racial hierarchies that characterised European colonial rule. For instance, at the Bandung conference in 1955, Indonesia’s president Sukarno emphasised how countries like China, Indonesia and India were united by ‘a common detestation of racialism’. The violent displacement of minorities by dominant ethnic groups in settings like Xinjiang or West Papua seems paradoxical. How do we explain the ongoing practice of settler colonialism in countries rhetorically committed to the abolition of colonialism? White Americans can resist affirmative action by using the rhetoric not of racial supremacy but of racial equality The way out of this paradox is to recognise that settler colonialism need not be justified by racist ideologies like white supremacy or terra nullius. When all ethnic groups in a country have the same political rights, no one group has any greater claim to a piece of territory than any other group. Equality before the law can therefore be used to rhetorically justify the mixing together of ethnic groups within national boundaries. For instance, to justify the presence of Han settlers in ethnic minority areas like Tibet and Xinjiang, China’s president Xi Jinping recently emphasised how ‘Ethnic equality is the prerequisite and basis for achieving national unity … the Han cannot be separated from the ethnic minorities, and the ethnic minorities cannot be separated from the Han.’ Martono similarly emphasised how settling people in frontier areas like West Papua would ‘realise what has been pledged: to integrate all the ethnic groups into one nation, the Indonesian nation.’ The rhetoric of national equality was also used in 2019 by India’s prime minister Narendra Modi to justify changing the Indian Constitution. Modi emphasised how scrapping Articles 370 and 35A, which long prevented non-Kashmiris from emigrating to Kashmir, would help foster national equality by removing special legal privileges granted only to one regional group. Racial ideologies are malleable things, easily twisted to rationalise the interests and actions of those in power. 
This point is frequently made by critical race theorists in the US who emphasise how ideologies of ‘colour blindness’ today have been strategically adopted by conservative politicians to limit redistribution to Black Americans. In other words, white Americans today can resist policies like affirmative action that would affect their material standing by using the rhetoric not of racial supremacy but of racial equality. Indigenous autonomy can be similarly delegitimated by those in power in the Global South like Sukarno, Modi and Xi for purporting to give special rights over a piece of territory to a particular ethnic group. Ethnic equality, whether for Japan in Manchuria, China in Tibet or Indonesia in West Papua, is a useful justification for denying the sovereignty of Indigenous peoples and for flooding their lands with co-nationals. The rhetorical justifications for colonisation may be different in the Global South, but the outcomes – displacement, cultural erasure, and the loss of Indigenous self-determination – remain fundamentally the same. The second mistake that Du Bois made in Manchuria was to presume that the colour line is the defining axis of political conflict around the world. Du Bois essentially disregarded Chinese complaints about Japanese expansion into Manchuria because he saw this conflict as a distraction from the much more fundamental global division between white and nonwhite peoples. As he explained: ‘It is not that I sympathise with China less but that I hate white European and American propaganda, theft, and insult more.’ Japan was leading the resistance of coloured peoples against Europe and the US. Its expansion into Manchuria in the 1930s was therefore justified because Japan needed Manchuria’s natural resources to ‘escape annihilation and subjection and the nameless slavery of Western Europe’. We can discern a connecting thread here between Du Bois and vocal anti-imperialists today who are often pejoratively called ‘tankies’. Tankies are Leftists who are so concerned with what they see as the fundamental evil of the world – US imperialism – that they ignore, deflect or justify atrocities committed by countries that are aligned against the US. The first move is usually to ignore. For instance, as mentioned earlier, in the 1970s and ’80s, Iraq expelled more than a quarter of a million Kurdish people and repopulated a long stretch of its northern border with Arabs. Then, in 1988 approximately 100,000 Kurds were killed in a chemical weapons campaign called Anfal (amaliyet al-Anfal). However, writing in the London Review of Books in 1991, Edward Said, probably the most prominent postcolonial theorist in the Middle East, notoriously sowed doubt about Anfal’s existence because he feared that these atrocities could justify US military intervention. Most recently, China’s mass incarceration and sterilisation of its Uyghur minority in Xinjiang have been strenuously denied by a range of Leftist writers and organisations. Tankies have attributed evidence of genocide there to ‘Western atrocity propaganda’, disseminated by Western state actors to stymie China’s rise and justify war. Tankies are not generally disposed to grant any agency to minority groups themselves The second move, if atrocities cannot be easily ignored, is to deflect by attributing such violence to the West. This move, like Du Bois’s explanation of Japan’s ‘defensive’ expansion into Manchuria, can take the form of blaming Western aggression. 
Indonesia was obliged to invade West Papua in the early 1960s, the argument goes, because the territory was a Dutch ‘pistol pointing at Indonesia’s chest’. More recently, scholars like Jeffrey Sachs or Noam Chomsky have argued that Russia was provoked into invading Ukraine in 2022, because NATO enlargement threatened Russia. Secessionist movements are also attributable to Western interference, the argument goes. West Papuan resistance against Indonesia, for instance, is blamed by Indonesian nationalists on a Dutch ‘time bomb’ of divide and rule and ongoing interference from Western states like Australia that seek to weaken Indonesia. These fears are not particularly helped by the fact that Western states often do extend aid and support to ethnic minorities in rival states. The fact that the US led international condemnation of Japanese expansion into Manchuria in the 1930s, that Israel has become a vocal supporter of the Uyghurs and the Kurds, or that the CIA did help train Tibetan rebels in the 1960s, for instance, delegitimates secessionist movements in the eyes of tankies who are not generally disposed to grant any agency to minority groups themselves. The final move, if ignoring atrocities in the Global South or blaming these conflicts on Western interference is not straightforward, is to justify state violence. This usually, like Du Bois in Manchuria, takes the form of emphasising the ‘benefits’ of modernisation brought by the state. Japanese settlers in Manchuria emphasised the improvements that they were bringing to an undeveloped land. Chinese settlers in Tibet and Indonesian transmigrants in West Papua similarly regard themselves as missionaries of progress, dismissing disaffected Tibetans and West Papuans as lazy ingrates. This is a vexing argument because, in a narrow sense, it is not totally wrong. Just as colonies settled by Europeans today tend to be wealthier than colonies in which Europeans did not settle, areas settled by Javanese transmigrants in West Papua or Han Chinese settlers in Xinjiang tend to be wealthier than otherwise similar areas. GDP per capita in West Papua, for instance, is almost twice that of neighbouring Papua New Guinea, and its physical infrastructure is much better. However, economic development is not necessarily political progress, particularly when such progress comes at the cost of dispossession, cultural loss and subjugation. It is evident to any reasonable observer that European colonisation of North America and Australasia cannot be retrospectively justified by its economic benefits, particularly when such benefits have primarily accrued to the descendants of settlers. For some reason, however, this point escaped Du Bois in Manchuria and escapes Leftists today. When referring to the incarceration of Uyghurs in China, for instance, the Marxist Vijay Prashad emphasised to The Nation magazine in 2022 that it’s ‘the price that people pay … [to] alleviate or eradicate absolute poverty.’ Such rationalisations are a sham. There are better ways to alleviate poverty than by forcibly incarcerating an entire ethnic group, removing their children, and subjecting them to re-education. In February 2023, Israel announced that it was authorising nine Jewish outpost settlements in the Palestinian West Bank and the construction of 10,000 new houses there. This decision was met with widespread condemnation by Western Leftists. 
For instance, New Zealand’s foreign minister Nanaia Mahuta Tweeted that New Zealand ‘rejects Israel’s decision to authorise nine settler outposts in the occupied West Bank … We call on Israel to reverse this decision and avoid unilateral actions that escalate tensions and undermine the two-state solution.’ Mahuta’s vocal condemnation of Israel was noteworthy in New Zealand for it stood in stark contrast to her cautious stance on a similar conflict much closer to home. When questioned in Parliament seven months earlier whether she supported Indigenous self-determination in West Papua, Mahuta emphasised that New Zealand ‘fully respect[s] … the sovereignty and territorial integrity of Indonesia.’ Western Leftists can and should do better than grandstanding on the issue of Palestine while ignoring, deflecting or justifying settler colonialism in the Global South. It is demoralising that the Palestinian president Mahmoud Abbas visited China this June and issued a statement denying that events in Xinjiang are ‘human rights issues at all’. Abbas claimed that the mass incarceration of Uyghurs was instead driven by ‘anti-violent terrorism, de-radicalisation, and anti-separatism’, bizarrely echoing Israel’s own rationalisations for ongoing violence against his people. The fact that Israel, not Palestine, has condemned the erasure of Muslims in western China should give us pause. The correct response to the prevalence of settler colonialism beyond Palestine is not, as tankies would have us believe, to side with Palestine and be silent on the Uyghurs. Nor is it, as Right-wingers in the West and Israel would have us believe, to side with the Uyghurs and be silent on the Palestinians. The correct response is quite obviously to stand with all marginalised peoples, be consistent in our political activism, and attend to context rather than subsume conflicts under some wider, more important geopolitical division or colour line. Greed, status and state-building are the key dynamics animating settler colonialism, and these dynamics can be found everywhere. If we fail to learn from Du Bois’s mistakes in Manchuria, we are doomed to repeat them.
Lachlan McNamee
https://aeon.co//essays/settler-colonialism-is-not-distinctly-western-or-european
https://images.aeonmedia…y=75&format=auto
Bioethics
Infertility treatments aim to improve women’s lives. But they risk tying womanhood to the toxic expectation of motherhood
Feminist critics first met in vitro fertilisation (IVF) developments with suspicion. Gena Corea argued that assisted reproductive technologies (ARTs) would reduce women to ‘Matter’ and represented a troubling medicalisation of the reproductive process that was poised to harm women. Corea made prescient predictions that markets in wombs and eggs would develop along classed and racialised lines. Writing in her book The Mother Machine (1985), she estimated that there would be a demand for the wombs of women of colour but not for their donor eggs, in a manner continuous with racism. Corea also observed that a woman’s economic situation was central to her ‘will’ to engage in commercial surrogacy. A generation or more later, Corea’s claims have been borne out by the development of surrogacy industries in the Global South, which are hotspots for ‘reproductive tourism’, attracting wealthy international consumers. Ethnographic research reveals that surrogates typically exhibit socioeconomic vulnerabilities and palpable financial motivations. They are poor and want to make money. Meanwhile, countries in Europe, including Spain, the Czech Republic and Greece, have emerged as popular centres for compensated egg donation. In her book Women As Wombs (1993), the feminist theorist Janice Raymond pointed to the marketised development of ARTs and the cultural expectation that all women should mother. She saw these as constructing women’s choice to engage in fertility procedures. Corea and Raymond shared a scepticism about ARTs. They called attention to their experimental nature, citing the harm previously inflicted upon women by the medical profession (through practices such as forced sterilisation, medically unnecessary hysterectomies, and harmful birth control) and advised caution toward further reproductive interventions in the name of fertility. Though they are extremely rare, serious adverse outcomes from IVF do still occur. Pregnancies through IVF are considered higher risk, and may lead to gestational diabetes, premature delivery, low birth weight, and miscarriage. The use of fertility drugs in IVF to induce egg production can cause ovarian hyperstimulation syndrome: last year in India, a seemingly healthy egg donor died while doctors were retrieving her eggs. ARTs such as IVF are often viewed as a ‘treatment’ for infertility. Infertility, however, turns out to be difficult to define. A prima facie attempt might see it as the inability to conceive and reproduce through natural means. Yet, from the outset, Raymond and other feminist theorists questioned the claim that infertility is a disease, and instead noted how infertility diagnoses rose in tandem with the proliferation of commercially motivated infertility specialists. Raymond also drew an analogy with the classification of disability, pointing to the way in which disability rights activists maintain that physical handicaps should not be treated as diseases. Philosophers of medicine continue to debate the nature and definition of ‘disease’. The naturalist position in this debate holds that we can maintain purely descriptive definitions of disease and health. In the 1970s, Christopher Boorse’s biostatistical theory of health set out the terms of naturalism, and it remains influential. The naturalist view focuses on the idea of typical, or species-normal, biological functioning. A fundamental grounding claim is that the human body comprises organ systems that have teleological natural functions.
These organ systems might depart from their natural functions in various ways. Some of these departures are harmful, and these are classified as diseases. Disease is thus defined as a harmful deviation from species-normal biological functioning. On the naturalist view, the determining of bodily malfunction is an objective matter. Normativists argue that health and disease are essentially value-laden phenomena. It is impossible, normativists maintain, to assess health, disease or disability without making value judgments, which are often concealed (even from those making them), and so health, disease and proper bodily functioning are anything but objective. In support of their position, normativists point to the historical and cultural nature of disease classification. For example, some conditions that human societies have considered and treated as diseases have been discredited as unscientific or discriminatory, such as ‘hysteria’. Masturbation was until very recently considered a disorder that could be treated by surgical intervention through circumcision. Homosexuality remained classified as a disease by the World Health Organization until the publication of the ICD-10 in 1992. The evolving ways of thinking about mental health illustrate the expansion and development of human understanding of health and changing social norms. So, are people who cannot conceive through unprotected heterosexual intercourse suffering from a disease? On the naturalist view, statistical definitions of ‘normal’ functioning depend on a reference class, a natural class of organisms of uniform functional design, such as an age group of a sex of a species. Whether your body is normal depends on which other bodies you’re comparing it with. On Boorse’s biostatistical model, infertility is a disease where an individual falls short of the statistical norm in ability to conceive, given their age and sex. The philosopher Emily McTernan asks us to consider that the average age of a woman seeking IVF on the NHS is 35, and one-third of those who receive treatment are over 37. Such a candidate for IVF, who is unable to conceive through natural means, may not be exhibiting a clear failure of ‘normal’ functioning. This is because it is normal, statistically speaking, for ageing to lead to a loss of certain capabilities. The candidate’s reference class – other 35- or 37-year-old women – might similarly be composed of women struggling to conceive. Thus, on the biostatistical model of disease, infertility can be, in certain ways, quite statistically normal. On the normativist view, if infertility is to be considered a disease, it will be classified as such through appealing to some value. For example, a normativist might claim that there is a hidden premise in operation about valuing genetic reproduction in attempts to classify infertility as a disease. So, when it is argued that infertility limits health, in the manner of disease, this might be further explained in terms of inhibiting the ability to achieve one’s vital goals (reproduction). The hidden premise, according to the normativist, is that it is good or desirable to genetically reproduce. Thus, normativist reasoning makes explicit the function of a norm towards desiring reproduction. As a result, it might be argued that reproduction is seen as a vital goal due to the dominance of pronatalist cultural norms surrounding childbearing and genetic kinship.
It is because of such norms that people feel the need to have their own genetically related children, and that infertility is seen as ‘malfunctioning’ according to one’s vital goals. The normativist might argue that it is possible to imagine a scenario in which the strength of such norms is reduced. In such a scenario, fewer people would hold genetic reproduction to be a vital goal, resulting in fewer people failing to meet this goal, thus leading to a reduction in the incidence of this ‘disease’. The possibility of this reduction attests to the contingent and norm-based nature of the infertility-as-disease position. In support of their claims, a normativist may also point to the non-universal nature of the norms towards genetic reproduction. For example, some Indigenous communities exhibit mating relationships of kinds other than the nuclear family unit, and do not place weight on two-parent genetic ties, or on just having one ‘mother’. The normativist is bound to be met with the challenge that we live in the here and now, in circumstances where great emphasis is placed on genetic parenthood and reproduction. We cannot reason ourselves out of the pain of childlessness. Reproductive technologies, whether they ‘treat’ a ‘disease’ or not, are a vehicle of hope and fulfilment for many. Classification issues notwithstanding, infertility continues to be an issue of immense social significance. Research suggests that the psychological impact of infertility is greater upon women than men, and that fertility treatments, particularly unsuccessful ones, are associated with high depressive scores for women undergoing them. Yet, rather than viewing the advancement of ARTs as the solution to this suffering, feminist theorists such as Corea and Raymond sought to question why it is that infertility causes such distress, and proposed ways of addressing it that confront the totality of the situation, that is, the situation of women under patriarchy. Infertility-related distress, they argue, is, at least in part, due to ‘ideal’ notions of femininity that permeate our culture and place significant emphasis on motherhood and childrearing. Such recurrent messaging causes pain when one feels unable to conform to this model. Further, the dominance of the nuclear family unit as a mode of social organisation limits access to alternative modes of kinship that could provide other options for feeling fulfilled in terms of family life. These feminist critics attributed infertility-related distress to various systemic features of patriarchal society. Many academic discussions in contemporary anglophone philosophy instead tend to emphasise an individual’s ‘reproductive autonomy’ to engage in infertility procedures as they see fit. This reduction of the matter to a question of individual choice follows the rational consumer paradigm of orthodox economics. That is, the liberty to make decisions free from interference or restriction is a guiding value, and more choice is better than less. On the one hand, various histories of state-imposed population control (anti-miscegenation laws and forced sterilisation in the United States are just two examples) showcase the harms that can result from state interference with individuals’ reproductive decisions. Liberals and feminists alike thus often see negative freedom – to be free from such state interference – as a cornerstone of reproductive autonomy.
Correspondingly, it is often taken as axiomatic that reproductive autonomy involves a positive element, grounding access to certain services, such as abortion and family planning clinics. In such cases, the value of reproductive autonomy is clear. Relatedly, the legal scholar and bioethicist John Robertson has pioneered the idea of ‘procreative liberty’, arguing in Children of Choice (1996) that full procreative freedom would include both the freedom not to reproduce and the freedom to reproduce when, with whom, and by what means one chooses. This latter element involves maintaining technological control over reproduction and positive access to assisted reproductive procedures, as a matter of freedom. However, despite the initial appeal of the language of freedom, there is a sense in which making it focal obfuscates salient issues related to the harmful context in which choices are made, as well as how they might collectively make an impact beyond individual cases. To illustrate, respect for reproductive autonomy might be invoked as a justification for permitting access to increasingly risky assisted reproductive procedures. An argument from reproductive autonomy in the context of assisted reproduction might state, for example, that even if a procedure is risky, experimental or possesses a relatively low chance of success, we should respect the reproductive autonomy of the woman choosing to undergo it. She is the best judge of her interests and should be free to choose any such option available to her. Such appeals to reproductive autonomy organise the philosophical debates surrounding the uterus transplant, an experimental procedure that involves transplanting a donated uterus, typically from a living donor, into a recipient who seeks to ‘experience pregnancy’ and gestate a fetus to term. Recipients might have been born without a uterus or had it removed due to illness. Recent estimates indicate that there have been 40 live births following uterus transplants, and several clinical trials are currently underway. The bioethicist Laura O’Donovan argues that there are limits to when reproductive autonomy can deliver verdicts on the permissibility of reproductive decisions. Potential harms that uterus transplants pose to the live donor, to the recipient and to the developing fetus might constrain an individual’s reproductive freedom. Similarly, one’s autonomy to undergo a uterus transplant might be undermined by social conditioning and pressures to procreate. However, in response to this latter concern, O’Donovan notes that, generally, we do not seek to influence or curtail an individual’s choice in natural reproduction. In such cases, we do not seem to question the authenticity of a seemingly autonomous decision. Thus, she suggests, we have no more reason to do so with regard to uterus transplants as a treatment option. Yet this general acceptance of all reproductive decisions in the context of harmful, essentialising norms is precisely what Corea, Raymond and others criticise. They would not have viewed such decisions as an unobjectionable, neutral benchmark from which to permit further reproductive risk. Concerns regarding social pressure and compromised autonomy come to the fore, and indeed may be heightened, in the context of uterus transplants, which pose increased risks, compared with natural reproduction and established ARTs.
There are clear risks involved in uterus transplants, both to uterus donors and to recipients. The donor undergoes risks similar to hysterectomy, such as haemorrhage, infection, and bladder injury, and the longer procedure time for living uterus donation may increase these complication rates. The majority of reported complications have been urinary tract injuries due to the complex and precise dissection required in the pelvic floor. Beyond these immediate surgical complications, hysterectomy for uterus donation may also have long-term medical consequences that affect quality-of-life. For example, premenopausal women may be at increased risk for ovarian failure after hysterectomy and may require hormone replacement therapy for early menopause. There may also be some risk of sexual dysfunction after hysterectomy. The recipients of uterus transplants also face risk. As with any transplant, patients face the general risks of surgery, transplant rejection, and infection. They are also required to take immunosuppressant drugs until the womb is removed again after successful gestation, which generally leave patients at higher risk of death, including a lasting increased risk of certain cancers. Transplant recipients will undergo high-risk pregnancies requiring close monitoring. To what extent, therefore, can reproductive autonomy be invoked as a justification for a procedure that appears to involve clear harm? Setting aside any risks that are hypothetical, there are unavoidable, tangible harms: the surgery and recovery and use of immunosuppressants all turn a healthy person into a patient. This seems particularly concerning in the case of uterus transplants, where surgery does not attend to saving life and medical need but, rather, to the desire to gestate a fetus. In her book Making Babies: Is There a Right to Have Children? (2002), the philosopher Mary Warnock, who had a guiding role in the development of UK legislation on surrogacy and assisted reproduction services, argued that it was difficult to ground access to such services through the language of rights. To say that one had a right to children was to employ no more than a rhetorical device. Rather, it makes more sense to provide these services in relation to social welfare. To extend Warnock’s argument to this discussion, we might consider that, even if infertility presents a confounding case for disease classification, it causes real suffering, which we can and should alleviate with available technologies. The extent to which such technologies really do reflect and promote social welfare is thus worth exploring. A number of contemporary scholars defend access to uterus transplants by appeal to essentialising arguments that tie womanhood to childbearing. Carlo Petrini and others argue that uterus transplants provide ‘a woman the opportunity for the experience of pregnancy that may be felt as a central expression of her womanhood’, thus restoring an ‘identity’ function. The legal scholar Amel Alghrani employs a procreative liberty approach to make a tentative argument for state-funded assistance to access uterus transplants, arguing that they allow ‘cisgender women suffering from uterus factor infertility the opportunity to experience gestation, pregnancy, and childbirth akin to their fertile female counterparts who conceive ‘naturally.’ Such scholars may defend these arguments by claiming that they reflect a reality: many women do indeed view the absence of their uterus and the corresponding benefit of a transplant in this way. 
In 2021, Anji Wall and colleagues interviewed 21 women undergoing uterus transplants. Their study found that uterus transplantation (UTx) made a positive impact on healing the emotional scars of living without a uterus and ‘enhanced female identity’ through allowing these women to participate in previously unobtainable ‘common female experiences’ such as menstruation, pregnancy, and motherhood. It may well be the case that UTx allows women to achieve these ‘normalizing’ experiences. However, the question remains as to whether we ought to continue placing such value on a biologically reductive conception of female identity that demands intervention to enforce alignment. Common to these accounts is the aim to alleviate a distinctly female infertility-related distress. In order to explore why this might not sit right, we ought to consider social welfare beyond the individualised cost-benefit model. While reproductive procedures such as uterus transplants might respond to distress in individual successful cases, there is a more general way in which such technologies reinforce and exacerbate distress. That is, it seems unclear whether uterus transplants could fulfil long-term social welfare goals, due to the way in which this procedure trades on objectionable social norms that tie womanhood to childbearing. Culturally, the message that abounds goes beyond the idea that reproducing is a fundamental human need or desire. Rather, it is consistently touted as a specifically feminine purpose. The ‘wandering womb’ was an ancient Greek diagnosis for various psychosexual problems viewed as peculiar to women. Plato wrote that the womb was a living creature with a desire for childbearing and, when this desire went unfulfilled, a raft of problems ensued. Centuries later, Arthur Schopenhauer wrote that ‘women exist in the main solely for the propagation of the species, and are not destined for anything else.’ We continue to live under an ideology that ties women to an embodied condition and associates them with their reproductive role in a manner evaded by men. This ideology also harms men, who transgress norms when demonstrating care and affection for their children, in a society that expects less of them. There are real costs borne by women if they are unable to access the benefits associated with pregnancy, gestation and this notion of femininity. Yet, there is a sense in which reifying this norm, through the widespread deployment of technologies such as uterus transplants, serves to reinforce the problem, making infertility and childlessness more painful by continuing to hold women to this biological reproductive role. Such measures arguably serve to entrench the grip that infertility has on the welfare of people. In aiming towards the provision of biological reproduction, they reinforce the primacy of certain contingent cultural ideals. Meanwhile, most societies are not so set up for alternatives. Labour mobility means people often move away from family for work. It is not straightforward to take on a care-giving role with friends’ or neighbours’ children. Overwork and psychic exhaustion mean our free time is limited. We live in small flats with little communal space. Grandparents and older people are forced to work in line with ever-increasing retirement ages. Public funding cuts mean fewer libraries and public spaces to meet and operate. We are relegated to our private, nuclear homes.
There is a sense in which reproductive technologies limit our imaginations to what has come before. What is instead required is a radical re-imagining of the kind of social roles and worlds we want to occupy. IVF was highly controversial when it was first introduced and is now a widely practised and established procedure. It remains intensive and onerous, with varying chances of success. Indeed, it is so established that a range of UK and US companies offer egg-freezing services to employees. A range of cultural and economic factors have influenced the age at which people reproduce, leading to the increased use of IVF technologies, and plausibly also egg-freezing services. The growth and normalisation of IVF attests to the deterministic nature of such technologies in shaping future choices and preferences, rather than being mere additional options. Far from being neutral, technology has a role in perpetuating certain values and beliefs. It can restructure our physical and social worlds, and so how we live. We should be wary of the way in which assisted reproductive technologies like uterus transplants reinforce harmful or regressive norms, related to an essentialist notion of womanhood and siloed kinship, entrenching some of the very problems that generate their demand.
Gulzaar Barn
https://aeon.co//essays/how-infertility-treatments-create-life-and-reproduce-harm
https://images.aeonmedia…y=75&format=auto
Political philosophy
How Eugene V Debs turned American republicanism against the chiefs of capitalism – and became a true crusader for freedom
A shot rang out from the jailhouse at Woodstock, Illinois on a quiet summer’s morning in 1895. One of the inmates had pulled the trigger on an old Civil War musket after thrusting it between the iron bars of a window. The gun was fired by the union leader Eugene V Debs to mark the Fourth of July: this was no prison break, but a demand for another kind of freedom. Later that day, he would write in praise of liberty, while delivering the grim verdict that, in the United States, it now ‘lies cold and stiff and dead’. Soon to be the country’s most influential socialist, how had Debs come to find himself contemplating the prospects of US liberty from behind the bars of McHenry County Jail? Debs’s story is more than mere biography: it speaks to the wider struggles of the US working class, who were facing the harsh new realities of industrial capitalism. But it must be admitted that the man himself cut a striking figure, being possessed of great charisma, courage and powers of speech. His French immigrant parents found their way from Alsace to Terre Haute, Indiana, where little Gene was born in 1855. Something of the frontier spirit still survived in the small city, which had not yet suffered the sharp class divisions that were visible elsewhere in the country. Although Debs left school at 14 – heading to the railroad to scrape paint and grease for 50 cents a day – his education acquainted him with a republican history of the US, which held the liberty and sovereignty of its citizens in the highest regard. Eugene V Debs (seated, far left, appropriately) aged 14 with fellow painters at the Vandalia Railroad in Terre Haute, Indiana, 1870. National Archives and courtesy the Debs Collection, Indiana State University, Cunningham Memorial Library. Debs graduated from the paint shop to a job as a locomotive fireman, and after an accident killed two of his colleagues, his mother pleaded with him to quit the railroads. An abysmal safety record and negligible compensation for injured railwaymen were just some of the many indignities of the ruthless business practices of avaricious industrialists. Before wisely heeding his mother’s warnings, Debs joined the Brotherhood of Locomotive Firemen, subsequently rising to become its Grand Secretary and editing the widely read Firemen’s Magazine for well over a decade. From this editorial perch, he could be far from militant, initially opposing strikes and delivering moralistic sermons on the Brotherhood’s motto of benevolence, sobriety and industry. His new prominence would even lead to his election as a Democratic member of the Indiana House of Representatives in the mid-1880s. But sensing the limitations of both the state legislature and the conservative craft unionism of the Brotherhood, he sought a different vehicle to advance the interests of railroad workers. What began to disturb Debs was the despotic power of corporations and the judicially sanctioned violence that could be unleashed on workers who resisted attempts to sack them or suppress their wages. As he later remarked, their combined influence let loose ‘deputy marshals armed with pistols and clubs and supported by troops with gleaming bayonets and shotted guns’, and has ‘vanished liberty from the land.’ He came to feel that citizens could no longer be counted as free when they were under the thumb of a plutocratic ruling class backed by a corrupt judiciary and the violence of hired ‘Pinkerton’ strikebreakers, with helpless workers even coming to resemble slaves.
That language of slavery was shocking, even hyperbolic, but it allowed Debs to associate the plight of workers with the two great republican bids for emancipation in the country’s history: the fight for independence against the political slavery supposedly imposed by the British, and the more recent attack on chattel slavery that had led to the Civil War. On the railroads, the insular craft unions, who organised separately according to job roles, seemed doomed to irrelevance in the face of the titans of industry. Leading the Brotherhood of Locomotive Firemen in the failed Burlington railroad strike of 1888, where several workers were killed, brought home to Debs the bitter limits of this kind of organising. So he gradually broke with the Brotherhood of Locomotive Firemen in order to help found the American Railway Union in 1893, which aimed to unite all those who worked on the railroads, while enjoying rapid growth ahead of an early victory on the Great Northern Railroad. This set the scene for the Pullman strike that would propel Debs to national fame and put him on a collision course with a powerful coterie of economic, legal and political elites. George Mortimer Pullman was an industrial magnate who manufactured railway sleeper carriages from his company town of Pullman on the outskirts of Chicago. He set the rents of the houses in which his workers were required to live, and imposed a strict moral code on them, including discouraging alcohol and gambling. When a nationwide financial panic hit in 1893, Pullman made dramatic cuts in wages while maintaining rents at their existing levels. Workers were squeezed dry, and they braced themselves for a desperate fight for survival. The ensuing conflict drew in the American Railway Union, which eventually called a boycott of trains that continued to carry Pullman carriages. The federal government responded by obtaining an injunction to stop strikers interfering with the railways, and specifically the free traffic of mail cars. Eugene V Debs c1922-23. Courtesy the Library of Congress. The leaders of the American Railway Union, including Debs, did not stand down. Their fate was sealed by the Supreme Court upholding an earlier decision to jail them for contempt of court. What most outraged Debs and his fellow defendants was that no jury had convicted them of a crime. They had been incarcerated at the ‘autocratic whim’ of a federal judge, with no opportunity to plead their case in front of ordinary citizens. If Debs could be locked up so easily, he reasoned that his fellow citizens were unfree too. But the courts were not the only arbitrary authority that must be fought in order to be free. The rise of an industrial capitalism in the closing decades of the 19th century created a breeding ground for unaccountable power. Waged workers in particular were at the mercy of their wealthy employers, who determined whether they would earn enough to house, clothe and feed themselves and their families. Without the safety net of a welfare state, the discretion to dispense with labourers effectively gave wealthy industrialists the power to impoverish workers who demanded better conditions or were simply deemed surplus to requirements. The huge fortunes of industrial millionaires also granted sweeping political influence, whether through outright bribes or more subtle webs of influence. In mounting these criticisms, Debs took himself to be following in the footsteps of a longstanding republican tradition in the US, which lauded the freedom of its citizens.
Yet his own path through the labour struggles of the Gilded Age would lead Debs to a more radical destination than his republican forebears. He demanded an unequivocally socialist republic in which all could be free. Many know republicanism best today as a rejection of monarchy – a commitment Debs eagerly shared with earlier generations of Americans who parted with King George III. But hostility to royal authority is only a small part of a much richer republican philosophy that stretches back to the ancient world. Republicans stand for the freedom of citizens who can come together to pursue the common good. The great evil they recognise is subjection to another’s arbitrary will. Someone who has to depend on sheer goodwill lacks freedom, even if they happen to be treated well. Such a person acts at the mere indulgence of others. That is the servile condition of chattel slaves and subjects of absolute monarchs, no matter how kindly or enlightened their masters are. The freedom of the citizen has always been precious to republicans. But the bounds of that citizenship could be drawn narrowly, with no space for women, the poor or those outside a ruling racial caste. Thus, to the glee of fellow critics of a fledgling American republic, Samuel Johnson could ask: ‘[H]ow is it that we hear the loudest yelps for liberty among the drivers of negroes?’ However, a century later, Debs would recall abolitionism with pride, organise across racial lines, and align himself with the cause of women’s suffrage. From early in his career as a labour organiser, he even claimed to be tackling ‘wrongs which take on some of the forms of slavery’. Back in the 17th century, the English republican Algernon Sidney claimed that to ‘depend upon the will of a man is slavery’. Drawing on ideas with classical roots, he was warning about the unaccountable power of monarchs (with good cause, given his own subsequent beheading for treason). Sidney’s writings have been dubbed a ‘textbook of revolution’ and, partly through his influence, we find framers of the US Constitution such as Alexander Hamilton distinguishing liberty from a state of slavery in which someone is ‘governed by the will of another’. The very possibility of unlimited British taxation without representation in their parliament could seem to fit the bill. But neither Sidney nor Hamilton would defend the plight of servants or hired labourers in these republican terms. Debs turned the resonant language of republican political thought against bosses and the whole system that sustained their power. Like republicans of old, he warned of a fatal dependence on the arbitrary will of others. But rather than attacking the untrammelled authority of a king or overseas legislature, he adapted this analysis to the conditions of a rapidly industrialising capitalist economy. There could be no political equality when workers were dependent on capitalists who owned the resources, tools and machines needed to make a living. Debs would conclude that ‘No man is free in any just sense who has to rely upon the arbitrary will of another for the opportunity to work.’ But that unfreedom was the reality for most working people, who laboured and therefore lived by the permission of bosses. Control became key to how Debs understood work under capitalism. The long hours, unsafe conditions and exhausting nature of much of this labour were not lost on him.
As a former locomotive fireman, shovelling coal into the fire-box of a rail engine, Debs knew what gruelling work meant. Nor did he have any illusions about the bleak conditions among the mills, factories, mines and farms where, day after day, workers slogged their guts out for a meagre reward. But Debs’s complaint was more fundamental than poor working conditions or even low wages – it took aim at the lack of freedom at the heart of the economy. Republicans want to eliminate arbitrary power, not entrust it to wise and kindly rulers. In this spirit, Cicero remarked that ‘freedom consists not in having a just master, but in having none.’ Sidney would add: ‘[H]e is a slave who serves the best and gentlest man in the world, as well as he who serves the worst; and he does serve him if he must obey his commands, and depends upon his will.’ Debs saw that this was the plight of those workers who desperately needed a wage and could not buck the strenuous discipline of employers that came with it. Such a life was lived at the mercy of others, whose favour or displeasure could determine whether families could eat or put a roof over their head. To be so squarely under the thumb of a class who could see that you starved – and so to have many masters rather than a solitary owner – is to begin to taste at least some of the characteristic unfreedom of the slave. If freedom was the destination, the solution to such vulnerability could not be nicer bosses brimming with paternal affection for their workers. Control must instead rest with citizens rather than plutocrats and their minions. This conviction led Debs to a socialism that sought to secure economic freedom for all. Finding himself in another courtroom towards the end of his life, he set out this socialist demand: ‘[A]ll things that are jointly needed and used ought to be jointly owned – that industry, the basis of our social life, instead of being the private property of a few and operated for their enrichment, ought to be the common property of all, democratically administered in the interest of all.’ That conclusion was not the product of the giddy enthusiasm of youth or a life of detached scholarly study. It was wrested from a tireless struggle in support of workers being chewed up and spat out by a capitalist economy and the plutocrats whom it enriched. Debs’s time in McHenry County Jail was a point of inflection in the development of his political thought. Confronting the combined power of industrialists, the government and the judiciary, while ending up imprisoned for his troubles, led Debs to think more seriously about what kind of society would allow people to be free. That his answer was socialism might surprise us. Perhaps a socialist society would be more equal, perhaps even more just, but why think it would be freer? Won’t the clunking fist of the state actively deprive people of their liberty by appropriating their property and pushing them around? Debs did not see it like that, and had some impeccably republican grounds for his judgment. A poster from 1965 incorporating one of Eugene V Debs’s most noted slogans. Courtesy the Library of Congress. There is a long republican tradition that associates freedom with property. The ancients often assumed that liberty depended upon having the time for leisure and political activity granted by ownership of land and slaves with which to work it. Economic independence became the material foundation of freedom, and itself rested on secure possession of private property.
Modern republicans can be found taking up the call for land for citizens more broadly – lauding the independent yeoman or homesteader, who appears somewhat insulated from the arbitrary power of others, insofar as they can provide for themselves through their own labour. Another model of the economic security needed for freedom was the artisan who owned their own tools and workshop. But both agrarian and artisanal independence were increasingly threatened by industrial society. Debs recognised that in the age of the factory, railroad and mass-market goods, there was no hope in clinging to this already romanticised past: the future of economic production would be inescapably social and interdependent. After reading socialist writings during his incarceration at Woodstock – above all, Karl Kautsky – Debs was increasingly convinced of the need for a cooperative economy that took power out of the hands of plutocrats and gave it to ordinary citizens. He was not the first member of the US labour movement to have such thoughts or to frame them in republican terms. The Knights of Labor, an influential nationwide labour federation that reached its height in the 1880s, had called for a cooperative commonwealth. They were labour republicans, who believed workers were effectively rendered slaves by their subjection to the will of employers. If this dominating control was to be eliminated, citizens would have to ‘engraft republican principles into our industrial system’, rather than reserving them for politics alone, as the labour leader George McNeill put it. The Knights set up many cooperatives owned by workers themselves, but these experiments eventually ran aground. Debs concluded that something more ambitious was needed: the genuinely collective ownership of the means of production and distribution. If all citizens have a stake in the economy, without unaccountable bosses who could sack them whenever it proved profitable, then people would possess the economic security necessary to be counted as genuinely free. Furthermore, in the workplace itself, workers rather than capitalists could govern how labour was organised, and so not be subject to the whims of owners who were not answerable to those they employed. Debs arrives at a socialist republicanism. While freedom borne of control over property remains central to this republican story, this is no longer private property. Instead, ‘Economic freedom can result only from collective ownership.’ A real republic, according to Debs, cannot restrict democracy to narrowly political affairs, but must be founded on an economic democracy. Under such a system, working life would be geared not to profit but to human needs. Nor would anything of great value be lost, since we ‘no more need owners of the railroads and other great machines than we need a king.’ This was the ethos that informed the Socialist Party of America (and its forerunner, the Social Democratic Party of America), which Debs helped found in 1901. He was their US presidential candidate five times, and secured 6 per cent of the national vote in 1912. This turn to electoral politics was motivated not only by the opportunities for propaganda afforded by electoral campaigns but also by the realisation that political office was needed to definitively transform the country. Yet, this did not imply a rejection of the union activity to which Debs had devoted much of his life.
Just as he had turned from craft unions segregated by job type to the industry-wide American Railway Union, Debs came to embrace an even more capacious model of industrial unionism that sought to bring together the whole working class. Thus, along with many of the most influential members of the US labour movement, Debs contributed to the creation of the Industrial Workers of the World in 1905. They were committed to abolishing waged labour and ultimately sought to construct a ‘new society within the shell of the old’. Debs’s magnetic appeal was not confined to his timely political programme but flowed from an impassioned and empathetic character. His speeches were legendary, with working people being left in no doubt he would always fight alongside them. But he hated demagoguery, and would often stress the importance of self-education and following one’s own conscience: ‘I am no labor leader. I don’t want you to follow me, or anyone else. If you are looking for a Moses to lead you out of this capitalist wilderness, you will stay right where you are. I would not lead you into the promised land if I could, because if I could lead you in, someone else could lead you out. You must use your heads as well as your hands and get yourself out of your present condition.’ This is a socialism from below. Fatherly leaders directing the class struggle with their superior foresight were not what was needed. The working class must instead free itself, while drawing its strength from the democratic energies of workers and citizens as a whole. Nor did this concern for others stop at the country’s borders. Despite his opposition to Prussian militarism, Debs believed that the First World War was a senseless disaster for the working classes of all the nations involved. In a fiery speech in Canton, Ohio in 1918, he articulated his opposition to the war. Before a fortnight had passed, Debs was arrested for sedition, then put on trial and sentenced to a decade in prison, after the court found that he had sought to obstruct military recruitment. Upon his conviction, Debs made one of his most widely known pronouncements: ‘[W]hile there is a lower class, I am in it, and while there is a criminal element, I am of it, and while there is a soul in prison, I am not free.’ It was this sincere identification with the oppressed that won Debs such a hearing throughout his political career. Indeed, he would get close to a million votes as a presidential candidate in 1920 while still a prisoner and unable to conduct rallies or give speeches. His victorious opponent in that contest, Warren G Harding, eventually commuted his sentence to time served, with Debs being released on Christmas Day 1921, after spending more than two and a half years inside. Convict No. 9653 at the US Penitentiary, Atlanta, where he was sentenced to 10 years for sedition. Courtesy National Archives at Atlanta, RG 129. A political cartoon featuring Eugene V Debs in his cell; the ‘front porch’ comment refers to Republican candidate Warren G Harding’s conservative presidential campaign. Courtesy National Archives. Eugene V Debs on Christmas Day 1921 outside Atlanta Federal Penitentiary upon his release.
Courtesy Library of Congress. This was not the first time Debs had been imprisoned. But this longer and more onerous stay led him to reflect at greater length on the conditions of prisoners and the place of criminality within a socialist society. The main function of law, he believed, was to keep the poor in subjection to the ruling class, and most prisoners owed their sentences to the poverty thrust upon them by the demands of a capitalist economy. In an early statement of what we would now call ‘prison abolitionism’, Debs held that prisons in anything like their current form would and should be eliminated by a socialism that stood for human liberty. Theft, in particular, would dwindle when the economic subordination created by capitalism was replaced by cooperative production and common ownership. Other crimes that remained ought to be handled by civilised institutions rather than through the brutality of prison life. Debs’s often-frayed health had suffered from incarceration, and did not hold for long after his release. Slowed by cardiovascular problems, he died less than five years later at the age of 70 in 1926. But he left a vivid legacy, not only in a life that has inspired later US socialists such as Bernie Sanders, but in an intellectual if unscholarly contribution to republican and socialist thought. The promise of liberty – of free citizens not subject to the whims of bosses, judges and a plutocratic class – was not to be found in the anarchy of market competition. Instead, it required us all to have a democratic say in how the economy is governed, and the workplaces where we spend so much of our lives. That vision asks people to break with the desperate attachment to private property that has become second nature to so many of us. True security, and the liberty-giving independence of thought and action that it brings, rests upon a deeper act of solidarity, where we learn to cooperate in our economic lives rather than dance to the tune of profit. Debs held that the path to freedom runs through the institutions of a socialist republic.
Tom O’Shea
https://aeon.co//essays/for-socialism-and-freedom-the-life-of-eugene-debs
https://images.aeonmedia…y=75&format=auto
Knowledge
Academics need to think harder about the purpose of their disciplines and whether some of those should come to an end
Right now, many forms of knowledge production seem to be facing their end. The crisis of the humanities has reached a tipping point of financial and popular disinvestment, while technological advances such as new artificial intelligence programmes may outstrip human ingenuity. As news outlets disappear, extreme political movements question the concept of objectivity and the scientific process. Many of our systems for producing and certifying knowledge have ended or are ending. We want to offer a new perspective by arguing that it is salutary – or even desirable – for knowledge projects to confront their ends. With humanities scholars, social scientists and natural scientists all forced to defend their work, from accusations of the ‘hoax’ of climate change to assumptions of the ‘uselessness’ of a humanities degree, knowledge producers within and without academia are challenged to articulate why they do what they do and, we suggest, when they might be done. The prospect of an artificially or externally imposed end can help clarify both the purpose and endpoint of our scholarship. We believe the time has come for scholars across fields to reorient their work around the question of ‘ends’. This need not mean acquiescence to the logics of either economic utilitarianism or partisan fealty that have already proved so damaging to 21st-century institutions. But avoiding the question will not solve the problem. If we want the university to remain a viable space for knowledge production, then scholars across disciplines must be able to identify the goal of their work – in part to advance the Enlightenment project of ‘useful knowledge’ and in part to defend themselves from public and political mischaracterisation. Our volume The Ends of Knowledge: Outcomes and Endpoints Across the Arts and Sciences (2023) asks how we should understand the ends of knowledge today. What is the relationship between an individual knowledge project – say, an experiment on a fruit fly, a reading of a poem, or the creation of a Large Language Model – and the aim of a discipline or field? In areas ranging from physics to literary studies to activism to climate science, we asked practitioners to consider the ends of their work – its purpose – as well as its end: the point at which it might be complete. The responses showed surprising points of commonality in identifying the ends of knowledge, as well as the value of having the end in sight. As scholars of the Enlightenment, we draw our inspiration for this intertwining of end and ends from an era that initiated many of our models for producing, sharing and using knowledge. Enlightenment thinkers combined practical and utopian definitions of ends as they called for new modes and institutions of knowledge production, understanding ends as large-scale goals that must, at the same time, be achievable. In the early 17th century, Francis Bacon called for both a new start to knowledge production and a reconsideration of its ends. ‘[T]he greatest error of all,’ he wrote in The Advancement of Learning (1605), ‘is the mistaking or misplacing of the last or furthest end of knowledge.’ Its ‘true ends’, he later wrote, were not professional reputation, financial gain, or even love of learning but rather ‘the uses and benefits of life, to improve and conduct it in charity’. 
Advocating an end to scholasticism, the medieval educational programme that emphasised dialectical argumentation and deductive logic, Bacon devised his Novum Organum (1620), ‘new organon’, as both a blueprint for and the beginning of a generations-long and worldwide effort to seek new ‘ends’. His work is generally taken as an origin point for the Scientific Revolution. In this way, the Enlightenment offers a model of how the end of one view of knowledge production can be a launchpad for new ideas, methods and paradigms. The fracturing and decline of Aristotelian scholasticism during the Renaissance gave rise to a host of philosophies devised to replace it. The conflicts of the Thomists and Scotists, the inadequacies of revived Hellenistic doctrines, the discomforting mysticism of Rosicrucianism and Kabbalah, and even the failed promise of Platonism to provide a modern, comprehensive alternative to Aristotle led thinkers like Bacon to seek answers in other fields. Bacon’s terms – exitus, finis, terminus – suggest a focus on endpoints as well as outcomes. Knowledge, in his philosophy, had ends (ie, purposes) as well as an end (a point at which the project would be complete). The new science, he believed, would lead to ‘the proper end and termination of infinite error’ and was worth undertaking precisely because an end was possible: ‘For it is better to make a beginning of a thing which has a chance of an end, than to get caught up in things which have no end, in perpetual struggle and exertion.’ Bacon believed scientists could achieve their ends. The following year, however, the scholar Robert Burton took a less sanguine view of knowledge production in The Anatomy of Melancholy (1621). Considering the lot of ‘our divines, the most noble profession and worthy of double honour’, who despite that worthiness had little hope of material reward or encouragement, he asked rhetorically: ‘to what end should we study? … why do we take such pains?’ The (enviable) certitude of the natural philosopher juxtaposed with the (highly relatable) lament of the humanist scholar suggests a division between modes and objects of enquiry that remains stereotypical of the STEM-humanities divide. We continue, fairly or unfairly, to associate the natural and applied sciences with specific and comprehensible ends, while the search for humanistic knowledge seems endless. Seeking to sidestep such stereotypes, we asked knowledge producers to revisit Bacon’s foundational question of the Enlightenment: what is ‘the last or furthest end of knowledge’? Some may be quick to point out that past efforts at ending often appear quixotic or ludicrous with the advantage of hindsight. For literary scholars, the paradigmatic examples of this are Jorge Luis Borges’s short story ‘The Library of Babel’ (1941) and the character of Edward Casaubon in George Eliot’s novel Middlemarch (1871-2). Casaubon’s work on his Key to All Mythologies is literally unending; he dies before completing it, leading his young wife Dorothea to worry that he will guilt her into promising to continue the work after his death. Scientists too have sometimes conceived of their ends as providing, as Philip Kitcher wrote in his essay ‘The Ends of the Sciences’ (2004), ‘a complete true account of the universe’, but the idea that such an account could exist, or that, if it did, we could comprehend it, remains very much in doubt.
The aspiration for a global end is generally delusive and potentially dystopian. Our goal, then, is not to offer a single or final answer to the question of knowledge’s end(s), but rather to open and maintain an intellectual space in which it can be asked. Scholars across fields may bristle at the idea of their work ending, with ‘defences’ of various fields commonplace today. The disciplines as we currently occupy them are artefacts of the 19th-century origins of the research university, which gave us the tripartite structure of the natural sciences, social sciences and humanities. This model, which trains scholars in narrow but deep disciplines, emerged out of the Enlightenment’s 200-year shift away from the medieval curricular divisions of the trivium (grammar, logic and rhetoric) and quadrivium (arithmetic, geometry, music and astronomy). The rise of the research university, first in Germany and then in the United States, put an end to this system. The fact that such academic structures have changed dramatically over time shows that they are not inherent, and the past few decades have witnessed widespread interest in interdisciplinarity in the form of institutional programmes and centres as well as in new fields such as American studies, area studies and cultural studies. However, critiques of interdisciplinarity point out that such efforts are frequently additive rather than interactive: that is, they combine established disciplinary methods rather than remaking them. Questions of purpose, unity and completion have been key to, if often implicit in, the discourse of interdisciplinarity that has dominated discussions of academic institutional organisation. Of course, knowledge production does not take place solely within the ivory tower. It was precisely during the Enlightenment that writers such as Joseph Addison called for philosophy to be brought ‘out of Closets and Libraries, Schools and Colleges, to dwell in Clubs and Assemblies, at Tea-tables, and in Coffee-houses’. The period saw the takeoff of ‘improvement’ societies, which initially focused on agricultural and public infrastructure but soon expanded to include the arts and sciences more broadly. Some of these organisations, such as Britain’s Royal Society (originally the Royal Society for Improving Natural Knowledge), remain important institutions for bridging the continuing gap between universities and the public. But other extra-academic efforts have had the goal of repudiating the university, rather than connecting with it. The Thiel Fellowship, founded by the Right-wing venture capitalist Peter Thiel, provides recipients with a two-year $100,000 grant on the condition that they drop out of or skip university in order to ‘build new things instead of sitting in a classroom’. For many, academic organisations appear moribund and continuing improvement requires new institutional arrangements. Ending one institutional arrangement often happens in the name of starting something new. Once we start looking for the ends of knowledge, then, we notice that interlocking questions about purpose and completeness are central to many of our scholarly undertakings. It can be easy to identify some knowledge projects that failed for good reason: alchemy, phrenology and astrology, for example, are now understood as abandoned pseudosciences (though the latter has taken on new life in 21st-century culture). Other disciplines’ deaths have also been reported, albeit perhaps prematurely. 
In 2008, Clifford Siskin and William Warner argued that it was time to ‘write cultural studies into the history of stopping’. In a blog post titled ‘The End of Analytic Philosophy’ (2021), Liam Kofi Bright opined that the field was a ‘degenerating research program’. Peter Woit used similar language to describe string theory in an interview with the Institute of Art and Ideas earlier this year; he called it a ‘degenerative program’ whose goal of unification had been ‘simply a failure’. And Ben Schmidt, in his blog, has diagnosed ‘a sense of terminal decline in the history profession’ given cratering numbers of academic jobs. These fields have produced valuable knowledge, but (according to these authors) they may have taken us as far as they can go. Rather than focusing on a single field, we surveyed knowledge producers from across the humanities, social sciences and natural sciences, inside and outside the university, to answer the same question: what are the ends of your discipline? While we encouraged them to consider multiple kinds of ends, we did not prescribe a definition for the term and we recognised that some would reject the premise itself. We did not expect consensus, but we did find points of commonality. This synthetic approach revealed four key ways in which to understand ‘ends’, which emerged collectively: end as telos, end as terminus, end as termination and end as apocalypse. The first two definitions relate most directly to the work of a discipline or an individual scholar: what is the knowledge project being undertaken, and what would it mean for it to be complete? Most scholars are relatively comfortable asking the former question – even if they do not have clear answers to it – but have either never considered the latter or would consider the process of knowledge production to be always infinite, because answering one question necessarily leads to new ones. We argue that even if this were true, and a particular project could never be completed within an individual’s lifetime, there is value in having an identifiable endpoint. The third meaning – termination – refers to the institutional pressures that many disciplines are facing: the closure of centres, departments and even whole schools, alongside political pressure and public hostility. Over all this looms the fourth meaning, primarily in the context of the approaching climate apocalypse, which puts the first three ends into perspective: what is the point of all this in the face of wildfires, superstorms and megadrought? For us, this is not a rhetorical question. What is the point of literary studies, physics, history, the liberal arts, activism, biology, AI and, of course, environmental studies in the present moment? The answers even for the latter field are not obvious: as Myanna Lahsen shows in her contribution to our volume, although the scientific case is closed as far as proving humans’ effect on the climate, governments have nevertheless not taken the action needed to avoid climate catastrophe. Should scientists then throw up their hands at their inability to influence political trends – indeed, some have called for a moratorium on further research – or must they instead engage with social scientists to pursue research on social and political solutions? What role do disciplinary norms separating the sciences, social sciences and humanities play in maintaining the apocalyptic status quo?
To some extent, then, particular ends are less important than the possibility of discovering a shared sense of purpose. Ultimately, we hope to show what the benefits would be of knowledge projects starting with their end(s) in mind. How can we get anywhere if we cannot even say where we want to go? And even if we think we have goals, are we actually working toward them? Ideally, a firm sense of both purpose and outcome could help scholars demonstrate how they are advancing knowledge rather than continuing to spin their wheels. As we noted, our survey found four ideas of the ends of knowledge: telos, terminus, termination and apocalypse. But in answering the question of the ends of their disciplines, our contributors fell into another set of four groups, which cut across the three-part university division of the humanities, social sciences and natural sciences. One group took the approach of unification: how could the author’s field achieve a unified theory or explanation, and how close is the field to that goal? A second group argued that the purpose and endpoint of knowledge production is increased access, and that such access is key to social justice. Discussions of utopian and dystopian outcomes comprised a third group, while a fourth located their ends in the articulation and pursuit of key concepts such as race, culture and work. These four groupings – unification, access, utopia/dystopia and conceptualisation – synthesise many of the ways that knowledge workers respond when asked to consider their discipline’s ends, from seeking a point of convergence for knowledge to articulating the central project of their field. In this way, we asked contributors to reimagine their places within the university structure. As we know, any individual scholar’s research or methodology – what we have called her knowledge project – might diverge significantly from those of her colleagues within a department or discipline. The 19th-century formation of the university established our three primary divisions of the humanities, social sciences and natural sciences. Now, we are proposing a thought experiment of a new four-part structure. What might a department or division of unification or conceptualisation look like? We are asking how knowledge production might change to fit the present moment if we organise ourselves not by content – English, physics, computer science and so on – but by how we understand our ends. At the same time, these ends are necessarily interconnected, and individual research projects would likely fit into several at once. As Hong Qu argues in his contribution to our book, for example, individual researchers and teams working towards autonomously learning AI systems, or artificial general intelligence (AGI), will need more deliberate exposure to moral philosophy, political science and sociology to ensure that ethical concerns and unintended consequences are not addressed on an ad hoc basis or after the fact but are anticipated and made integral to the technology’s development. Educators, activists and policymakers will concordantly need more practical knowledge about how AI works and what it can or cannot do. 
Achieving the immediate end of AGI entails the pursuit of a new and more abstract end greater than the sum of its disciplinary parts: ‘a governance framework delineating rules and expectations for configuring artificial intelligence with moral reasoning in alignment with universal human rights and international laws as well as local customs, ideologies, and social norms.’ Qu explores potential dystopian scenarios as he argues that, if the end of creating ethical AGI is not achieved, humanity may face a technological end. In this way, current disciplinary divides are driving a society-wide sense of potential doom. Returning to the Enlightenment shows how concerns over disciplinary divisions have been present since their inception. In 1728, Ephraim Chambers, editor of the Cyclopædia, wondered ‘whether it might not be more for the general Interest of Learning, to have all the Inclosures and Partitions thrown down, and the whole laid in common again, under one undistinguish’d Name’. By the end of the century, the redivision of knowledge had been formalised in the proto-disciplinary ‘Treatises and Systems’ of the Encyclopaedia Britannica. In 1818, the rise of specialist groups like the Linnean Society and the Geological Society of London led the eminent naturalist Joseph Banks to write: ‘I see plainly that all these new-fangled Associations will finally dismantle the Royal Society.’ Disciplinarity was seen as ending some kinds of knowledge while not fulfilling their ends. The boundaries established in the mid-19th century and hardened throughout the 20th are now maintained managerially and financially as well as through methods and curricula; they are often reified by architecture and geography, with humanities and STEM departments housed in buildings on opposite ends of campuses. For a long time, these tactics and strategies worked: they gave the new disciplines that emerged from the Enlightenment time and space to grow. Disciplinarity offers an important means to certify knowledge production. The strategies that got us this far, however, may not be the ones we need to move forward. If the utmost end of the university is or should be the advancement and distribution of knowledge – an increasingly open question in some quarters – then, at the largest scale, the ability to determine and articulate shared ends among fields of knowledge would be an important step toward addressing institutionally entrenched, often counterproductive, divisions and authorising new systems and organisations of knowledge production. Can we escape the discourse of competition and crisis, which tends to keep us focused on the health of individual disciplines or college majors, by reorganising knowledge production around questions or problems rather than objects of study? What if, instead of endlessly attempting to analyse and remedy the troubles of a particular division, we turn our attention to the system of division itself? Our volume is an initial attempt to see what the advancement of learning could look like if it were to be reoriented around emergent ends rather than inherited structures. The question of ends must continue to be pursued at increasing scales, from the individual researcher, to the office or department, to the discipline, to the university, to academia and to knowledge production as a whole.
The shared project of considering the end(s) of knowledge work reveals the rich history and scholarly investments of individual disciplines as well as the larger goal of producing accurate knowledge that is oriented toward a more ethical, informed, just and reflective world. We are, in many ways, only at the beginning of the end.
Rachael Scarborough King & Seth Rudy
https://aeon.co//essays/should-academic-disciplines-have-both-a-purpose-and-a-finish-date
https://images.aeonmedia…y=75&format=auto
Biography and memoir
Whenever I stand in a flat landscape, I feel myself becoming weightless, taken out of my childhood full of painful nothing
At the Wildfowl and Wetlands Trust centre in Slimbridge, Gloucestershire, children zigzagged between the duckponds like bees performing a cryptic private dance. The sound of children screaming makes my hands judder, in half-remembered horror. But today I could bear it, because there were geese to feed; they ran after me, pistoning seed out of my hand and leaving crescents of mud behind. As we left the feeding area, birdwatching hides rose up from the path: dark and shady, with silence inside and long windows giving out on to the marshy flatlands around the Severn Estuary. This was more like it. Very quietly, we unhooked the wooden window clasps and let the pane down. My friend settled in with his binoculars, while I, chin-on-arms, watched the flat landscape – the low, ironed green, sprinkled with buttercups; the patches of water like gleaming fallen coins. We’d come in summer: a bad time for wetland wildfowl, my friend told me. In wintertime, godwits and dunlin and grey plovers come in from northern Europe and Russia to nibble on Britain’s mudflats. But when the weather gets warmer, many of these nice solid wading birds go back to the Arctic Circle, and leave Britain’s flat landscapes to themselves. That was OK with me. I was really here for the bare, stretched horizons of the wetlands. The flat places with nothing much to look at. So I looked. And gradually the noise in my head got quieter. It always does, when I’m in a flat place. Something in me stills and lines up with the horizon. Flat places are the ground that my mind is built upon. Wetlands, fenlands, stretches of shingle: I never get tired of their clear, straight horizons. Whenever I stand in a flat landscape, I feel myself becoming weightless. Without mountains or hills, there’s nothing to catch on my vision, or distract me. I’m freed from hindrance. I could rise up, I think, into the air and float. This isn’t a popular view, I know. I’m aware that people often find flat landscapes alienating. They can seem bleak, boring, even terrifying, because there’s nowhere to hide, and everyone can see you for miles. There’s no landmark to fix your gaze upon, and this makes it difficult to orientate yourself. That’s why people tend to prefer breathtaking mountains or lush forests or plunging valleys. Scenes with texture, that steer your vision comfortingly as you move from detailed foreground to rising background. People know where they are in varied, hilly landscapes. And they know who they are. The experience of elation or awe in the face of a mountain is as old as literature. Gods lived on Mount Olympus, in ancient Greece. The Romantic poets climbed Mont Blanc to enthuse and gush. Loving a mountain means that you join a whole long line of mountain-loving humans, well-documented in novels and poetry and drama. Loving a mountain joins you to something bigger than yourself. I understand those preferences. But I am different. It is flat spaces that make me come alive. The lack of landmarks makes me feel I could do anything, or go anywhere I wanted. Uncontrolled and uncoerced: unsteered by other people’s beliefs or priorities. In a flat space, there are no focal points to fixate on, to force me to see some things and miss out on others. Looking out at the flat wetlands of Slimbridge, that summer’s day, my mind spilled out across the space like water over a floor: expanding, becoming sensitive and alive again, where life and work and other people had shut it up close. My life has made me strange. I don’t mind admitting that. 
I was born and raised in an odd house in Pakistan, dominated by an arrogant and grandiose father. He was a celebrated doctor, and he had big ideas: so big that they absorbed us all and left no room for anything else. He was a genius, he told us. Other people were stupid: we should stay away from them. Especially other Pakistanis. He saw them as benighted by religion. My father was a Pakistani in love with the West, with the very surface layer of its cultural touchstones: Mozart, Vincent van Gogh, Gilbert and Sullivan. He painted a copy of The Dance Foyer at the Opera on the rue Le Peletier (1872) by Edgar Degas, five foot by three, which he hung in the living room: the arms of the ballerinas bare and provocative and just a bit wrong in the elbows, where he’d misjudged the angles. Yet my father didn’t like British people any more than he liked Pakistanis; they didn’t defer to him in the way he thought he deserved. So he kept us away from everyone. We weren’t to speak to the neighbours, or visit friends from school. Our whole world was inside that house, our eyes trained upon him, braced for anything to happen. He might bring home chocolate. Or live turkeys. Or come in roaring and grabbing and throwing things. Anything could happen. And anything happened all the time. Or maybe it was nothing. It felt like nothing: those long days cooped up in the hot dazed rooms. My father went out, and we didn’t. We were driven to school and driven back, and that was all. In the summers, when school was out, we went nowhere. There was nothing to do or look at or think about, except the floor, and the books we’d read and reread over and over, with the sound of the traffic shouting outside, and my uncles and aunts and grandparents shouting downstairs. Life was a bare landscape with nowhere to hide. I knew the other children at school didn’t live like this, but I couldn’t explain what this was. I lived my life in a daze. I wished there was more space. ‘You’re lucky,’ my mother said. White and British, she had moved to Pakistan to be with my father. All day she mopped and cooked and scraped up vomit. She roamed through the house, back and forth and back again, even more trapped than we were. ‘You’ve got enough to eat. You go to school. Do you know how many girls don’t go to school in this country? Should we take you out of school and just marry you off, so you can scrub floors for your in-laws all your life?’ I was lucky. And I went on getting luckier. I was lucky when my father disowned me, two weeks before my 16th birthday, and I fled with my mother and sisters to Britain. I was lucky when I got to go to school again, near my grandmother’s house in Scotland, and chose what I wanted to study, and could walk in the street by myself for the first time, and look at the sea and the grass. I was lucky when I got into Oxford to study English, floated by government grants and equal opportunity bursaries. But for some reason, throughout my 20s, my body didn’t seem to know I was lucky. It cried, and hurt, and clouded over, and numbed out. It was terrified of other people. It wouldn’t come close to them, wouldn’t be drawn to them, wouldn’t be caught up by passion. I kept falling asleep. I couldn’t want anything. And I couldn’t explain why. When I tried to share my feelings with other people, they couldn’t see what I saw. They couldn’t see the nothing of my life, which burned a thick stark line across my mind.
My life in Pakistan, full of painful nothing, had left a flat landscape inside my head. Not a bleak, dead one. That would almost have been easier. This flat landscape seared with painful livingness. It wouldn’t let me look away: kept me mesmerised by its agonised, intense emptiness. And it seemed more real than any of the strange world around me. Even in safe cosy Britain, where there were consequences for hurting your children and education was free, I sensed something sinister under the gleaming surface. Something stark and painful, and utterly relentless that refused to know how much its wealth and serenity was built on the pain of others, stripped for parts by white colonisers and taught to hate themselves. It made it hard to be around people, in their happy ignorance. It made it hard to feel safe with them. So I lived in my own world, alone with what was real to me alone. I had no words to describe any of this. I loved my friends, but I couldn’t bring them in there with me. I knew the flat place was trying to tell me something important: something that Britain didn’t want to know. I just couldn’t work out what. And it hurt that no one else could see it. When the wading birds go home for the summer, warblers from Europe and Africa take their place, eating, breeding and shouting seductively at each other. Little brown birds, mostly: flighty, quick, difficult to glimpse and to distinguish. Cetti’s warbler, Dartford warbler, grasshopper warbler. Garden warbler, marsh warbler, reed warbler, willow warbler, wood warbler, sedge warbler. So many and so alike, and in the summer the trees grow thick leaves to hide them from view. At Slimbridge, my friend peered at the reedy edge of a pond, binoculars to his eyes, muttering about a warbler, half to himself. I had no idea what kind of warbler it might be. Between the shivers of leaves, and the everyday swell and tremble of my vision, I could barely see the bird at all. How do we see things? Usually we see what we know: what we expect to see. If there’s a mountain in the middle of a plain, we stop seeing the plain. The mountain ‘matters’; the plain doesn’t. Our cultures tell us what’s worth seeing and what isn’t. What counts as real and what doesn’t. In a flat place, we’re told there’s nothing to see. But the life I’ve lived has made me struggle to see what I’m supposed to: to focus on the right things, and ignore the wrong ones. What I can see instead, all the time, is the flat place. As we went on along the path, the birdwatching hides fell away. Suddenly there was a high, raised bank on the left, a little furred comb of grass along the top, joining earth to sky. I knew what that bank held back from us: the flat stretch of the Severn Estuary, too muddy to step upon. The Bristol Channel brings up armfuls of brown mud, day and night: its funnel shape and sandy base turn the water heavy with silt. But in that mud lives delicious food for wading birds. Redshank, curlews, wigeon, shelduck, dunlin all gather to feast on ragworms and clams. If you cut a square metre out of the Severn mud, just 2.5 centimetres deep – like a big thick square of turf – it would contain the same number of calories as 13 Mars bars, all in snails and worms. There’s a richness in flat places, and the birds know it. My friend was getting excited. Wading birds are his favourite, and they’d be out on the estuary.
Birdwatching is a good hobby if, like my friend, you enjoy seeking focal points, ways of ordering your experience of nature. Giving things names. Yet there’s another kind of life, which is about living alongside things that have no names: memories that can’t be explained. The day something might have been put into my shoes, to smuggle it across a border. The day something was injected into me, at home, for reasons unknown. My grandfather, roaming the corridors with wide eyes, screaming and shouting for help. The men who came to the house and had conversations in whispers. And throughout, my father: mouth stretched with rage, throwing a metal box at me – its wires dangling and snapping – with a hatred even he couldn’t name. Such a nameless life means that, normally, the longer I spend around people, the more I feel like I’ve been set on fire. But my friend is good at letting me be. He carries around my world respectfully, without prying, like a very polite bellhop with a lady’s handbag. On our right were the inland flats: a river winding through, trees assembled at the back like an audience of mixed height, watching the bare stage of the level landscape, as I was: the prickling nothing that was happening all over it. Yellow flowers waved stiffly, out of sync, like buzzing made visible. We passed through a tunnel of trees stretching over the path, almost touching overhead, and then – suddenly – the bank, which had been blocking our view, gave out, and there was clear flat land uninterrupted between us and the Bristol Channel. Salty grassland stretched out wide and, beyond, a little strip of sea. What was the land beyond, my friend wondered? Was it Wales? Already my mind was settling into straight quiet lines. The white shine on the green land and the smooth grass felt like hands running reassuringly over my head and down my neck. My friend walked ahead, towards the sea, and I took a photo of him, tiny on the path, in his pink shirt, with the blue sky arching over. Four years ago, none of this – light, comfort, awe – would have seemed possible. I was 29. I’d just finished my doctorate, and got an academic job that would last a whole three years. This changed my life because it paid enough for me to start weekly therapy. I went into my therapist’s office – bony, exhausted and struggling to want to stay alive – and explained that nothing had happened to me, and could she maybe help, please? My therapist was delicate and wise. She knew when to be offhand. Almost in passing, she mentioned complex post-traumatic stress disorder (cPTSD). I snatched up the term and went straight to books, to the internet, to strip it for meaning. Complex PTSD, I learned from the psychiatrist Judith Herman and the psychotherapist Pete Walker, is different from the sort of PTSD we associate with war trauma, or attack. It doesn’t turn on a single, traumatic memory, which marks the point when, for the survivor, the world turned from OK to not OK. In complex PTSD, the world may never have been OK in the first place. Complex PTSD is caused by ongoing events – often, where it feels like they’ll never end, or there’s no hope of escape. It’s worse when the traumas were caused by someone who was meant to take care of you. It’s worse if they start when you’re very young: too little to know what counts as an ‘event’. Or what counts as something being ‘not OK’. This explained why the flat place in my mind had no landmarks I could pick out.
No single terrible thing had happened to me, yet my whole life had been filled with a nameless terror and fear since I was born. I’d learned to dissociate to protect myself – to vanish from terrifying situations that I couldn’t fight or fly from. And that had been a wise response, said my therapist, to the situation I’d been in. Now, in Britain, a new way of handling life and its terrors might be more helpful. With my therapist, I started lining up what I could see, in my mind, with what I felt: sliding them up along the same straight horizon. And I started going for walks in flat places. Morecambe Bay, in the northwest. The Cambridgeshire fens. Suffolk. Orkney. I dug my toes into mud, traced shapes in shingle, and stared at long gorgeous horizons in places that held themselves unapologetically in their strange refusal to be conventionally attractive to viewers, seducing them with hidden turnings or mystical peaks. In such places, I could be strange too: inscrutable, solitary, refusing to fit into an easy story that rose to a climax and fell to a satisfying ending. What was inside me found its counterpoint in the fens and mudflats. I was no longer alone. From those flat places, drained and bare and empty, and which hid nothing – which, like me, couldn’t stop showing their damage – there rose up stories of more migrants from Asia and Africa. Not birds, this time, but cockle-pickers, farm-workers, a human zoo, a labour battalion. Migrants whom Britain does not know how to see; whom it prefers not to see. I wrote about these walks in my book, A Flat Place (2023). I put the flat place inside me on to paper, made it into a solid flat rectangle bound between boards, so that it didn’t need to surge up under my eyes any longer. I could show it to friends who loved me. There were little birds out on the mud. I could see them but I didn’t know what they were. My friend had his binoculars out, and was muttering. ‘What do you think that is?’ As we relaxed into the space between us, birds started slowly to come into focus for me. I leaned in, peered through his binoculars. ‘I can see grey,’ I said. ‘And a bit of black. And maybe brown? Near the head?’ There was a pause. ‘Oh,’ said my friend. ‘It’s a wigeon.’ The plural of wigeon is either wigeons or wigeon. The males have brown-russet heads, peach chests, grey bodies. We’d seen almost no one since we left the main centre, but now we stopped near a man who’d set up his camera on a tripod. His friend was sitting on the ground, nearby. They were both talking, neither of them listening to the other. ‘I was hoping we could have the soup at the centre,’ said the friend. ‘But I saw on the board, it’s tomato. I can’t stand tomato.’ ‘Either a curlew or a whimbrel,’ said the cameraman, curving the lens round. ‘Earlier, the way it was moving made me think curlew. But now I’m not so sure.’ ‘So I suppose we’ll just do sandwiches,’ said the friend. ‘I don’t know what they’ll have in the way of vegetarian though. If it’s just egg…’ ‘This would be the right time of year for whimbrel,’ said the cameraman. How can we ever know each other? How can we even know what we are seeing? My vision ran over the clear brown land, mirrored with blue and white where the water had come in. It ran and ran as fast and as far as it wanted. It ran over the tiny birds, unseeing, over the wigeon and the curlews or whimbrels.
It was the flatness I could see, bigger and better than anything else in that landscape, full of brightness and clarity. It didn’t have an existence for anyone but me, that day. But I could see it and felt I knew what it was. Right at the end of the big map, printed on boards throughout Slimbridge, was a kingfisher hide, facing a river. When we got there, the hut was full, so we hovered waiting for a seat to become free. Who wouldn’t want to see a kingfisher? They are so beautiful and elusive: a rare, jewelled handful of blue and orange in among Britain’s collection of little brown birds. I’d seen a flash of blue down at a river, once, the year I was bones, but that was the closest I’d come. Once we were seated, my friend leaned next to me. This was allowed, because we were friends. Intimacy is very, very hard for me. This is one of the most powerful and painful parts of cPTSD. At its root, complex trauma is relational trauma. It comes from being totally dependent on someone, for a long time, and being catastrophically betrayed by them – so catastrophically that they distort your sense of other people, and what they will do to you. Complex PTSD can mean feeling that other people aren’t real, or safe. That you are fundamentally different from them and can never share a world. Or, worse, that you are essentially defective and repulsive, and wise people should stay away from you. Yet the only way out of cPTSD is relationship. It’s a cruel irony. The only way to start feeling better is to get close to people, and trust them: to have the experience of them not betraying you. It’s difficult, because everyone is busy and human and distracted, and makes mistakes. A little slip on their part will prove, beyond doubt, that you were right to be suspicious in the first place. My experience of cPTSD made it hard for me to imagine that anyone would want to be near me, ever. I always marvelled when my friends leaned in close or hugged me. But when I’m sure it’s safe and allowed – that I won’t harm them, or disgust them – I can’t take my hands off them. I drape myself over them, poke my chin into their clavicles, touch their heads. I’m sold. I’m theirs. I touch them again and again, to check they’re still real. My friend put his binoculars over my head, and the lady sitting next to me told me where to look. The kingfishers were coming in and out of that little hole in the bank, she said. I tried to find it through the lenses. Twice everyone in the hide gasped and started clicking cameras, while I waved the binoculars frantically, unable to see what they were seeing. The third time – at last – I saw it. The kingfisher came out of the hole; it sat on the twig. I saw its orange tummy. I saw its little head turning, taking everything in, at peace. I looked and looked until I felt guilty, and held the binoculars up to my friend. But he shook his head. ‘This is your first proper kingfisher,’ he said. ‘I’ve seen them before, lots of times. You enjoy it.’ I turned back to the bank, and went on seeing what everyone else could see, till the kingfisher went back into its hole and the moment was broken. What I call the flat place inside me, now, is the feeling of intensity, of angry stubbornness: of knowing that I am real, and that what I know is real, even if the world can’t see it. I know what people can do to each other. What parents can do to their own children.
Although the good moments get more and more frequent – when my friends and I find, even briefly, that we’re seeing the same thing at the same time – in the end, I wouldn’t trade the flat place for anything. Even if it means living mostly in my own world, alone with the memories without names that draw my eye endlessly but never rise into focal points in the flat place inside me. Shoes. A metal box. The scratch of a needle. The things I alone can see.
Noreen Masud
https://aeon.co//essays/flat-places-are-the-ground-that-my-mind-is-built-upon
https://images.aeonmedia…y=75&format=auto
Music
One day, my hand stopped speaking to my brain. As a doctor and flute player, I had to understand this strange affliction
‘All movement has a direction, and that direction obeys a motivation that is accompanied by an emotion.’ – from Limitless: How Your Movements Can Heal Your Brain (2016) by Joaquín Farias
The morning after performing the concert of my life, I could no longer play the flute. The pinky and ring fingers of my left hand failed to cooperate with what my mind wanted to do – I couldn’t work the keys. The harder I tried, the more my fingers curled into a claw, stuck in spasm. Even stranger: no other activity was affected. I could type on a keyboard with the same facility as usual and play scales on the piano with unimpeded finger action. The concert, the capstone of my master’s degree in historical performance at the same university where I’d worked as a palliative care physician until 2019, was in March 2020 – among the last before the COVID-19 lockdowns. My weird finger problem seemed small compared with the unfolding pandemic. I initially opted for self-diagnosis, starting with a medical process called a ‘rule-out’. For instance, I ruled out a stroke. Otherwise, why did I have symptoms only when I played? I ruled out an injured hand. I couldn’t remember hurting or straining it. I had no pain, no history of arthritis, and no wrist, arm or shoulder movement limitations: no numbness or tingling. I could air-play an invisible flute with virtuosity; only a real one induced the symptoms. My other hand worked fine. I felt well. So I ruminated on other possibilities. Had my brain-finger circuitry become unglued or rewired? What was the origin of the spasming – my hand or my mind? Was this an issue of age? Of nerves? I found myself confronted with a problem that my background as a physician could not make sense of. From another musician, I learned that my experience was not unique. This trusted colleague speculated I might suffer from musician’s focal dystonia. I was embarrassed that I had never heard of it. I soon discovered that I might have a disorder that has plagued some of the world’s most famous musicians. The 19th-century German composer and pianist Robert Schumann was thought to have dystonia, based on his letters to friends, and used a weighted contraption to strengthen a rogue finger. In his diaries, Glenn Gould, known for contorted body postures at the keyboard, described symptoms in his left hand and arm as if writing the definitive dystonia textbook. And Leon Fleisher, after years of misdiagnosis and a right hand frozen into a claw (he played the piano with one hand instead!), brought worldwide attention to dystonia in musicians as never before. The term ‘dystonia’ is rooted in the Greek prefix dys, or difficulty, and tonus, meaning tone or tension. It refers to involuntary disruptions in muscle tone that cause spasms and shakes. It has been divided into many categories and subtypes, depending on the body parts affected and the age of the person when it began. Primary focal dystonia affects specific muscle groups and does not connect to an underlying medical problem. It seemingly comes out of nowhere, and otherwise healthy people have it. Persons can experience, for instance, an imbalance in the neck muscles called cervical dystonia or torticollis. The neck pulls in one direction while the opposing muscle, usually working to keep our gaze forward, stays inert. Imagine a situation where your neck is drawn to the right against your will whenever you speak or walk. Task-specific focal dystonia is related to repeating a physical action, like trilling a note on a keyboard.
Smaller muscles working in refined ways seem most vulnerable. The precise movements characterising the muscle actions of archers (target panic), tap dancers, runners, hairdressers, golfers (the yips), musicians, and computer programmers are found among people living with dystonia. Musicians seem particularly susceptible: as many as one or two in 100 are affected, usually professional players in their 30s or 40s. I was a musician long before I was a doctor. Returning to the flute was a gradual build, rediscovering long-dormant musical chops. My skills were dusty but acceptable and, to my delight, I was welcomed into the music community at the university. I play the baroque flute, a light and airy instrument with a woody hollow sound whose heyday was 17th- and 18th-century Europe. Despite a simple design – a series of unadorned tone holes ending with one key on the foot joint – the baroque flute has an astonishing capacity. With the correct technique, musical colours and textures are shaped by limitless combinations of airspeed, embouchure (combined action of the tongue, mouth and breath), and the light toggling of fingers over the holes. Timbres can be guttural and raspy, liquid or penetrating. Retaining control of the range is like a sprinter at the start, combining Zen and quick-twitch readiness. Tensions from muscles seen and unseen can translate into disaster, amplifying as rhythmic blips, technique malfunctions or a quivering, undisciplined sound. At first, I thought I could practise my way out of the problem. Each morning, I pretended all was well. Then I would try to play. The hand spasmed and shook with barely a touch to the instrument. Had I forgotten how to play? Days went by, then weeks. After several months of denial, my search for answers began with a walkabout to various medical subspecialists – a neurologist, a hand surgeon, and a primary care physician. All were kind, attentive listeners, and outstanding clinicians. But I began to understand how few solutions the medical community had to offer. They told me dystonia was incurable and to switch to another instrument – for a musician, the equivalent of being advised that I could always have another child. They told me about illnesses I did not appear to have. Although I always brought my flute and offered to demonstrate, no one seemed interested in observing me while I played. As soon as it was clear that my problem did not match up with their therapeutic solutions, I was passed off like a hot potato to the next practitioner. And no doctor asked me how I was doing, even though I was now living in the wreckage of my falling-apart musical life. Not everybody who studies and treats dystonia agrees on the cause or solutions. The medical literature reveals a disorder that for decades has existed in the hinterlands between psychological and neurological. Descriptors such as ‘elusive’, ‘perplexing’, ‘intriguing’, ‘baffling’, ‘fascinating’ and ‘enigmatic’ pepper the research – signifiers that a unifying theory has yet to be discovered. On one side is the exploration of dystonia as a physical expression of internal mental conflict or defences (hysteria, neurosis); on the other, the search for identifiable structural changes in the brain. One focuses on subjective experiences and personal history, emphasising personality traits; the other aims at diagnostic precision primarily using scientific techniques like brain imaging.
Neither approach in isolation has satisfactorily explained the complexities of the disorder, or why some people get dystonia and others don’t, despite similar personal characteristics, genetics or environmental conditions. This remains a mystery. Steven Frucht, a neurologist and a musician’s dystonia specialist (one of very few in the United States), bristles at reducing dystonic movements to a matter of opposing muscles ignoring each other: ‘That is a wild oversimplification.’ Frucht is the director of the movement disorders programme at NYU Langone Health in New York City and a classical violinist, and has worked with dystonia patients for more than 25 years. I spoke to him recently about the current state of dystonia treatment. He thinks it’s a matter of brain programming. ‘This is a software problem,’ he said. He sees the growth of functional brain imaging with PET and MRI brain scans as game-changing in understanding dystonia as a neurological disorder. (In a nutshell: they stick a musician in a brain scanner and watch what happens to the pictures of the brain while they play.) Brain imaging has allowed researchers to resolve previously held assumptions about the structural location of dystonia in the brain. New research describes disruptions in conductivity between the parts of the brain involved in the execution of fine motor control. In other words, there is no one dystonia locus in the brain – it’s more like the brain communication network is stuck on autopilot, like being forced to ride back and forth in perpetuity on a subway car with no way to exit. Frucht conducted a 2021 study looking at the use of tiny doses of botulinum toxin (aka Botox) injected directly into the offending muscle. ‘It’s the refinements in how to use toxin, how to localise injections, and how to choose the muscles, that have changed how we treat it,’ he said. He found that musicians regain some of the balance in muscle activation without creating muscle weakness when botulinum toxin (BoNT) is used in microdoses, with a booster dose given a few weeks later. ‘If you create weakness in the muscle [BoNT is a paralytic agent], you have overdosed the patient,’ Frucht said. The bigger-picture downside of BoNT injections for musicians, according to Frucht, is that there are only two people in the US whom he trusts to have the level of technical expertise required for musician’s focal dystonia (MFD): one is at New York University, and the other at the Mount Sinai Hospital, also in New York City. (Frucht does not inject limbs himself.) Another complicating factor is that the Food and Drug Administration does not approve BoNT for use in upper-limb dystonia, only for cervical and eye dystonias, so insurance will not cover the cost to patients in the US. Frucht points out that BoNT injections, considered by the medical community to be the primary therapy for dystonia, are no panacea for MFD. It is rare for musicians to regain their full playing capacity after a BoNT series. He has found that pianists respond better than violinists, who rely on subtly executed micromovements of the hand. ‘A quarter of a millimetre is everything,’ Frucht said. As for me, it has been two years since that big concert and my approach, partially rooted in temperament, has been to find the least invasive, least risky alternatives – so no BoNT. Instinct told me to go slow. So I learned to juggle (three balls). I entered a Bollywood dance contest (I lost).
Improved my exhale. I learned hand exercises: two minutes slowly rotating my left thumb. I am back to practising in short stints but have learned to put the flute away at the first signs of fatigue or hand tension. Simply put, I am learning how to calm down. During the COVID-19 lockdown, I discovered a web-based dystonia recovery platform started in 2018 by Joaquín Farias, the director of the Neuroplastic Training Institute in Toronto and adjunct faculty at the University of Toronto. Farias, who has a doctorate in biomechanics and a master’s in neuropsychological rehabilitation, has been observing and analysing patients with dystonia for 25 years and has written two books on the subject. His website houses a smorgasbord of mostly movement activities meant to help a body rebalance a frayed nervous system. Every day, I try out another offering – I might, say, work on the curated set of exercises, or learn Shaolin kung fu. But mainly the site has helped me cope with feelings of loss and isolation. Farias is a passionate guy. Get him going, and he will talk for an hour about how the brain connects our movements, emotions and thoughts. He is 50, compact, fit, and seems perpetually on the go. Our several conversations were squeezed into his intense patient care schedule and work on expanding the platform. I caught him, on one call, during a brisk walk in his native Spain. For him, the mystery of dystonia is understanding inciting events, the shock that kicks off a dystonic response. His life work focuses on uncovering common threads, and finding a unifying theory, regardless of the dystonia type. Farias does not see dystonia as an illness to be cured or tethered to a trauma diagnosis. He veers from standard approaches that medicalise personal characteristics into a set of dysfunctions to be managed. Instead, he analyses dystonic responses as a state of perception, like autism, a condition of ‘being, living, and feeling the world’. In his book Limitless (2016), he observes that his patients seem to ‘live in a state of overstimulation’, as if their internal clock has been sped up. The book is a summation and analysis of hundreds of patients and goes well beyond persons with MFD. His profile of the typical dystonia patient is hyperalert to environmental dangers, often ‘brilliant’, and ‘very determined’. He muses that the primitive dystonic would be the one not eaten by a tiger. His patient care goal is to unwind and reset. Here’s what I remember most about my experience of dystonia: a pervasive feeling of fatigue, strange sensations of detachment from my hand – like a phantom limb; unexplained bouts of nervousness; vague anger; a bloating pain as if my stomach was lodged permanently in my throat. I blocked friends from asking too many questions, my face reading: I am a trigger warning. Farias observes in his patients that all dystonias, regardless of the type, produce non-movement symptoms – some more than others. Commonly he sees patients with sleep issues, rashes, dizziness, menstrual problems, autoimmune conditions, and food sensitivities. He recommends patients be checked by a medical doctor for underlying problems, particularly endocrine and digestive issues. He theorises that symptoms beyond movement stem from ‘dysautonomia’ of the nervous system. In other words, changes in the brain producing dystonia can also cause derangements in bodily functions, like digestion and sleep. ‘It is difficult to say at this point how non-motor symptoms can be a consequence of a dysregulated nervous system.
It makes sense as a clinical observation, but the mechanisms are not yet completely understood. More research is needed,’ he tells me in an email. On our last call, we spoke about botulinum toxin. ‘I am not against it,’ he said. ‘Botox only affects the muscle; it has not been demonstrated that it affects any other aspect of the condition.’ According to Farias, BoNT could help certain patients if surrounded by a host of supportive interventions – including mental health and the retraining of body biomechanics. Botox should be used well, and well means ‘injecting the right muscle that needs to be injected, and no more,’ Farias said. He tells me he sees far too many people asking for help after a lousy injection experience, and worries that focusing on a pharmaceutical-based solution to dystonia has kept the field from progressing. During the height of the pandemic, I attended a Zoom seminar he hosted, along with, among others, a 19-year-old cellist from the Juilliard School in New York who has a deranged vibrato and an uncooperative fourth finger, a professional classical guitarist with a frozen curled pinky, a 35-year-old IT specialist who was forced to type with only her index finger, and a 17-year-old high-schooler whose illegible scribblings could be accomplished only by moving her entire arm. All were under the care of a neurologist and had been through one or several courses of BoNT injections. I was the relative newcomer with a year of dystonia; others had had it as long as 10 years. The seminar toggled between sharing stories and Farias working with each of us, masterclass style, peeling back layers of compensations and disordered movement patterns. When you were not the one under his scrutiny, it felt like watching brain surgery from an observation deck. It is the compensatory movements that Farias wanted us to recognise in ourselves the most. For musicians, relying on changes in body positions to correct dystonia, in the hand, for instance, can alter playing technique in ways that can be difficult to reverse over time. I remember watching Farias dissect the Juilliard cellist’s hand position. Her dystonic habit was to muscle her uncooperative finger into pressing too hard into the cello’s strings. The extra tension created a stiff forearm, which forced her to use too much of her shoulder to play. This extra work made her neck hurt. ‘The pain in her neck tells me about the compensation happening in her shoulder,’ Farias said. It did not take her long to gas out with fatigue. She beamed frustration toward us like a beacon from a ship stranded at sea. Flow and ease can be the most challenging part of recovery because it requires developing the habit of intentional, slow and mindful movement. This mindset can vex musicians who are often trained to build their technique by whipping difficult musical passages into submission. Farias had the group make slow circles with the thumb – five in one direction, five in the other. He encouraged us to breathe calmly. ‘Compensations are worse when the movements are fast. Don’t let yourself jump ahead mentally,’ he warns. Treating a movement disorder with movement is the foundation for Farias’s dystonia recovery platform. The thumb exercise we did in the seminar is from the platform’s series, designed for people with any dystonic hand problem – writers, keyboardists, musicians, golfers.
Farias believes in regulating communications between brain pathways that have disengaged, much like an electrical relay station with a powerline down. The goal of movement exercises is to extinguish faulty lines between the brain and the hand, rewiring healthy motor patterns. Farias prevents platform participants from racing ahead and bingeing on a series of exercises all at once. He enforces a slow go, meant to rein in skittish internal selves. ‘To tame a wild horse, you need to approach it slowly,’ he told seminar participants. While Farias is busy tackling the impacts of classical dystonia, Anna Détári, a music psychologist and professional musician (flute), has built her research career around deconstructing the assumptions of music education, hoping to debunk entrenched beliefs about training musicians. A recovered sufferer of oromandibular (better known as embouchure) dystonia, which is defined by muscle spasms in the jaw, Détári focuses her academic work on the prevention side, an area where the medical profession remains silent. If I were an elite athlete, I would be surrounded by a multidisciplinary treatment team: a psychologist, a physiotherapist, a massage therapist, and a doctor. Musicians are often compared with athletes, but that may be lip service where the medical team is concerned. For musicians, it is often catch as catch can, and treatment approaches diverge when dealing with injuries, Détári explains. For one thing, a dystonia diagnosis is often shrouded in secrecy, as if naming it out loud will cause it to morph from a ghost-like malady into a doppelgänger, wreaking havoc on professional careers. Musician training relies heavily on the master-apprentice model, in which musical knowledge and technique are passed to a trainee like a holy act. Relationships with music teachers can be intense and exclusive. Pedagogy is delivered without much quality control. Music educators are rarely taught functional biomechanics, and often use their bodies to demonstrate correct positioning and stance, regardless of the physical particularities of a student. Orchestras and other professional musical settings rarely serve as a point of access to team-based care for an injured musician. Perfectionist thinking, thought to be associated with dystonia, is a pre-requisite for entry into elite playing, with the expectation of a ‘clean’ performance as the virtuoso’s signature. Researchers speculate that classical musicians are especially susceptible to dystonia, as opposed to jazz or other genre musicians, because of the limitations placed on personal expression by scored music and the pressures to carbon-copy recordings of famous players. The musician’s brain is its own microcosm. Sounds collected by the ear scatter like fairy dust into the auditory somatosensory loop and reconfigure, some musicians will say, as colours, or work their way into the breath, or into physical sensations deep in the core. Eckart Altenmüller, on a call with me from his office at the University of Music, Drama and Media in Hanover, Germany, explains this phenomenon contextualised to MFD. Altenmüller was trained as a neurologist, and the singular focus of his 30-year career has been to understand the effects of music on the brain. Search for academic articles on MFD, and his name is almost always on the list of authors.
Musicians with dystonia will unconsciously change their body shape to preserve the integrity of the sounds they hear, according to Altenmüller. ‘Compensation is not about the muscles, it’s about the auditory representation of the piece,’ he said. ‘The only thing that the musician’s brain wants is to play a nice tune – the stiffening of the wrist, or lifting of shoulders, it’s all the brain’s motor system, trying to provide a nice fingering.’ He divides his time among patient care, brain research and running the university’s institute of music physiology and musician’s medicine. When he was late for one of our calls, it was because he’d been caught up in a conversation with a patient about treatment options for dystonia. In recent years, Altenmüller has shifted his focus from trying to rid the world of dystonia to managing its effects with brain retraining. He estimates that roughly a quarter of his patients are good candidates for BoNT injections and, even then, works to persuade them to enter the multidisciplinary retraining programme at the institute. ‘You must be very clear that, when you change the motor system, you also change the perception of the hand,’ he explains. ‘I think every patient needs to work on retraining.’ Like Farias, he now recommends, instead of BoNT, vocal training, singing and yoga for embouchure dystonia, for instance. ‘I have quite a lot of patients who had a crisis for even longer than a year and who came, completely recovering, out from the crisis,’ he said. I tell him about my own experience with dystonia recovery. ‘People won’t say this exactly, but dystonia can be interpreted as a failure of talent,’ I said. ‘It’s like you’ve been kicked out of a club.’ ‘Yes, exactly. I want to help my patients to overcome this,’ he replies.
Lynn Hallarman
https://aeon.co//essays/dystonia-plagues-musicians-and-has-no-easy-remedies
https://images.aeonmedia…y=75&format=auto
Economic history
In post-communist eastern and central Europe, history is intensely personal and economics is saturated with moral feeling
In central and eastern Europe, history weighs heavily on personal relationships. When Russia first attacked Ukraine in 2014, the bonds of family and friendship – of which there were so many between these two societies – came under strain. In 2022, when Russia launched its full-scale war, these tensions became unbearable. Social connections shattered like the branches of a treetop in the fury of a hurricane. At the very moment personal relationships yielded to history, it became evident that it was not solely a matter of politics, but rather deeply rooted moral convictions. After all, how could one maintain a friendship with a person who adheres to the Kremlin distortion of reality, who is willing to condone the propagation of blatant lies and the justification of mass murder? Looking back a bit further into the past, we can observe that the act of breaking social ties, of forcibly severing relationships, is not unprecedented in this context. It has occurred during revolutionary events and significant transformations – the fall of communism, the 1989-91 revolutions, and the turbulent 1990s – which brought fundamental changes to the social fabric of central and eastern European society. What, precisely, are the underlying forces at play here? How can we comprehend them from a sociological point of view? In her book The Taste of Ashes (2013), the intellectual historian Marci Shore guides us through the manifold and multilayered entanglements of history and interpersonal relationships in central and eastern Europe after the fall of communism. Shore portrays a world in which there is no innocence – there can be no innocence – because each and every adult individual was to some degree involved in the system by which the communist parties wielded power and control. By the 1980s, the system had become primarily structured around a corrupt manifestation of political loyalty, characterised, above all, by complicity, spineless nepotism and backdoor dealings. This historical legacy holds great sway over interpersonal relationships. And so, after 1989, when the regimes finally broke down, Pandora’s box was opened. The questions arose: who is responsible? Who can be trusted? For those discovering a trusted friend’s collaboration with the secret police, how they handled the situation post-1989 became crucial. Downplaying the consequences and refusing to sever ties with the past could lead to the end of the relationship, while remorse and acknowledgment of authoritarian rule’s consequences might open a path to reconciliation. People faced choices, determining their associations based on moral clarity and the pursuit of truth – or else, on avoiding unsettling questions. Shore’s account is a captivating everyday phenomenology of politics, laying bare how political pasts and futures can create divisions among people. However, there is more to it. Another subtle process can also alienate former friends: the issue of economic deservingness. In June 1992, soon after the fall of the Wall, Berliner Zeitung, a formerly state-run newspaper in the communist-ruled German Democratic Republic (GDR), invited its readers to submit letters to the editors on the topic of friendship. What was the meaning of friendship today? One reader recounted that: ‘when my friend got married, it did not impact our friendship at all … our relationship broke off, strangely – but tellingly – with the fall of the GDR. 
Fundamental differences in our characters revealed themselves, ones we were tacitly aware of, but which had not impacted our friendship before.’ What might those differences in character be? During the 1990s, throughout eastern Europe, people witnessed profound economic changes. The transformation period was characterised by contradictions: it gave rise to great political and economic accomplishments, but also to innumerable socioeconomic tragedies. Social upward-mobility expectations, for many, did not materialise. In disadvantaged regions, poverty surged, and many lives were lost due to health-related complications. In an analogy to the deindustrialising rust belt in the United States, social scientists have described the profound health consequences in certain regions and societies of eastern Europe as ‘deaths of despair’. In some areas, especially in the post-Soviet societies further east, economic shock became a persistent reality. In their book Taking Stock of Shock (2021), Kristen Ghodsee and Mitchell Orenstein have calculated that, on average, it took approximately 17 years for the 28 post-communist societies to return to their levels of economic output of 1989. As the state retreated from the economy, public resources like healthcare were marketised and defunded. Welfare systems weakened. Life chances diverged and social inequalities surged. Individuals who once resided in the same neighbourhoods, possessed the means to acquire similar goods, and embarked on comparable if moderate vacations found themselves occupying vastly disparate social positions merely a few years after the collapse of the Iron Curtain. People developed different ways of coping with these new realities. This is where ideas about economic deservingness come into play. Disruptive economic change and the inequalities it gives rise to are not merely abstract concepts; they resonate deeply within a person’s heart and mind. Often, it is through the lens of economic deservingness that people make sense of such transformative shifts. Economic deservingness involves two aspects: first, the distribution of material resources, which raises questions about fairness and redistributive justice. After 1989, who succeeded in moving up the social ladder, what did they gain, and what were the reasons behind their success or failure? Second, deservingness is evaluated at a personal level, entailing judgments about individuals and their moral qualities. What personal qualities are reflected in material gains or losses during the transition to this new society? This uneasy connection between the economic and the moral – unsettling because it brings together two realms that are often, and for good reasons, thought of as separate – is what deeply impacts social relationships. For better or for worse, it is within their social networks that individuals develop a deep understanding of economic inequalities and express their nuanced beliefs about justice. That’s what we can learn from listening to people’s stories and memories of the post-1989 changes. Before illustrating this point in more detail, it’s worth noting that this is not limited to the transformations in central and eastern Europe after 1989. Even beyond this context, social scientists have often noted that people draw on social comparisons to make sense of inequalities.
They assess the significance of differences by comparing themselves to others, such as examining the income or wealth of their peers. Through these comparisons, individuals position themselves within a social framework, determining what is considered normal or excessive. However, frequently this kind of research only touches the surface when it comes to people’s reasoning about justice. Focusing solely on isolated instances of social comparison fails to address the meaning of the kinds of social relationships that are at stake here and, consequently, their moral implications. It overlooks the historical significance that underlies social connections, and the formative power of experiencing how economic pathways might undermine, and change, egalitarian relationships. This becomes apparent when we examine the fractures within the social fabric of post-communist societies, the cracking and breaking of network branches. Individuals who lived through the 1990s often recount stories of losing connections, including former friends, in the aftermath of the system changes. Many have vivid memories of severing ties with individuals they were once closely associated with. Lenka, a Czech healthcare worker in her late 40s, remembers ‘separating’ from a former friend in the mid-1990s who made her feel inadequate, challenging her wish to ‘stay normal’ amid the profound changes after 1989. Others remember ‘breaking off contact’, ‘severing ties’, ‘not wanting to be associated anymore’ with some individuals. As network sociologists know, relationships may end for reasons that are purely circumstantial, such as having less time for each other, moving away, or shifting jobs. However, individuals who witnessed the rapid increase in inequalities after 1989 often recount stories that indicate these ruptures are not neutral in nature. Instead, they carry a moral significance. The language employed by individuals when narrating such experiences reveals a sense of rupture, or what the sociologist Eva Illouz calls ‘micro-traumatic events’ in her book The End of Love (2019). In these episodes, a breach of trust comes to light, an instance where the other person is held morally accountable for the resulting outcome. In many cases, these breaches of trust are framed by economic deservingness. The 1990s are typically remembered as a period of great economic opportunities. With the fall of communism, there was freedom, there was a new market society. People wanted to be part of the process of societal opening-up. It is often expressed that the early 1990s provided a chance for individuals to seize control of their own destinies, liberated from the constraints of socialist complacency and the uniformity of life prospects. But what about those who were not ready to embrace this new future? The process of privatising the formerly socialist economy began in the early 1990s, soon after the political transition. Numerous companies underwent downsizing or disappeared altogether; millions of jobs were lost. In certain regions of East Germany, for example, unemployment affected up to a third of the adult population. Individuals had to navigate the challenges of economic hardship. Some perceived those who were struggling as burdens, lacking the willingness to embrace the available opportunities. There are accounts of people severing ties with former friends whom they viewed as failing to seize new possibilities. 
For instance, Robert, a successful East German engineer and entrepreneur in his mid-60s, recounts how the period after the 1989 revolutions purified his social circles: ‘So some people were breaking away … Those who remained, they share your values. They know that you have to more or less take matters into your hand.’ Questions arose about whether the other person was constantly ‘complaining’ or instead demonstrating ‘initiative’ and striving to make the best of the situation. In episodes of broken friendship ties, narratives often emerge about individuals who were seen as ‘inert’ and ‘unwilling’ to take responsibility for their own situation. Particularly for those who had experienced upward mobility, this attitude of defiance became increasingly intolerable, leading to the end of the friendship. Such views allowed the successful to uphold meritocratic values and assert their own commitment to hard work as an intrinsic conviction. During the 1990s, individuals’ life prospects were shaped not by their personal commitment or effort, but by structural forces There are also accounts of breaking social ties from the opposite perspective. With swiftly widening inequalities, it was easy to feel stagnant while others, even in modest ways, experienced upward mobility. Some individuals recount distancing themselves from those whom they perceived as becoming ‘arrogant’, suddenly preoccupied with expensive dinners, travels and self-centred pursuits. These stories often revolve around how a former friend introduced a market logic into the realm of interpersonal trust, thereby violating the sacred boundary that once distinguished the two. Maria, today in her late 60s, was laid off from a large, formerly state-owned East German company during its dissolution in the early 1990s, and endured the challenges of a harsh labour market for years. She vividly recalls an incident at her birthday party in the early 1990s that led to the break with a once close friend of hers: ‘At one point she came to my birthday, as a surprise, but only to acquire customers for her business! She occupied my guests, my friends, in this way! So we separated … So that was a case when we said, “No, I don’t want you around anymore.”’ To Maria, the former bond of equality – founded on an implicit agreement about what truly matters in life – was shattered. Accounts like hers typically do not concern friends who became extremely affluent. Instead, they centre on much smaller, subtler differences. People use these stories to criticise meritocratic beliefs and the detrimental effects they have on the purity of social connections. It’s the nuance that counts here, and the fact that these differences emerged from a previously more egalitarian relationship. It is remarkable that these narratives frame economic realities in moral terms. They emphasise the character traits of individuals, highlighting the way people perceive and evaluate these experiences based on notions of personal virtue and values. This is noteworthy when considering that, during the 1990s, individuals’ life prospects were shaped not by their personal commitment or effort, but rather by factors such as their prior qualifications, gender, geographic location, ethnicity, the fate of their firms, or social connections – all of which are structural forces. Taking initiative and not relying on others to take care of you are seen as indicators of being a ‘good’ person. 
In a similar way, staying true to oneself and avoiding behaviours that prioritise money over friendships are valued traits indicating ‘good’ character. These moral judgments assess an individual’s character. The moral significance involved in these situations lies in the essence of the social relationship as a grown connection. The breach of trust is a breach of a mutual understanding of the history of the relationship. The philosopher Avishai Margalit has eloquently articulated this idea. According to Margalit, betrayal is fundamentally characterised by the disregard of the shared values that previously united two individuals. Betrayal is the act of shattering the meaning of a shared past. Only a strong tie, understood as a tie of mutual commitment, attachment and recognition, can ever be betrayed. Such ties are found in families, but more so in friendship relations. Two friends’ understandings of their past – the past of the self, and the past of the other – is mutually entangled. Because it is constitutive of the relation, the shared past is ‘coloured by the betrayal’. And only a tie that never had as its purpose an external goal can be betrayed. That tie must have been treated as an end in itself. It must have had no other goal but the flourishing of each of the two persons, or more precisely, the flourishing of the relationship itself. Betrayal is the blow to the relation of commitment, which comes with a profound shock to the status of recognition of the other as person. As Margalit notes in his book On Betrayal (2017): ‘The shocking discovery in betrayal is the recognition of the betrayer’s lack of concern; the issue is not one’s interests but one’s significance.’ Court files reveal that the state increasingly persecuted individuals on moral grounds This understanding of the influence of the past, the temporal nature of social relationships, enables us to acknowledge the connection between economic deservingness and betrayal more accurately. We come to realise that the question of who deserves what after 1989 becomes a central concern for individuals. The moral belief that individuals deserve certain economic outcomes, whether through hard work or social support, also extends to their entitlement to specific social relationships. The stories of broken friendships highlight the desire to purify one’s social sphere from relationships that contradict their sense of deservingness. People yearn for recognition of their economic choices and strive for others to perceive them as deserving as well. This moral claim on their environment and the importance placed on it reveal the significance of economic justice in social ties. These dynamics are, to be sure, not solely a result of marketisation after 1989. They have deep historical roots, moulded, in particular, by the social conservatism of late-socialist societies. As the historians Thomas Lindenberger and Michal Pullman have shown for the GDR and Czechoslovakia respectively, the notion that someone was supposedly ‘unwilling to work’ was politically propagated and instrumentalised by the communist parties during the 1970s and ’80s. Court files reveal that the state increasingly persecuted individuals on moral grounds, using charges like ‘socially deviant behaviour’, ‘asocial behaviour’, and labelling them as ‘goldbrickers’ or ‘parasites’ to distance them from the ‘healthy’ and ‘productive’ socialist community. 
In reality, this politics of scapegoating – accompanied by a sharp rise in racist resentment and violence – was a clear indication that the communist parties had lost ideological support and lacked a positive vision for the future. Nonetheless, this aggressive language subtly infiltrated interpersonal relationships. It shapes dynamics of trust and eerily echoes elements of the moral language of market society, even serving as a framework to attribute economic setbacks during the 1990s to individual choices. Today, in our world of ‘polycrisis’, a term popularised by the historian Adam Tooze to describe the simultaneity of multiple disruptive events and processes, we can again observe numerous social reverberations of crisis dynamics. The COVID-19 pandemic, the energy crisis, the war in Ukraine all potentially affect interpersonal relationships, and social cohesion, in intricate ways. History enters the world of those who had read in academic books that it was over. Social branches are beginning to crack in parts of the world that were lucky to be spared this phenomenon in the past decades. In the US, researchers have, for example, discovered that friendship ties, but not family ties, were weakened during the pandemic shutdowns. Drawing from more anecdotal accounts, we can surmise that the experience of this health crisis has prompted individuals to contemplate their genuine priorities in life. As a result, they began to ask themselves: who genuinely shares my values? While the specifics may vary among individuals, the key point is that social relationships often serve as sources of economic recognition. During times of crisis, these very social relationships, and consequently the sources of recognition, become precarious. Of course there are variations in how individuals maintain their social networks. Some, particularly the young, urban and highly educated, tend to cultivate many weak ties – loose social connections that contribute to their economic prospects and connect them to different social circles. On the other hand, those living in rural areas, who are older and less adaptable in certain cultural contexts, often have smaller networks that are predominantly based on strong ties, often also family-based. For them, issues of loyalty and betrayal are prominent. But what holds true for all these groups is that during times of crisis, the boundary between the private and the economic realms may become blurred. It becomes hard to deny that social ties also serve as economic connections, providing individuals with emotional and cognitive support, as well as information about job opportunities. Whenever the domains of the private and the economic intersect and come into conflict in social relations, people usually struggle to ‘keep the world in moral order’, as Michèle Lamont put it in her book The Dignity of Working Men (2000). Narratives forge some ties, and dissolve others Ideas about deservingness are also politically consequential. As social policy researchers have long argued – far beyond the context of the breakdown of communism in eastern Europe – people who believe that individual effort and hard work are decisive to get ahead in society are also more likely to tolerate greater inequalities and to reject a more active role of the state in redistributing resources in society. They have faith that the market will sort things out and are inclined to hold others accountable for their personal failures and misfortunes. 
Those who believe that outcomes like poverty or joblessness cannot be addressed via individual commitment alone but that social support is necessary, in turn, are more likely to wish for reductions of inequality, and favour stronger welfare states. Beliefs about deservingness, in other words, influence the degree of solidarity individuals feel towards others. If these normative judgments are deeply embedded in social relationships, then this also shapes what kinds of inequities people perceive in the first place. Whose fortunes or misfortunes are they going to see? What moral choices do they think are involved, and who do they feel sympathetic to on these grounds? The boundaries of their social networks may coincide with the limits of their imagination. People may, in fact, invent social relationships to justify their privileged position in an unequal society. It has been demonstrated by sociologists that British middle-class individuals employ origin narratives about social relationships to rationalise their own economic status. They tend to portray their upbringing and their family background in a way that links their origins to individuals that are working class, effectively fabricating social ties in the past in order to frame ‘their life as an upward struggle “against the odds”’. During periods of crisis, this becomes even more pronounced: the way people define who belongs to their social circles, both in the past and in the present, indexes their self-perception and the manner in which they confront these challenging situations. By examining their narratives about imagined social environments, we can discern their notions of deservingness. Yet – and this is precisely what memories of the 1990s reveal – there will be ambiguity in these narratives. We should resist the temptation to label people as either pro-market or anti-market on these grounds. Deservingness is articulated in stories, and these stories contain multiple, sometimes contradictory views. It is how people make sense of the world, and also how they act on their social world. Narratives forge some ties, and dissolve others. As observers, our point of departure must be to try to grasp individuals’ notions of personal agency, particularly in navigating economic challenges, during times of crisis. The 1989 revolutions in central and eastern Europe occurred over three decades ago, yet the echoes of these stories, and their enduring moral significance, continue to resonate today. The passing of time does not necessarily heal economic and social wounds. The notion that individual responsibility solely determines economic outcomes is highly divisive and largely misrepresents the workings of society. However, we cannot and should not give up on the idea of political responsibility. As Marci Shore reminds us, the readiness to assume responsibility emerges as a key political lesson from the convoluted eastern European experience of the 20th century, and now – in light of the Maidan protest movement, and the struggle against reactionary Russia and its far Right-wing allies around the world – also the 21st. But economic and political responsibility are not one and the same. The central and eastern European historical experience of the recent past teaches us why we should aspire to a world where there is less moralising of economic lives – and instead, more contestation, including moral contestation, over political futures.
Till Hilmar
https://aeon.co//essays/in-post-communist-europe-economics-is-laden-with-morality
https://images.aeonmedia…y=75&format=auto
Nations and empires
Imperial Russia had little access to the bountiful tropics that other empires enjoyed. So it created its own in the Caucasus
In the age of empires, seeds and saplings of tropical plants were making regular voyages throughout the world. Travelling across continents and oceans via metropolitan and colonial botanical gardens, they not only transformed but also helped to construct the very notion of the tropics. The trans- and intra-imperial circulation of biota shaped a global web that connected colonial realms of the British, French, Dutch and other maritime empires. The Tsarist Empire is never thought of as a participant in this process, but it was one of the stopovers on the tropical plants' round-the-world journey. It was in the South Caucasus that imperial botanists, agriculturalists and upper-class settlers invented Russia's own 'tropical' domain. This region lies nowhere near the tropical zone but, in the long 19th century, this geographical fact mattered little when confronted with the power of imagination. As Catherine Cocks has shown in her Tropical Whites: The Rise of the Tourist South in the Americas (2013), the term 'tropics' was used rather evocatively and indiscriminately, spreading into places like Southern California and Florida. Only at the turn of the 20th century did the term 'subtropics' firmly enter the imperial lexicon, lending an air of science to the idea of tropicality. [Images: Victoria regia grown outdoors in Europe for the first time, planted in June 1912; maples and orange trees; banana plants; an original page from the Prokudin-Gorskiĭ album. Digital colour renderings from glass plates in the Prokudin-Gorskiĭ albums, courtesy the Library of Congress.] The 'tropics' was, thus, a wandering notion, which travelled along with the plants that were its hallmark. In the South Caucasus, it roamed from one place to another – from the arid steppes of Azerbaijan to the river valleys of Georgia – before it finally became ensconced on the eastern Black Sea coast, in places like Sochi (today a resort city in Russia), Sukhum (or Sukhumi, the capital of Abkhazia) and, most comfortably, Batum (today's Batumi, the second-largest city in Georgia, close to the border with Turkey). Why did imperial Russia need the tropics? From the early 19th century onwards, the pursuit of tropical commodities was the main driving force of the acclimatisation of exotic plants. As the century wore on, the tropics took on a new meaning – as places of delight and self-indulgence. Ornamental exotic plants equalled – if not outmatched – useful ones in importance. Hotels, sanatoria and 'climate stations', where tsarist subjects succumbed to dolce far niente, dotted the littoral. Well-off settlers arrived in large numbers. Villas and vacation residences popped up along the coastline. Tropicality entailed settler colonialism: the public and the government alike treated the Indigenous population as too indolent and primitive to be able to unleash the real potential of the local climate. The inflow of 'pioneers' and 'Kulturträger' from Russia proper and the empire's European provinces was seen as the only means to productively harness and economically uplift the region. Lastly, tropicality served imperialism as a symbol of grandeur. Many a member of tsarist society took pride in the very thought that their empire was immense enough to stretch from the polar ice caps all the way to places where coconut palms and banana plants grew. In other words, if there were no tropics, they were worth making.
In 1902, in an account of the years spent in Java, Modest Bakunin – the Russian imperial consul to the Dutch East Indies – reflected on his experience of living in a part of the world that few of his countrymen had a chance to see firsthand. Tasked with advancing trade opportunities, and with searching for a colonial territory for the Russian Empire in Southeast Asia that would host a coaling station for tsarist ships, Bakunin felt disillusioned. For him, just as for a vast number of other colonial sojourners whose career paths took them to the warm oceans, his stay in an exotic place was about boredom and disappointment rather than adventure and excitement. Homesick, he longed ever more for the Old World he had left behind. To escape the tedious routine, Bakunin took comfort in socialising with other Europeans who, for various reasons, had come to stay in the colony's capital, Batavia. This was, as Bakunin described it, 'a tight circle of colleagues, that is, real Europeans who, like me and my family, see their existence in Java as something temporary and transient.' Indeed, the yearning for Europe, as well as the shared sense of Europeanness, was what brought them all together: 'All of us keep and carefully maintain the "European" flame that unites us in our faraway captivity and all the time tells us about old Europe, better and nicer than which nobody has invented anything.' Bakunin enthusiastically described the main advantages that the tropics could offer – colonial commodities produced from heat-loving plants. The cinchona tree, coconut palm, bamboo, rattan, teak and tea, among others, delivered enormous benefits to metropolitan societies. If the tropics, the 'white men's cemetery', were of any use to Europeans, it was for the immense exploitative potential of their vegetation. Whereas his own colonial mission proved an exercise in futility, Bakunin envisioned the introduction of exotic crops – particularly tea – to tsarist soil because Russia, he asserted, had a territorial possession in the south that, in many respects, resembled the Dutch East Indies: 'Transcaucasia for us is just the same colony that Java is for the Dutch metropole. The climatic and soil conditions of the Russian tropics allow us to plant in our domains and successfully introduce many tropical and subtropical cultivars.' Bakunin's consular service in the tropics, if anything, made him an avid proponent of inter-imperial transfers of useful plants from Java to the South Caucasus. Writing about the latter as a tsarist tropical colony, Bakunin followed in the footsteps of many of his contemporaries and predecessors who propagated this view. Thanks to their efforts, by the turn of the 20th century, the South Caucasus had come to be firmly associated with the idea of tropicality. The dream of the Russian tropics was not a new fin-de-siècle passion. It had been haunting different layers of tsarist society since the moment Russia first secured its dominion over the region. In the opening decades of the 19th century, imperial Russia expanded south of the Caucasus Mountains. Its challenge was to rule over not only an extremely variegated social, cultural and political landscape, but also over an environment that differed from the rest of the empire – one that was expected to open new prospects for the imperial economy.
Imperial travellers, scientists and officials appropriated various parts of this unfamiliar space through images and tropes that emphasised the luxurious, exotic and exuberant nature of its scenery. The spectrum of meanings attached to the South Caucasus was, one way or another, associated with the idea of the region's purported tropicality. Much as early 19th-century India increasingly came to be perceived as part of the broader tropical world – as David Arnold has shown in his study The Tropics and the Traveling Gaze (2006) – so tsarist visitors discovered the South Caucasus as capable of yielding 'tropical' commodities to satisfy the needs of the metropole. Speculations about its tropical climate defined much of the imperial attitude towards the region, which was believed to be able to produce nearly everything the southern climate might offer and nearly everything the imperial cause might need. [Images: views of the Tbilisi Botanical Gardens, Georgia, c1870. Courtesy the NYPL digital collections.] Different authors entertained different opinions as to which parts of the region best matched their expectation of the tropics. Such was the great lowland of modern-day Azerbaijan, which, an 1836 survey assured, 'will become a nursery for tropical plants and will substitute, for Russia, for Persia, India, and South America.' This survey, commissioned by the imperial authorities as a compendium of statistical data about the South Caucasus to expose its natural riches, assessed that most of its territory possessed tropical qualities. To make them work for the empire, the improvement of local agriculture was needed so that 'the South Caucasus, as a colony, could satisfy the demands of Russian manufactures by its tropical and southern production in raw form.' Such an outspoken mercantile colonial vision of Russia's new territorial acquisition, driven by fantasies about its alleged tropicality, was a product of discourses about empire and environment that tsarist officialdom borrowed from Western Europe. Colonialism was a joint enterprise, in which the cross-fertilisation of ideas and policies across imperial states and their dependencies contributed to the emergence of the shared vocabulary of imperialism, conceptualised by Christoph Kamissek and Jonas Kreienbaum as 'the imperial cloud'. One of those who drew extensively on the trans-imperial repertoire of knowledge, images and practices was Egor Kankrin, the empire's finance minister and one of the first senior tsarist officials to formulate the concept of the South Caucasus as a colony of exploitation. In 1827, he wrote: 'Not without reason, the Transcaucasian province can be called our colony, which should bring for the state very important benefits of the products of southern climates.' In the second quarter of the 19th century, when these ideas proliferated, nobody knew what exactly these colonial commodities could be. In the absence of any reliable scientific data, the imperial bureaucracy's fantasies about the region's tropical climate led officials to believe that the abundant supply of sunshine in the South Caucasus would allow for the cultivation of 'southern' and 'tropical' crops, much needed by tsarist commerce, medicine and industry. Environmental imaginaries of this kind resulted in ambitious early undertakings by Russian imperial agents. Today, Aleksandr Griboedov is mostly known as a poet and playwright. However, apart from his literary passion, he had a passion for empire.
As a tsarist diplomat in Persia, he took pains to expand Russia's imperial sway south of the Caspian Sea. The neighbouring South Caucasus, in his eyes, deserved a special arrangement akin to that of India vis-à-vis the British Empire. In 1828, Griboedov put forward a project for a Transcaucasia Company, which would administer the province in a fashion similar to that of the East India Company. He bemoaned the fact that the Russian Empire imported commodities of the hot climates from abroad while it could obtain its own 'southern and even tropical production' from the South Caucasus. The Transcaucasia Company, as a concerted effort of entrepreneurial capitalists, would remedy the problem by producing, manufacturing and exporting 'colonial products' to the Russian metropole and the most distant parts of the globe. The death of Griboedov at the hands of an angry mob in Tehran the following year buried the project, but not the idea of tropicality. The local administration was keen to employ foreign – mostly French – experts in tropical agriculture to put these ideas into practice. Among those who came to the South Caucasus was the botanist and agriculturalist Joseph-Elzéar Morénas, who had a long record of colonial service in India and Senegal. His stay in Africa made him a vocal critic of slavery and brought him to Haiti, where Morénas admired the achievements of the anticolonial revolution. He resented the colonial slave trade but marvelled at colonial commodities. In 1829, he ended up in the South Caucasus at the invitation of the government. Instructed to identify the most suitable areas for plantations of exotic crops, Morénas suggested introducing sugarcane, oranges, lemons, coffee trees, indigo and other 'plants of the tropics', arguing that some parts of this region were 'not inferior to the best colonies' in terms of their climate. The main hazard awaiting planters, he warned, was the local fevers, not unlike those of tropical colonies. Morénas fell victim to one himself in 1830. The corollary of these governmental pursuits – and an echo of Griboedov's project – was the Transcaucasian Society for the Advancement of Agricultural and Manufacturing Industries and Trade. Established in 1833, it concentrated its efforts on the acclimatisation of exotic plants and their dissemination across the South Caucasus. In its own experimental farm and the garden in the region's capital, Tiflis (today's Tbilisi), the society attempted to cultivate Chinese indigo, olive trees, Egyptian cotton, tobacco, sugarcane and other crops, all with mixed success and almost no effect on the local economy. After its dissolution in 1845, the society gave way to the Caucasus Society for Agriculture, designed to put the issue of acclimatisation on a scientific footing, while its experimental nursery was transformed into the Tiflis Botanical Garden, which also tried its hand at acclimatising southern plants for plantation purposes. If tangible results of this early phase of the tropicalisation endeavour were to be found anywhere, however, it was in another botanical garden, in Sukhum, a major tsarist outpost in Abkhazia on the Black Sea coast. Founded in 1840 by Lieutenant General Nikolai Raevskii, the commander of the fortified military line along the coastline and a passionate botany aficionado, it could boast some of the most exotic plant species growing in the open air anywhere in the Russian Empire, including varieties of citrus trees.
An officer who visited the garden in 1842 gleefully anticipated that, thanks to it, the empire would have its own 'oranges, lemons, almonds, olives, cotton, the best tobacco, and, maybe, coffee, tea, cork as well as many pharmaceutical plants.' In just a few years, the first tea shrubs germinated in the garden's soil. Around the time that Britain introduced Camellia sinensis to the Darjeeling area to become self-sufficient in the production of its favourite drink, Russia pursued the same goal in the South Caucasus. Unlike in Britain's case, however, it took several more decades before the first large tea plantations were set up in the region. The rise of tea cultivation in the South Caucasus was a trans-imperial story from the outset. Walter Tschudi Lyall was the man behind it. A British colonial officer in India, he was a nephew of the chairman of the East India Company. His younger brothers were senior officials in the service of the British Raj, whose careers made them governors of Punjab and the North-Western Provinces. In the 1860s, Lyall attempted to establish a tea plantation in the Himalayan foothills, but failed, and so tried his luck in Tiflis, where he introduced himself to the local authorities as a tea planter with almost two decades of experience, offering his services in founding a company for the cultivation of tea on a large scale. The company planned to bring, just as the British did, Chinese labourers to work on plantations and to ship tea seeds and seedlings from China. The plans never materialised, for lack of money and because of the enormous difficulties the company met in the corridors of tsarist bureaucracy. No other development contributed more to the assertion of the South Caucasus's tropical image than the war between the Russian and Ottoman empires that broke out in 1877. In its wake, Russia annexed the town of Batum on the Black Sea shore and the whole adjacent area. Villages in the region were remarkable for their orange groves, cultivated by the native population, which testified to the unique conditions of the local climate, evidently suitable for exotic crops. When the region was opened to Russian colonisation after the exodus of most of its residents to the Ottoman Empire, it was this new milieu of Russian colonists who began introducing evergreen vegetation, both for ornamental purposes and for the sake of making their fortune. Among them was the Tiflis naturalist and ethnologist Nikolai Zeidlits, who attempted to cultivate tea, eucalyptus and other exotic crops in Chakva, north of Batum, but soon realised that this agricultural hobby obstructed his scholarly studies. Nevertheless, Zeidlits, under the influence of Lyall's manuscript about his tea experiments, encouraged another Chakva settler, the retired colonel Aleksandr Solovtsov, to start a tea plantation on his land. Planting material was brought from one of the largest centres of tsarist tea production, located well beyond the Russian Empire: the treaty port of Hankou in China (today part of Wuhan). One of the most important nodes of Russia's informal empire – it subsequently hosted a Russian territorial concession, with tea factories run by Russian entrepreneurs and employing Chinese manpower – Hankou supplied the first tea plants for the emergent tea enterprise in the South Caucasus in 1884. Solovtsov's plantation proved successful, and soon large capital followed suit.
Konstantin Popov, a scion of the founder of one of the largest tea companies of the empire, which also owned a plantation in Hankou, bought a large area next to Solovtsov's estate in 1892 and established a cutting-edge tea plantation with a tea factory. Tea was not the only transfer from China that Popov implemented. Besides plants, he brought people – just as Lyall had proposed two decades before him. A dozen skilled Chinese tea farmers, together with an expert on tea cultivation, Liu Junzhou, became the first example of labour transfer from a semi-colonial treaty port to this part of the Russian Empire. The bringing of the Chinese was preceded by heated discussions about the practicability of such a move, which betrayed acute racialised concerns among Russian commentators. The phantom of the 'yellow peril', well entrenched in the mindset of tsarist society at the turn of the century, and the fear of racial deterioration loomed large. As one article in the official local newspaper put it, 'however desirable it is to acclimatise the tea shrub here and free ourselves from the multimillion tribute that we pay to China for tea every year, it is nevertheless even more desirable to free ourselves from the necessity to acclimatise here the Chinese themselves.' The author of this piece compared the Chinese with phylloxera, a vine pest that devastated vineyards across Europe, tsarist wine-growing areas being no exception, noting that it was still possible to fight the plant pest, 'whereas one cannot get rid of the Chinese by any means.' The government was a latecomer to the tea plantation business in the South Caucasus. In 1895, when Popov's workers were harvesting tea for the first time, it organised an expedition to China, Japan, Ceylon and the Himalayas to bring specimens of tea and other southern crops to the Russian Empire. As the expedition set off for the tropics, the Department of Crown Domains acquired nearly 17,500 hectares of land in Chakva to establish a state-owned experimental and, as its title suggested, 'colonisation' estate, the main purpose of which was to receive, acclimatise and grow the exotic plants the expedition was expected to bring back. This venture was an astonishing success, turning the estate and, eventually, the vicinity of Batum into a quasi-tropical landscape with vast plantations of tea, citrus trees, bamboo, loquat, Japanese persimmon and many other exotic species. By 1915, the Chakva estate had the largest tea plantations, amounting to almost 550 hectares. Popov lagged behind with 140 hectares, while 200 private farmers together cultivated slightly more than 200 hectares. In the 1890s, two botanical gardens, one in Sochi and one in Sukhum, were established and, a decade later, experimental stations were added. Few people in the Russian Empire were as obsessed with the tropics as the mastermind behind the latter's creation, Pavel Tatarinov. He travelled extensively. His dreams about the tropical world took him to South America, where he admired the marvels of the 'earthly paradise' but remonstrated that 'semi-civilised' people were turning it into 'hell'. He made trips to Algeria to explore the experimental Jardin du Hamma, and to the French Riviera, where he visited Villa Thuret in Antibes, a botanical garden and an acclimatisation and research facility dealing with tropical plants.
In conversation with Tatarinov and, later, in a separate essay, its director Charles Naudin argued that the villa, itself inspired by British experimental colonial gardens, could in turn serve as a model for similar institutions in Russia's southernmost possessions. In 1885, Tatarinov took the most important decision of his life, purchasing land on the coast of the Black Sea near Sukhum. There, he used his considerable skills to create a tropical oasis in the open air, which turned into the most spectacular showcase for exotic vegetation on the whole coast. Tatarinov envisaged turning the coastal strip west and south of the Caucasus Ridge into a nearly tropical realm, where incoming settlers from the metropole would cultivate whimsical plants for aesthetic pleasure and the production of commodities. Inspired by the French, he spoke in favour of establishing experimental stations in Sochi and Sukhum and, as soon as they were founded, he became the director of the latter. Reality, however, betrayed these expectations. Debates over which kinds of plants should be given preference ensued in the following years. Many opted for more down-to-earth pursuits, such as the introduction of traditional fruit, cereals and vegetables to the coastal zone of Russian colonisation. Others insisted that wasting the close-to-tropical conditions of the region on plants of the temperate zone was unreasonable. More than a decade later, the new director of the Sochi, and later Sukhum, experimental stations, Vasilii Markovich, described the rivalry between the proponents of growing ordinary Russian plants and the supporters of southern vegetation as a battle between 'cabbage and orange', arguing that 'where ananas [pineapple] and other exotic fruit grow, there cannot be room for cabbage and potato.' Tatarinov would certainly have agreed. However, his quest for the Russian tropics took him southwards. Disillusioned with the not-infrequent freezing temperatures and snowfalls of Sukhum winters, he acquired a new estate near Batum in 1898, whose climate seemed a much better fit for his tropical garden. By the time Tatarinov moved there, the region had already been recognised by scholars as the Russian Empire's 'subtropical' corner. No one did more for this idea to take hold in the imagination of fin-de-siècle Russia than Andrei Krasnov, one of Russia's foremost scholarly experts in tropical flora. Krasnov was a member and the public face of the governmental tropical expedition of 1895. His resounding article, published the same year under the telling title 'The Russian Tropics', was meant to put the long-standing idea of the South Caucasus's tropicality on a scientific footing. Krasnov argued that regions with a tropical outlook – humid, winterless and rich in rainfall – could be found far beyond the tropics themselves. 'Subtropical' thus meant 'tropical' in most respects save for the geographic one. With the warmest winters and the highest precipitation in the Russian Empire, red laterite soil, and a number of indigenous evergreen plant species that formed the undergrowth of local woods, the Batum region reminded Krasnov of Java and Ceylon. What set it apart, in his view, was the want of genuinely exotic vegetation typical of tropical rainforests, which had existed there in the prehistoric era but had mostly perished during the Ice Age.
In his later writings, Krasnov suggested correcting this historical injustice and restoring the region's appearance to its 'authentic' tropical condition by reintroducing exotic plants from the Global South. This endeavour underpinned many of the activities of the Chakva estate in the Batum region. Krasnov did not feel it was enough, arguing that the tropical transformation of the local environment could be achieved only with the help of a botanical garden. This acclimatisation institution, similar to British colonial gardens in India and the Dutch garden of Buitenzorg in Java, as Krasnov wrote, would 'restore' the prehistoric flora of the area and would facilitate the dissemination of tropical and subtropical plants along the coast, so that settlers would be able to produce 'colonial' commodities on Russia's own home turf. The Batum Botanical Garden, established in 1912, was conceived by Krasnov as having a broader appeal to the public. He wanted it to host not only exotic plants, but also exotic human beings – to be an 'ethnographic exhibition' or an 'exhibition park'. In essence, Krasnov designed a human zoo of unique proportions. While the structure of the garden represented various (sub)tropical parts of the globe – from Japan, Ceylon and Florida to Australia, New Zealand and Chile – the garden's sections were to be populated by these places' Indigenous people, 'placed within the real conditions of the nature that nurtured them.' Amid palm trees and banana plants, the humans on display would feel at home and would serve visitors delicacies made of tropical fruit grown on the spot, sell handicrafts, and entertain their guests in many other ways. Such a spectacle of tropicality and race was likely inspired by what Nigel Rothfels in his study Savages and Beasts: The Birth of the Modern Zoo (2002) termed the 'Hagenbeck revolution' – a then-new way of exhibiting animals and people in their 'natural habitats', promoted by the zoo tycoon Carl Hagenbeck in the early 20th century. If implemented, Krasnov's daring creation would have been unparalleled in the history and practice of human zoos in Europe, but his vision never materialised. The First World War halted the development of the garden, and Krasnov died in late 1914, unsure about what kind of future awaited the fruits of his years-long efforts. Despite all the horrors and anxieties that the Great War brought to the Russian Empire and, particularly, to the South Caucasus, there was room for excitement. The empire was expanding in Asia – for one last time before its approaching end – and so were the tsarist (sub)tropics. In 1915, the imperial army advanced along the Ottoman coast, occupying new localities with orange orchards and evergreen flora. A correspondent for the official local newspaper excitedly wrote about the crossing of the border between the Batum region and Russian-occupied Ottoman Lazistan: 'One more step to the south, and we are in our new subtropical possessions.' With the transfer of Anatolian territories to Russia, he noted, 'the dream of the poet of the Russian subtropics, the late professor A N Krasnov, comes true. The soil of these areas is suitable not only for the growth of oranges and lemons; tea will grow perfectly here.' Indeed, Russian tea planters were quick to petition officialdom with requests to start plantations in the 'new' regions.
State agronomists began analysing which areas of Anatolia were most suitable for this purpose, suggesting that up to 16,000 hectares were available for prospective plantations. After the Russian Empire disintegrated and Turkey reclaimed its territories a few years later, it was the Turkish government that brought the idea of tea plantations in Anatolia to fruition, in the 1920s. Another success at tea planting, in neighbouring Persia, was at least partially based on the tsarist tropical experience. In 1901, Iran's tea pioneer Prince Mohammad Mirza visited the Chakva estate and brought back from it new knowledge about methods of tea cultivation, along with tsarist specialists. His first tea plantations in Gilan came into being thanks to Russian imperial expertise. Plant-based industries and transfers in the South Caucasus came hard on the heels of those undertaken by Russia's imperial rivals and allies. Surprisingly, however, they also served as models for countries further south to follow. Oleksandr Polianichev's project 'Tropics of Tsardom: Plants and Empire in the South Caucasus, 1800s–1917' is supported by Sweden's Riksbankens Jubileumsfond.
Oleksandr Polianichev
https://aeon.co//essays/how-tsarist-russia-sought-to-make-a-tropics-on-the-black-sea
https://images.aeonmedia…y=75&format=auto
Stories and literature
Long derided as mere coincidences, acrostics in ancient poetry are finally being taken seriously – with astonishing results
Ten years ago, one of the most disruptive events in my intellectual life occurred at a dinner party at my house. My friend Richard Thomas, who had just given a talk at Baylor University, mentioned that a student of his had discovered an 'Isaiah acrostic' in Vergil's Georgics, a 1st-century BCE poem ostensibly about farming but really about life and the universe. This remark simultaneously opened the door to two phenomena in ancient Greek and Latin poetry that I had not really thought about, despite a lifelong career in Classics: acrostics and Judaism. The relationship between the biblical and the classical traditions has always been fraught. As Tertullian testily asked in his screed against pagan writers: 'What has Athens to do with Jerusalem?' Similarly, St Jerome felt compelled to abandon the classical authors he loved after a nightmare vision in which the judge accused him of being a Ciceronian, not a Christian. One of the many reasons Vergil is central to the Western tradition is that his Fourth Eclogue, which portrays the birth of a miraculous boy 'sent down from heaven' to inaugurate a new Golden Age, helped calm these fears: it was seen by readers from late antiquity until the 18th century as a pagan prophecy of the birth of Christ, thereby allowing Christianity to assimilate the Classics rather than merely rejecting them. Post-Enlightenment readers, however, tended to react against the Christian interpretation – they had no desire to view Athens through the prism of Jerusalem. Whatever one may think about the supernatural dimension, there is abundant evidence for personal and intellectual contact between Jews and non-Jewish Greeks and Romans before and after the birth of Christ. Jews composed something like 10-20 per cent of the population of the Roman Empire; there are many overt references to Jews and Judaism in classical texts; and the Septuagint – the Greek translation of the Hebrew scriptures undertaken in the 3rd century BCE – would have been accessible to educated non-Jewish people throughout the Mediterranean world. Classicists and intellectual historians should be paying far more attention than they presently do to the impact of Jewish texts and culture on classical authors. In my paper 'Was Vergil Reading the Bible?' (2018), I argued that the answer to that question is probably 'Yes', and that at least some scholars are beginning to realise that Jewish themes are an important component of his meaning. Scholars' lack of attention to acrostics, on the other hand, may stem more from an intuitive sense that they are beneath the dignity of sophisticated authors. Indeed, acrostics are an art form simple enough for a child to create. The most basic kind is a word spelled vertically by the first letters of successive lines: 'Catches mice, / Adorable whiskers, / Tail's up – look out!' That such vertical words exist in the columns of long poems is undeniable; the difficulty consists in figuring out whether they are intentional. The vertical CAT in this example has such an obvious connection to the horizontal text that no one could reasonably deny its intentionality. But when acrostics are embedded in real poetry and no one is telling you to look for them, most are not so obvious. As with any disruptive phenomenon, there are both enthusiasts, whose close-meshed nets catch some dubious fish, and deniers, who insist that even the big ones should be thrown back. For many years, insanity was a common metaphor for those who believe that acrostics in ancient poetry are intentional.
The most influential one-paragraph Classics article ever written, Don Fowler's playful intervention about the acrostic MARS spanning Vergil's description of the Gates of War, ends with the memorable sentence: 'I await the men in white coats.' What Fowler did not anticipate was that, four decades later, acrostics would begin to be recognised as not just an occasional jeu d'esprit in ancient poetry, but a widespread phenomenon and a major source of meaning. There are several reasons why believing that some acrostics in Greek and Latin poetry are intentional is both sane and rewarding. First, ancient writing and reading practices were more congenial than ours to letterplay and vertical 'decoding'. Texts consisted of blocks of capital letters with no spaces in between, rather like our word-search puzzles. As one unrolled a scroll, the columns would appear before the rows, and sometimes the first letters of verses were even enlarged and separated by dots. Second, ancient authors such as Cicero actually talk about acrostics, especially in the context of the Sibylline Oracles. Third, the vertical axis allows for both permanently unresolvable ambiguities, which is a plus for learned writers conveying complex messages, and the addition of a 'voice' freed from the horizontal constraints of metre, authorial persona and decorum. Acrostics always have, in theory, plausible deniability – even if that deniability seems rather implausible sometimes, as in the modern example from Arnold Schwarzenegger to members of the California State Assembly. Fourth, they are delightful 'Easter eggs' for those hardy souls who read carefully, like the undergraduate student who published an article on an acrostic she had discovered during my class. Finally, vertical texts can parallel and enhance the 'Great Conversation' among horizontal texts that lies at the heart of the humanities. Vergil's 'Isaiah acrostic' – the great disruptive event of my intellectual life – participates in an intertextual conversation involving snakes, desire, and (im)mortality that ultimately traces back to the most consequential of biblical stories: the serpentine seduction of Eve. One of the more fascinating parts of my journey has been getting to know the dipsas, a snake whose name comes from the Greek for 'thirsty' (as in 'dipsomaniac'). This unsavoury critter, which appears frequently in ancient literature and material culture, was thought to experience unquenchable thirst itself and to induce that state in its victims. A Greek magical amulet, apparently intended to aid in human fertility by reducing excess uterine blood, pictures two snakes flanking an altar and bears the inscription 'Dipsas-Tantalus, drink blood!' Tantalus (source of 'tantalise') is the sinner punished in the underworld with unending hunger and thirst, as fruit and water constantly recede just out of his reach. Though there is some scholarly disagreement about how to interpret dipsas here – it could be referring to the snake, or it could be describing Tantalus as 'thirsting' – literary evidence suggests that Tantalus and the dipsas are closely connected, and that both are associated with sexual desire and sexual morbidity. This is certainly true of the dipsas in Roman poetry. My first encounter with the dipsas was actually in the bedroom of Ovid's girlfriend in the Amores.
Here, ‘Dipsas’ is the name given to one of the stock figures in Roman elegy, the aged, drunken procuress (or lena), who spends most of the elegy instructing her young charge in how to be a tease and squeeze more money out of her clients. Ovid ends the poem with a curse activating the etymology of her name: May the gods give you no home and an impoverished old age,and long winters, and perpetual thirst.The association of sexual desire with unquenchable thirst, and sometimes with snakes, is in fact an Ovidian leitmotif. In a hilarious dirge lamenting the poet’s impotence despite the proximity of his extremely desirable girlfriend, he compares himself to Tantalus, ‘thirsting in the middle of the waves’. Ovid’s masterpiece, the Metamorphoses, depicts a plague whose symptoms bear a suspicious resemblance to lovesickness – fever, blushing, insomnia, shortness of breath, and insatiable thirst – caused by snakes infecting springs and lakes. This snake, so entwined in ancient literature and material culture, plays an essential role in the Vergilian acrostic Later authors pick up on this connection as well. In Lucan’s Civil War epic, when Aulus, a soldier in the army of the Stoic hero Cato, is bitten by a dipsas in the Libyan desert, his symptoms again recall the imagery of lovesickness: Look, the poison enters silently, and the devouring firegnaws his marrow and kindles his insides with wasting heat.The Greek satirist Lucian, apparently drawing upon the Lucan passage, describes a Libyan statue of a dipsas victim: For on [the monument] had been carved a man, as they depict Tantalus in paintings, standing in a lake and reaching out for the water to drink from it, and that wild beast – the dipsas – which had clung to him and twined around his foot.Though the piece ends on a humorous note, likening these insatiable cravings to Lucian’s own wish to converse with his friends, it clearly establishes the association of the dipsas with various forms of human desire. This bizarre and terrifying snake, so entwined in ancient literature and material culture, will have an essential role to play in the Vergilian acrostic. Nicander, a Greek poet of the 2nd century BCE, is not exactly a household name, and for good reason. His compendious didactic poems about snakes, other poisonous creatures and antidotes are hardly congenial to modern tastes. His older editors complained that he had little poetic talent, knew little about his subject matter, and brings little pleasure to his readers. Though scholars have recently begun to show the witty sophistication with which he transforms his literary predecessors, I must confess that I find reading him in Greek rather tedious: he’s short on story, and there are many lines where I have to look up every word, only to find I don’t know what half of them mean in English either. Nevertheless, I have lately come to realise that he is a crucial link in the chain connecting some of the Western tradition’s most important texts. Of the many, many snakes he describes, along with the usually revolting effects their bites have on the human body, two stand out. One is the viper, from the Latin for ‘viviparous’ (live-young-bearing). This serpent has the amiable quality of biting off her mate’s head while he is impregnating her, but she gets her comeuppance when her young eat their way out of her womb. Human terms like ‘bedmate’ and ‘vengeance’ associate this phenomenon with the murderous family dysfunction of Greek tragedies. The other snake is our friend the dipsas. 
Nicander introduces these two species right after a snake simply called ‘The Female’, which gives a clue about what he is up to. Nicander’s second and final dipsas passage tells us explicitly that it looks like the female viper. This episode is the poem’s most striking, both because it relates a highly significant story and because it contains an indisputably intentional acrostic of the poet’s own name. He begins by describing some graphic symptoms of a dipsas bite:
Above all, the form of the dipsas will always be similar to the viper,
the smaller one [ie, the female], and the doom of death will come more swiftly
to those whom this fearful snake assails: indeed, its slender tail,
always somewhat dark, gets black at the end;
and at its bite, the heart is utterly enflamed, and all around with fever
the parched lips wither with scorching thirst;
but he [the victim], like a bull bending over a river,
with gaping mouth takes in measureless drink until his belly
bursts his navel and pours out the too-heavy load.
He then relates a ‘primeval myth’ with universal consequences:
ὠγύγιος δ’ ἄρα μῦθος ἐν αἰζηοῖσι φορεῖται,
ὡς, ὁπότ’ οὐρανὸν ἔσχε Κρόνου πρεσβίστατον αἷμα,
Νειμάμενος κασίεσσιν ἑκὰς περικυδέας ἀρχάς
Ιδμοσύνῃ νεότητα γέρας πόρεν ἡμερίοισι
Κυδαίνων· δὴ γάρ ῥα πυρὸς ληίστορ’ ἔνιπτον.
Αφρονες, οὐ μὲν τῆς γε κακοφραδίῃσ’ ἀπόνηντο·
Νωθεῖ γὰρ κάμνοντες ἀμορβεύοντο λεπάργῳ
Δῶρα· πολύσκαρθμος δὲ κεκαυμένος αὐχένα δίψῃ
Ρώετο, γωλειοῖσι δ’ ἰδὼν ὁλκήρεα θῆρα
Οὐλοὸν ἐλλιτάνευε κακῇ ἐπαλαλκέμεν ἄτῃ
Σαίνων· αὐτὰρ ὁ βρῖθος ὃ δή ῥ’ ἀνεδέξατο νώτοις
ᾔτεεν ἄφρονα δῶρον· ὁ δ’ οὐκ ἀπανήνατο χρειώ.
ἐξότε γηραλέον μὲν ἀεὶ φλόον ἑρπετὰ βάλλει
ὁλκήρη, θνητοὺς δὲ κακὸν περὶ γῆρας ὀπάζει·
νοῦσον δ’ ἀζαλέην βρωμήτορος οὐλομένη θήρ
δέξατο, καί τε τυπῇσιν ἀμυδροτέρῃσιν ἰάπτει.
A primeval myth is told among people,
that, when the eldest blood of Kronos [Zeus] held the sky,
[acrostic begins] having allotted to his brothers glorious realms far apart,
in his wisdom he gave Youth as a reward for mortals,
honouring them: for indeed they told on the stealer of fire [Prometheus].
Fools, they got no joy from it, because of their negligence:
for out of weariness they entrusted their gift to a stupid ass to carry.
Skipping along, his throat burning with thirst,
and seeing in its hole the dragging beast,
with terrible folly he begged that deadly one to help,
[acrostic ends] fawning: but he [the snake] asked the witless one for the load
he had taken on his back as a gift: and he [the ass] did not refuse the request.
From that time, the dragging serpent casts off its aged skin,
but evil old age attends mortals: and the destructive beast
received the parching thirst of the braying one, and imparts it with its feeble blows.
A section of Nicander’s Theriaca manuscript dating c11th century CE, illustrating his acrostic signature. Aeon/the BnF, Paris
Not only is Nicander marking his territory, so to speak, with his vertical signature, but he is also activating the meaning of his name: andros means ‘of man’, and nik- (as in ‘Nike’) means victory. Such Greek compounds are frequently ambiguous: nik-andros could signify either the victory of man or the victory over man. The latter is obviously more appropriate here, since the wily serpent has bamboozled humankind out of eternal youth. Where did Nicander get this idea? The story was treated by a number of tragic, lyric and comic poets of the 6th and 5th centuries BCE.
The earliest source we know for a tale about a snake thwarting man’s immortality is the Babylonian Epic of Gilgamesh, predating Nicander by a millennium or more, in which the hero, while taking a dip in a pool, has a plant called ‘Man Becomes Young in Old Age’ stolen from him by a snake. There was plenty of cross-fertilisation among the cultures of ancient Greece, Rome, Egypt and the Levant; the Bible itself, especially the stories about the early world, contains much recycled material. So it would be claiming too much to say that Nicander could only have derived his deceitful talking snake directly or exclusively from the Jews. Nevertheless, one feature of Nicander’s version associates it more particularly with the Genesis story: the implicit theme of the war between the sexes. Since Greek andros means not just human but male human, as opposed to the unisex anthropos, NIKANDROS may also suggest the victory of woman over man. In Genesis 3, God pronounces judgment upon the serpent and the Woman who caused Man to fall:
The LORD God said to the serpent,
‘Because you have done this, cursed are you above all cattle, and above all wild animals; upon your belly you shall go, and dust you shall eat all the days of your life. I will put enmity between you and the woman, and between your seed and her seed; he shall bruise your head, and you shall bruise his heel.’
To the woman he said,
‘I will greatly multiply your pain in childbearing; in pain you shall bring forth children, yet your desire shall be for your husband, and he shall rule over you.’
Nicander’s dipsas is strongly associated with the female viper, who represents both pain (death, in fact) in childbearing and a dysfunctional relationship with her husband, in which she is the dominatrix. Furthermore, the symptoms of the dipsas’s bite – fever, parching thirst, and drinking until the fluid explodes out of one’s navel – sound suspiciously similar to those of the lovesickness depicted by Roman authors. The coupling of viper and dipsas, especially right after the snake called ‘The Female’, suggests that Nicander was alert to the relationship between the Fall of Man and the war between the sexes. Though there are obviously some differences, it is plausible to suppose a genetic connection with the Genesis episode. The supremely learned poet would surely have been interested in – and eager to show off his knowledge of – this memorable Jewish story, available in his day in Greek translation, in which a talking snake plays a leading role. In my article on Original Sin and Vergil’s Orpheus and Eurydice episode in the Georgics, my starting point was the acrostic ISAIA AIT, ‘Isaiah says’, in the context of a woman dying by snakebite. Orpheus’s new bride Eurydice, fleeing from her would-be rapist Aristaeus (to whom the Isaiah-like prophet Proteus is recounting the tale), encounters a huge water-snake. Several cue words point to the ‘huge’ acrostic, which lies ‘before the [metrical] feet’ of the hexameter and along the ‘banks’ of the poem:
illa quidem, dum te fugeret per flumina praeceps,
Immanem ante pedes hydrum moritura puella
Seruantem ripas alta non uidit in herba.
At chorus aequalis Dryadum clamore supremos
Implerunt montis; flerunt Rhodopeiae arces
Altaque Pangaea et Rhesi Mauortia tellus
Atque Getae atque Hebrus et Actias Orithyia.
Ipse caua solans aegrum testudine amorem
Te, dulcis coniunx, te solo in litore secum,
te ueniente die, te decedente canebat.
She, indeed, while she was fleeing you headlong by the stream,
a girl destined to die, did not see a [acrostic begins] huge water-snake
before her feet guarding the banks in the high grass.
But her sister-chorus of Dryads filled the high mountains
with a wail; the peaks of Rhodope wept,
and high Pangaea [‘All-Earth’] and the martial land of Rhesus,
and the Getae, and the Hebrus, and Attic Orithyia.
Orpheus himself, solacing his miserable love on a hollow lyre,
kept singing you [acrostic ends], sweet wife, you to himself on the lonely shore,
you with the coming, you with the departing day.
I argued that Eurydice, the ‘girl destined to die’, is a kind of Eve figure, killed by a snake and mourned by universal nature. What I did not know then – but came to realise thanks to another chance conversation, this time with Michael Reeve about his article ‘A Rejuvenated Snake’ (1996-7) – is that Vergil’s biblical acrostic alludes to Nicander’s. That is, Vergil was not only recalling the Genesis episode that brought death into the world, but recalling it through the lens of a Greek author who did the same – and expecting at least some of his learned readers to get both allusions. While this sort of ‘window reference’ is common for highly literate classical authors, the fact that the Bible is involved sheds new light on the highway between Athens and Jerusalem, suggesting that ancient non-Jewish readers’ access to and interest in the Septuagint may have been far greater than is commonly supposed. A characteristic feature of window references is the ‘correction’ of one’s predecessors, and Vergil’s is no exception. For instance, he adds the water that the Greek poet strangely omits: why else would the ass have asked the dipsas to help him with his thirst unless the snake were, as Vergil has it, ‘guarding the banks’? Vergil corrects the name as well, calling his water-guarding snake hydrus, from the Greek for ‘water’, the opposite of dipsas, ‘thirsty’. Like Nicander and the Bible, Vergil implicitly depicts a battle of the sexes, but he inverts the roles and assigns the blame to men. Eurydice dies once fleeing from a rapist, and again when Orpheus makes the fatal mistake of looking back as he is leading her out of the underworld. Vergil and Nicander show both similarities and differences in their divine rewards and punishments. In Nicander’s story, the god of the sky gives humankind a chance at eternal youth as a reward for our tattling on Prometheus, stealer of fire. In Vergil’s, the god of the underworld gives a human a chance to escape from his realm temporarily as a reward for Orpheus’ transcendently beautiful song. The stories diverge in that Nicander’s Man, weary from carrying the precious gift, entrusts it to the back of a foolish ass, who becomes a potent symbol of appetite in conflict with rational intelligence. Yet there is a certain similarity to Orpheus nonetheless: emotionally wearied by having Eurydice at his back, he foolishly and irrationally gratifies his desire to see her rather than delaying his gratification so as to save her. I have argued that the story spanned by Nicander’s signature serpent alludes to the biblical Fall, and that Vergil in his Orpheus and Eurydice episode incorporates both Nicander and the Bible, signalling the allusion with a biblical acrostic of his own: ISAIA AIT. But that leaves a final question.
Why is the dipsas, in particular, the star of Nicander’s Fall story, since all snakes renew their youth by shedding their skin? Nicander’s association of the Fall with unslakable thirst, which we attempt to satisfy in ways that lead to our destruction, shows his insight into both the biblical narrative and the nature of evil. In the Jewish and Christian understanding, the constant result of our primordial separation from God – brought about by a malicious serpent – is an insatiable yearning for that lost communion, which the psalms and prophets frequently describe as thirsting for God: ‘As the deer longs for streams of water, so I long for you, O God.’ God promises that this thirst will be quenched: ‘Ho, everyone who thirsts, come to the waters.’ But there’s one exception. In the Edenic vision of God’s holy mountain, ‘The wolf and the lamb will feed together, and the lion will eat straw like the ox, but dust will be the serpent’s food.’ The serpent is the only animal left out of the party – just as God had promised in Genesis, ‘dust shall you eat, all the days of your life.’ The perpetual thirst of Nicander’s dipsas is a logical consequence of that unsatisfying diet. The dipsas may be functionally immortal, but only at the price of eternal misery; imparting torturous thirst to others does not actually bring the beast any relief. Like the biblical serpent, its motivation is pure malice. As St Ambrose declared: ‘No one ever healed himself by hurting another.’ In the present age of electronic venom, absorbing that lesson from Nicander’s etiological tale could help us all. Another reward of the humanities’ Great Conversation is the joy of seeing familiar words borrow the serpent’s power to ‘become young in old age’. It had simply never occurred to me that reading vertically might enhance my understanding of the horizontal narrative. Discovering this new dimension in Vergil and other ancient poets has both increased my appreciation of their genius and emphasised the benefit of learning the original languages, since acrostics vanish completely in translation. On the other hand, the potential importance of Jewish texts for classical authors is something that should be receiving more attention from all readers, scholars and students alike. As Italo Calvino observed: ‘A classic is a book that has never finished saying what it has to say.’ Vergil was not only reading the Bible, but reading it through the eyes of a Greek author who used the biblical story to enrich his own – and both authors left vertical clues whose significance is only now coming to light. In tracing the biblical serpent’s acrostic tail, we see how classical texts are enmeshed in a dense web of relationships that can surprise and invigorate us, even after thousands of years. The eternally thirsty snake may symbolise evil’s contagion and ‘victory over man’. But it can also symbolise the contagious, unquenchable thirst of the humanities – and humans – for the truth and beauty ever ancient, ever new.
Julia Hejduk
https://aeon.co//essays/why-it-pays-to-read-for-acrostics-in-the-classics
https://images.aeonmedia…y=75&format=auto
Illness and disease
The culture around breast cancer is full of positivity and femininity. But it comes at the expense of the marginalised
Every day when I open my eyes, my vision bobs between my bedroom and the horizon of sheets that have crept up the bed in the night. As a result of secondary breast cancer, I am paralysed from the waist down, and can’t drag myself up in the bed, so I remain slumped, inside and outside the day. When I use the remote control on my hospital bed to sit my back up, I slide down in the bed and my paralysed feet press against the bed end. I can’t trust myself to fix it, so I remain half under the sheets, waiting for my husband, mother or carers to come and release me. Secondary breast cancer is the bad one – the one you die of. The public picture of breast cancer is the primary version – the one you can possibly be cured of, if you’re willing to go through a mastectomy, radiation and chemotherapy. The public picture of breast cancer suggests that women who have been subjected to this ‘slash, burn, poison’ model of treatment should consider themselves survivors. Yet breast cancer can come back as Stage 4 at any time in a woman’s life, and secondary breast cancer (SBC) is incurable. Depending on the type of breast cancer – there are at least four subtypes – a secondary breast-cancer patient is on chemotherapy, immunotherapy or hormone therapy for the rest of their shortened lives, and treatment is life-altering for almost all SBC patients. I’m one of the unlucky ones. In January 2022, I woke up one morning and couldn’t lift my leg. We got me into hospital, dragging my right leg like a reluctant schoolchild, and soon discovered I had spinal tumours, in an uncommon location inside the spinal cord. A few days later, I lost feeling in my left leg, which turned out to be from a separate spinal tumour. I had radiation treatment which I was told could stabilise the tumour. But the nerve damage on my spinal cord was unlikely to improve, and I was told I’d probably never walk again. Not being able to walk means a complete loss of independence. It dashes any hope I’ve had for a period of respite in which I might travel, go to work, or even meet with friends. It’s forced me to accept that I will, almost definitely, not see my home country, Australia, ever again. I’ve spent most of the past decade travelling abroad and far away from the Antipodes, and it was this adventurous life that I thought I’d miss the most. But it turns out to be the severing of my ties to home, to the spare sound of the morning magpie and the smoky smell of fallen eucalyptus, that feels physically unbearable. They don’t tell you about permanent paralysis at SBC Breast Cancer camp, where the emphasis is often on ‘thriving’ with a terminal disease. They don’t tell you about liver failure or lung disease, which can make the last months of many patients’ lives almost unbearable. Then again, SBC is sneaky and has the ability to shape-shift and emerge malevolently in unexpected places. Aside from paralysis, I have a tumour in my left eye that has caused a retinal tear that makes it hard to see. Women I know have bladder tumours, bowel tumours, and all manner of bony tumours that affect their mobility. And then there is each SBC patient’s greatest fear: brain metastases, which can cause neurological symptoms such as seizures, palsy, and sensory alterations. All of this exposes the lie of efforts by mainstream charities to glamorise breast cancer – to celebrate ‘survivors’ with a barrage of pink merchandise, pink-themed events, and pink-ribbon fundraising. 
It’s a vehicle for the promotion of heteronormative femininity, one that’s largely about being pleasing, attractive and sexualised for the benefit of others. I find little common ground here with the visceral reality of what I am going through. As a person with cancer, I often feel stripped of the layers of other identities that I have casually worn for years. Now I am swaddled with other labels – a patient, a woman, a victim, a survivor. Yet keeping me company on this hard road are a group of feminist theorists who suffered through, thought with, and eventually died from breast cancer. Both Audre Lorde and Eve Kosofsky Sedgwick exposed cancer as something fundamentally discursive, dominated by the usual discriminatory and exclusive categories of gender and race. They have guided me through some of the unexpectedly radical, gender-fluid and contradictory dimensions of living with breast cancer. On the one hand it’s all ‘pink positivity’; yet, on the other, it can involve years of interventions that cause havoc with our hormones, make us lose our breasts and hair, and otherwise disrupt many of the conventional markers of gender. These two opposing pressures – to perform an extreme kind of femininity and to be forced to trouble it in my own body – have been a crucial aspect of my own experience. While I might feel powerless in the face of how this disease is undoing me, Lorde and Sedgwick have granted me a measure of solace – a sense of power, and a means to resist the political endowments of my condition. The pink ribbon itself is most closely associated with the Susan G Komen foundation – the largest breast cancer awareness charity in the United States, which hosts the Susan G Komen Race for the Cure every year. Founded in 1982 and named for the founder’s sister who died of SBC, the organisation very quickly became the most powerful in the breast cancer community. It spearheads the pink-ribbon campaigns of October, the designated ‘Breast Cancer Awareness Month’, hosts large charity galas, and has more than 200 corporate sponsors. Komen was not the origin of the breast cancer movement; that was arguably in the ‘women’s health’ movement of the 1970s, a branch of feminist activism that sought to fight the medical establishment’s disregard for women’s health issues (the most famous output of this is the book Our Bodies, Ourselves, first published by the Boston Woman’s Health Collective in 1970). The women’s health movement was far more radical than Komen and its offshoots. Rather than focus on positive survivorship, it emphasised how the medical profession ignored women and, especially, failed to strive for a cure or less interventionist and traumatic treatments. The emergence of charities such as Komen, the Estée Lauder Breast Cancer Campaign, or the Ralph Lauren Pink Pony campaign arguably represents not only the corporatisation but also the feminisation of the breast cancer movement. As Barbara Ehrenreich has written in Smile or Die (2009), her memoir and critique of breast cancer discourse: ‘Everyone agrees that breast cancer is a chance for creative self-transformation – a makeover opportunity, in fact.’ The way the mainstream breast cancer movement has aestheticised the disease goes hand in hand with social pressure to perform heteronormative womanhood.
Nowhere is this more evident than in the charity Look Good Feel Better: founded in the late 1980s, it offers women with breast cancer a free makeover (with products donated by cosmetics companies) which includes a class to teach them how to make up their chemo-ravaged faces. In other charitable efforts, pink glitter is sprinkled on cars, on yogurt pots, on bras, T-shirts and scarves. Conventionally attractive models and celebrities wear Ralph Lauren’s Pink Pony sweatshirts and sometimes even strip down to their underwear to encourage women to ‘check your boobs’ for lumps. Companies investing in pink-ribbon culture are rarely transparent about where their money goes, because it would reveal how little of it goes to research as opposed to endless awareness campaigns. For example, Yoplait, which ran a pink-ribbon campaign each October from 1998 to 2016, donated just 10 cents for each pink lid that consumers mailed back to the company. As the medical writer Gayle Sulik despairs in Pink Ribbon Blues (2011): ‘billions of dollars are siphoned into branding efforts instead of the prevention and eradication of disease.’ Komen currently donates less than 20 per cent of the money it raises to research, and a large proportion is poured back into costs related to fundraising, thus existing to perpetuate itself. This pink positivity is clearly highly gendered, focusing almost exclusively on those who identify as women. This is despite the fact that a small number of men get breast cancer, while transmen are increasingly presenting at doctors’ surgeries with lumps that need to be investigated. Indeed, breast cancer, and the experience of treatment, opens up an array of potential genderqueer identities. One might even argue that the heart of pink-ribbon culture is the demand to perform upbeat, cheerful and attractive femininity, even when shorn of one’s breasts, precisely so as to close off the genderqueer possibilities it otherwise raises. Even the emphasis on ‘checking your breasts’ exposes the bias of pink-ribbon cancer culture. It is an important, albeit scientifically controversial, message; it can also distract from awareness of other symptoms in other, less overtly ‘female’ parts of the body, especially of Stage 4 breast cancer. The first symptom of my own cancer was redness – not a lump – on the skin of my breast. This was 2019 and I was 35 years old. The following year, I started having chest pains, though they disappeared and it’s not clear if they were related to the cancer. But a later scan revealed a mass on my lungs, which turned out to be metastases. Within the breast cancer patient community, there is considerable discussion of femininity, womanhood and its loss. I messaged and spoke to some fellow members of METUPUK, the breast cancer advocacy group I belong to, about how their treatment did or did not relate to any sense of femininity. Meg was one of the first to respond (all names have been changed out of respect for correspondents’ anonymity) and her WhatsApp reply was one of the starkest: ‘Being bald I felt like an “it” (with) no gender.’ Seeing their female identity through the prism of their appearance was a common theme; baldness was frequently raised as a source of grief, as was anything to do with breast surgery. 
‘I had my mastectomy in 2016 and still not had my reconstruction,’ Anna wrote to me, ‘it made me less than a woman, having a flat side and not being able to wear a T-shirt or even a V-neck dress … without putting in a softy [prosthetic]!’ It’s notable that Anna said she became ‘less than a woman’ – not just that she felt less like one. For many women, our treatment chips away at the gender identity we have carefully built up over the course of our lives. Most women have to take hormones for years after primary treatment, and potentially indefinitely if they have hormone-positive Stage 4 breast cancer. This often involves a deliberate shutdown of the ovaries by monthly injections of either Zoladex (a luteinising hormone blocker) or Lupron (a gonadotropin-releasing hormone agonist), which can mimic menopause in premenopausal women; an aromatase inhibitor, which further lowers the oestrogen levels in postmenopausal women; and for Stage 4 women, new classes of blockers such as CDK4/6 inhibitors (which target not hormones but proteins in breast cancer cells) and selective oestrogen receptor degraders (SERDs). This melange of treatments essentially works to make oestrogen levels as low as possible so as not to feed hormone-dependent cancers. Given that around 70 per cent of breast cancers are hormone dependent, that is a large number of women grappling with the physical and psychological effects of profoundly altered hormones. One consequence is a change in libido and the physical ability to have sex. Meg wrote to me about her vagina becoming ‘the Gobi desert’, and some of us swapped tips for oestrogen-free lubricants in our WhatsApp group. Cheryl said: ‘I fell out of love with sex after the primary diagnosis … I feel absolutely nothing.’ Such feelings suggest a psychological as much as a physical change. A striking point was how many of the respondents assumed that I meant sex and intimacy when I asked about how breast cancer affected their gender identity. Femininity and sex are so closely intertwined in public culture that sexual intimacy is where many believe their identity is to be found, and damaged. Yet this brings with it certain genderqueer possibilities. Molly observed that trans and other gender-nonconforming people get breast cancer too, yet they’re excluded from pink-ribbon culture. She had a friend assigned female at birth who was transitioning to male, but his mother wanted him to hold off on all hormone treatment until he was 21 because of a family history of breast cancer. However, it may be that some hormone treatments prescribed during transition could decrease breast cancer risk. Some of the laundry list of hormone treatments for breast cancer are the same as drugs taken during transition – most notably Lupron, which can be used as a puberty blocker in children as well as adults, and aromatase inhibitors, sometimes given for so-called ‘precocious puberty’. There is so little research done so far on transgender individuals and breast cancer; one hopes there will be more soon. Molly found it hopeful, though, that the increased visibility of transmen and transwomen was encouraging more acceptance of fluid gender identities, and that genderqueer people were ever more welcome in grassroots breast cancer groups. Perhaps there are seeds of hope for a genuinely queerer breast cancer culture, which might see with greater clarity what a patriarchal medical establishment has elected to overlook.
Cracking open the association of breast cancer with biological womanhood can reveal other intersectional identities, especially the cross-pollination of gendered and racial selves. Sara, from METUPUK, wrote to me: ‘I’m from an ethnic community. That’s the worst thing, people from our community look at you and feel pity for you, [the] worst thing is going to an Asian wedding when you are diagnosed with cancer.’ The experience of breast cancer patients from Black, Asian and other marginalised groups is rarely present in media portrayals, and although groups like Black Women Rising exist to resist this marginalisation, in mainstream breast cancer culture, there is still only a token effort to change. In this context, the frantic performance of pink femininity takes on a specific hue; it has to be so visible precisely because it is so unstable and weak. As the scholar Amy L Brandzel has written, the ‘anti-intersectionality’ of pink-ribbon culture serves to close off patients’ connections ‘to transgender embodiments, queered affects, disabled communities’. Breast cancer is a direct strike against stereotypic womanhood – with the hair loss that accompanies chemotherapy, the early menopause wrought by hormone treatments and, perhaps most of all, the mastectomies, which involve a physically and psychologically violent act. Faced with this attack on normative femininity, pink positivity allows women to feel they not only retain womanhood, but that it has been augmented. While these efforts are designed to show that women with breast cancer are still wives, mothers and lovers, the effort that’s put into foregrounding the ‘feminine’ in breast cancer charity and awareness could be read as a performance of a new femininity – a truer one than before. An unusual number of late-20th-century feminist thinkers suffered and died from breast cancer. Perhaps the most famous is Lorde, the poet whose book The Cancer Journals (1980) still dis/comforts women living with the disease. A slim volume, it is searingly honest about the violence breast cancer does to the various veils that we otherwise hide behind: gendered, but also racialised, sexualised identities. She writes:
For months now I have been wanting to write a piece of meaning words on cancer as it affects my life and my consciousness as a woman, a Black lesbian feminist mother lover poet all I am [emphasis mine].
A lot of meaning lies nested in those three words ‘all I am’: they reveal the extent to which Lorde wrapped herself in both marginalised and generalised identities – a Black lesbian but also a lover and a poet. All these identities suffered from the invasion of cancer into her life and her consciousness, but by listing ‘Black lesbian feminist’ first, Lorde emphasises how at odds with the breast cancer experience they are. Lorde’s reflections on the disorienting effect of cancer may seem untethered from the physical sides of treatment. But her other writings put the violence of surgery front and centre, when she laments how the act of mastectomy untethers her from her femininity:
I believe that socially sanctioned prosthesis is merely another way of keeping women with breast cancer silent and separate from each other.
Lorde imagines an army of one-breasted Amazons marching to Congress and protesting the use of carcinogenic agents. She believed that environmental degradation was partially responsible for her cancer, beliefs that linked her to the feminist environmental movement of the 1970s.
Lorde is critical of breast cancer ‘positivity’, but her passages against mastectomy chime with other activists who tie the loss of a breast to a loss of female-ness. She reifies the relationship between the woman and the breast, and cries loudly against the attempt to break that relationship. The healing she experiences afterwards is wrapped up in her sense of womanhood. She describes the group of women in her life, including her partner, who rushed to help her when she was recovering from the mastectomy: ‘Perhaps I can say this all more simply; I say the love of women healed me.’ Sisterhood and solidarity are central to Lorde’s understanding of womanhood, and her recovery more specifically. Lorde’s fear of the broken relationship between woman and breast is echoed in the writings of Sedgwick, the feminist and queer theory scholar who also lived with and eventually died from secondary breast cancer. Sedgwick is considered one of the founders of queer theory, and labelled her own experience of cancer ‘an adventure in applied deconstruction’. For her, queer theory’s emphasis on ambivalences, penumbrae, erasures and fracturing helped her probe her own psychological responses to the disease. ‘I have never felt less stability in my gender, age and racial identities,’ she wrote, calling her own process of treatment a ‘dizzying array of gender challenges and experiments’. Her response was in many ways typical – she mourned the violent interventions that produced the bald head, the lack of breasts, the missing eyelashes – but she also considered herself lucky to have the crutch of queer theory to see her through. As she wrote in Tendencies (1993), Sedgwick coped by ‘hurling my energies … to the very farthest of the loose ends where representation, identity, gender, sexuality, and the body can’t be made to line up neatly together.’ Her career-long infatuation with queerness (she was, herself, in a long heterosexual marriage, an irony that did not escape her) helped her in at least one other way. In the 1990s, the same decade in which she was diagnosed, it was hard to think of incurable illness without thinking of the AIDS epidemic. Breast cancer and AIDS activism had been linked before: the AIDS Coalition to Unleash Power (ACT UP) was the inspiration for groups such as Breast Cancer Action, founded in 1990. Breast Cancer Action refused to produce the depoliticised material that came out of mainstream groups like Komen, and focused instead on lobbying for more research and probing into the causes of breast cancer. This also linked certain corners of the breast cancer movement with the environmental movement, as activists focused on possible carcinogenic chemicals and pollutants. Sedgwick’s involvement with ACT UP predated her cancer, and at the time she was diagnosed she was deeply involved in setting up a local chapter and providing emotional support for a distant, dying and dear friend with AIDS. After her diagnosis, she deepened her thinking about AIDS and the role of incurable illness in the fashioning of late-20th-century identities.
She was interested, she later wrote, in the ‘dialectical epistemology of the two diseases, too – the kinds of secret each has constituted; the kinds of outness each has required and inspired – [this] has made an intimate motive for me.’ She understood herself to live at a point in history and in a way that forced an intimate association with early death: not only for those in a queer milieu, but also for urban women of colour, forced to the brink and beyond by poverty, violence and state indifference. When plagued with the inevitable ‘Why me?’ question, Sedgwick gives us a way to think about an answer that’s quite different to the heteronormative, pink-ribbon platitudes. If a queerer, more radical form of breast cancer activism is to be inspired by ACT UP, it makes sense for it to focus on improving access to drugs and accelerating research into a cure. And any research into a cure has to start with the people actually dying of the disease – women with Stage 4 breast cancer – rather than focused purely on prevention. This is what the organisation I am involved with, METUPUK, seeks to do, and is joined in the US by the likes of Breast Cancer Action and METAvivor. All work to raise awareness of the plight of women dying of SBC and of raising funds for and promoting research into SBC – as opposed to primary cancer, which tends to have better outcomes and is therefore more lucrative. These SBC groups are vocal critics of the pinkwashing that happens in October and throughout the year, and seek to provide an alternative for people (be they woman, man or nonbinary) who are angry about the statistics and want to see change. This is the urgent, vital work that does indeed get me out of bed. It also inspires me as I try to recover from my spinal injuries. And something may be working – because, reader, I can wiggle my toes. Philippa Hetherington died on 5 November 2022. Her family invite you to make a donation to METUPUK here.
Philippa Hetherington
https://aeon.co//essays/how-breast-cancer-rips-up-conventional-markers-of-gender
https://images.aeonmedia…y=75&format=auto
Work
There is always a demand for more jobs. But what makes a job good? For that, Immanuel Kant has an answer
Work is no longer working for us. Or, for most of us anyway. Citing lack of pay and promotion, more people are quitting their jobs now than at any time in the past 20 years. This is no surprise, considering that ‘real wages’ – the average hourly rate adjusted for inflation – for non-managers just three years ago were the same as they were in the early 1970s. At the same time, the increasing prominence of gig work has turned work from a steady ‘climb’ of the ladder into a precarious ‘hustle’. Of the growing number of people working through apps like Uber or Taskrabbit, nearly 70 per cent say that they do so on the side, supplementing a main income that is too low to provide for life’s necessities. Even young and upwardly mobile professionals must change jobs, rather than stay in them, in order to grow in their careers. Almost perversely, the loss of stable careers is branded as a benefit. Sarah Ellis and Helen Tupper, both career consultants, argue that we ought to embrace these ‘squiggly careers’ as a new, more ‘flexible’ norm. Politicians claim that the solution to our work problems is ‘more jobs’. But simply increasing the number of bad jobs won’t help us avoid the problems of work. What we need, it seems, is not more work, but good work. But what exactly is good work? The United States Department of Labor identifies a ‘good job’ as one with fair hiring practices, comprehensive benefits, formal equality of opportunity, job security and a culture in which workers are valued. In a similar UK report on the modern labour market called ‘Good Work’ (2017), Matthew Taylor and his colleagues emphasise workplace rights and fair treatment, opportunities for promotion, and ‘good reward schemes’. Finally, the UN’s Universal Declaration of Human Rights has two sections on work. They cite the free choice of employment and organisation, fair and equal pay, and sufficient leisure time as rights of workers. What all three of these accounts have in common is that they focus on features of jobs – the agreement you make with your boss to perform labour – rather than on the labour itself. The fairness of your boss, the length of your contract, the growth of your career – these specify nothing about the quality of the labour you perform. And yet it is the labour itself that we spend all day doing. The most tedious and unpleasant work could still pay a high salary, but we might not want to call such work ‘good’. (Only a brief mention is made in the Taylor report – which totals more than 100 pages – of the idea that workers ought to have some autonomy in how they perform their job, or that work ought not be tedious or repetitive.) This is not to say that the extrinsic aspects of work like pay and benefits are unimportant; of course, a good job is one that pays enough. But what about work’s intrinsic goods? Is there anything about the process of working itself that we ought to include in our list of criteria, or should we all be content with a life of high-paying drudgery? Philosophers try to answer this question by giving a definition of work. Since definitions tell us what is essential or intrinsic to a thing, a definition of work would tell us whether there is anything intrinsic to work that we want our good jobs to promote. The most common definition of work in Western thought, found in nearly every period with recorded writing on the subject, is that work is inherently disagreeable and instrumentally valuable.
It is disagreeable because it is an expenditure of energy (contrast this with leisure), and it is instrumentally valuable because we care only about the products of our labour, not the process of labouring itself. On this view, work has little to recommend it, and we would do better to minimise our time spent doing it. A theory of work based on this definition would probably say that good jobs pay a lot (in exchange for work’s disagreeableness) and are performed for as little time as possible. But this is not the only definition at our disposal. Tucked away in two inconspicuous paragraphs of his book about beauty, the Critique of Judgment (1790), is Immanuel Kant’s definition of work. In a section called ‘On Art in General’, Kant gives a definition of art (Kunst in German) as a subset of our more general capacity for ‘skill’ or ‘craft’ (note that Kant’s definition should not be limited to the fine arts like poetry or painting, which is schöne Künste in German, which he addresses in the following section of the book). In other words, Kant defines art as a particular kind of skilled labour. Kant’s definition of art as skilled labour will direct us to the intrinsic features of work that we ought to include in our conception of good jobs. Kant defines art using his analytic method, which is a way of getting at what a thing is by distinguishing it from what it is not. His first distinction concerns the difference between things produced by natural forces on the one hand, and things produced by human effort on the other. Art, as skilled labour, is an instance of the latter. He writes:
By right we should not call anything art except a production through freedom, ie, through a power of choice that bases its acts on reason. For though we like to call the product that bees make (the regularly constructed honeycombs) a work of art, we do so only by virtue of an analogy with art; for as soon as we recall that their labour is not based on any rational deliberation on their part, we say at once that the product is a product of their nature (namely, of instinct).
The capacity that allows humans to create art is our freedom, our ‘power of choice’. This is what distinguishes human labour, which is free, from the labour of the bees, which Kant will go on to say is ‘constrained’ or ‘mechanical’. What enables humans to produce freely is that they raise their object in the ideal world first, as a concept or purpose in consciousness, before raising their object in the real world. This is what Kant means when he says that our act, our labour, is ‘base[d] … on reason’. Bees do not have this capacity for purposive activity, which is why we do not consider their products works of art, but merely effects of nature. For the bee, the honeycomb is a product of instinct. The bee has no choice but to produce according to the standards it has been given by nature. Since humans have the ‘power of choice’, we are ‘free’ to produce according to any concept or standard we desire. This means that, if we want, we can produce according to the bee’s standard (a point Karl Marx will go on to make in his 1844 Manuscripts). Already then, we can see that Kant gives us a preliminary philosophy of work with his distinction between art (as skilled labour) and nature. Skilled labour is essentially purposive.
The product of our labour is based on a purpose, and this purpose makes possible the product in a way that brute nature could not. To identify human labour with purposiveness is to highlight the importance of thinking in the labour process. Unlike the animal, for whom labour is a mere effect of nature, human labour is a product of thinking and acting, in coordination with each other. The more our thoughts and plans are reflected in the product of our labour, the more ‘human’ our labour is. This insight has deep implications for the question of what makes work good, especially in light of capitalism’s division between the planning and the execution of labour. In capitalism, most workers are permitted to execute only their bosses’ purpose at work. They themselves do not determine what purpose to execute. Using Kant’s language, we might say that most workers do not have ‘the power of choice’ at work. Rather, that power resides exclusively with their bosses. This makes many workers mere animals at work, since what is produced is ‘not based on any rational deliberation on their part’. So, while labour in capitalism is determined by some purpose (ie, the bosses’), it is importantly not the workers’ purpose. Take a look at some of the more prominent theories of good jobs and you will be hard pressed to find any reference to purposiveness. That is because the modern organisation of work is so thoroughly structured by this division of labour into purposive planning by management on the one hand, and brute execution by workers on the other, that it is often taken for granted. The strictness of this division may vary by workplace, but the very idea of management presupposes the categories of planner and executor. Yet, here we see that such an organisation of work prevents many of us from exercising our distinctly human capacity for purposive activity, making our work feel ‘constrained’ and ‘mechanical’ rather than ‘free’. Within the domain of things produced through human effort, Kant makes a further distinction between things that can be produced merely by following pre-given rules, and those that require some kind of judgment or creativity. Kant calls the former ‘scientific’ and the latter ‘technical’. Art, as skilled labour, is technical. He continues: Art, as human skill, is also distinguished from science ([ie, we distinguish] can from know), as practical from theoretical ability, as technic from theory (eg, the art of surveying from geometry). That is exactly why we refrain from calling anything art that we can do the moment we know what is to be done, ie, the moment we are sufficiently acquainted with what the desired effect is. Only if something [is such that] even the most thorough acquaintance with it does not immediately provide us with the skill to make it, then to that extent it belongs to art. Art is distinguished from science because in order to engage in artistic production, we need more than a theoretical understanding of what we are trying to produce. There is a gap between ‘know[ing] what is to be done’ and our actual ability to do it. Art, in other words, involves productive indeterminacy. Kant’s idea that art is productively indeterminate is a consequence of his claim that: ‘There can be no objective rule of taste, no rule of taste that determines by concepts what is beautiful.’ For our painter, this means that the process by which she paints something beautiful cannot be codified in rules. 
Rather, she must use her ‘genius’, Kant’s term for our ‘talent for producing something for which no determinate rule can be given’. At first glance, the art-science distinction does not seem relevant to the question of work. Composing the electrical wiring of a home can be taught by rules in a way that composing a beautiful poem cannot. Perhaps this is where our artistic concept of work breaks down. Kant disagrees, citing shoemaking as a kind of work that falls on the ‘art’ side of the art-science distinction. The implication is that any kind of work that involves indeterminacy in how to produce the object in question has an artistic element. Consider Matthew Crawford’s example of the motorcycle mechanic in his essay ‘Shop Class as Soulcraft’ (2006): a mechanic must check the condition of a starter clutch on a decrepit 50-year-old motorcycle. In order to do so, however, he must remove the engine covers, which are fastened with screws that are stripped. Drilling out the screws risks damaging the engine. ‘The factory service manuals tell you to be systematic in eliminating variables,’ Crawford writes, ‘but they never take such factors into account.’ Crawford’s mechanic may know what ‘the desired effect’ is – fix the bike – but the way to achieve that effect is not fully specified by any set of rules he has when he starts working. The mechanic lacks, in Kant’s words, ‘the skill to make it’. He might have great familiarity with the motorcycle service manuals, but as Kant might say, ‘even the most thorough acquaintance with [the manual] does not immediately provide us with the skill to [fix the bike].’ There are practical problems – problems of implementation, of contingent and unpredictable environments – that cannot be grasped scientifically (ie, theoretically) prior to production. This means they cannot be taught by a manual, a supervisor, or a master craftsman, but must be learned firsthand. It is the difference between ‘knowing’ about something and the ‘practical ability’ of performing it. The term I have been using to describe the kind of practical problems one encounters at work is ‘productive indeterminacy’. Kant’s distinction between art and science tells us that work – which falls on the ‘art’ side of the distinction – is productively indeterminate because the process of working cannot be exhausted by explicit instruction. Put another way, there is always a gap between, on the one hand, the rules and instructions for how to perform one’s work and, on the other, what is required to actually produce the desired product or service. Here the similarity to Kant’s artist comes into relief. When the artist sets out to make something beautiful, she is faced with the productive indeterminacy of knowing the outcome to be achieved yet having no rules to follow to achieve it. Instead of following rules, she must use her judgment to reflect on (Kant’s term) which rules – the artistic techniques, styles, etc – are best suited to the outcome she desires. In Kant’s words, she must use her ‘genius’. It is the same with our worker. Reconsider the motorcycle mechanic. The mechanic has a suite of ‘rules’ – common techniques and tests – he learns as an apprentice. But, when faced with an actual motorcycle, he must reflect on which of these rules and techniques to apply to an indeterminate work environment. At the outset, the mechanic does not know in fact which technique is correct.
He must use his judgment to figure out which is most appropriate given the circumstances. And it isn’t just manual labour that requires judgment and creativity. All jobs have indeterminacies that cannot be resolved through mere rule-following. The psychodynamic theory of work, a prominent thesis about work in contemporary French social theory, argues that ‘no amount of prescription, however substantial or refined, can foresee all the possible variations in the concrete, real context in which the [work] is to be performed.’ For the proponents of the psychodynamic theory, adjusting for these ‘variations’ is just the quintessential experience of working. What does Kant’s idea of productive indeterminacy – drawn from his distinction of art from science – tell us about good work? According to Kant, overcoming productive indeterminacy through judgment rather than rule-following is an essential part of what it means to work. The use of judgment at work makes our labour feel free, creative and deliberative. If, on the other hand, our judgment at work is blocked, our work can feel less ‘playful’ and artistic, and more ‘mercenary’ (a distinction Kant makes in the next paragraph). That any one particular job involves a lot of rule-following need not be troubling for our account of good work. Some socially necessary jobs simply don’t require a lot of judgment, or can be performed only when standardised. Trash collection appears to have both of these features. It makes sense that regulations for trash collection – whether to pick up, eg, construction waste – across an entire city ought to be determined by a central agency, rather than by individual sanitation workers. If Kant is right, the standardisation of trash collection may make such work feel tedious. But this need not mean that trash collection is simply ‘bad work’. Recall that the use of judgment is not the only desideratum of good work. In exchange for performing socially necessary but tedious work, sanitation workers would ideally be compensated with additional pay, benefits, and safe and regular working conditions. The problem with rule-following, however, is that the modern organisation of work appears to reduce, in general, the level of judgment required by workers. Managers regulate the labour process in the name of efficiency and standardisation, but in doing so they appropriate many of the decisions workers would have otherwise made. Put another way, management transforms workers from judgers to rule-followers. The most extreme way you might become a rule-follower at work is if your job is scientifically managed. The central idea of Frederick Taylor’s scientific management is that managers of labour, not the labourers themselves, ought to control the labour process to the greatest extent possible. ‘The work of every workman,’ Taylor writes, ‘is fully planned out by the management … and each man receives … complete written instructions, describing in detail the task which he is to accomplish, as well as the means to be used in doing the work.’ Taylor’s vision of a scientifically managed workforce is one in which management decides ahead of time exactly what work to do and how to do it. But such control over the labour process leaves the worker with little to do but follow management’s rules.
Importantly, scientific management tries to anticipate any and all indeterminacies in the labour process and incorporate them into the workers’ instructions ahead of time. This means it is the manager, not the worker, who gets to use their judgment at work. The result is that workers are faced with less productive indeterminacy, and are robbed of whatever opportunities for judgment and creativity their work once afforded them. Such scientific management of labour is still, as Harry Braverman writes in Labor and Monopoly Capital (1974), the ‘bedrock of all work design’, even if the term ‘Taylorism’ has fallen out of fashion in management circles. You can find its most extreme forms in Emily Guendelsberger’s investigation of low-wage work in On the Clock (2019), but you don’t need to go to an Amazon warehouse or a McDonald’s kitchen to see its effects. Even highly coveted jobs have elements of scientific management, like sales jobs with pre-written scripts and quotas. Importantly, in these jobs, managers exercise their judgment ahead of time so workers don’t have to. Freedom and discretion at work have always been sources of conflict between labour and management. Just look at the history of the labour movement and you will find countless examples of conflict over who determines the work process. Yet this conflict seldom informs our theories of good jobs. Kant’s idea that workers must resolve productive indeterminacies by judging rather than following rules fixes this. Of course, work that involves judgment is not enough. A job with opportunities for judgment but terrible pay is no better than its converse. But Kant’s theory urges us to be sceptical of the call for ‘more jobs’ if no consideration is given to what those jobs will be like.
Tyler Re
https://aeon.co//essays/what-kant-can-teach-us-about-work-on-the-problem-with-jobs
https://images.aeonmedia…y=75&format=auto
Film and visual culture
From chopsocky films to disco earworms, Asian caricatures have proliferated since the 1970s. Can Hollywood kick the habit?
What is this and who am I? is not ordinarily a question that crosses my mind when I’m ensnared in the earthly and freaky delights of Latin American wedding perreo. But as the DJ spun the last bars of a Bad Bunny single into the uncannily familiar nine-note ‘Oriental riff’, I froze. I turned to my partner. Is this a microaggression? I could ask the same thing about the song that followed, ‘Kung Fu Fighting’ (1974):
There were funky China men from funky Chinatown
They were chopping them up, they were chopping them down
It’s an ancient Chinese art and everybody knew their part
From a feint into a slip and a kicking from the hip.
I probably hadn’t heard the disco earworm since I watched a bootleg DVD of the animated film Kung Fu Panda (2008) shortly after its release. The DreamWorks movie, which follows the life of a bumbling panda seeking to prove his worth as the unlikely Dragon Warrior, featured a cover of ‘Kung Fu Fighting’ by CeeLo Green and Jack Black, both non-Asian performers. The song was originally written and performed by Carl Douglas, a Black artist; in his music video, clad in a silky red approximation of a karate gi, he chops and slices at the air with his elbows locked at 90-degree angles. From a dimly lit stage, he sings to an eager crowd. Every audience member there is white. When DreamWorks produced Kung Fu Panda, the movie needed a tuneful theme song. And why reinvent the wheel? DreamWorks also produced a music video where Green and Black, with the asymmetrical necklines of their Mandarin-collar tunics safely secured with knotted frog buttons, mix breakdancing with inspirational messaging that deviates from the original song (‘Although the future is a little bit frightening / it’s the book of your life that you’re writing’). At the end of the video, when the students of the Kung Fu Panda Academy of Awesomeness kowtow to their shifu (master) out of respect, Green returns the bow, pressing his palms together. Respectfully. How and why do Americans embrace Asianness? Douglas’s orientalising version of American performance – the version I know – busted straight through the doors of discotheques and movie theatres, and it prevails to this day. On the one hand, the 1970s was part of a new era of East Asian exploration: new waves of immigrants setting up their lives for the second time changed the United States’ cultural landscape for good, while student activists at San Francisco State University included Asian America in the diversity movement that was opening up in much of academia nationwide. That same era introduced a legacy of East Asian appropriation reflected in Kung Fu Panda that has never disappeared. In the geopolitical theatre of the late 1960s and early ’70s, both Asians and Americans were dealing with the immediate after-effects of an internecine, capitalistic struggle against a communist bugbear. US-led conflicts in Korea and Vietnam directly influenced generations of people living in and between Asia and the US, often for the worse. US soldiers dehumanised the Asian enemy, painting them as communist vermin that needed eradicating. During and after the Vietnam War, Hollywood blockbusters gleefully fanned the racist flames. The brown and yellow enemy was flattened hopelessly into a stereotype of super-soldier resilience in films like The Green Berets (1968) and Apocalypse Now (1979). In the films, hordes of Southeast Asian drones, who could make weapons out of nothing and terrorists out of nobodies, employed dirty guerrilla warfare to decimate the good guys.
They were agile and especially skilled in close combat, fighting hand-to-hand without fear or remorse. The caricature is far from what I know of East and Southeast Asia, a place that feels like mine from another life. Yes, there are hordes of people but, rather than faceless bad guys, they are for the most part now composed of Instagrammers and motorcyclists. Were these caricatures meant to represent any reality known to Asian people? Can I recognise myself or anyone I know? He bridged East and West with pure athleticism and onscreen charisma, not diplomats and state dinners Around the same time, another sort of transpacific exchange was occurring, bringing another version of kung fu to US audiences. Legend has it that an impoverished Cantonese grandmaster opened a kung fu school after fleeing mainland China, in an effort to finance his opium addiction. In 1953, Ip Man, as he was known, reluctantly accepted a new student in his Hong Kong school, which focused on a style of kung fu called Wing Chun. The student, Lee Junfan, had been born in San Francisco but moved to Hong Kong as a child. He was a quick study, though his short temper made him prone to starting street fights and engaging in physical conflict, to the point of complaints to the police. Fed up, his parents sent him to live with an elder sister on the US west coast, where he began to open martial arts schools, and became a teacher himself. In Anglophone contexts, he went by the name of Bruce Lee. Given his father’s fame as a Chinese opera singer, Lee was exposed to celebrity at a young age, but he did not encounter much luck in Hollywood at first. After several frustrating and racist snubs, he followed the advice of a producer to return to Hong Kong and pitch a kung fu movie there. Raymond Chow, a former Shaw Brothers executive, welcomed Lee gladly, midwifing two successful movies in quick succession: The Big Boss (1971) and Fist of Fury (1972). Through these films, Lee brought wuxia, Chinese martial arts fiction, to Western screens. Just as Lee was becoming a legendary name in US households, he died days before the release of his magnum opus, Enter the Dragon (1973). Enter the Dragon crystallised the peak of Hong Kong kung fu cinema. Lee had successfully bridged East and West: but he’d done it with pure athleticism and onscreen charisma instead of diplomats and state dinners. People with a few bucks flocked to the movie theatre to watch Lee’s films and those of diasporic copycats. Variety magazine called the spinoff genre ‘chopsocky’, a punny neologism that praised Lee’s ‘James Dean aura’ but disparaged the nature of ‘badly dubbed Chinese-lensed pix that [would] go the way of those sleazy porno loops that once drew business simply because they were there.’ Whatever the producers’ initial intentions, the biggest audiences for chopsocky movies were young and Black. Films like Black Belt Jones (1974) and The Last Dragon (1985) exemplify the genre crossover of chopsocky and blaxploitation, a cinematic genre that centred Black protagonists but also caricatured them. Blaxploitation movies also peaked in the 1970s, inspiring breakdancing and parkour, movements that drew inspiration from East Asian martial arts. These highly skilled pursuits openly cite Lee as a cultural catalyst. As much as Asian American kung fu ushered many people in, the genre also openly shut others out. Principal among the excluded communities were Asian and Asian American women. 
Perhaps due to the traditionally masculine nature of kung fu, directors cast overwhelmingly more male actors as leads. When women did appear, they were flattened into stereotypes: nameless sex workers, victims of sexual violence, or damsels in distress. Or they bent gender as men, like Cheng Pei-pei in the wuxia film Come Drink with Me (1966). Good old-fashioned machismo pervaded kung fu film sets and writers’ rooms; in addition to male-centred storylines, the Asian actors themselves believed women could not perform at the same standard. It’s a tale older than a wuxia storyline. Americans did not stop at film when it came to consuming East Asianness. After decades of conflict, diasporas of real Asian women had become exploitable commodities. Both ‘yellow peril’, the belief that Asian immigration poses an existential threat to Western culture, and ‘yellow fever’, the sexual fetishisation of East Asian women, ran high. The former, a longstanding cultural attitude held over from the 19th century, led to explicitly racist scapegoating and cast Asian Americans as invading pests. The latter, often disguised as a dating ‘preference’ or ‘type’, combines elements of yellow peril with a good old-fashioned hatred of women. Hypersexualised, fetishised and dehumanised, Asian women continued to suffer at the intersection of racism and misogyny. My own life is punctuated by events that have intentionally fractured the myriad pieces of my identity, like many Asian women who have come before me, whose faces and stories have yet to show up in movie theatres. Born in Kingston, Jamaica, Carl Douglas had immigrated to London and started working with the British Indian music producer Biddu when he encountered a group of teenagers playing pinball in an arcade. Allegedly, the group was punching and kicking in the style of Hong Kong action cinema, which prompted Douglas to pen the first lyric of his one-hit wonder. The same audiences in 1974 who filled movie theatres for chopsocky films gathered under the mirror ball for ‘Kung Fu Fighting’. Intended as the B-side of the single ‘I Want to Give You My Everything’, ‘Kung Fu Fighting’ rocketed to the top of Anglophone charts thanks to its basic disco beat and its indelible melody, based on the ‘Oriental riff’. The pentatonic mini-anthem was an aural cue for catch-all Asianness, a tune popularised in cartoons and films like Lady and the Tramp (1955) and The Aristocats (1970). It helped that nearly anyone could dance to it, feigning martial arts moves while grunting on beat. Flying high on the success of his sleeper hit, Douglas attempted to replicate the track’s popularity by releasing another single in 1975, ‘Dance the Kung Fu’. You would be hard-pressed to find many musical differences between the two. By the 1990s and early 2000s, East Asian stereotypes had become entrenched in Hollywood. The overt and casual racism of the Rush Hour franchise (1998, 2001, 2007) left both Jackie Chan and Chris Tucker the butt of the series’s many, many jokes. Among other racist scenes, there are moments where Tucker tells Chan ‘Y’all [Asians] look alike,’ and another where Tucker asks to be introduced to ‘a couple of them Chinese girls’ because he ‘want[s] a massage’. 
The action genre was not the only one that pigeonholed Asian actors into punchlines: there were comedies like Austin Powers in Goldmember (2002), in which the titular spy seduces Japanese twins ‘Fook Mi’ and ‘Fook Yu’, and films like Gran Torino (2008), in which the Korean War veteran Walt Kowalski rattles off various anti-Asian slurs. Only animated characters or gun-brandishing white male actors could elevate kung fu above slapstick comic relief Although wuxia films continued to be produced on the fringes, it seemed like, by the turn of the millennium, the martial arts fad of the midcentury had begun to fade from popular view. I have a Hong Konger father whose passion for the cultural artefacts of the 1970s and ’80s is second only to his love for dim sum. As a result, my pop culture education had its foundations in oblique references to Ip Man, the Hong Kong Phooey theme song, and Astro Boy’s limpid gaze. However, contrary to my dad’s childhood watching Bruce Lee movies on a nine-inch television screen, my first real exposure to kung fu popular culture came from animation. First, I saw Disney’s animated film Mulan (1998) – a children’s movie with elements of kung fu and wuxia in the background. It was a formative queer text for me: I grew up watching the Chinese warrior Fa Mulan lead her blundering colleagues to triumph against hawkish Huns, leaning on chameleonic androgyneity to disguise her ‘real’ gender. Later, I was absorbed by animated animals making fat jokes. Kung Fu Panda was hard to resist. By the mid-2000s, it seemed that it was only these animated characters or gun-brandishing white male actors who could elevate kung fu above slapstick comic relief. Audiences flooded cinemas to see the franchises of John Wick and Jason Bourne. The cultural bridge between ‘East’ and ‘West’ that Bruce Lee had constructed began to sag, straining at its girders. Where were the Asian Americans in the fray? I am a historian now, but as an Asian American kid hitting replay on a pirated DVD, I am not sure how much I understood of kung fu’s legacy in the US. Were the animals Chinese? I recognised Mandarin words in characters’ names (masters ‘Oogway’ and ‘Shifu’). But if they were Asian or Asian American, I could not see myself in a noodle-slurping, bumbling bear who was living his goofy Bildungsroman through a series of martial arts antics, all while receiving mentorship from a cast of kung fu grandmasters. Lots of things changed for me after Kung Fu Panda. I went through puberty, for one thing. I went to school, and later college, and later graduate school, where I read about yellow fever and yellow peril for the first time, even though I had known those hateful truths since I was little. I read the Asian scholars and writers Gayatri Spivak, Viet Thanh Nguyen, Cathy Park Hong, and so many others who articulated the tragedies and euphorias of Asia and Asian America. I came out and learned that a sense of humour and a proclivity for farce are good to have in your queer toolbox. I read up on diasporas and splintered concepts of home, and not belonging to any one place. The president of my home country blamed the coronavirus on Chinese people. Seven women died in the Atlanta spa shootings, six of Asian descent. I moved out of the US to a city where I grew so accustomed to harassment that I took to wearing a hat, sunglasses and a face mask whenever I left the house. And then, last year, I watched Everything Everywhere All at Once (2022). 
There aren’t any Oriental riffs in Everything. Instead, there are sentient rocks, Bluetooth earpieces, dowdy outfits, moody lesbians, and so much Michelle Yeoh. And, as an extension of her, so much kung fu. The 2020s needed Yeoh’s character, Evelyn Wang. And I, myself a moody lesbian, needed her. Contemporary films about Asian America, like The Farewell (2019) and Minari (2020), told family stories, vivid in their heart-wrenching verisimilitude. Everything chronicles a family, too: with absurdist peaks and agonising troughs, carried in the vehicle of Yeoh’s incredulous glares and intermittent shadowboxing. It begins to rhyme with Bruce Lee’s legacy, all while extending beyond it. As I watch the film, I cut off snorts of laughter with choking sobs, all in the same minute. That’s queerness. That’s Asian America. ‘East’ and ‘West’ are connected not because of some greedy yearning for Asianness by non-Asians, but because of the knotty tangles of transpacific diaspora. In the past, those cultural snarls were Bruce Lee as the proletarian regular guy seeking revenge in the name of justice. And now they’re a laundromat-owning, googly-eye-donning, middle-aged mom who knows her best in any universe might not be enough, but she keeps fighting anyway. That kind of performance can win you an Oscar. Glittering in a bright white gown, Yeoh accepted the award with a sated hunger in her eyes, as hot dogs fluttered in the background. The moment was just as it should have been. To be clear, I left the dance floor at the wedding immediately – I was gone before Douglas launched into the chorus. There was a general wave of animosity towards the DJ anyway; dancers meandered towards the bar to top off their mezcal glasses. In between heartbeats, yearning for bass-heavy reggaeton to wipe my mind clean of cheap wisecracks about Asians, I answered my original question: this is history, and I am here.
Stephanie Wong
https://aeon.co//essays/can-pop-culture-kick-the-kung-fu-asian-stereotyping-habit
Philosophy of language
How Borges and Heisenberg converged on the notion that language both enables and interferes with our grasp of reality
As history’s bloodiest war metastasised from Europe outward, two men – a world apart from each other, and coming from profoundly different disciplines – converged on one fundamentally similar idea. One of the men was a poet and short-fiction writer with middling success in his own country but virtually unknown outside its borders. The other man had already won the Nobel Prize for work he had done around 15 years earlier and would soon top the Allies’ most-wanted list for the work they suspected he had done in Germany’s unsuccessful atomic weapons programme. But while Jorge Luis Borges knew nothing of the advances of quantum mechanics, and while Werner Heisenberg wouldn’t have encountered the work of a man among whose books was one that sold a mere 37 copies on the other side of the world in Argentina, around the year 1942 they were each obsessed with the same question: how does language both enable and interfere with our grasp of reality? After the resounding failure of History of Eternity (1936), the book that sold only 37 copies in a year and garnered almost no critical attention, Borges slipped into a bog of depression. That book’s philosophical themes, however, continued to percolate and eventually emerged in an entirely different form in a series of stories called Artifices (1944). In that collection’s opening story, Borges describes a man who loses his ability to forget. The man goes by the name Ireneo Funes. When the narrator of the story meets him, he is still a young man and known in his village for his quirky ability to tell the time whenever he is asked, although he never wears a watch. Two years later, upon his return to the town, the narrator learns that Funes has suffered an accident and is entirely paralysed, confined to his house on the edge of town. The narrator goes to visit him and finds him alone, smoking a cigarette on a cot in the dark. Astonished and saddened by Funes’s change of fortunes, the narrator is even more surprised to learn that the young man doesn’t perceive his condition as a disability, but as a gift. Funes believes the accident has endowed him with perfect memory. The young man, who has never studied Latin, borrows a Latin dictionary and a copy of Pliny’s Naturalis historia from the narrator. He then greets him on his return by reciting, verbatim, the first paragraph of the 24th chapter of the tome’s seventh book: a passage about memory. However, though his ability to recall is astounding, Funes’s gift extends beyond mere memory. His immersion in the present is so profound, so perfect, that nothing to which his senses are exposed escapes his attention. In a poetic passage, Borges describes Funes’s abilities:
With one quick look, you and I perceive three wineglasses on a table; Funes perceived every grape that had been pressed into the wine and all the stalks and tendrils of its vineyard. He knew the forms of the clouds in the southern sky on the morning of April 30, 1882, and he could compare them in his memory with the veins in the marbled binding of a book he had seen only once, or with the feathers of spray lifted by an oar on the Rio Negro on the eve of the Battle of Quebracho.
– from Collected Fictions (1998) by Jorge Luis Borges, translated by Andrew Hurley
While Funes insists that his abilities make his former life seem, in comparison, like that of a blind man, the narrator at once begins to glean the limitations of his condition. As Borges goes on to write:
[Funes] was able to reconstruct every dream, every daydream he had ever had.
Two or three times he had reconstructed an entire day; he had never once erred or faltered, but each reconstruction had itself taken an entire day. The man who perceives and remembers flawlessly the perception of everything around him is saturated in the immediacy of his memories. The very intensity with which he experiences the world interferes with that experience. For, if it takes an entire day to reconstruct the memory of a day, what has happened to that new day? And is it surprising that a man who experiences the world in such a way feels the need to wall himself off in a dark room to avoid being consumed by the converging floodwaters of memory and sense perception? He lives in a world of individuals, and requires a representative system that honours that individuality As Borges’s narrator starts to realise, the paradoxes of Funes’s affliction express themselves in his struggles with language. Emblematic of this struggle is how Funes deals with numbers. Rather than seeing them as elements of a general system, Funes feels the need to create an individual name and identity for every number. His numerical lexicon has, by the time of his conversation with Borges’s narrator, surpassed 24,000. As Borges writes:
Instead of seven thousand thirteen (7013), he would say, for instance, ‘Máximo Pérez’; instead of seven thousand fourteen (7014), ‘the railroad’; other numbers were ‘Luis Melián Lafinur,’ ‘Olimar,’ ‘sulfur,’ ‘clubs,’ ‘the whale,’ ‘gas,’ ‘a stewpot,’ ‘Napoleon,’ ‘Agustín de Vedia.’ Instead of five hundred (500), he said ‘nine.’
Aside from being hilarious, the idea of Funes using one numeral to designate another captures the enormous disability that his superpower entails. Borges’s narrator notes this as well, pointing out that he tried to impress upon Funes that his system entirely misses the point of numbers, but to no avail. Funes isn’t capable of generalisation, of taking one sign as a stand-in for more than one thing. He lives in a world entirely populated by individuals, and requires a representative system that honours that individuality. Funes requires the kind of language some early modern philosophers, such as John Locke, had postulated, one with a term for every being in existence. But, as the narrator goes on to speculate, if Locke rejected such a language for being so specific as to be useless, Funes rejects it because even that would be too general for him. This is because Funes is incapable of the basic function underlying and enabling all thinking – abstraction. Consequently, the way other humans use language inevitably dissatisfies him. The narrator tells us:
Not only was it difficult for him to see that the generic symbol ‘dog’ took in all the dissimilar individuals of all shapes and sizes, it irritated him that the ‘dog’ of three-fourteen in the afternoon, seen in profile, should be indicated by the same noun as the dog of three-fifteen, seen frontally. His own face in the mirror, his own hands, surprised him every time he saw them.
For Funes, human language is limited precisely by its slipperiness, and yet – and here is the brilliance and philosophical umph of Borges’s exploration – behind Funes’s claims for perfect perception and perfect recall, a paradox lurks. Funes would have us believe that each and every impression he has of the world is so overwhelmingly specific that our use of the same word for a dog in two different moments of spacetime is inadequate; he would have us believe he feels surprise each time he sees his own reflection.
But both his surprise and his irritation belie the very claim he is making; for, in order to be surprised at his own reflection, in order to be irritated by the generality of the word ‘dog’, Funes must himself also be able to generalise between the various impressions his face in the mirror or the dog at 3:14pm and at 3:15pm make. It is – and this is the whole point of Borges’s reflection – utterly impossible to be as immersed in the present as Funes claims to be and also to be aware enough of the generality of language to criticise it. Funes is having his proverbial cake – by experiencing the generality of language that allows it to identify different aspects of a thing – and eating it, too – by being so immersed in the present that such generality is ostensibly inconceivable. Meanwhile, as war raged around him, and as he worked to produce (or to hinder the production of, we may never know for sure) an atomic weapon for Germany, Heisenberg was secretly working on a philosophical book. The ‘Manuscript of 1942’ would be named not for the year it was published, which wouldn’t be until long after his death, but for the year he finished and circulated it among close friends. From that work, it would seem that what really interested Heisenberg during the time he was supposed to be working on Germany’s weapons programme was the mystery of our relation to and knowledge of reality. The issue, he believed, came down to language. For Heisenberg, science translates reality into thought. Humans, in turn, require language in order to think. Language, however, depends on the same limitations that Heisenberg’s work from the 1920s showed held for our knowledge of nature. Language can home in on the world to a highly objective degree, where it becomes well defined and useful for scientists who study the natural world. But, when it is so focused and finely honed, language loses its other essential aspect, one we need in order to be able to think. Specifically, our words lose their ability to have meanings that change depending on their context. Heisenberg calls the first kind of language use static, and the second dynamic. Humans use language in a variety of ways that span the spectrum between the mostly static and mostly dynamic. On one extreme, there are physicists, who strive to link their words as closely as possible to a single phenomenon. On the other side are poets, whose use of language depends on its ability to have multiple meanings. While scientists use the static quality of words so as to pin down observations under very specific conditions, they do so at a cost. As Heisenberg writes:
What is sacrificed in ‘static’ description is that infinitely complex association among words and concepts without which we would lack any sense at all that we have understood anything of the infinite abundance of reality.
Because of this trade-off, insofar as thinking about the world depends on coordinating both the static and dynamic aspects of language, ‘a complete and exact depiction of reality can never be achieved.’ Perceiving an object as it changes requires us to forget the minute difference between two different moments We can see in Heisenberg’s theory of how language works parallels with Funes’s struggle. With Heisenberg, Borges’s poetic creation becomes the ideal example of an internal check on our knowledge, for the very perfection of Funes’s memory and the intensity of his perceptive abilities turn out to be a hindrance to his ability to understand or to distinguish perceptions from recollections.
Imagine Funes as a physicist in his laboratory. He distinguishes every observation as sui generis, unrelated to anything else. His perfection of perception allows him to discern, in Borges’s words, ‘not only every leaf of every tree in every patch of forest, but every time he had perceived or imagined that leaf.’ Give him a cloud chamber, and he distinguishes not only each bead of condensation left by an errant electron, but the particle itself; and not only the particle, but each and every moment in the infinite sequence of moments that defines its trajectory. But, of course, he cannot do this. He cannot because the very nature of perceiving an object, a particle, as it changes over time requires the perceiver to forget, ever so slightly, the minute difference between two different moments in spacetime. Without this minuscule blurring, this holding on to a moment of time so as to register its infinitesimal alteration in the next moment, all Funes the physicist would experience is an eternal now. A dog of 3:14pm, seen frontally, never to earn the name ‘dog’, never to be recognised, never to be observed at all. Like Borges, as he strove to imagine what the world must be like for someone who perceives perfectly, what Heisenberg grasped was that to simultaneously observe a particle’s position and momentum with exactitude would require the observer’s co-presence with the particle in a single instant of spacetime, a requirement that contradicts the very possibility of observing anything at all. Not because of some spooky quality of the world of fundamental materials, but because the very nature of an observation is to synthesise at least two distinct moments in spacetime. As the great Prussian philosopher Immanuel Kant had put it more than 100 years earlier, any observation requires distinguishing ‘the time in the succession of impressions on one another’. Observation undermines perfect being in the present because the observation injects space and time into what is being observed. A particle captured in a singular moment of spacetime is by definition unperceivable because, in Kant’s words, ‘as contained in one moment no representation can ever be anything other than absolute unity’ – an infinitely thin sliver of spacetime, with no before, no after, and hence nothing to observe. Kant thought it was vital to understand this fundamental limit on human knowledge in order to ensure that science not fall into error. Heisenberg believed the same. As he writes in his manuscript, when science makes a new discovery:
[Its] sphere of validity appears to be pushed yet one more step into an impenetrable darkness that lies behind the ideas language is able to express. This feeling determines the direction of our thinking, but part of the essence of thinking is that the complex relationship it seeks to explore cannot be contained in words.
We need to be on the lookout for a barrier to our knowing, not one out there in the Universe but one we create when we impose our image of reality on the perpetually receding limit of our future discoveries. In Heisenberg’s words again:
The ability of human beings to understand is without limit. About the ultimate things we cannot speak.
Or, to put it another way, by presuming to speak of ultimate things, we put restraints on our ability to understand.
In the same year that Heisenberg finished and circulated his manuscript among a small circle of friends – to avoid the scrutiny of a regime that had labelled the brand of physics he was known for as ‘Jewish science’ and targeted him personally as a ‘white Jew’ – Borges published a curious essay in the magazine La Nación. The essay ostensibly reviewed the contributions made by John Wilkins, the 17th-century natural philosopher and co-founder of the Royal Society, to the search to create a language that would not suffer from the deficiencies and mutations that plague natural languages. The essay’s most famous sentences come from its concluding paragraphs, in which Borges compares the redundancies and inconsistencies he sees in Wilkins’s rational language to a system of categorisation he claims to have found in ‘a certain Chinese encyclopaedia entitled Celestial Emporium of Benevolent Knowledge’, in which:
the animals are divided into (a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs, (h) those that are included in this classification, (i) those that tremble as if they were mad, (j) innumerable ones, (k) those drawn with a very fine camel’s hair brush, (l) others, (m) those that have just broken a flower vase, (n) those that resemble flies from a distance.
Michel Foucault memorably begins his book The Order of Things (1966) by recalling his reaction when first reading Borges’s list. But whereas Foucault’s reaction was astonishment – the alienating wonderment provoked by an entirely different, arbitrary and seemingly contradictory classification system – Borges’s fictive encyclopaedia is meant to undermine a confidence we tend to share with Wilkins, and one that his rational language is built on. A language designed to account for everything that exists founders on the shoals of its own completeness Communication is slippery because the words in natural languages are, in Ferdinand de Saussure’s assessment, unmotivated. Different words in different languages dissect the world in different ways. But a truly rational language would avoid such discomfort. The vicissitudes of translation would forever be banished. Wilkins aimed at a system of classification akin to the Linnaean taxonomy but which would apply to everything that can be expressed in language. Every letter in a word would be meaningful and add to its distinctness. As Borges explains it:
For example, de means element; deb, the first of the elements, fire; deba, a portion of the element of fire, a flame.
But far from rational, perfect communicability, Wilkins’s system devolves into a dumpster fire of contradictions, redundancies and tautologies. It turns out that a language designed to account for everything that exists founders on the shoals of its own completeness. Wilkins didn’t aim to produce a work of comedy, but his lists are every bit as absurd as those of the Celestial Emporium. The reason for this, however, has nothing to do with the choices Wilkins makes. Any similar attempt, Borges implies, would quickly rack up such inanities. For the very idea of a representational system that categorises being on a one-to-one level, like Locke’s abandoned hope or Funes’s ridiculous numerical grid, imports a false idea of reality: that it is out there, broken into bite-sized chunks, just waiting to be corresponded to. But, as Borges goes on to write:
[O]bviously there is no classification of the universe that is not arbitrary and conjectural.
The reason is very simple: we do not know what the universe is. More than that, he continues:
[W]e must suspect that there is no universe in the organic, unifying sense inherent in that ambitious word.
However, the Universe, in that organic, unifying sense, is what underlay generations of presuppositions regarding the nature of space and time, the independence of reality from our measurements of it, and the ability of science to know that reality down to its most intimate core. It was precisely such a universe – a universe in which a particle would have the decency to have both a position and momentum to be measured with perfect accuracy, the very hope and presumption of science – that Heisenberg’s discovery demolished.
William Egginton
https://aeon.co//essays/borges-and-heisenberg-converged-on-the-slipperiness-of-language
Human rights and justice
A century after the trial against ‘Ulysses’, we must revisit the civil liberties arguments of its defender, Morris Ernst
Upon its publication in 1922, critics agreed that James Joyce’s Ulysses was the masterpiece of the age. No work of literature more fully embodied the experiments in literary form essential to modernism. But even its most ardent advocates had to admit to its many Rabelaisian moments. Joyce revelled in what Edmund Wilson in 1929 described as ‘this gross body – the body of humanity.’ As the literary historian Paul Vanderham asserts, the obscenity in Joyce’s work ‘is something more than a Victorian fantasy.’ He transgressed literary and moral categories, and his book was indeed profane, scatological and salacious. All of which meant it fell afoul of obscenity laws in the United States. Widely known as the Comstock Laws, this legislation strictly prohibited materials seen as lewd, indecent, lascivious, and immoral, to list some of the synonyms used to define obscenity in the federal statutes. Joyce’s novel was rife with such matters, and Ulysses had been ruled obscene in the US from its initial instalments published in The Little Review magazine during the First World War. A 1928 US Customs Court decision ruled the entire book obscene. The fact that Ulysses was still banned in the US a full decade after its publication struck denizens of the literary world as absurd. Malcolm Cowley, editor of The New Republic, captured the exasperation when he wrote that ‘James Joyce’s position in literature is almost as important as that of Einstein in science. Preventing American authors from reading him is about as stupid as it would be to place an embargo on the theory of relativity.’ But freeing Joyce’s masterwork from the clutches of the censors would require prodigious effort, legal aplomb, and federal judges willing to hear the book’s defenders. Morris Ernst arguing for the Ludlow resolution to place the power of declaring aggressive war in the hands of voters, May 1939, New York. Courtesy the Library of Congress Two New York lawyers, Morris Leopold Ernst and his junior partner Alexander Lindey, had the requisite aplomb, as revealed in a note Lindey wrote to Ernst in August 1931 about taking up the defence of Ulysses: ‘I still feel very keenly that this would be the grandest obscenity case in the history of law and literature, and I am ready to do anything in the world to get it started.’ Lindey was right about its grandness. And their optimism about the outcome wasn’t naive either. The two had already obtained legal precedents in a series of earlier cases in which they successfully challenged the application and administration of federal obscenity laws against other notable artefacts deemed obscene. They had laid the groundwork for defending Ulysses, and had learned how to breach the nation’s obscenity laws. The federal and state obscenity laws Ernst and Lindey targeted before taking Ulysses to trial sanctioned not just the suppression of literary works, but forbade the distribution of sex-education materials, marital advice manuals and virtually anything having to do with contraception, including birth-control techniques and devices. The Comstock Act of 1873 was the most formidable of these laws. Formally known as the ‘Act for the Suppression of Trade in, and Circulation of, Obscene Literature and Articles of Immoral Use’, the statute was capacious in its breadth, giving US Postal and Customs officials a wide berth to patrol the mails and ports of entry for allegedly obscene goods. St Anthony Comstock, the Village Nuisance (1906) by Louis M Glackens. 
Courtesy the Library of Congress The Comstock Act also gave the law’s author, Anthony Comstock, enormous authority. Already the executive secretary of the New York Society for the Suppression of Vice (a private organisation financed by the city’s elite), the federal law enabled him to search the mails, secure arrests and convictions, and destroy material seized from the mails. He had special police powers in New York City as an agent of the New York Police Department, allowing him to conduct raids on bookstores, publishers’ warehouses, theatres and other sites of ‘vice’, including houses of prostitution, gambling dens, saloons and dance halls. Anti-vice organisations incurred virtually no opposition from politicians A devout evangelical Christian, Comstock presented himself as an avenging hero against forces of moral disruption and sinfulness, referring to himself as a ‘Soldier of the Cross’. Highly controversial and oft satirised, Comstock maintained the support of many of the era’s elites who agreed that strictly enforced obscenity laws were necessary tools for keeping sex, sin and disorder at bay. He was a brilliant publicist who warned against the power of a sexualised popular culture to undermine the moral sensibilities and self-discipline upon which the social order supposedly depended. Comstock, of course, was not alone. There were anti-vice organisations throughout the nation’s largest cities and smaller towns. They enlisted clergy from all denominations, found eager activists in women’s civic groups, drew upon the financial resources of leading male citizens, and could count upon police enforcing the laws while judges upheld them in the courts. They incurred virtually no opposition from politicians. In the law, Comstockery lasted well beyond his death in 1915, partly because the courts continued to uphold the law’s constitutionality and allowed their administration to go unchecked. Crucially, his cultural work was carried on by his successor as Vice Society head, John Saxton Sumner – and it is he who became Ernst’s primary target in the struggle over obscenity laws. Ernst was a hustler. He was peripatetic. He was ambitious to be known, and he liked to be thought of as connected to those in power. Born in 1888 in Alabama, he was the son of a Jewish immigrant from Pilsen, Germany on his father’s side, and a second-generation Jewish immigrant on his mother’s. His mother took ill when he was a child, and he lived with an uncle and aunt, separated from his siblings. In an interview late in life, he reflected that he had ‘no ancestors’ and ‘no past’. Asked to explain, he said he had ‘no great grandfather, no rootedness’ and no sense of security. His second wife, Margaret Samuels, who came from a well-established Jewish family in New Orleans, ‘had security, I had none.’ He continued: ‘I’m a ham. I like publicity.’ His second-generation immigrant sensibility and his attendant insecurity come across potently in this interview. He did not discuss his Jewishness per se, other than calling himself a ‘non-worshipping Jew’. Perhaps he did not discuss his Jewish identity much because it made him feel vulnerable. His biographer Joel Silverman writes that Ernst ‘recalled how he was mocked during his formative years for his appearance. 
“I had been brought up to believe that I was ugly … I had uncles who always kidded me about my big Jewish nose which did my ego no good.”’ He also remembered being ‘told that I was Jewish, and for that reason, inferior.’ Ernst internalised the financial precariousness of immigrants, too. His father, Carl Ernst, made (and occasionally lost) money in the real estate business, so his family endured difficult periods. Still, Carl had enough financial success that he could send Morris to the prestigious Horace Mann School in New York for Ivy League prep, and then to Williams College in Massachusetts. At Williams, Ernst was one of the few Jewish students, and was aware of how his Jewishness (and small physical stature) marked him. But he was gregarious, a successful debater, and fit in well enough to be accepted into a Jewish fraternity. Upon graduation, he lived in New York City, working in the family shirt-making business (established by his father and uncle), and took up night classes at the New York Law School where other immigrants and Jews took their legal education. Ernst later formed his law firm – Greenbaum, Wolff, and Ernst (GWE) – with Jewish students he met at Williams College. Always self-deprecating about his legal education, Ernst often remarked that he was a ‘partially trained’ lawyer and called himself a ‘dilettante’. This sense of being dilettantish was no doubt connected to his sense of the inadequacy of his legal training. While he may have described himself as a half-trained lawyer, other civil libertarians of his generation recognised his talents and his energies. He quickly rose in the American Civil Liberties Union (ACLU) to become an executive member and co-general counsel to the national board from 1929 to 1955. The ACLU’s post-First World War commitment to expanding civil liberties across multiple fronts reinforced his free-speech ideals. Ernst waged his anti-censorship campaign in the courts, as well as in the court of public opinion Ernst began laying the intellectual groundwork for a strategic attack on the obscenity laws when he undertook a lively historical study of Anglo-American censorship practices titled To the Pure (1928). His steady, piecemeal campaign against the obscenity laws was significantly aided by Alexander Lindey, who joined GWE in 1925, upon graduating from the New York Law School. Lindey was one of a handful of junior associates in this highly successful Jewish law firm who worked alongside Ernst on his censorship cases, and helped make GWE the key player in fighting for sexual liberalism in the US courts prior to the Second World War. While the ACLU was deeply involved in promoting what the historian Leigh Ann Wheeler describes as ‘sex as a civil liberty’, Ernst was the architect of decisive trials from the late-1920s to the beginning of the war, and his firm did much of the work pro bono. Pursuing what Ernst described as ‘rational sex laws’, GWE took on obscenity laws as well as the wider culture of Comstockery, which they saw as responsible for irrational sex laws, over-zealous application of vague 19th-century statutes, and the unseemly deference accorded to administrative censorship authorities. They did their work on behalf of the many publishers, writers, birth-control activists, sex educators, bookstore owners, burlesque theatre owners and others who ran afoul of those laws. 
Ernst waged his anti-censorship campaign in the courts, as well as in the court of public opinion, constantly vilifying and brawling in the city’s newspapers and magazines with John Saxton Sumner of the New York Society for the Suppression of Vice. Ernst spent years building momentum for the anti-censorship cause and garnering allies among those victimised by Sumner. Ernst was more pragmatic in the actual courts, where he defended credible materials whose status as ‘obscene’ was contestable; he and Lindey carefully built their cases and wrote impeccable, erudite legal briefs aimed at persuading judges that the Customs and Postal authorities were seizing works with demonstrable value to the public. New York City was a stage for Ernst, an ideal place for him to wage his battles, not least because Sumner was a great foil who could be held up to the public as the antithesis of the city’s cosmopolitanism. Moreover, New York was the centre of the US book publishing world, and Sumner had so much clout over what publishers could risk publishing that he antagonised the intellectual class. He monitored publishers’ lists, warned them he would go after them if they published works he thought were obscene – such as Ulysses itself – and often followed through on his threats by raiding publishers’ offices, bookstores and magazine stands, all with police and photographers in tow. Ernst easily painted him as an anti-democratic scourge, the censor who was utterly out of touch with the spirit of New York City. Ernst and Sumner shared a deep animus, which fed journalists’ accounts of their skirmishes. Ernst and Lindey won acclaim in a series of high-profile cases leading to the defence of Ulysses. They successfully defended the birth-control pioneer and sex-education pamphlet-writer Mary Ware Dennett (in the case of US v Dennett), the British birth-control advocate and sex educator Marie Stopes (in the cases of US v One Obscene Book Entitled Married Love and US v One Book Entitled Contraception), the novelist Radclyffe Hall, and Margaret Sanger’s birth-control clinic following a police raid in 1929. Their fight against Comstockery gained momentum, as did their sexual liberalism, which emphasised freedom of expression at the expense of oppressive, capricious moralism. They readied themselves for their crowning victory. One can usefully interpret Ernst’s challenges to the obscenity statutes and their administration as being essentially a parallel free-speech movement to the one his ACLU colleagues took up on behalf of political radicals and labour unions in the 1920s and ’30s. Free speech was in the air in civil liberties circles, and Ernst and Lindey succeeded in part because they effectively mobilised an already available free-speech tradition. They gave voice to a body of arguments familiar to Americans across social classes, not just against censorship but on behalf of deeply held ideas about free speech and political liberty. Those ideas gained adherents in the face of rising totalitarian threats in the 1930s and beyond. Even if the First Amendment did not protect all forms of speech and expression, literary censorship in particular came to be understood as anti-modern, anti-intellectual and anti-democratic – and this aided the anti-censorship cause. Nazi Germany revealed what an odious practice book burning was, and how much it contravened democratic ideals of free speech. 
Ernst also effectively rooted his arguments about the value of his clients’ work in terms of modern democratic theory: the rational, self-determining adult was capable, and therefore ought to have access to a diverse marketplace of ideas. Adult interests and needs mattered in this marketplace. Moreover, the censors’ and prosecutors’ assertions of moral harm to unknown readers were simply inadequate as evidence of actual harms, and demonstrating actual harm was crucial as a due process matter in criminal law. The law’s assumptions about potential harm to some unknown reader had to be weighed against the potential – and actual – medical value of contraceptives, or marital value of sex education, or intellectual value of modern fiction. The greater harm to democratic life lay in the state’s vaguely written laws, administered by anonymous figures His experts assured judges that there was manifest public value in adult citizens having access to up-to-date, scientifically credible information about the mysteries of human sexuality, in reading about complicated matters of sexual desire, in controlling their reproductive lives, in achieving greater marital happiness, in reforming laws that penalised people for sexual acts that were actually widely practised. The laws lagged far behind people’s actual behaviour. Ernst was a small-d democrat who invoked the competent, rational adult citizen at the conceptual centre of democratic theory. Democracy demanded that rational adults be able to navigate a thriving marketplace of ideas. Ernst’s fight for Ulysses against the obscenity laws was fired by his conviction that the greater harm to democratic life lay not in some readers possibly being aroused by reading modern fiction, but rather in the state’s continued use of vaguely written laws, administered by anonymous figures in impenetrable federal agencies, whose decisions were rarely challengeable. While reading certain kinds of materials might well inspire lust, the true harm was in depriving citizens of access to the great works of modern literature. When Ernst and Lindey finally freed Ulysses in 1933 (a decision upheld in the 2nd Circuit Court of Appeals in 1934), the win brought Ernst near-celebrity status in the New York City literary world, as well as in the nation’s civil liberties circles. For a time, he was recognised as one of the most important civil libertarians in the US. In 1944, Life magazine, one of the 20th century’s most widely circulated periodicals, published a multi-page profile of Ernst, introducing the broader public to his accomplishments in the realm of sexual and literary free speech. The focus was Ernst’s remarkable string of victories in high-profile sexual censorship cases. Life’s Fred Rodell ascribed Ernst’s success to his abundant energy, alert eyes and athletic strength, and to his ‘exhibitionism’ – that is, his desire for fame. The Ulysses case earned Ernst the reputation of being preeminent ‘among lawyers crusading against censorship’. As an earlier essay in Scribner’s noted: ‘Before Ernst came along, the vice crusaders used to scare booksellers into pleas of guilty and light fines by promising to get the case over quickly, and without publicity.’ But by waging ‘almost continuous warfare against Federal, State, local, private, and ecclesiastical censorship bodies’, Ernst changed that dynamic. 
As the Life profile observed – with a dig thrown in – Ernst could ‘crow, with pardonable pride’ that ‘no book published by a regular publisher or reviewed by a regular critic, no book published honestly and without surreptition, is in any danger of suppression.’ In 2019, the novelist Michael Chabon, in The New York Review of Books, hailed Ernst’s defence of Ulysses. Ernst was ‘known, and much sought-after, as a gifted, skilled, and cagey courtroom attorney with a discerning eye for the kinds of cases that could change the law if you won them.’ Chabon proclaimed that Ernst achieved a level of mastery similar to Joyce’s, as he had ‘brought as much artistry and erudition and sly, masterful skill to defending one book, called Ulysses, as its author had brought to its creation.’ But Ernst’s story did not end with Ulysses. His own growing concern with the threats to US democracy in the late 1930s led him down a path of increasing intolerance toward US communists. Deep rifts formed between himself and other ACLU leaders, and his record became especially troubling when he took up a friendly, even sycophantic, relationship with J Edgar Hoover, whose five-decade reign at the helm of the FBI was fraught with illegal, unethical, anti-democratic activities that destroyed many careers and took some lives. In the decades after the Second World War, Ernst lost track of his core free-speech principles. He framed nearly every issue through the Cold War lens of the struggle between democracy and dictatorship. He came to argue repeatedly that US communists did not have free-speech rights because of the dangers they posed to US democracy. Ernst and his colleagues helped make anti-censorship principles core precepts of modern US liberalism In the furores of that tempestuous era, Ernst discarded his civil libertarian bearings. Nearly all his allies from earlier battles deserted him. His former friends and admirers asked how Ernst – the great ACLU leader and defender of Dennett, Stopes, Sanger, Joyce, Alfred Kinsey, and many other sexual modernists – could make common cause with Hoover? Perhaps it was in the vain hope that he could make Hoover more solicitous of due process. But, if so, he was duped. Their entirely one-sided relationship destroyed Ernst’s reputation with civil libertarians, and Ernst’s legacy has taken a beating from which it is not likely to recover. I’m not aiming to recuperate his reputation. However, I think it is important to separate Ernst the flawed man from Ernst the anti-censorship strategist who took on the formidable censorial power of the federal and state obscenity laws. Ernst and his colleagues gave legal counsel to some of the great protagonists of the first sexual revolution and helped make anti-censorship principles core precepts of modern US liberalism, shaping the law to better protect personal sexual privacy, sexual selfhood, reproductive autonomy, sexual self-determination, and the right to not be discriminated against on the basis of sex and sexual identity. In the past several years, the US has witnessed a rash of censorship laws and book bans, a political backlash designed to turn back the clock with regard to human sexual freedom, knowledge about race and racism, and women’s reproductive rights. Not surprisingly, conservatives are invoking the availability of the 19th-century ‘Comstock’ obscenity laws – never fully repealed – to prohibit the interstate distribution of the drug mifepristone (part of the so-called ‘abortion pill’). 
They are also appealing to Comstock himself as a beacon in their efforts to combat abortifacients and contraceptives, sex education, positive portrayals of the LGBTQ+ community, and anything else they perceive as ‘obscene’ or ‘pornographic’. That the 150-year-old federal statute authored by Comstock is being invoked in 2023 is chilling, and it reveals just how reactionary and puritanical the US Right has become. Attacks on sexual identity, sexual knowledge and female reproductive autonomy have been the main issue of most of the ‘culture wars’ disputes over the past four or five decades, especially the assumption that women’s bodies are the subjects of men’s political and personal access. These notions are at the heart of patriarchal thinking – as they were in the age of Comstock. And now, the cruel contemporary conservative assault on trans people – their right to public presence, their right to medical knowledge and technologies, their right even to straightforward medical advice – is gaining momentum. Ernst and his colleagues clearly understood that the public should have access to a dependable, well-sourced, up-to-date marketplace of sexual knowledge that might make people’s lives happier and more fulfilling. Personal autonomy and access to knowledge, they reckoned, is the starting point for achieving personhood in a democratic society. They would certainly recognise that their legal work is far from done, and would be horrified that the ghost of Comstock is being welcomed back by Right-wing legal architects opposed to abortion and contraceptive access generally. US progressive forces – lawyers and journalists, in particular – would do well to recall the work of the long-forgotten Morris Ernst.
Brett Gary
https://aeon.co//essays/the-story-of-morris-ernst-who-defeated-americas-obscenity-laws
Archaeology
The Greeks and Romans portrayed these elusive priests as bogeymen who bathed in their victims’ blood. Who were they really?
Gaius Verius Sedatus was a respectable citizen of the community of Chartres in the early 2nd century CE. He was a member of his local town council (a sort of mini-senate), where he and his colleagues presided over its laws and management, under the aegis of Roman law. Gaul had been conquered by Julius Caesar two centuries earlier and was now administered by trusted locals such as Sedatus, overseen by distant Roman officials. But Sedatus lived a double life. In the evening, he donned the mantle of a magician-priest and descended to his underground temple in the small cellar of his house. There he kept a group of four large incense-burners, placed symmetrically at the points of a square. He filled these vessels with aromatic, perhaps hallucinogenic herbs, and lit fires beneath them. When the drug-laden smoke was sufficiently dense for his needs, he and his followers began to summon the spirits by chanting their names and demanding that they provide him with guidance in the dark arts. Who were these spirits, who had to be contacted so secretly in a small space, dimly lit with oil lamps and flickering candles? Fortunately, one of the incense-burners is complete enough to study closely. The vessel is inscribed, telling us who Sedatus was: a Roman citizen (because of his triple name) and the presiding ritualist who summoned the spirits. Beneath this statement is a long list of spirit names, almost all of which are unknown to archaeologists. But one stands out: ‘Dru’. If we are right in assuming this is an abbreviation of ‘Druid’ (and what else could it be?) then it is the only direct archaeological evidence for the existence of the Druids. The Druids have long allured. A great many Greek and Roman writers mentioned them, with a mixture of awe, fear, disgust and respect. A thorn in the side of Rome because they exerted sufficient influence in ancient Britain and Gaul to threaten the expansion of the Roman Empire, the Druids were a shadowy class of priests, religious leaders, even freedom-fighters. They whipped up resistance and sedition that the Roman army found hard to fight. The Druids had no literate footprint of their own, and so their reputation continues to rely on Greek and Roman ‘spin’ that painted these elusive priests as bogeymen who bathed in the blood of human sacrificial victims, calling down terrifying spirits from the dark otherworld to shrivel their enemies. But who were they really? And how do we know? More than 30 Greek and Roman writers from around 200 BCE to the 4th century CE were fascinated by this enigmatic group of ritualists. There has been a lot of controversy concerning the veracity of these ancient authors and their plagiarism or recycling of ‘facts’ about the Druids. However, I do have faith in one: Julius Caesar. He was in Gaul for nearly 10 years in the 50s BCE, leading the war of Roman conquest and so he knew the region personally rather than relying on secondhand information. And, because he was writing his chronicle for scrutiny by the Senate in Rome, it is doubtful whether he would have got away with fanciful imaginings because they could have been contradicted by his fellow officers.
The body of Lindow Man, aged mid-20s at time of death (c2 BCE-119 CE), found in Lindow Moss bog in Cheshire, northwest England.
All images courtesy the British Museum
Lindow Man was the victim of a violent assault, sustaining many injuries before being placed face down in the bog
The Lindow Moss bog site in 1984
Caesar seems to have had quite intimate knowledge of Druidism, gained – at least in part – from his close friendship with a Druid named Diviciacus, who was also the ruler of a prominent Gallic tribe in eastern Gaul, the Aedui, an ally of Rome. We know of Diviciacus from another contemporary source, a comment by the orator Cicero who had met Diviciacus in Rome, and spoke warmly of the Gallic Druid as particularly skilled in the art of divination. So, while many ancient authors painted negative pictures of the Druids, condemning them as blood-soaked savages lurking in sinister forests, Caesar respected them for their erudition as natural scientists, teachers, healers and their specialism in liaising with the denizens of the otherworld. It is this last skill that appears to have left intriguing archaeological traces, including the rite of human sacrifice. The sophistication of the murder suggests it was to keep the victim hovering between life and death for some time In August 1984, industrial peat-cutting operations at Lindow Moss in Cheshire, in northwest England, revealed a horrific object: part of a human body. The police were called in to investigate a possible murder, and the body parts were soon identified as belonging to a young man. However, this man hadn’t died recently but about 2,000 years ago, at the time when Britain was in the process of becoming a Roman province. Lindow Man, as he is known, was the victim of a viciously violent and repeated assault that ended in his death and interment in a boggy pool sometime in the mid-1st century CE. His immersion in the swamp preserved not only his skeleton but his skin, hair and internal organs – a wonderful resource for archaeologists. He was fit, in his mid-20s, and we think he was a person of high status for his fingernails were in mint condition and his facial hair was neatly trimmed using a razor (an expensive piece of grooming equipment). The reason for this man’s significance in the context of the Druids lies in the contents of his gut. There is a description of ancient Druidic rituals in Gaul by the Roman author Pliny the Elder (writing in the mid-late 1st century CE) in his massive Natural History, a multi-volume work describing natural curiosities throughout the known world. On the sixth day after the new moon, the Druids would gather beneath a sacred oak. One of them would climb the tree to cut down mistletoe from its branches, using a gilded sickle. According to Pliny’s testimony, the Druids regarded mistletoe as having the spiritual power both to heal and to promote fertility in crops and livestock. In the final hours before his brutal death, Lindow Man had consumed a special meal – a kind of ‘last supper’ – that contained a peculiar mixture of seeds, wild plants, cereal grains and mistletoe pollen in sufficient quantities for it to have been deliberately included in the griddled loaf that the luckless victim had eaten. The complex manner of his killing strongly suggests human sacrifice. He was stunned by a violent blow to the head, garrotted, and his throat cut. Then, while still breathing (there was bog water found in his lungs), he was thrust facedown into the marsh: a highly orchestrated ritual killing.
And the presence of the mistletoe might, just might, tie his death to Druidic sacrificial action. The sophistication of the murder suggests that it was conducted with care and with the intention to keep the victim hovering between life and death for some time. (Bogs themselves are liminal and contradictory places, neither fully dry nor wet.) And, even after his death, the body of Lindow Man was suspended between states of being, since its preservation did not allow his remains to decay and thus, perhaps, denied his spirit to join the ancestors. It is possible that the geographical situation of Lindow Man’s murder and interment has historical significance. There had been a Roman military presence in Britannia for at least 10 to 15 years before his death. This was at a time when the Roman army was pressing northwest from its primary bases in the southeast and, under the Roman general Suetonius Paulinus, a large force marched diagonally across Britain in order to destroy the Druids’ main power base, which, Tacitus tells us, was on the island of Anglesey in Wales. It is possible that the execution of Lindow Man might have been an aversion sacrifice, designed to protect the island with its Druidic sanctuary. If so, it didn’t work: the Roman army burned it down. Tacitus wrote with revulsion of the dark groves on Anglesey, haunted by Druids and containing crude and sinister altars soaked with the blood of human sacrificial victims. We have no unequivocal archaeological evidence for such atrocities on the sacred island, but there is one site, Llyn Cerrig Bach, a small boggy pool, that cries out for recognition as an important late Iron Age shrine. Here, in the early 1940s, excavations were carried out to prepare a wartime landing strip for Royal Air Force station Valley. During this process, large numbers of objects were found, including iron slave-gang chains, military hardware, a pair of bronze cauldrons, beautifully decorated bronze implements and, significantly, the bindings from sceptres. All appear to have been deliberately cast into the bog. The bones of animals were also found here, plus a few human remains. Could this site have been the one described by Tacitus? Llyn Cerrig was certainly a holy place in which precious gifts were planted, not for safekeeping but for the propitiation of the gods. It is tempting to imagine the Druids at work here. The reason why so many classical writers gave the Druids such a bad press was their reputation as perpetrators of human sacrifice. Caesar alludes to it in a pragmatic manner. According to him, the gods found it acceptable to use human sacrificial victims who had committed crimes – but, should the supply run out, the innocent would have to make up the numbers. Other Roman authors, such as Tacitus and Lucan, were clearly horrified by such ‘barbaric’ behaviour. But there is a sense in which this judgmental attitude might be accused of double standards, for the abolition of human sacrifice among the Romans themselves occurred only in the 1st century BCE. Such Roman hypocrisy is a good example of ‘conqueror prejudice’: the deliberate exaggeration of barbarism in order to paint the Druids and their people as nothing better than savages who deserved to be expunged. The medical kit had been carefully placed upon a game board with glass counters, suggesting two players There is substantial archaeological evidence for the practice of ritual murder in Iron Age and Roman-period Britain and Gaul, though whether or not by Druids is debatable. 
Lindow Man is the exception that proves the rule inasmuch as his body was so well preserved that it has been possible to track the details of his protracted torture and killing. But skeletal evidence also yields up its secrets of how people died in cult-driven circumstances. There is a cluster of Iron Age ‘war sanctuaries’ in central France with graphic examples of human sacrificial killings, the victims probably being prisoners-of-war. Two stand out: Gournay-sur-Aronde (Oise) and Ribemont-sur-Ancre (Somme). Decapitated human heads were prominently displayed at the entrance to Gournay and the bones of people and horses were constructed into large ossuary tables at Ribemont. Other sanctuaries, near the mouth of the river Rhône in southern France, such as Roquepertuse and Glanum, were likewise decorated with the trophy heads of slain enemies. The archaeological evidence demonstrates that Iron Age communities repeatedly killed people as part of ritual activity, but that this practice was rare. Two points arise from this. On the one hand, human sacrifice was probably undertaken only at times of great stress, when communities were under threat; on the other, the material evidence indicates that such rituals were undertaken in an organised manner, almost certainly conducted by priests. Whether they were Druids or not remains an open question, but it is sorely tempting to lay it at their door. Given that classical writers stress that one of the prime functions of the Druids was to predict the future by consulting the spirits, there is a significant body of archaeological evidence that suggests organised ritual that might be linked to divination. At a cemetery at Stanway, just outside Colchester, a Roman garrison town in southeast England, a man was buried with great ceremony, his grave filled with precious objects, some of which are reminiscent of the shamanic practices that took place in Chartres. The Stanway grave appears to have been that of someone practised in the healing arts, since it contained a set of medical tools, as well as a metal bowl whose spout was found to have contained a wad of remains identified as the Artemisia plant, possibly inhaled by the ‘doctor’ or by his patients for healing purposes. But there was much more telling paraphernalia in the tomb. The medical kit had been carefully placed upon a game board with glass counters whose position suggests the presence of two players. This might simply have been a nod to the dead man’s passion for board games – or it could have symbolised the transition between the world of humans and the dwelling place of the gods. The possibility that the Stanway ‘doctor’ practised Druid-like divination is further demonstrated by another group of grave goods, also positioned on the game board: two sets of metal rods, four of bronze and four of iron, four small and four larger. They don’t appear to have had a practical function but they strongly suggest to me their use as divining rods. Tacitus describes the wooden rods that Germanic priests used to gather into bundles and toss onto a piece of white cloth. They divined the will of the gods by reading the patterns made by these rods when they had fallen. The Stanway physician, if that was his profession, may therefore have been someone who doubled as a ritual healer, just as some modern shamans do. Excavations of Romano-British and Gallo-Roman temples provide ample evidence for the presence of doctor-priests. 
A case in point is the great temple at Lydney in the English West Country, perched high above the River Severn in Gloucestershire. Inscriptions tell us that it was dedicated to a British god named Nodens. There is evidence that – following practices at ancient Greek healing sanctuaries, such as Epidaurus in the Peloponnese – pilgrims visiting Lydney bathed in the waters of a sacred spring and then slept in a special dormitory, called an abaton, where it was hoped they would be healed by the presence of the god in their dreams. While this temple appears to have begun its life late in the Roman period, the organisation of the rituals involved in managing the life of the shrine and its pilgrims was overseen by priests. And we know from late Roman authors, such as the Gallic academic Ausonius, that the Druids still existed as late as the 4th century CE. So what can archaeological discoveries such as those at Chartres or Stanway tell us about the Druids? Were they simply magicians who professed to be able to liaise with the spirit world, or were they more sophisticated than that? The latter is more likely. The cult paraphernalia found at these sites suggests that Gaul and Britain possessed spiritual leaders, Druids, who – like modern-day shamans – guided their people by using their skills in communing with the gods to advise and to heal. There is clear evidence for their use of hallucinogens and other ritual means in order to cross the divide between the earth world and the otherworld, and to use the wisdom gained in their soul-journeys to aid the communities they served. There is a further, rich, enigmatic vein of evidence for Druids. In a number of locations, from Castell Nadolig in Ceredigion (west Wales) to Crosby Ravensworth in Cumbria (northern England), pairs of bronze spoons were deliberately buried together in graves and, more commonly, bogs. They are all very similar in form. Their handles are intricately decorated with the swirling, intertwined designs associated with a particular type of Iron Age ornamental metalwork technique known as La Tène (after a site in Switzerland, where a huge number of decorated artefacts were found on the shores of Lake Neuchâtel in the late 19th century). In contrast to their ornate handles, the bowls of the spoons were left plain, though one of each pair has a small hole drilled in it, while the inner (concave) surface of the other is marked by intersecting lines dividing it into four quadrants. What was the purpose of these mysterious spoons? A few years ago, the British Museum in London had replicas made of the spoons from Crosby Ravensworth. I managed to acquire a pair for my university teaching and so, when contacted by the BBC with an invitation to take part in a television programme about Druids, I brought them with me in the hope of being able to engage in a bit of experimental archaeology. I had wondered whether these spoons might have been used in rituals designed to elicit the will of the gods or predict the future. So the BBC presenter and I made preparations for a ritual reconstruction. It’s possible that some liquid or powdered substance might have been thus prepared by Iron Age priests. Putting the twin spoons together, with the rims of the bowls connecting, a straw made from a hollow bird’s bone could have been used to suck up blood or ground-up bones or even red ochre (an important ritual substance in many ancient and modern traditional societies). 
The substance could be blown through the hole of the upper spoon to land on the marked-out squares of the lower one, the resultant spatter-pattern to be read as an indication of divine will or portent. This would be similar to the Roman practice of augury, wherein priests (called augurs) drew crossing lines on sacred ground and watched the pattern of birdflight over the marked area in order to make divine predictions. The Roman augurs used a staff with a curved end like a hockey stick, called a lituus, to draw the quadrants. And, excitingly, there is archaeological evidence from images on Iron Age coins that these ceremonial staves were known in Britain prior to the Roman conquest. Who could be better candidates for such ritual activity than the Druids? I envisage this young man as a religious leader, even a Druid, his body buried with full military honours While there is comparatively little evidence for what went on in British Iron Age sanctuaries, some Romano-British temples definitely had their roots in the pre-conquest past. Iron Age coins were deposited at a shrine at Wanborough in Surrey, some of which were even laid down as a pathway to the shrine’s entrance. But the priestly headdresses are the most exciting discoveries. Several were unearthed in 1983, suggesting the presence of multiple clergy on duty at the same time, and some of these crowns had adjustable headbands, as if to accommodate a range of head sizes. They must have made a spectacular show when worn, for the bronze headbands were embellished with long chains that supported a wheel-shaped ornament at the top of the wearer’s head that would have glinted in the sunlight when the priests made ceremonial processions. These clergy also carried great bronze sceptres of office, adding to the visual spectacle. The Wanborough headdresses bear a strong resemblance to one found adorning the head of a fragilely built young man who died in the 2nd century BCE and was buried in an Iron Age cemetery at Deal in Kent. He was interred with a sword in its finely ornamented scabbard and a shield, as if he had been a warrior. Yet his headdress was not a military helmet but formed a decorated, crown-like ring closely fitted to his head, with a hoop over the top. By his ankle was a decorated brooch, perhaps used to pin a cloak or shroud. That this person perhaps had special status is not only implied by his headgear and finely wrought military equipment but also by the position of his grave, set apart from the main cemetery area. My inclination is to envisage this young man as some kind of religious leader, even a Druid, whose body was buried with full military honours as befitted an important cleric. We know from Caesar’s account that, while Druids did not fight, they were closely associated with waging war and negotiating peace. Indeed, Caesar’s comment about the Druids’ exemption from military service is interesting. He added that they were also exempted from taxation. So might it be that, because of their high status, Druids were neither expected to down tools and go to war when their rulers commanded, nor to pay the dues exacted from ordinary people? But perhaps there was another reason why the Druids were not summoned to the battlefield: that they were too valuable to their community to be at risk of death or serious injury. 
At the far end of Britain from Deal and Wanborough, in the wild country of northwest Wales, sometime in the 4th century BCE, a middle-aged man’s remains were interred in a stone cist (a slab-built box) at Cerrig-y-Drudion (Conwy). Like the youth from Deal, he played a special part in his community. He was buried with a fantastic sacred headdress fashioned from leather and highly decorated bronze, in the form of a kind of brimmed hat, topped with a long horsehair plume. Dangling down each side was a chain terminating in a bronze amulet shaped like a wheel, very similar to the headgear worn by the Romano-British priests at Wanborough. Its wearer would have made an amazing spectacle as he led religious ceremonies, his sacred hat of polished bronze glinting in the sun as he processed. I suspect that, like its Roman-period successors at Wanborough, the wheel-shaped pendants were designed to represent the sun. What does this elaborate regalia, revealed by archaeology, tell us about Druidic ceremonies? First of all, it must be admitted that we can’t know for certain that it was the Druids who wore these headdresses or who carried the sceptres found in ancient sacred places such as Wanborough and Cerrig-y-Drudion. But this archaeological material strongly indicates the existence of a high-ranking priestly class in Britain and Gaul, whose members displayed their powers by visual symbols of religious office (similar to a bishop’s mitre and crozier in the Christian tradition). So who were the Druids? How influential were they? Did they really exist or were they constructed by the would-be conquerors of Britain and Gaul – rather like Saddam Hussein’s mythical weapons of mass destruction – to whip up fear and hostility to nations that Caesar and his peers wished to incorporate into the Roman Empire? I think that they did exist but that they were subject to bad press perpetrated by classical writers, their ‘barbarous’ habits of sedition and sacrificial murder either invented or exaggerated not only to instil fear in their readers but also to glorify the conquest of the peoples to whom the Druids belonged. Sedatus’ shrine in Chartres survived to be excavated only because the house above the cellar – whether by accident or design – burned down, sealing the crypt under a thick layer of collapsed debris. It was discovered during clearance work to install a carpark in the centre of the city in 2005. Could it be that Sedatus’ secret life as a Druid had been found out and the shrine condemned by local people? Did he pay for his subversive, anti-Roman activities by having his house destroyed? We may never know. But what is certain is that he dared to be subversive enough to summon strange, non-Roman gods even within the context of a fully Romanised town, a town in which he was regarded as an upstanding Roman citizen. His activities – if nothing else – are a sure indication that Druids remained alive, at least as an idea, long after the absorption of their nations into the maw of the Roman Empire. The shadowy figure of the Druids will continue to beckon – and we will continue our search to find out who they really were.
Miranda Aldhouse-Green
https://aeon.co//essays/what-can-archaeology-tell-us-about-the-druids-dark-arts
https://images.aeonmedia…y=75&format=auto
Quantum theory
The concept of the atomic void is one of the most repeated mistakes in popular science. Molecules are packed with stuff
The camera zooms in on the person’s arm to reveal the cells, then a cell nucleus. A DNA strand grows on the screen. The camera focuses on a single atom within the strand, dives into a frenetic cloud of rocketing particles, crosses it, and leaves us in oppressive darkness. An initially imperceptible tiny dot grows smoothly, revealing the atomic nucleus. The narrator lectures that the nucleus of an atom is tens of thousands of times smaller than the atom itself, and poetically concludes that we are made from emptiness. How often have you seen such a scene or read something equivalent to it in popular science? I am sure plenty, if you are fans of this genre like me. However, the narrative is wrong. Atomic nuclei in a molecule are not tiny dots, and there are no empty spaces within the atom. The empty atom picture is likely the most repeated mistake in popular science. It is unclear who created this myth, but it is sure that Carl Sagan, in his classic TV series Cosmos (1980), was crucial in popularising it. After wondering how small the nuclei are compared with the atom, Sagan concluded that [M]ost of the mass of an atom is in its nucleus; the electrons are by comparison just clouds of moving fluff. Atoms are mainly empty space. Matter is composed chiefly of nothing.I still remember how deeply these words spoke to me when I heard them as a kid in the early 1980s. Today, as a professional theoretical chemist, I know that Sagan’s statements failed to recognise some fundamental features of atoms and molecules. Yet his reasoning is still influential. While preparing this essay, I ran a poll on Twitter asking whether people agreed with Sagan’s quote above. Of the 180 voters, 43 per cent answered that they mostly agreed, and 27 per cent fully agreed. Google ‘atoms empty space’, and you will find tens of essays, blog posts and YouTube videos concluding that atoms are 99.9 per cent empty space. To be fair, you will also find a reasonable share of articles debunking the idea. Misconceptions feeding the idea of the empty atom can be dismantled by carefully interpreting quantum theory, which describes the physics of molecules, atoms and subatomic particles. According to quantum theory, the building blocks of matter – like electrons, nuclei and the molecules they form – can be portrayed either as waves or particles. Leave them to evolve by themselves without human interference, and they act like delocalised waves in the shape of continuous clouds. On the other hand, when we attempt to observe these systems, they appear to be localised particles, something like bullets in the classical realm. But accepting the quantum predictions that nuclei and electrons fill space as continuous clouds has a daring conceptual price: it implies that these particles do not vibrate, spin or orbit. They inhabit a motionless microcosmos where time only occasionally plays a role. Most problems surrounding the description of the submolecular world come from frustrated attempts to reconcile conflicting pictures of waves and particles, leaving us with inconsistent chimeras such as particle-like nuclei surrounded by wave-like electrons. This image doesn’t capture quantum theory’s predictions. To compensate, our conceptual reconstruction of matter at the submolecular level should consistently describe how nuclei and electrons behave when not observed – like the proverbial sound of a tree falling in the forest without anyone around. 
Here’s a primer on how to think of the fundamental components of matter: a molecule is a stable collection of nuclei and electrons. If the collection contains a single nucleus, it is called an atom. Electrons are elementary particles with no internal structure and a negative electric charge. On the other hand, each nucleus is a combined system composed of several protons and a roughly equal number of neutrons. Each proton and neutron is 1,836 times more massive than an electron. The proton has a positive charge of the same magnitude as an electron’s negative charge, while neutrons, as their name hints, have no electric charge. Usually, but not necessarily, the total number of protons in a molecule equals the number of electrons, making molecules electrically neutral. The interior of the protons and neutrons is likely the most complex place in the Universe. I like to consider each of them a hot soup of three permanent elementary particles known as quarks boiling along inside, with an uncountable number of virtual quarks popping into existence and disappearing almost immediately. Other elementary particles called gluons hold the soup within a pot of 0.9 femtometres radius. (A femtometre, abbreviated fm, is a convenient scale that measures systems tens of thousands of times smaller than an atom. Corresponding to 10‑15 m, we must juxtapose 1 trillion femtometres to make one millimetre.) Instead of localised bullets in empty space, matter delocalises into continuous quantum clouds Particles with the same electric charge sign repel each other. So additional interactions are required to hold protons close-packed in the nucleus. These interactions arise from quark and antiquark pairs called pions that constantly spill out of each proton and neutron to be absorbed by another such particle nearby. The energy exchanged in this transfer is big enough to compensate for the electric repulsion between protons and, thus, bind together protons and neutrons, storing the immense energy that may be released in nuclear fission processes. However, the extremely short lifetime of the pions limits how far protons and neutrons may be from each other, curbing the nucleus size to a 1 to 10 fm radius. Thus, from a particle perspective, the nucleus is tiny compared with an atom. A nitrogen nucleus, composed of seven protons and seven neutrons, has a radius of about 3 fm. In contrast, nitrogen’s atomic radius is 179,000 fm. At the scale of atoms and molecules, nuclei are no more than heavy, point-like positive charges without any apparent internal structure. So are the electrons: they are just light, point-like negative charges. If atoms and molecules remained a collection of point-like particles, they would be mostly empty space. But at their size scale, they must be described by quantum theory. And this theory predicts that the wave-like picture predominates until a measurement disturbs it. Instead of localised bullets in empty space, matter delocalises into continuous quantum clouds. Matter is fundamentally quantum. Molecules cannot be assembled under the rules of classical physics. The classical electrical interactions between nuclei and electrons are insufficient to build a stable molecule. Due to the electric attraction of charges of opposite signs, the negatively charged electrons would quickly spiral toward the positively charged nuclei and glue to them. The resulting combined particles with no net charge would fly apart, preventing any molecule from forming. Two quantum properties avoid this bleak fate. 
The first property arises from the Heisenberg uncertainty principle, which holds that a quantum particle cannot simultaneously be at a precise position and also have zero speed. This implies that an electron cannot glue to a nucleus because both particles would be in a well-defined place and at rest to each other – defying a central rule of the quantum world. The second quantum property is the Pauli exclusion principle. The fundamental components of matter are split into two types, bosons and fermions. The gluons inside the proton are examples of bosons. We can have as many of them as we want, sharing the same position simultaneously. On the other hand, fermions – such as electrons, quarks, protons and neutrons – obey a much more restrictive rule named the Pauli exclusion principle: no two identical fermions can simultaneously occupy the same space and have the same spin (a quantum property analogous to a classical rotation of a particle about its axis). In the quantum world, the wave function represents more than a mere lack of knowledge With all those effects encoded into the Schrödinger equation, the master equation of quantum theory, it predicts that our point-like nuclei and electrons must, in fact, behave like waves. They delocalise in quantum clouds much bigger than their particle-picture size to satisfy the Heisenberg uncertainty principle, with electrons shaped into different clouds to satisfy the Pauli exclusion principle. The lighter the particles are, the bigger the delocalisation. Thus, a single electron cloud may spread over multiple nuclei, forming a chemical bond and stabilising the molecule. Take an ammonia molecule, NH3, illustrated below. The small blue smudge in the middle is the nitrogen nucleus cloud, while the three green blobs are the proton (hydrogen nuclei) clouds. The 10 electrons of the ammonia molecule delocalise into the fat yellow cloud, tying the party together. Figure 1: Electronic and nuclear quantum clouds in an ammonia molecule. The yellow cloud represents the 10 electrons in this molecule. The small blue cloud is the nitrogen nucleus, while the three green clouds indicate each hydrogen nucleus. Electronic points in front of the nuclei were made transparent so as not to hide the nuclear clouds. Technical details are explained in Toldo et al, 2023. Courtesy the author A particle-like nitrogen nucleus has a 3 fm radius. However, in the ammonia molecule, the nitrogen nucleus grows to a respectable 3,000 fm radius due to delocalisation. The delocalisation of the hydrogen nuclei is even more impressive. They grow from a radius of 0.9 fm when seen as particles to clouds of about 23,000 fm. But the electrons take the cake. Due to their tiny mass, they grow from particles much smaller than a nucleus into a cloud that defines the molecular volume. Nuclei and electrons, however, are not atomic giants. If the nitrogen nucleus is measured (for instance, by throwing fast electrons against it and observing them bounce back), the nuclear cloud would immediately collapse into the initial 3 fm dot. The same is true for each electron. Indeed, quantum theory prescribes a precise relationship between the wave and particle pictures. The clouds of the wave picture are mathematically described by a wave function, essentially an equation that attributes an intensity to every point in space and how these intensities change with time. 
The wave function is analogous to mathematical functions describing conventional sound or water waves, but with the peculiarity that it has an imaginary-number component, which is negative when squared. The square of the wave function modulus (a mathematical operation that always yields positive numbers) gives the probability of finding the particle at each point in space if we attempt to observe it. The denser the cloud, the bigger the odds of observing the particle there. Thus, if we try to measure the point-like nitrogen nucleus, we are sure that it will be somewhere in the region of the delocalised nitrogen nucleus cloud, the blue smudge in the figure. However, interpreting the quantum cloud as probability does not mean it is just a measure of a lack of knowledge about the system. If I left my keys in one of my jacket’s two pockets, but I am unsure which one, I may write a probability function with a 50 per cent value at each pocket and zero value at every other point of my office. This function obviously does not imply that my keys are delocalised over the two pockets. It just states my ignorance, which can be easily fixed by checking the jacket. In the quantum world, the wave function represents more than a mere lack of knowledge. Delocalised systems – like nuclear and electronic clouds – cause phenomena that localised particles cannot explain. The existence of chemical bonds forming molecules is a direct example of the effect of electronic delocalisation. In the case of nuclear delocalisation, one of its main effects is to boost the chances of a hydrogen nucleus (a single proton) flowing from one molecule to another nearby. This kind of enhanced proton transfer has dramatic biological consequences, like increasing the acidity of specific enzymes compared with how acidic they would be if hydrogen nuclei behaved as particles. Although electron clouds are commonly depicted in popular science and chemistry, delocalisation of the nucleus is often interpreted as vibrations and rotations. But these are only classical, albeit helpful, analogies. From a quantum perspective and for conceptual consistency, nuclei should be depicted on the same footing as electrons, as clouds as well. Yet another misconception is that atoms are empty because their mass is in their nucleus. The atomic mass is indeed highly localised. In an ammonia molecule, 82 per cent of the mass is in the blue smudge of the nitrogen nucleus shown in Figure 1 above. If we add the masses of the three green proton clouds, they account for 99.97 per cent of the total. Thus, the big yellow cloud of the electrons carries only 0.03 per cent of the mass. The association between this mass concentration and the idea that atoms are empty stems from a flawed view that mass is the property of matter that fills a space. However, this concept does not hold up to close inspection, not even in our human-scale world. When we pile objects on top of each other, what keeps them separated is not their masses but the electric repulsion between the outmost electrons at their touching molecules. (The electrons cannot collapse under pressure due to the Heisenberg uncertainty and Pauli exclusion principles.) Therefore, the electron’s electric charge ultimately fills the space. Anyone taking Chemistry 101 is likely to be faced with diagrams of electrons orbiting in shells In atoms and molecules, electrons are everywhere! Look how the yellow cloud permeates the entire molecular volume in Figure 1. 
Thus, when we see that atoms and molecules are packed with electrons, the only reasonable conclusion is that they are filled with matter, not the opposite. Despite all this, anyone taking Chemistry 101 is likely to be faced with diagrams of electrons orbiting in shells, like concentric and separated layers with empty space between them. The idea that these diagrams represent physical reality is a third common misconception. Electrons do not literally orbit around the atomic nucleus in the shape of these shells. In atoms and molecules, electrons must have specific energies, each energy associated with a particular cloud shape. Consider, for example, an atom with a single electron. In the lowest possible energy, the ground energy level, this electron delocalises into a spherical cloud, dense at the centre of the atom and gradually fading out. The single-electron wave functions describing these clouds are called orbitals. At higher energy levels, the single electron delocalises into more complex clouds with nested spheres, multiple blobs or even doughnut shapes. Thus, when speaking of atoms and molecules, electrons are not little particles chaotically rocketing around the nuclei until they become a fuzzy cloud, as often depicted. And electrons are not in the orbitals, nor do they populate them. Electrons are the orbitals. They are delocalised clouds. With multiple electrons, which have been terra incognita in popular science, things get much more complicated. This is hardly a surprise since even professional theoretical chemists are uncomfortable describing them, despite their exceptional competence in predicting the properties of multi-electron systems. Like ill-fitting clothes, chemistry vernacular is filled with awkward analogies and descriptions. Chemists may say that an electron occupies or populates an orbital as if orbitals were pre-existing places where electrons are put. Chemists often draw diagrams where orbitals are represented as short horizontal lines and electrons as small vertical arrows on those lines, like objects on shelves. All these verbal and visual metaphors fail to translate what quantum theory tells us about atoms and molecules. When dealing with multi-electron systems (encompassing virtually all molecules), quantum theory no longer distinguishes between each electron; they are all described by a single wave function, a single cloud. Nevertheless, single electron orbitals are still a valid approximation that chemists constantly use to rationalise chemical reactions. The multi-electron wave function resembles a composition of these individual clouds overlapping within the volume defining the molecule. They feel each other; they recombine into new shapes; some bulge and others shrink; the clouds skew, stretch and twist until they comfortably adapt, occupying every available space. It may look like a messy sock drawer. For a fraction of a picosecond, the tempest rages and reshapes the molecular landscape until stillness is restored A molecule is a static object without any internal motion. The quantum clouds of all nuclei and electrons remain absolutely still for a molecule with a well-defined energy. Time is irrelevant. Quantum theory does not predict vibrating nuclei or orbiting and spinning electrons; those dynamic features are classical analogues to intrinsic quantum properties. Angular momentum, for instance, which in classical physics quantifies rotational speed, manifests as blobs in the wave function. 
The more numerous the blobs, the bigger the angular momentum, even though nothing rotates. Time, however, comes into play when a molecule collides with another one, triggering a chemical reaction. Then, a storm strikes. The quantum steadiness bursts when the sections of the electronic cloud pour from one molecule upon another. The clouds mix, reshape, merge, and split. The nuclear clouds rearrange to accommodate themselves within the new electronic configuration, sometimes even migrating between molecules. For a fraction of a picosecond (10-12 seconds or a billionth of a millisecond), the tempest rages and reshapes the molecular landscape until stillness is restored in the newly formed compounds. In the Flammarion engraving (Figure 2 below), a person at the edge of Earth dares to look beyond the firmament dome to uncover the marvellous machinery of clouds controlling the heavens. They could well be looking at a molecule instead. Then, this non-disturbing observer would find that nuclei and electrons are majestic, stable, structured, closed-packed clouds, driving every aspect of matter as we know it. Figure 2: Wood engraving from L’atmosphère: météorologie populaire (1888) by Camille Flammarion. Courtesy Wikipedia My criticism of the empty atom picture isn’t meant to shame people’s previous attempts to describe atoms and molecules to the public. On the contrary, I applaud their effort in this challenging enterprise. Our common language, intuitions and even basic reasoning processes are not adapted to face quantum theory, this alien world of strangeness surrounded by quirky landscapes we mostly cannot make sense of. And there is so much we do not understand. We have yet to learn how to reconcile the dual wave-like and particle-like behaviour of matter. We do not even know whether wave functions have objective reality. Our brains melt, facing the multiple potential interpretations of quantum theory to the point that outstanding scientists seemingly gave up hope that we may reach a scientific consensus. We turn a blind eye to the dirty tricks we carry from the conceptual construction of quantum theory to the actual predictions. The account of the quantum molecular world I presented is on comfortably safe grounds We could conform to the unsatisfying ‘Shut up and calculate!’ attitude that has accompanied the increasingly weird predictions of quantum theory, which enabled the outstanding technological advancements of the past 100 years, from lasers to microprocessors. However, we do not want to make only useful predictions. Our ultimate goal is to tell stories about our Universe. Thus, we calculate but do not shut up. Generations of scientists and science popularisers do their best to translate all this strangeness into friendly metaphors of a theoretical body still full of mystery. We build new mental images of the quantum world one step at a time, even under the risk of tripping up here and there. The account of the quantum molecular world I presented is on comfortably safe grounds. It is based on a quantum theory domain that is highly consensual among specialists. It is the town square of what the Nobel laureate Frank Wilczek called the Core Theory, the physics framework describing fundamental particles, their interactions and Albert Einstein’s general relativity. Physicists are so confident about this core’s stability that they believe it should persist within any new theories of matter developed in the future. 
Breathing this confidence and realising we are not made of empty space may be a soothing thought. This Essay was made possible through the support of a grant to Aeon+Psyche from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Foundation. Funders to Aeon+Psyche are not involved in editorial decision-making.
Mario Barbatti
https://aeon.co//essays/why-the-empty-atom-picture-misunderstands-quantum-theory
https://images.aeonmedia…y=75&format=auto
Thinkers and theories
In the face of climate crisis it might seem myopic but philosophers from Spinoza to Næss argue it is the only way forward
Each of us experiences the climate crisis. We try to adapt to it: buying face masks to brave smoke-filled air outdoors or air purifiers to clean it indoors, turning up the air conditioning to insulate ourselves from excessive heat, preparing to evacuate our homes, if need be, when another hurricane hits the coast. We wonder where we can settle down that won’t go to hell in a handbasket during our lifetime. Some of us wonder whether we should bring children into this world. The climate crisis prompts questions that challenge our very being. We ask ourselves: ‘Who am I in this increasingly unstable world? What is to become of me?’ Such questions can lead to despair, or lead us to look away, but, as we will see, they can also positively challenge the way we think about ourselves. Our current political and economic circumstances lead us to think of ourselves as useful cogs in a machine, and of our identity in terms of certain hoops we need to jump through: go to college to get well-paying jobs, climb the property ladder, and make sure we have adequate savings for retirement. However, the climate crisis can prompt us to rethink these suppositions. What good are retirement savings if the world is burning? We need a much richer concept of self – a fully realised self that is worth preserving. The concept of self-realisation acknowledges our strong drive to preserve ourselves and to persevere in the face of the climate crisis. This self-concept is much richer and more expansive than is commonly recognised. It’s not enough to preserve your narrow, personal self. You are part of a vast, interconnected Universe, where your wellbeing crucially depends on maintaining relationships and connections with others, including nonhuman others. The Norwegian philosopher Arne Næss (1912-2009) coined the term deep ecology. The main idea of deep ecology is that we should address the ecological crisis through a paradigm shift. Rather than tinkering with concrete targets (such as CO2 emissions), we must radically re-envisage how we engage with the world. Næss was a wide-ranging philosopher with varied interests. Among many other things, he was a huge fan of the Sephardic Dutch philosopher Baruch de Spinoza (1632-77), particularly of his Ethics (1677), which Næss re-read frequently, and which plays a key role in his environmental philosophy. Arne Naess reading Spinoza’s Ethics. Courtesy Open Air Philosophy Næss is famous in his home country. He is considered a national treasure, widely admired for his social activism, mountaineering, philosophy textbooks, and even his practical jokes and spectacular feats such as climbing the walls of the tallest building at the Blindern campus of the University of Oslo while being interviewed by the Norwegian Broadcasting Corporation. He was a man of polarities: on the one hand, a member of an eminent Norwegian family, appointed as a full philosophy professor at Oslo aged 27 – in fact, the only philosophy professor in Norway at the time. On the other hand, he published his extensive works with little regard for prestige or fame, including in obscure ecological magazines with small print-runs. This partly explains why Næss still remains relatively unknown in English-language academic philosophy. Especially in later life, he approximated what his friend and fellow environmental philosopher George Sessions called a ‘union of theory and practice’, practising his ecophilosophy by spending extensive time outdoors, hiking and mountaineering until well into his 80s. 
Næss had a spartan vegan diet consisting of unseasoned boiled vegetables. After retiring early, he gave much of his pension away to various projects such as the renovation of a Nepalese school. Næss’s notion of self-realisation is inspired by many philosophical traditions, including Mahayana Buddhism and Gandhi’s philosophy of nonviolent resistance. Another important inspiration was from Spinoza. According to his Ethics, everything in nature has a conatus, a fundamental striving to continue to exist: ‘Each thing, as far as it can by its own power, strives to persevere in its being.’ We see this fundamental tendency not only in humans but also in trees, bees and geese, and even inanimate objects such as tables, mountains and rocks. Things don’t spontaneously disintegrate and they tend to keep their form over time; even something seemingly transient like a fire will try to keep itself going. How can we understand this universal drive? Næss situates the conatus in a bigger picture of nature, namely, one that helps us to persevere and affirm ourselves as expressions of nature. Spinoza argued that there is only one substance, which he called ‘God’ or ‘God or nature’. Nature and God are coextensive, as God encompasses all of reality. So, Spinoza’s God is similar to what we now call ‘the universe’, the totality of all that is. This totality expresses itself in infinitely many modes, such as thought and physical bodies. We, like everything else, are expressions of this one substance. When our surroundings are hurt, we feel hurt too Unlike a traditional theistic God, Spinoza’s God has no overall higher purpose, no grand design. This God is perfectly free and acts in accordance with its own laws, but doesn’t desire anything. Nature simply is, and it is perfect in itself. As Næss put it in 1977: ‘If it had a purpose, it would have to be part of something still greater, eg, a grand design.’ As Næss interprets him, Spinoza’s metaphysics is fundamentally egalitarian. There is no hierarchy, no great chain of being with creatures lower or higher. We are on an ontological par with fish, oceans and beetles. A bear’s interests roaming about in the Norwegian countryside matter just as much as those of the surrounding farming communities. Nature as a whole expresses its power in each individual thing. It is within these expressions of power that we can situate the drive to preserve our own being. To actualise ourselves, we need to understand what our ‘self’ is. Næss thinks that we underestimate ourselves, writing in 1987: ‘We tend to confuse it [the self] with the narrow ego.’ Self-knowledge is partial and incomplete, this lack of knowledge prevents us from acting well. Here again is a clear influence of Spinoza. Spinoza thinks that knowledge and increased (self-) understanding help us to increase our ability to act, and hence our ability to persevere. We can realise this expansive conception of self by considering our relation to place, an idea that Næss draws from Indigenous thought. We often feel attached to places of natural bounty and beauty, to the point that we might feel that, as Næss said: ‘If this place is destroyed something in me is killed.’ Loss of place has by now well-documented effects on mental health, including eco-anxiety, which arises from a sense of loss of places to which people feel a strong emotional connection. When our surroundings are hurt, we feel hurt too. Inuit communities in northern Canada feel homesick for winter. 
This spontaneous feeling of connection to place signals to us that our self does not end at our skin, but that it includes other creatures. Indigenous people, through their activism and landback movements, demonstrate that there is more to the self than these metrics. In a letter in 1988, Næss tells the story of an indigenous Sámi man who was detained for protesting the installation of a dam at a river, which would produce hydroelectricity. In court, the Sámi man said this part of the river was ‘part of himself’. Differently put, if the river were altered, he would feel that the alteration would destroy part of himself. In his view, personal survival entailed the survival of the landscape. For Næss, there is no grand, external purpose to our lives other than the purposes we assign to them. But because our wellbeing depends on factors outside of us, there still is some sense in which we can be worse off or better off, and it is rational to strive to be better off. In this sense, self-realisation is distinct from happiness. A tree that flourishes and does well, with leaves gleaming in the sun and birds nestling on its branches, is realising itself although we don’t know whether it is happy. A similar concept is articulated in the work of the Black American feminist author Audre Lorde (1934-92). For her, survival does not only mean having a roof over your head and food on the table. As Caleb Ward explains in a recent blog of the American Philosophical Association, for Lorde there is a difference between safety and survival. Safety is what we are told we must try to realise: we study, get a mortgage, and a job, to protect ourselves from the vicissitudes of life. Survival on the other hand, which is closer to self-realisation, is a concept that receives virtually no attention in policy or life advice: ‘survival includes living out and preserving [Lorde’s] identity across its many aspects: as Black, as a woman, as a lesbian, as a mother.’ Ward quotes one of Lorde’s talks: I am constantly defining my selves, for I am, as we all are, made up of so many different parts. But when those selves war within me, I am immobilised, and when they move in harmony, or allowance, I am enriched, made strong.Drawing together these insights from Lorde, Næss and Spinoza, we can say that the climate crisis seriously hampers our ability for self-expression. Its degradation of our sense of place and belonging makes it difficult for us to realise ourselves as human beings. Increasingly, we are pushed to settle for safety from immediate threats posed by the degradation of the environment. We cannot even begin to think about how to preserve ourselves in all the diverse aspects of our existence, and therefore cannot really survive. This is in part why the climate crisis is so corrosive to our sense of self: it impedes our ability to know ourselves. Self-realisation implies a unity of acting and knowing: you need to know yourself accurately as part of a vast, interconnected nature, and as more than a narrow ego. Once you know this, you can begin to act. By contrast, lack of knowledge (of ourselves, as conceived of a larger whole) immobilises and disempowers. Unfortunately, the climate crisis is undergirded by massive denialism. This denialism is more than us looking away as individuals. It is bankrolled by wealthy elites and fossil fuel companies in the face of inescapable climate degradation. As Bruno Latour writes in Où atterir? 
(2017), or Down to Earth (2018): [T]he elites have been so thoroughly convinced that there would be no future life for everyone that they have decided to get rid of all the burdens of solidarity as fast as possible – hence deregulation; they have decided that a sort of gilded fortress would have to be built for those (a small percentage) who would be able to make it through – hence the explosion of inequalities; and they have decided that, to conceal the crass selfishness of such a flight out of the shared world, they would have to reject … climate change [italics in original].The super-wealthy have tightened their grip on democracy, creating politically motivated diversion tactics, such as blaming so-called ‘metropolitan elites’ (educated people) for the worsening economic circumstances of working-class people, or pointing the finger at refugees arriving in precarious boats on the shores of wealthy countries. The climate crisis lies behind nostalgic nationalist throwbacks to some imagined past, such as MAGA and Brexit. Seeking prestige, fame and wealth seems like it will help us realise ourselves but, actually, we are in their power Unlike some other recent thinkers such as Jason Stanley, Latour argues that these movements are only superficially like early 20th-century fascism. Rather, they represent a novel political order that is based on climate-change denial, where wealthy elites aim to create gated communities and escape routes by deregulation and disenfranchisement. All the while, they try (in vain) to realise themselves in things that seem ultimately unfulfilling and empty: superyachts, short trips into space or into the deep sea, and buying up entire islands. By influencing and subverting the democratic process, they try to encourage deregulation so as to pull more and more resources toward themselves. Realising (at some level) that this is not sustainable, they retreat into increasingly remote fantasies such as TESCREAL (an ideological bundle of -isms: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism). It’s promoted by philosophers at the University of Oxford such as Nick Bostrom, Hilary Greaves and William MacAskill. They envisage a future where humanity will transform itself into a posthuman state (facilitated by so-called ‘liberal’ eugenics and AI), colonise the accessible Universe, and plunder our ‘cosmic endowment’ of resources to produce astronomical amounts of ‘value’ (for an overview, see Émile Torres’s recent essay for Salon). The happiness of these future posthumans, most of whom would be digital, justifies neglecting current-day problems. ‘For the purposes of evaluating actions,’ Greaves and MacAskill write, ‘we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focusing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.’ The TESCREAL world leaves little scope for the diversity of expression of being human: the joyful, vulnerable and diverse ways of being in, for instance, Traveller and Roma communities, Indigenous societies, and more. Why do the wealthiest people seek to actively deny the climate crisis rather than address it? The philosopher Beth Lord, drawing on Spinoza, argues that they are in the grip of bad emotions. Normally, our emotions help us seek out what is good for us and avoid what is bad. We have three basic affects: joy, sadness and desire. 
Desire is an expression of the conatus: we seek things that bring us joy and avoid things that bring us sadness. Overall, this aids our self-preservation. However, because of the complex ways in which our emotions intermingle, it is possible to be mistaken in them and to desire things that really do not help us to realise ourselves. Seeking prestige, fame and wealth seems like it will help us realise ourselves but, actually, we are gripped by them and are in their power. While these misconceptions are prominent among the wealthiest elites, we see them in everyone. The ethicist Eugene Chislenko argues that we might all be climate crisis deniers in some sense. Not that we literally deny that there is a climate crisis or influence policy to fuel denialism, but that we look away, much like a person in grief who realises someone is dead but has not been able to integrate the loss into her life. As Chislenko writes: ‘We say it is real, but we rarely feel or act like it is. We go to an airline booking site to visit a friend for the weekend; we still think we might see the Great Barrier Reef some day; we have no plans that match the scale of the change.’ And the reason for this is, in part, that we feel like addressing the climate crisis would demand substantial sacrifices on our part, which seem like a drop in the ocean given the scale of the problem. As Næss writes: ‘when people feel they unselfishly give up, even sacrifice, their interest in order to show love for Nature, this is probably in the long run a treacherous basis for conservation.’ How then do we get out of this situation of collective denialism? We have now seen what self-realisation is and how it is tied to knowledge. By increasing our knowledge, we increase our power. For example, knowing that pathogens cause infectious disease led to great advances in preventing or reducing transmission through vaccines. Similarly, to be able to act in the face of the climate crisis, we need knowledge, and for that we can look directly at Spinoza’s philosophy for inspiration. Spinoza lived a very sparse, propertyless existence in rented rooms, and tried to stay away from fame and the limelight. He declined a prestigious professorship at the University of Heidelberg, and did not wish to be named as the sole heir of a friend, even though it would have made him independently wealthy for life, choosing instead to grind lenses to sustain himself. So he did not think that flourishing or, in his terminology, ‘blessedness’ (beatitudo) could be found in material wealth and fame. Instead, his work as a lens-grinder offered more opportunities for self-realisation, because it made him part of the interconnected, budding community of early scientists at the start of the scientific revolution, many of whom used lenses in their telescopes and microscopes. While Spinoza did not see blessedness in this-worldly wealth, he didn’t think it could be found in an afterlife, either. In the 17th century, people commonly believed that you could achieve blessedness after you died if you followed the moral norms and willingly abstained from certain pleasures during your lifetime. However, Spinoza’s radical insight is that you can achieve blessedness in this life. As he writes: Blessedness is not the reward of virtue, but virtue itself; nor do we enjoy it because we restrain our lusts; on the contrary, because we enjoy it, we are able to restrain them.The notion of blessedness is closely linked to Spinoza’s view of self-realisation. Recall that Spinoza sees God as nature. 
Self-realisation requires that we accurately understand ourselves as modes of God and thereby come to love God. But what does such an accurate understanding entail? One recent interpretation is offered by Alex X Douglas in his book on the topic, The Philosophy of Hope (2023). For Spinoza, blessedness is a kind of repose of the soul or mental acquiescence. It arises from the intellectual love of God or nature. For Spinoza, knowledge increases our power, and hence our self-preservation, by knowledge. If our emotions mislead us (as when we seek prestige or fame), we actually decrease our self-preservation because we are pushed to serve external goods. The highest knowledge we can hope to achieve is knowledge of the Universe as a whole. This knowledge is also knowledge of the self, because each of us is an expression (mode) of God. Douglas clarifies that this does not mean that we are parts of God, like jigsaw puzzle pieces. Rather, each of us – an individual damsel fly, a rose, a mountain or a cloud – ‘expresses the whole, in its own particular way’. Once we understand ourselves as ecological selves, this will feel like preserving our expanded self Once you realise that you are an expression of the whole of nature, you come to realise that, although you will die, you are also eternal in a non-trivial sense, since the one substance of which you are an expression will endure. Spinoza also makes the strong claim that, if we are rational, we cannot but love God. It is the rational thing to do, because the love of God spontaneously and naturally arises out of an accurate understanding of ourselves and the world. Once you realise this, you achieve blessedness. As we’ve seen, Spinoza says that flourishing or blessedness is not the reward of virtue, but virtue itself. Once we achieve this, we no longer have to constrain our lusts, because they will dissipate when we achieve this cognitive unity with the rest of nature. All this talk about tempering one’s lusts may feel moralistic and old-fashioned, but Spinoza brings up an important point, namely that engaging in pursuits such as Last Chance Tourism – visiting places on Earth soon to disappear due to the climate crisis – or deep-sea exploration for fun is ultimately self-destructive. Similarly, we might feel that renouncing steak, or giving up flying for frequent conference travel or for pleasure, might be restraining ourselves. But once we understand ourselves as ecological selves, and understand how we are part of fragile, large ecosystems and the planet, this will feel like preserving our expanded self, rather than cutting ourselves short. As Spinoza explains in his Short Treatise on God, Man and his Well-being (c1660), ‘since we find that pursuing sensual pleasures, lusts, and worldly things leads not to our salvation but to our destruction, we therefore prefer to be governed by our intellect.’ Paradoxically, we underestimate how rich our ecological selves really are. We don’t give ourselves enough credit, on how we are able to derive genuine contentment and wellbeing from simple pleasures that do not involve destroying the planet. Rather, we think that we need infrastructure-heavy, expensive things to make us happy, where happiness always lies just around the corner. Self-realisation increases our power. As we saw, we chase things we imagine will bring us joy, such as wealth and prestige, but which decrease our power, because they have us in their thrall. 
Active joy in a Spinozist sense is an intellectual understanding of yourself and your relationship to the world. An example of this is the work of Shamayim Harris. When her two-year-old son, Jakobi Ra, was killed in a hit and run, she resolved to transform her dilapidated, postindustrial Detroit neighbourhood into a vibrant village: ‘I needed to … change grief into glory, pain into power.’ Buying up houses for a few thousand dollars, she transformed the area into the eco-friendly Avalon Village with a library, solar energy, STEM labs, a music studio, farm-to-table greenhouses, and more. Such resilient, walkable and child-friendly communities provide a great scope for self-realisation. In an important Næssian sense, Harris created a home for herself and others. Næss’s ecosophy is all about home, but in a broader environmental and ecological sense, where self-realisation is the ultimate norm. There is a beauty about self-realisation. Through wise and rational conduct, we would be able to find new citizenship, a way of being in nature, a polis that also includes nonhuman animals and plants. This way of being would increase our power of acting, and respond to our drive for self-realisation. There is not one set way for us to be. There is not even an ideal that humans must evolve toward, as in the TESCREAL universe. Nature has no ultimate teleology. We matter as we are right now, not (only or mainly) as future hypotheticals, and we can envisage a world where humans, animals, plants, but also mountains and rivers, have their own multifaceted identities and where they exist in community with each other. Such a world can hold diversity of thought and expression. Our way out of the climate crisis must therefore begin with a reconceptualisation of ourselves as ecological and interconnected selves. Self-realisation as conceived by Næss, Spinoza and Lorde is at heart a joyful, affirmative vision. It does not start from the premise that life is inherently filled with suffering. Once we achieve self-realisation, living well becomes easy due to the unity of blessedness and virtue. However, it is difficult to attain because of our collective climate denialism. It’s not that one day we will wake up and be self-realised. We need to achieve that perspective change and realise we are interconnected selves that can flourish only with the rest of nature. It is perhaps fitting to end with the final lines of Spinoza’s Ethics: If the way I have shown to lead to these things now seems very hard, still, it can be found. And of course, what is found so rarely must be hard. For if salvation were at hand, and could be found without great effort, how could nearly everyone neglect it? But all things excellent are as difficult as they are rare. With thanks to Émile Torres, Bryce Huebner, Johan De Smedt, Oscar Westerblad, Phyllis Gould, David Johnson and Ivan Gayton for comments on an earlier draft.
Helen De Cruz
https://aeon.co//essays/how-to-face-the-climate-crisis-with-spinoza-and-self-knowledge
https://images.aeonmedia…y=75&format=auto
Film and visual culture
Slum photography was at the heart of progressive campaigns against urban poverty. And it was a weapon against poor people
A photograph taken in 1880 by Bedford Lemere, a renowned architectural photography firm of the late 19th and early 20th centuries, shows a dimly lit courtyard, narrow and surrounded on three sides by worn brick buildings. Uneven paving stones lead to a passageway, through which, barely visible, a man and a child watch the photographer from a distance, while a spectral presence in the foreground reveals itself to be another person, their form blurred, suggesting they moved during the long exposure time a camera of the era required. The photograph pictures Jerusalem Court in Clerkenwell, London. Most likely, it was commissioned by the Clerkenwell vestry to earmark Jerusalem Court as an area ‘unfit for human habitation’ – a phrase used by housing inspectors to describe dwellings deemed unhealthy for residents to live in. In the annual report for 1899, a special committee, commissioned by the Clerkenwell vestry to examine the condition of courts and blind alleys in the area, states that, according to ‘medical men’, the block of dwellings on the north side of Jerusalem court is ‘very unhealthy, without through ventilation, and such as should never have been built’. Jerusalem Court, Clerkenwell. Photo ©London Metropolitan Archives The photograph of Jerusalem Court was, in fact, deliberately taken in such a way as to reinforce the fatal verdict of the ‘medical men’. The emphasis on the enclosed sense of space – enhanced by the camera angle – and the lack of light, dilapidated buildings and stained paving mobilise visual tropes of disease and filth that recur in slum photography to signal urban decay. When presented to the Clerkenwell Sanitary Committee, the photograph read as evidence that the courtyard was a slum area and, as such, an appropriate candidate for clearance and demolition. Slums have long invited the camera’s gaze. This photograph of Jerusalem Court is characteristic of the many photographs of slum housing taken across Britain since the late 1860s. Most of these are unremarkable in aesthetic terms, but the sheer volume of ‘slum photographs’ held in the archives of British cities such as London, Manchester, Liverpool and Birmingham reveals the extent to which slum housing – and, by extension, the management of the working-class populations who lived in it – was a subject of major national concern. From Thomas Annan’s 19th-century photographs of rundown Glasgow tenements, to images of East End slum clearances taken in the 1950s, such images have informed our ideas about how the urban environment might be critically linked to the nation’s social, moral and physical health. Slums captured the American imagination, too. In the 1880s, Jacob Riis, once a destitute Danish immigrant himself, photographed the abject conditions endured by New Yorkers in the overcrowded tenements of Lower East Side Manhattan. Riis’s photographs revealed for the first time to a suitably scandalised middle-class public how the ‘underclass’ lived, inaugurating a tradition of photographing the most powerless in society, which was built on by 20th-century photographers like Dorothea Lange. While Lange was photographing the rural poor of America’s dust bowl in the 1930s, British photographers like Bill Brandt, Edith Tudor-Hart and Humphrey Spender were documenting British slum life. Brandt made his name showcasing the polarities of class differences in British society. 
His first book, The English at Home (1936), juxtaposed poverty-stricken working-class neighbourhoods with images of the extravagant homes and lifestyles enjoyed by the British upper classes. Spender began his career photographing daily life in working-class communities for the Mass Observation movement, while Tudor-Hart, a radical socialist and Jewish intellectual, photographed deprived areas across London and Wales, focusing on the working lives of women. Between 1930 and 1950 – when two world wars and extensive bombing compounded the nation’s housing shortage – photographs of slum conditions in British cities taken by Brandt, Tudor-Hart, Spender and others began to feature regularly in the pages of popular photo magazines such as Picture Post, Weekly Illustrated and Lilliput. But did these widely consumed photographs of slum life – teeming in the official records and attracting the eye of documentary photographers – lead to any improvement in the lives of ‘slum dwellers’, forcing local authorities to implement social reform? Or did the visual tropes they promulgated merely reinscribe existing social and political structures that framed working-class people as a powerless group, at best, dependent on the philanthropy of middle-class elites and, at worst, as a filthy ‘underclass’ contaminating Britain’s urban centres? In late 19th-century Britain, inadequate, overcrowded housing stock was becoming an urgent issue for local authorities. Cities were growing at an exponential rate, as agricultural workers flocked to urban areas to look for work: between 1851 and 1881, London’s population rose from 2.3 million to over 3.8 million. Overcrowding was endemic in central areas near the commercial markets that offered employment, and was fuelled by unregulated housing laws that favoured landlords and failed to compensate renters if they were evicted. By the century’s end, tens of thousands of Britain’s labouring classes lived in one- or two-room dwellings without proper sanitary facilities, leaving them exposed to diphtheria, tuberculosis, cholera and typhoid, diseases which at the time had no certain cure. Over the second half of the 19th century and the first decades of the 20th century, a succession of acts were passed that gave local authorities increasing powers to purchase areas of old housing and pull down dwellings deemed unhealthy. As medical officers of health descended on the slums to document the worst offences to human wellbeing, the camera emerged as an important surveillance tool. A modern technology that promised to render reality far more faithfully than any illustration, the camera enabled inspectors to make extensive visual records of the housing stock under their purview. Their ‘realist’ photographs gave concrete shape to what housing ‘unfit for human habitation’ looked like and then helped to justify their destruction, displacing thousands of tenants. The meaning of these photographs was tightly controlled by municipal officials. Consider this photograph of a Southwark slum, taken in 1923, commissioned by the housing section of London County Council’s Architects’ Department. It pictures a dark courtyard enclosed by grey-looking tenement buildings. Strung up between the houses is a washing line, hung with bright white sheets. To the left of the frame, a gaggle of small children watch the camera curiously. 
Although the paving stones are stained and uneven, there’s little to suggest that the houses in the street are actually insanitary. In fact, the clean washing suggests that care and attention has been paid to the cleanliness of the street by its residents. Yet the caption on the back of the photograph classifies the street as an ‘insanitary area’, guiding the viewer to see the narrow courtyard, dingy light and dirty pavement as evidence of urban decay. The caption, likely written by a sanitary inspector or medical officer of health, forecloses an interpretation of the photograph as anything other than evidence of substandard housing conditions. Southwark slum. Photo ©London Metropolitan Archives In most local authority photographs of slums, humans appear as incidental figures, haunting the edges of the frame as if surplus to requirement. Huddled to one side of the image, the children in the photograph of the Southwark slum are framed as bystanders: the slum, and the filth and disease inscribed within it, take centre-stage. In an influential essay on 19th-century slum photographs of Quarry Hill in Leeds, John Tagg argues that the camera, used as an instrument of surveillance, coded slum inhabitants as a pliant, homogeneous mass. He argues that slum photography invited viewers to imagine an alternative environment: ‘a desirable space in which people will be changed … into disease-free, orderly, docile and disciplined subjects’. As I sift through images in the London Metropolitan Archives, I’m struck by the all-encompassing nature of the caption ‘insanitary area’, which seems to stigmatise the families who lived in such housing as filthy too – just another feature of the unhealthy urban landscape that, along with damp walls, broken windows and stained paving stones, needs to be ‘corrected’ in order to restore the city to health. This stigmatising of the working-class subject reflects the prevailing attitude of the middle and upper classes in Victorian Britain, among whom it was widely held that the ‘slum dweller’ made the slum. In 1898, Charles Booth’s poverty map of London categorised slum residents as the ‘lowest class. Vicious, semi-criminal’, fuelling national anxieties that moral and social degeneration were linked to urban decay. The relative invisibility of slum housing, which was hidden in alleyways that had originally been used as stables, or behind the townhouses of the wealthy, allowed the slum to become a potent social imaginary onto which the middle classes could project their fears of disease, racial mixing, moral degeneracy and sexual deviance. By denying ‘slum dwellers’ any form of subjecthood in visual representations of their neighbourhood, local authority photographers reinforced power relations that placed working-class people at the bottom of Britain’s social hierarchy. Where medical officers of health and sanitary inspectors targeted the lens primarily on the houses being torn down, documentary photographers focused instead on the often-abject conditions slum dwellers endured. In both Britain and America, their gaze takes us beyond the street and into the domestic lives of the urban poor, bringing the blurry spectre of the slum dweller into focus for the first time. In How the Other Half Lives (1890), Riis showed immigrant families crowded into squalid rooms, their personal effects and total poverty starkly displayed. 
Having mastered the recently invented flash function, Riis was able to take photographs inside dimly lit buildings that had previously been beyond the camera’s reach. Yet Riis’s interest in photographing the working poor didn’t extend to feeling sympathy for their plight. His gaze was largely voyeuristic, and he exploited the subjecthood of working-class people to underline his own social mobility. Recalling his time on Mulberry Street, a notorious thoroughfare in New York, Riis wrote in distinctly othering and racialised terms: ‘I went poking about among the foul alleys and fouler tenements of the Bend when they [immigrants] slept in their filth … sounding the misery and the depravity of it to their depth.’ As some of the first social documentary photographs of slums, Riis’s images prefigure how representations of working-class people by documentary photographers would ignite ongoing debates about voyeurism, exploitation and the nature of reality in the documentary tradition. In Britain, it wasn’t until the 1930s that documentary photography of slum life became widespread. The Great Depression had left the nation grappling with dismally high unemployment rates, a housing shortage and a fiscal crisis, forcing politicians to recognise that only a robust welfare state could ensure good quality housing for all citizens. As a direct result, the Housing Act (1930) empowered councils to make compulsory purchases of land and legally enforce mass clearance schemes. At the same time, it was becoming clear that documentary photography was a far more persuasive medium than text in campaigns for social reform. In response to the British public’s newly awakened interest in the everyday lives of ordinary people, picture editors of magazines and newspapers began to regularly commission photographs from documentarists revealing substandard living conditions across the country. Public discourse around housing at this time was shot through with allusions to the new science of eugenics, with debates about environmental reform pivoting on whether unhealthy living conditions resulted from the unfitness of the slum dweller or the degraded environment of the slum itself. Proponents of eugenicist thought (found across the political spectrum) advocated for controlling reproduction among the ‘mentally defective’ or ‘degenerate’ populations living in deprived urban areas, effectively winnowing the population to produce a healthier human race and eliminate poverty. Documentarists, by contrast, sought to place the blame for the social problems of deprived urban areas on the appalling housing conditions endured by the inhabitants of such neighbourhoods, rather than their supposedly hereditary predilection for deviant behaviour. Photographers like Tudor-Hart, Spender and Brandt enjoyed the enormous reach of publishing in popular photo magazines, building on the assumed veracity of documentary photography to reveal the ‘realities’ of everyday life in slum streets. They approached areas such as the East End of London as the new terra incognita that would compel and fascinate middle-class audiences by letting them observe the lives of an exotic underclass whose social prospects might be brightened with their help. In 1934, Weekly Illustrated published a photo essay entitled ‘Pull Down the Slums!’ A double-page spread, it deployed several visual strategies to establish a dialectic between the old, slum-ridden ‘Britain’ and the regenerated Britain of the future. 
‘Slum Britain’ is represented through photographs of the jumbled rooftops cluttering the backyards of tenement houses and a close-up of the grating over a half-obscured basement, with a ‘front window’ opening for its residents (both likely taken by Brandt). There are children playing in a narrow street, and a family pictured ‘in a slum home in Wapping’, as the caption has it. These images sit in counterpoint to depictions of a new, modern block of flats in St Pancras and the green vistas of a garden suburb. Unlike the immigrants in Riis’s work, the slum dwellers here are presented as stoic victims of circumstance, an urban ‘residuum’ who must be helped to take advantage of the opportunities offered by improved housing. This framing was characteristic of photographs of working-class people in the 1930s. The photograph of the family in the slum interior – posed to resemble the very picture of domesticity, parents huddled over children and work, in a cluttered but clean home environment – is a vision of pathological social need. As the caption affirms: ‘they continue on the struggle to bring their children up in health and happiness!’ ‘Pull Down the Slums!’ Article in the 17 November 1934 edition of Weekly Illustrated. Photo supplied by the author In some ways, this photo essay represents a continuation of 19th-century philanthropic discourses, in which charity is visited upon the urban poor by the middle classes, whose duty it was to provide relief. The largest photograph in the spread, of children playing in the street, biking and batting balls, offers a deliberate visual connection with Victorian slum photographs that portrayed children as the most vulnerable victims of disease and urban degradation. The accompanying article avows that ‘by far the greater number of rehoused workers and their families respond immediately to new and improved surroundings’, reinforcing the idea that slum dwellers must be transformed into healthier, more economically productive citizens by expert professional elites. The idea that slum dwellers needed to be helped towards respectability often went hand in hand with an understanding of slum areas in Britain as foreign. In 1938, Picture Post published an article under the title ‘Whitechapel’, featuring photographs that presented the area as a ghetto within the city of London. The strapline ‘“Picture Post” turned a cameraman loose in Whitechapel’ evokes visions of an intrepid explorer mapping out a new danger zone in the urban jungle of the ‘inner city’. Indeed, the text frames Whitechapel as another country altogether: ‘You are in High Street, Whitechapel, but you may as well be in High Street, Poland.’ The accompanying photo essay intersperses what by now were typical images of the slums – photographs of children playing in ‘slum streets’; families seated around the dinner table; groups of women talking in the street – with photographs of Whitechapel’s Jewish residents. One of the largest photographs, captioned ‘The Yiddisher Parliament Meets’, pictures a group of Jewish men talking in a square (it may as well be Poland!). In other pictures, a Jewish tailor displays his wares and a bagel seller conducts his business. Original caption: ‘The Yiddisher Parliament Meets.’ All images Picture Post, 1938/Getty Images. ‘What do they talk about?’ ‘Buying Bygles.’ The captions and article seem unsure of how to present this vision of racial mixing. 
One states: ‘to the average East Ender racial questions mean very little’, and yet the description of the Jewish men as a ‘Parliament’ belies deeply rooted antisemitic anxieties about Jewish domination. This is apparent again in the author’s unease about the Yiddish spoken on British streets: ‘What is this language jabbered so frantically on the pavements?’ To the editors of Picture Post, Whitechapel’s Jewish residents are an exotic and unknown quantity that requires glossing for the audience (‘Bygles are ring-shaped rolls of bread’). It is hard not to pick up echoes here of the way that non-white populations are characterised as an unknowable ‘other’ in colonial photography of the period. Even a photograph of a ‘slum street’ that would have been a familiar sight to readers of the 1930s is reconfigured in this Picture Post spread as a foreign and strange scene. ‘This is Their Outlook: Human beings live here, grow up here and die here – in the same world as you and me’ runs the image’s caption. Picture Post might have been committed to social democracy, but at the same time its visual coverage of slum life reified class difference, betraying anxieties about racial mixing and the impact of immigration on the visual landscape of the city. While media coverage of slum life had begun to construct an image of the slum dweller as the helpless, if unknowable, victim of circumstance, photographic records of slum areas made by municipal councils continued to conflate slum dwellers with the disease and squalor engendered by their own living quarters. Much like the local authorities of the 19th century, throughout the 1930s, housing and public health officials visited slum areas to photograph buildings due for demolition. A photograph album made by Hackney’s medical officer of health between 1930 and 1935 for a Ministry of Health inquiry into slum clearances reveals how 19th-century attitudes about the moral laxity of slum dwellers found new expression in the slum photography of the interwar years. One page, entitled ‘Dirty and Filthy Premises’, features two black-and-white photographs of a slum interior at 1A Big Hill, Hackney. The first shows an empty scullery from which: ‘seventeen vanloads of filthy furniture and effects were removed by council for disinfection & destruction 1934.’ The second is of a cluttered living space, the only available light picking out piles of dishes and kitchenware that are nevertheless neatly stacked on the table. Barely discernible in the background, the owner of the premises can be seen standing by the door, blending into the general darkness of the interior as if the camera’s gaze has passed right over her body. There is no record of the resident’s name, or how she felt about her belongings being removed and destroyed. Under the surveillant gaze of the camera of the medical officer of health, the slum dweller is dehumanised – implicated with the decay and squalor of her environment. ‘Dirty and Filthy Premises’ at 1A Big Hill. Images courtesy London Borough of Hackney Archives. Scullery. Living and sleeping room. The minutes of a town hall meeting held by the Metropolitan Borough of Hackney in 1932 offer a poignant record of the opinions and concerns of tenants who would be displaced by slum clearance orders. Their objections reveal that the prospect of being rehoused in blocks of flats, without gardens or the opportunity to keep pets, was dismaying for many tenants. 
A tenant named Mr Relton was among the reluctantly dispersed; he tells the lawyer for Hackney Council: ‘You have sunk me, that is all.’ Mr Kidd, another tenant, describes the newly built flats as ‘barracks’ and ‘nothing more than slums’. ‘Why not houses? Why not let us remain as we are?’, he pleads. Mr Kidd’s words speak to the isolation that many slum dwellers felt at being moved into blocks of flats – ostensibly an opportunity to join the respectable middle classes, but at the cost of being able to talk to neighbours and build close-knit communities. Strikingly, many tenants thought of living in tenement housing as a marker of English identity, and distanced their own condemned homes from slums in racialised terms: ‘Having travelled the world a bit I know what slums are. You have not got them in England. We are Englishmen and want to live as Englishmen.’ While ‘slum dwellers’ were exoticised by the media as a mysterious foreign ‘other’ who lived right under the noses of the middle classes, working-class tenants saw the neighbourhoods they lived in as quintessentially English, a vision grounded in the ability to have a garden, speak to one’s neighbours and live in a house instead of a multistorey block. Following the Second World War, media coverage of poverty-stricken slum conditions gave way to exuberant images of politicians posing at slum clearance sites. By 1956, 35,000 houses in Britain had been condemned or demolished in a major government drive to clear slum areas, and more than 200,000 people had been rehoused. Press photographs taken of MP Duncan Sandys, the Conservative Minister for Housing, show him striding through the rubble of an East End slum-clearance area with a group of aides. Sandys had vowed to knock down Britain’s remaining ‘back-to-back’ streets and build modern flats in their place; in 1956, he embarked on a tour of cities with the worst slum areas, beginning in London. The photographs, taken by a photographer from the Keystone Press Agency, play on the contrasts between the polished group of officials, with their wool coats and shiny shoes, and the rubble-strewn ground around them. Small visual details, such as a child’s doll, abandoned in the debris of a demolished slum home, or the scrap of floral wallpaper on the wall Sandys is poised to knock down – pickaxe held for the high swing, his face scrunched in concentration – are the only reminders of the generations of people who made their lives in the area. Conservative MP Duncan Sandys during slum clearances in 1956. ©London Metropolitan Archives Sandys would have known that slums remained a potent symbol of urban decay in the British cultural consciousness. Posing among them was a photo opportunity, in the modern sense, in that images of him literally pulling down the slums helped align the Conservative government with rallying cries for a regenerated postwar Britain – a vision that played well with their middle- and upper-class voters. Read with the privilege of hindsight, however, these images of the political elite smashing through former working-class neighbourhoods seem more prescient of the accusations of social cleansing that were levelled at local councils from the 1960s onwards. 
What’s more, many of the tenants removed from slums were unable to afford the rents of the new flat blocks, and were forced to leave to seek lower rent in older council houses in the suburbs or, ironically, cheaper substandard housing in the private sector. It was the ‘respectable’ working classes, then, rather than the poorest tenants, who took up residence in the new accommodation. As any photographic theorist would remind us, the meanings inscribed in photographs are never fixed. Photographs are palimpsests, their significance written over by successive generations who interpret them within their own historical context. How then, is a modern viewer – who supposes themselves to be visually literate – meant to approach slum photography? It is tempting to look at these photographs and imagine that we are being granted privileged access to a window into the past. Yet just as medical officers of health and documentary photographers projected their own agendas and prejudices onto the images they produced, so too does the modern viewer. There is a danger of 19th-century photographs of working-class neighbourhoods becoming quaint, Dickensian-style emblems of a bygone era, signified by the figure of the stoic cockney, urchin or street hawker – a vision no more true than the Victorian philanthropist’s imaginary of the slum as a den of vice. Likewise, Brandt’s photographs of stark poverty have, over the years, lost their affective force as images capable of mobilising public concern. As the decades have passed, his photographs of the actual hardships of the working poor – already highly staged and artificial representations of ‘reality’ – have become further romanticised as icons of 20th-century documentary practice. It is impossible, then, to really ‘see’ a historical photograph without interpreting it anachronistically. Modern viewers might look at a Brandt photograph and still be moved, not by the poor housing conditions of interwar Britain so much as by their similarity to images of temporary refuges and hostels in the UK, or photographs of slum landscapes across the global South which regularly feature in the news. At the same time, the modern rage for ancestral history has seen archive photographs acquire new meaning as family photographs. Images of ‘slum dwellers’ become intimate traces of a relative for the researcher who comes to the archive not as a social observer, but as a descendant in search of a lost family history. What remains constant across these interpretations, however, is the power relations of the photographic encounter. Intended for the gaze of groups who hold power, slum photographs ensure the continual disempowerment of the urban poor. As Susan Sontag writes in On Photography (1977), to take a picture is to – however momentarily – betray an interest in maintaining things as they are: ‘to be in complicity with whatever makes a subject interesting, worth photographing – including, when that is the interest, another person’s pain or misfortune’. As documents that operate to record rather than intervene in adversity, slum photographs expose the limits of our compassion as much as they demand it.
Sadie Levy Gale
https://aeon.co//essays/slum-photos-were-weaponised-against-the-people-they-depict
https://images.aeonmedia…y=75&format=auto
Mood and emotion
Suffering the sudden death of a loved person leaves some survivors stuck in grief. Can they win their lives back – and how?
On a January evening in 1992, I was sitting in our kitchen, reading a comic book. My older sister Claudia went out to run an errand at a nearby minimart, just before it closed. Her keys jingled as she said goodbye and pulled the door shut. Her footsteps rushed down the stairs. A minute later, I heard her slam the garage door after she had pulled out her bicycle. Moments later, I heard a loud thud from down the street. I also thought I heard a muffled scream. I was 10. I couldn’t connect the dots. A speeding car had hit Claudia while she was crossing the street. She didn’t die on the spot. Her boyfriend rushed my mom to the hospital. They spent the night at the ICU, while I spent the night in my best friend’s apartment. We set up camp on mattresses on his living room floor. He said: ‘I’m sure it’s just a broken leg.’ I said: ‘You’re right, she’ll be fine.’ We prayed. The next day, my mom stood in the doorframe, sobbing. ‘Claudia is dead,’ she said. I hugged her. I knew I had to be strong for her. What I did not know is that my sister’s death would, in some way, end my mom’s life as well. We cried at the funeral. We cried at the cemetery. We cried at home. After a few months, I stopped crying. My mom never stopped crying. She became obsessed with Claudia’s grave. She would visit it every day, clean the white marble and bring fresh flowers. At the same time, she became frustrated and angry with the world. I spent my entire youth listening to her angry words, but her grief wouldn’t recede the tiniest bit. Somewhere in my teens, I concluded that she must be suffering from depression. But I was wrong. It’s no surprise I had it wrong. Back in the day, even professional psychologists lacked an official diagnosis for persisting grief. That changed in March 2022, when the condition my mom most likely suffered from was added as ‘prolonged grief disorder’ to the latest revision of the psychiatric diagnostic manual, the DSM-5-TR. The diagnosis hinges on two factors. The first is denial: mourners cannot accept the death of the person they lost. This, in turn, causes symptoms like sadness, anger or guilt that last for more than 12 months. That persistence separates normal from prolonged grief. While the former is like a wave that occasionally flares up and then ebbs away, the latter runs like a horizontal line. Prolonged grief traps its sufferers in continuous rumination. This causes the second diagnostic factor: impairment. Some sufferers quit their jobs; others avoid people and places that remind them of their loss. Avoidance is just one of many hallmarks of this disorder. While guilt, self-blame and anger rank high, one of the most prominent symptoms of pathological grief is the loss of meaning in life. People who get stuck in prolonged grief often see no point in living without the person they lost. This matches what my mom said. Twenty years after Claudia’s death, she would still say: ‘When your sister died, a piece of me died with her. I will never be alright again.’ She would stay alive as long as she had to. The only thing she looked forward to was being reunited with Claudia. If she got cancer, she’d refuse treatment. I offered to help her find a therapist, but she scoffed. Caring for me became her duty, but not a pleasure. I didn’t understand why. But now, a study by researchers in Japan and Italy shows how prolonged grief lowers empathy. 
The scientists showed bereaved people photos of their deceased loved one and other photos of a living relative or stranger. They then measured those individuals’ empathic response in an fMRI scanner. The result: the more patients were grieving, the less empathic they were with living relatives. Their empathy for the dead person was enhanced. My mom wasn’t an isolated case. Between 7 and 10 per cent of all bereaved people develop pathological grief, a large study from 2021 shows. When the number of deaths went up during COVID-19, the number of pathological grievers, naturally, went up, too. The uptick was not just caused by the numerical increase of pandemic-related deaths. Rather, it is how people die that determines whether the survivors will develop the disorder. ‘If you unexpectedly lose a close person, you are much more likely to develop prolonged grief,’ says the clinical psychiatrist Katherine Shear, who heads the Center for Prolonged Grief at Columbia University in New York. Sudden loss includes death by murder, suicide, accident or an unexpected illness, like COVID-19. A loss may become more traumatic when you don’t have a chance to say goodbye. What ensues is often a feeling of powerlessness. Personal factors, like female gender, low education and an existing mental illness, further increase the risk of developing prolonged grief later on, as the psychiatrist Andreas Maercker writes in his clinical handbook Trauma Sequelae (2022). Being part of a tight-knit family and curating a circle of close friends, on the other hand, can mitigate risk. But even a tight-knit family couldn’t protect Amy Cuzzola-Kern when she got stuck in prolonged grief. On a December morning in 2016, this social worker from Erie in Pennsylvania received a call from her father. ‘Something is wrong with your brother Chris. I can’t wake him up,’ he said. ‘I think he might be dead.’ She jumped into the car and rushed to her parents’ house, but it was too late. Chris, a sporty man who had just turned 50 that same year, had died in his sleep. The autopsy would later show that he had suffered a coronary artery blockage. Nobody had known, as Chris had been asymptomatic. His death struck the family out of the blue. ‘I knew he was dead, but I didn’t want to accept it,’ Amy told me during a Zoom call. ‘I was in major denial.’ Chris was her only sibling and her best friend. She had seen him every day since they were babies. With him gone, she ruminated about her role as a sister. Over the next two and a half years, Amy’s social life turned upside down. Before Chris’s death, she had been an energetic, sociable go-getter. After his death, she isolated herself and stopped spending time with her family and friends. When her husband asked her to accompany him to charity events, she refused. ‘I became a hermit,’ Amy says. ‘I rarely left the house. And when I did, I went for long walks. Alone.’ She chose routes that didn’t remind her of Chris. Out of fear she could meet mutual friends who would talk about Chris, she avoided going to bars or the movies. ‘In the grocery store, I’d pop in my earbuds, so I didn’t have to talk to anyone,’ she says. Both her denial and avoidance pattern are telltale signs of prolonged grief disorder. But she did not notice those signs at first. What Amy did notice was her ongoing and overwhelming sadness, and she decided that two and a half years of suffering were enough. 
She told her general practitioner she was in a low mood all the time and asked for help. The doctor suspected depression and prescribed her an antidepressant. ‘Six months into the treatment, you could have lit my hair on fire, I wouldn’t have cared. But it didn’t help at all with my grief,’ she explains. Because almost every psychotherapist ‘does grief’, Amy also added talk therapy to the mix. However, even combined with antidepressants, the treatment failed. That’s because, while depression is a mood disorder, prolonged grief is a stress disorder, akin to post-traumatic stress disorder (PTSD), and rooted in a different region of the brain. Studies back up this experience. One, from 2016, found that the commonly prescribed antidepressant citalopram performed no better than a placebo. Some of the trial’s patients had simultaneously undergone a therapy specially tailored to prolonged grief disorder. The medication did, however, have a small positive effect on those patients who also suffered from classic depression. It improved only the depressive symptoms, not the grief-specific ones. This might indeed explain why my mom felt a little better for a while when she, too, finally tried an antidepressant; she found that some of the depression lifted, but her prolonged grief-specific symptoms didn’t subside at all. So, after trying several antidepressants, she gave up. We often fought about her not wanting to try psychotherapy. I had moved out and started a job in a nearby city, but I kept offering to visit a psychotherapist with her. She turned me down with a barrage of rhetorical questions. ‘What good is a psychotherapist going to do? Does he know what it feels like to lose a daughter?’ There was no arguing with her. In her mind, this was her cross to bear, and she would die with it. There would never be a cure that could heal a grieving mother, she said. That latter part was right. It is true that bereaved mothers never recover from the loss of a child. A 2022 study shows that the experience changes their brain activity. Researchers at the University of California, Irvine showed grieving mothers photographs of their deceased children while observing their brains’ blood flow in an fMRI scanner. They found a particularly strong connection between two brain regions: the first, the amygdala, decides what’s important and manages anxiety. The second, the paraventricular thalamic nucleus, influences how we respond to long-term emotional memories. A feedback loop between the regions is especially triggering, sometimes even provoking the fight-or-flight response. The authors also found that severe grieving permanently lowered the mothers’ capability to learn, use language and manage their thoughts. Neuroscientists have known for decades that grief affects the brain. Back in 2003, the clinical psychologist Mary-Frances O’Connor of the University of Arizona detected these changes using an fMRI scanner. She and her team interviewed eight participants to gather details of how the death had occurred. They also asked them for a photo of the deceased. The scientists then provoked grief reactions by showing the participants combinations of photos of the deceased and words like ‘funeral’ that reminded them of the death. The resulting fMRI scans show that a whole network of brain areas lit up, including the regions for processing, visualising and retrieving emotion-laden memories. A comparison with neutral words and photos showed no such activity. Grief also alters the brain’s size. 
O’Connor pointed me to a Chinese study that found grieving people had a smaller left hippocampus. The seahorse-shaped brain area plays a crucial role in forming memories. Strikingly, one of its functions is verbal memory – which was impaired in the grieving mothers in the Californian study. Grief shrinks the brain area indirectly, as a result of too much stress. ‘What causes the hippocampus to shrink is an excess of the stress hormone cortisol,’ O’Connor says. Another, equally prominent hormone sheds more light on how prolonged grief alters our brains. The ‘social hormone’ oxytocin is naturally released during breastfeeding and sexual intercourse, binding to receptors in the brain. ‘We believe that patients with prolonged grief disorder have less oxytocin receptors in their reward system,’ O’Connor says. In a recent study, she administered the hormone to patients via a nasal spray. It increased the activity in the same brain areas that are more active in grievers, so oxytocin is clearly involved in some way. But the hormone did not decrease the symptoms of grieving. So, a nasal spray curing prolonged grief won’t hit the market anytime soon. However, another drug shows more promise. Researchers at Weill Cornell Medical College in New York City, led by the medical sociologist Holly Prigerson, argue that prolonged grief disorder can be classified as an addiction, since it involves the same reward pathways as alcohol or opioid addiction. That is why they are administering naltrexone pills to the participants of an ongoing clinical trial. Naltrexone is an opioid antagonist used to treat addiction. Prigerson and her team say that naltrexone takes effect faster than most antidepressants and is cheaper than other drugs used to treat addiction, such as methadone. This would also help curb grief-related suicides. While the trial is not yet complete, Prigerson said in an email that psychiatrists report benefit from the approach. ‘Naltrexone in those with PGD [prolonged grief disorder] led to them venturing outside and having opportunities for new social connections,’ she wrote. The principle behind the treatment is based on reducing the continuous focus on the deceased person. Loosening this bond would open treated patients to new relationships. O’Connor, meanwhile, prefers specialised psychotherapy to alleviate prolonged grief symptoms; naltrexone, she complains, is so broad it may hinder new attachments to others even as it eases the old. And, increasingly, those therapies can be found. Despite three failed psychotherapies, Amy Cuzzola-Kern refused to give up. She kept scouring the web and newspapers to understand what she was suffering from, and to find help. By chance, she stumbled on an article that mentioned the Center for Prolonged Grief. Shear, its director, had developed an evidence-based therapy called prolonged grief disorder therapy (PGDT). I too found the diagnosis and treatment by mere chance. I hadn’t been actively looking for an explanation of my mom’s condition any more. I had given up. Then, one June morning last year, an email announcing Shear’s new book chapter on PGDT popped up in my mailbox. As a journalist, I filed it under ‘story ideas’ and returned to it later that afternoon. A few pages in, my eyes widened: my mom ticked all the boxes. Here was my chance to finally understand what she had gone through. I needed to talk with scientists, scour studies and explore this disorder through my writing. 
It felt like this was the one last thing I could do for her. So I called Shear to learn how she had come up with the treatment and what it entailed. Shear’s 16-week programme, tapping the decades of prior research, acknowledges that depression and grief have different neural pathways, and that antidepressants could not help. Starting from the premise that grief is essentially a stress response, she reached out to Edna Foa, her friend and colleague. Foa had shown that prolonged exposure to a trigger was an effective therapy for another stress disorder, PTSD, and had trained Shear’s team of therapists in her technique. Today, Shear’s grief therapy involves exposure through techniques like imagery, imagining conversations with the deceased loved one, and identifying meaningful life goals. Shear has also proposed diagnostic criteria for prolonged grief disorder; after much debate within the psychiatric community, her work and others’ led to official inclusion of the diagnosis in the DSM-5-TR in 2022. The new diagnosis helps patients find proper treatment, and also enables them to get grief therapy reimbursed by their insurance plans. One of those patients was Amy Cuzzola-Kern. She couldn’t just sign up for the therapy, of course. Before taking her on as a patient, Amy’s therapist (a colleague of Shear’s) had to ensure that she did indeed suffer from PGD. Amy completed a questionnaire based on the ‘Inventory of Complicated Grief’, which itself is based on years of research into the field, with questions like ‘Do you avoid reminders that the person who died is really gone?’ Amy scored high enough on measures of pathological grief to qualify for a pre-treatment interview and, soon after, her weekly sessions began. Amy badly wanted to undergo therapy to stop her suffering, but she also felt a lot of resistance. ‘I was worried that I would forget Chris’s memory,’ she says. This ambivalence is typical of prolonged grief, which is why some researchers call it ‘bittersweet’. Recalling memories may be painful, but for many patients they are the only relationship they have left with their lost loved ones. As much as they want their symptoms gone, they cling on to their memories. Patients fear that therapy will erase those memories. That is the reason, Shear tells me, why many PGD sufferers put off seeking help for years. But the opposite is true, she says: ‘PGDT does not extinguish patients’ memory of the deceased person, but rather, helps them accept the loss and restore their own wellbeing.’ Shear’s therapy uses a conglomerate of therapeutic techniques to tackle the different symptoms from which patients suffer. In the first session, every patient is asked to start writing a grief-monitoring diary. It is a five-minute at-home exercise, in which Amy had to record when her grief peaked and what had triggered it. Then, during the therapy sessions, they talked about her diary entries, the triggers, and about the ways she had coped with her grief. Shear calls some of these coping mechanisms ‘derailers’. They are often initially helpful to reduce the stress level, but eventually all of them turn against the patients. Common derailers include isolation, guilt and self-blame. In fact, Amy was derailed by more than isolating at home and turning down all social events. She also tended to ponder ‘What if?’ scenarios. The night before Chris’s death, the family had gone through a tough day. 
Amy and her brother had accompanied their mom to Cleveland where she underwent chemotherapy. When they came back, Chris was very tired and went to bed early at his parents’ house, where he was staying to help with their mom’s frequent injections. Although Amy had talked to Chris before saying goodnight, counterfactual thoughts crept into her mind afterwards. Thoughts like: ‘He looked really tired, maybe there was something else going on with him. I should have asked him.’ All this reminded me of a thought that has been lingering at the back of my mind for the past 30 years. My mom and I never talked about it. Ever. It was taboo. On that January evening, my sister Claudia had not gone to the grocery store to buy something for herself. In fact, she had not wanted to leave the house at all. Instead, my mom had sent her out because she had run out of her favourite hairspray. Nobody could have foreseen what would happen, but nonetheless: hadn’t my mom been consumed by guilt? ‘Patients often experience thoughts like this. We call it caregiver’s self-blame,’ Shear says. ‘When a child dies, regardless how, they experience a feeling of failure as a caregiver. It is irrational, but people go through all sorts of hypothetical scenarios, like If I had sent her out 10 minutes later, nothing would have happened.’ The more counterfactual thoughts of this kind somebody experiences, the higher the severity of their prolonged grief, a study from 2021 shows. The therapy does not have one specific technique to extinguish guilt. But Amy’s counterfactual thinking dissipated once she started accepting Chris’s death. One key exercise to achieve that was revisiting his death in her mind, an exposure technique that PTSD sufferers use to come to terms with trauma. Amy’s therapist asked her to tell him the story of how Chris died. She had to record it on her phone and listen back to it at home, every day. In the ensuing sessions, she had to re-tell the story and re-record it. With every new version, the narrative became slightly more detailed. This helped anchor the events in reality. Over time, it also dispelled her fear of losing her memories of Chris. At the beginning, the task was tough, so her therapist asked her to reward herself with little pleasures. It could be anything, he said, a piece of chocolate or a glass of wine. Amy found none. But then she went on a family holiday in Florida. As she was floating in the pool, listening to podcasts, she learned again to carve out quality time for herself. Slowly, she began to practise more self-care. Her therapist also started working with her on her avoidance behaviour. Instead of popping in earbuds, Amy started having little conversations with the people she met at the supermarket. The closer her therapy came to its end, the more Amy came around. Before the therapy, Amy had lost her sense of her future and her life goals. Her therapist had to force her to write down an aspirational goal. ‘That was really, really hard,’ she recalls. But eventually, she found one. When the therapy ended, Amy pursued a masters in social work at Shear’s Center for Prolonged Grief. She finished it last year and now wants to help other bereaved people. ‘We’re not meant to grieve alone,’ she says. ‘When somebody close suddenly dies, you need a whole posse of people to help you. For those who have nobody, that is the therapist.’ I wish that I could end this piece writing that my mom came around, too. But she didn’t. Shortly before the pandemic hit, she received a diagnosis. 
Lung cancer, terminal. Had it not been hopeless, she would have made good on her promise and refused treatment. Despite pandemic rules, a nurse snuck me into the ICU so I could say goodbye. My mom was unconscious. I cried because she was about to go. But I also cried knowing that she had spent more than half of her life grieving. I now know it didn’t have to be that way. Her condition has a name. It can be treated. It could have been treated. If only.
Martin W Angler
https://aeon.co//essays/how-to-ease-the-seemingly-endless-pain-of-prolonged-grief
https://images.aeonmedia…y=75&format=auto
Global history
Is this the word we need to describe unprecedented convergences between ecological, political and economic strife?
Sometimes words explode. It is a safe bet that, before 2022, you had never even heard the term ‘polycrisis’. Now, there is a very good chance you have run into it; and, if you are engaged in environmental, economic or security issues, you most likely have – you might even have become frustrated with it. First virtually nobody was using polycrisis talk, and suddenly everyone seems to be. But, as often happens, people seem to mean quite different things with the word. So, what does ‘polycrisis’ mean? The term reverberated at the United Nations Climate Change Conference (COP27) in Sharm El-Sheikh in November 2022, and in Davos the following January, as The New York Times noted. In the Financial Times, Jonathan Derbyshire chose it for his 2022 ‘Year in a Word’ piece, defining ‘polycrisis’ as a collective term for interlocking and simultaneous crises. Then 2023 opened with the World Economic Forum adopting this buzzword for its Global Risks Report, highlighting how ‘[c]oncurrent shocks, deeply interconnected risks and eroding resilience are giving rise to the risk of polycrises’. The report explores the interrelation of geopolitical, environmental and sociopolitical risks. The World Economic Forum used the term to advertise the report, with headlines like ‘We’re on the Brink of a “Polycrisis” – How Worried Should We Be?’ or ‘Welcome to the Age of the Polycrisis’. A key champion of the word has been the British historian Adam Tooze, professor at Columbia University in New York, whose efforts to proselytise its fruitfulness and to define it are undoubtedly an important reason for this explosion of usage. Indeed, in October 2022, Tooze launched his monthly Financial Times column with the heading ‘Welcome to the World of Polycrisis’: A problem becomes a crisis when it challenges our ability to cope and thus threatens our identity. In the polycrisis the shocks are disparate, but they interact so that the whole is even more overwhelming than the sum of the parts. At times one feels as if one is losing one’s sense of reality.As Tooze has repeatedly noted, ‘polycrisis’ did not drop out of the blue. In the discussion paper ‘What Is a Global Polycrisis?’ (2022) from the Cascade Institute, Scott Janzwood and Thomas Homer-Dixon locate its origins in the book Homeland Earth: A Manifesto for the New Millennium (1999) by Edgar Morin and Anne Brigitte Kern. They trace its history of use in studies of sustainable transition and in studies of the European Union. A key moment often pointed out is the 2018 speech by the former president of the European Commission, Jean-Claude Juncker, but he had already made an attempt at a definition in an earlier speech in 2016, when he explained how various security threats not only coincide with but also feed each other, ‘creating a sense of doubt and uncertainty in the minds of our people’. The term has emerged from relative obscurity to wild popularity, but it is crucial to note that the meanings of the word diverge. There is ‘a’ polycrisis and ‘the’ polycrisis. That is, on the one hand, people are trying to find a clear working definition of a polycrisis, to define its key characteristics, in order to forge a research concept with which to examine a diverse range of concatenations of events. With this meaning of the word in mind, there can be multiple polycrises: for example, the combination of the financial and the food-system crises around 2008-09, or the convergence of the COVID-19 pandemic, a hunger crisis and the Russian invasion of Ukraine in more recent years. 
On the other hand, ‘polycrisis’ is understood not as a common noun but as a proper noun, denoting this particular stage of world history. There is only one polycrisis: this historical epoch, when humanity has created a world interconnected and interdependent to an unprecedented degree, combining vast material wealth with radical inequality and teetering on the threshold of ecological collapse. It is a truly novel phase of history, different from anything in the track record of our species. This diversity of meanings has prompted some people to question the usefulness of the word. Some have doubted whether it is even a proper concept or more a fancy way of saying that a lot of things are going on. In his article for Vox online earlier this year, the US political journalist Daniel Drezner notes how, to some, it sounds like ‘a confusing and redundant neologism’ and quotes the historian Niall Ferguson’s quip at Davos that it is ‘just history happening’. The background assumption seems to be that, in order for a word to be worthy, its meaning must be clear and distinct. But this misses a crucial thing about how words work. They are always wanton, impossible to rein in. In order to elucidate this, let us first take a brief detour through wider conceptual history. For heuristic purposes, let us here distinguish words and concepts. The word ‘nature’ is a classic example. As Raymond Williams noted in his essay ‘Ideas of Nature’ (1980), there are more or less distinct concepts of nature in Western intellectual traditions beyond the terminological unity: nature as the inner essence of a being, nature as the ordered cosmos, nature as the nonhuman world – and, later, the terrestrial world including humans. However, there are common connotations between these meanings (eg, totality, originality, unity, essentiality). These common threads facilitate moving from one conceptual realm to another, to engage in struggles of definition. Thus, for example, ideas about morality and sexuality – about the ‘inner nature’ of humans – have been legitimised by reference to the ‘outer’ nonhuman nature or the claimed normative order of the cosmos. In modern times, ‘freedom’ is a prime example of a word with a diverse and contested conceptual landscape. So, sometimes struggles of definition are waged around old words. At other times, new words become foci of struggle. In the UN report Our Common Future (1987), the word ‘sustainability’ was conceptualised as referring to the ecological underpinnings of human development: future human welfare can be safeguarded only by taking care of the ecological systems that are the foundation of welfare. But the meanings diverged quickly, and sustainability was reconceptualised around three dimensions or ‘pillars’ – economic, social and environmental – sometimes with the addition of a fourth one: say, cultural. The key question is whether the aim should be a balance of these dimensions, or whether the correct image should resemble a wedding cake, with ecological sustainability forming the foundation. An added complication is that, originally, the notion of pillars of sustainability emerged because civil society movements from developing countries wanted to highlight the necessity of securing welfare for the millions of people who lack it. But, recently, the notion of balance between the dimensions has been used to criticise environmental policies. 
Economic growth, the argument goes, is just as important as avoiding disastrous climate change or widespread ecosystem degradation, for example. Words are wanton, as I said. However much care is taken to define them, no conceptualisation is immune to cooptation for radically different uses. Only by understanding these shifts and tensions can we make sense of the discussions around us and take part in them in a meaningful way. We have to understand how the meanings move, and to which uses words are put. With ‘polycrisis’, we are again in a situation of conceptual struggle. A conceptual divergence into ‘a’ polycrisis and ‘the’ polycrisis has taken place, and the word is being defined for different kinds of uses. There is no shared social sphere within which a common conceptual framing can be agreed upon – this would be possible among a limited scientific community, but not as a word explodes into the public realm. A good recent example of this is how the word ‘Anthropocene’, a relatively obscure stratigraphical term, burst on to the scene and gained a menagerie of meanings as it was being employed by environmental researchers, artists, humanists, journalists etc. The stratigraphers continue their conceptually restricted discussion and are frustrated at the unruly discussion elsewhere. But with ‘polycrisis’, even locally shared conceptualisations seem to be lacking at this stage, which inevitably results in a lot of talking past each other. Meaningful discussion – and meaningful disagreements – about the word is hard without such shared meanings. I have approached this with the heuristic triad word-concept-conception. The word may be common to all, but the meanings given to it, the concepts, form more or less distinct realms. Within a shared conceptual realm, ferocious debates about the substance of the matter, the conceptions, can still take place – as any researcher knows. But people are still basically talking about the same thing. If the conceptual realms in use differ, meaningful discussion and disagreement become harder or even impossible. People are using different tools for different uses but debating as if they are using the same ones. If we understand the polycrisis as a description of our specific era with its existential problems, we can agree and disagree about the details. We can debate about the possibility of ‘decoupling’ economic growth from environmental impact, about the tension between ‘green growth’ and a transformative change of societies. We can argue about the potential to predict and to plan for future changes. Overall, the discussion is about this stage in history, about us and those coming after us, about the situation we have inherited. It is a whole other game to see a polycrisis as a technical concept with which to analyse and understand more specific concatenations of events, some of them with significant environmental dimensions, others with none. This is what Ferguson was no doubt talking about in his ‘just history happening’ snub – whether we need a new concept to understand complex situations in history. Clearly, many parts of the world have seen simultaneous, intertwining and mutually reinforcing crises before – the First World War has been invoked in these debates. As Drezner notes in his Vox article, the current combination of war, pandemic and political upheaval is scarcely unique: ‘The First World War devastated Europe. 
The war also helped to facilitate the spread of the influenza pandemic through troop movements and information censorship. The costs of both the war and the pandemic badly weakened the postwar order, leading to spikes in hyperinflation, illiberal ideologies, and democracies that turned inward. All of that transpired during the start of the Roaring ’20s; the world turned much darker a decade later.’ Or why stop there? Why not look at 1848, ‘the turning point that did not turn’, to paraphrase the historian A J P Taylor, a unique flashpoint of history if ever there was one? Industrialisation and the plight of the artisanal classes, the potato blight, the decades of Metternichian repression, the rise of nationalism and a host of other ideas, and a legion of other causes formed a complex Europe-wide web of tension that exploded into years of revolutions, revolts and repression. If one uses polycrisis as a generic research concept, applicable from national to regional and global scales and across wildly different timescales, it is indeed a valid question to ask whether it grasps anything new, or whether it adds anything substantial to the toolkit. This is a practical question, unanswerable without experience. But to claim that ‘the’ polycrisis – our current turning point in history, and one that will turn inevitably, one way or another – is ‘just history happening’ would be missing the point. The CO2 levels in the atmosphere are higher than ever in human history, global human-made mass exceeds all living biomass, wild mammals make up only around 4 per cent of mammal biomass, and a new mass extinction is already in the works. We are in effect living on a different planet than all the previous human generations – and people around the world are increasingly inhabiting very different planets from each other. Some of them may become uninhabitable pretty soon. The definition in the Cascade Institute’s working paper puts it like this: ‘a cascading, runaway failure of Earth’s natural and social systems that irreversibly and catastrophically degrades humanity’s prospects.’ This, clearly, is a proper noun. There is nothing ‘just’ about it, in any sense of the word. Any discussion that fails to take this divergence of meanings into account will be confused. We can debate fruitfully about conceptions only if we share the conceptual realm. Otherwise, we are blinded by the surface similarity of words. (Often this is intentional: jumping from one conceptual realm to the next is an old rhetorical trick – like making claims about human nature based on sweeping observations of the natural world at large.) As I stated, the viability of ‘a’ polycrisis as a research concept is an empirical question. But how about the value of ‘the’ polycrisis as the description of our historical situation? What key features does the polycrisis, our specific historical situation, have? Some features are often noted, and they speak to the origins of the term in Morin and Kern, and in complexity studies: the increasing complexity, interrelatedness and lack of ‘buffering’ between eco-social systems have resulted in increasing vulnerability to cascades of changes, domino effects across ecological, social, political and economic systems. Thus, several system-level crises (eg, food systems, energy systems, international politics, logistics) can meet and amplify each other. In essence, there is nothing absolutely novel here, as sudden regime shifts are part and parcel of how complex systems behave. 
However, the global context has altered, and ‘a global production ecosystem’ has emerged, linking the localities of the world much more tightly than ever before. The results of this could be clearly seen with the COVID-19 pandemic. Another way to approach this is to note how the complexifying world situation challenges our inherited ways of causal thinking. This idea has been forcefully put forward by Tooze, and Christopher Hobson stated it thus: ‘This points us to another way of thinking about polycrisis, viewing it as an accumulation of unresolved crises, where stark outcomes have been fudged, clear resolutions denied. Moreover, temporary fixes might have provisionally forestalled reckoning, but increased the magnitude of the remaining challenges.’ Tooze notes how the polycrisis questions the old notions of ‘fundamental’ political struggles, of underlying tensions beneath the plethora of surface problems. He has, in turn, been criticised for evading the root causes of contemporary problems. A post by the economist Baki Güney Işıkara in the Developing Economics blog argues that the notion of polycrisis carries a notable reluctance to acknowledge capitalism as the underlying force behind overlapping emergencies: ‘The analysis and implications thereof are confined to the level of appearances, and, therefore, become incapable of grasping the web of contradictions that give rise to them.’ However, if we examine the diversity of environmental problems, such causal reduction has always been suspect. Environmental problems are legion, radically divergent in their geographical and temporal scale, very different in their ecological dynamics. Some of them, like climate change, are truly global and can be traced, unproblematically, to the history of industrialisation and the global spread of capitalism. But many forms of pollution are down to specific chemical compounds, some of which are novel (eg, ozone depletion), others common ones that have caused serious problems in precapitalist societies (eg, lead emissions). Biodiversity degradation is linked to a host of phenomena, some to do with overconsumption and wealth, others with poverty and insecurity. Trying to force this diversity into one mould does violence to it. The fact that human societies across history have met very different environmental problems is crucial for understanding how to (and how not to) live with nonhuman nature. This does not diminish the force of the argument that the world after the fossil-fuel revolution and the spread of global capitalism, ‘the Great Acceleration’, has pushed ecological systems around the globe to the brink and created highly unequal patterns of exchange. But there is no root cause to the totality of environmental problems, nor can there be a unified solution. We have always required a diverse toolkit to understand this, and the era of the polycrisis demands that we take this idea even more seriously. One can quite easily envision a development where a successful energy transformation would take place but, leaning heavily on the use of biomass and carbon capture, it would exacerbate biodiversity decline. Failure to combat poverty, hunger and inequality in many poorer parts of the globe would, in the end, make sustainable food systems or preservation of ecosystems impossible, but powering the required change with fossil fuels would be detrimental. 
Living in the polycrisis does not invalidate criticism of our current dominant sociopolitical systems. But the fact that we are living among myriad trajectories of change, already in the midst of ruin, requires humility from dreams of social transformation. There is no promised land where, upon arriving, we can lay aside all our troubles. Even in the best of possible worlds, we will have to learn to live with some of our troubles, our historical inheritance. We also face radically different temporal scales, more than ever in the history of human civilisations. Mitigating and adapting to climate change is not only a question of halting global heating. That is of course necessary, and the point where the heating stops determines whether societies and ecosystems around the world reach dangerous or disastrous zones. That is a question on a decadal scale. But if high CO2 concentrations in the atmosphere endure for a long time, huge changes are in store, on a millennial scale. This is why a long period of negative emissions will be necessary in order to reach a safer climatic zone: action on a centennial scale. More immediately, the polycrisis requires us to take seriously the coexistence of quick (pandemic, war) and slow (climate change, biodiversity decline) crises. We have to inhabit these temporalities at the same time. It is hard to exist and to act on multiple timescales at the same time, to truly recognise the multiplicity of our troubles, but this is the key challenge of the polycrisis. Focusing on the acute crises and waiting for the normal times to return, in order to handle the creeping crises, is simply a recipe for disaster. To quote Laurie Leybourn of the think-tank IPPR speaking to The Guardian in February: ‘We absolutely can drive towards a more sustainable, more equitable world. But our ability to navigate through the shocks while staying focused on steering out the storm is key.’ The ‘the’ in polycrisis is crucial, because our historical condition is truly unique. Our ability to learn from history is negligible, because such a concatenation of social, political, economic and ecological factors has never taken place. Extraction and consumption of natural resources are still on the rise, while the ecological systems that facilitate this are eroding. The old linear models of development are questioned on a deep material level. But changing the current trajectories of extraction and consumption risks degrading societal cohesion or creating new conflicts – as, for example, when old fossil-fuel powerhouses lose their dominant position or when cutting consumption exacerbates inequalities, both within and between nations. The ‘poly’ in polycrisis is crucial, too. We are not facing merely a host of disparate problems but a radical challenge to the very network of systems that maintain the ‘metabolism’ of existing societies. Holding climate change away from the truly disastrous realm requires transformation of key systems: energy, traffic and transport, housing and heating, food, industry. This requires social coordination on an unprecedented scale – lest these changes obstruct each other by competing over the same limited resources. But these transformations must be made in a way that does not undermine other vital ecological functions. The ecological transition can succeed on the climate front and fail fatally on others. 
But this is not merely a technical issue: avoiding fatal conflicts and spiralling inequality requires new political coalitions. In the long run, navigating through the polycrisis benefits all, but in the short run the benefits and costs will spread unevenly. There is no avoiding politics. In the era of the polycrisis, environmental politics has to be deeply interwoven with the questions of justice, equality, security and power.
Ville Lähde
https://aeon.co//essays/the-case-for-polycrisis-as-a-keyword-of-our-interconnected-times
https://images.aeonmedia…y=75&format=auto
History of ideas
Liberal philosophy has clipped the wings of the egalitarian ideal. We should return to the bolder ideals of Iris Murdoch
The ideal of equality has broad appeal – most people in liberal democratic societies claim to endorse the principle that we should be equal before the law and that people should be treated with equal respect. Even defenders of the free market often put their case in terms of equal property rights. Yet we live in a world where the gap between the haves and have-nots is growing, racism and discrimination are on the rise, and even basic democratic and legal rights are in jeopardy. Historically, the socialist tradition, particularly the thought of Karl Marx, has been a source of inspiration for the call for greater equality. Socialist ideas sparked revolution in many parts of the world and, in many countries where capitalism persisted, socialist ideas about people’s equal entitlement to have their needs met prompted radical reform, in the form of social welfare guarantees such as socialised medicine, unemployment benefits and pensions. Today, however, the Left faces a sobering philosophical landscape where socialism is eclipsed by liberalism in egalitarian thought. This is not just because of socialism’s declining fortunes in the world with the collapse of the Soviet Union and the rise of Right-wing populist parties and movements. Since the publication of John Rawls’s canonical book A Theory of Justice (1971), the liberal tradition has set the terms for debate about distributive justice among Anglo-American political philosophers. And Marx’s socialist slogan ‘From each according to his ability, to each according to his needs!’ – though stirring – remains just a slogan. In 1958, the philosopher and novelist Iris Murdoch complained about the paucity of progressive thought in the Britain of her day in the essay ‘A House of Theory’, which appeared in a collection of radical political writings. Murdoch, along with her fellow contributors, bemoaned the decline of socialist conviction, the loss of energy and vision on the Left. Murdoch’s essay is largely forgotten among political philosophers, but her lament is no less relevant today. Recent trends in liberal egalitarian political philosophy, for all their influence in the West, have fallen short of adequately defending the ideal of equality, showing a lack of imagination in just the way noted by Murdoch. Furthermore, the influence of liberalism, in political philosophy and in Western capitalist societies, is such that thinkers on the Left too often assume the necessity of these parameters. In this essay I take up Murdoch’s call for a radical vision of egalitarianism as furthering equality of human flourishing, or wellbeing. I contend this will enable us to better understand and further what is at stake in the ideal of equality. It is often remarked that, for most of the 20th century, political theory languished in the shadow of scientistic views that had dominated philosophy as a whole. Logical positivism insisted on the strict delineation of conceptual from empirical enquiry, matters of fact from matters of value, themes that lingered in the succeeding school of ordinary language philosophy. Murdoch blamed the dominance of a sterile logical analysis for contributing to the lack of vision and creativity in progressive thought. Whereas moral philosophy, as Murdoch put it, ‘survived by the skin of its teeth’, turning itself into a meta-discipline concerned with understanding concepts, political philosophy ‘almost perished’. 
The intrinsically controversial nature of prescriptions about justice, equality and liberty was replaced with an analysis of how words were used; gone was the ancient Greeks’ idea of political philosophy as reasoned enquiry into how we ought to live in common. The diminished role of political philosophy as a normative exercise doubtless reflected not just an empiricist outlook in philosophy but also a smug acceptance of the empirically given, that is, the ascription of an automatic legitimacy to the liberal institutions of capitalist democracies in the postwar period. This dogmatism about politics in the liberal West that came with the Cold War helps explain why political philosophy was in a state of stagnation. As Murdoch put it, with the achievement of the welfare state, people were no longer motivated to ‘call up moral visions’, to ‘lift their eyes to the hills’. Complacency was jolted in the US, however, in the 1960s, when the postwar liberal consensus came under attack from both a new Left galvanised by the student movement and a new Right that emerged from conservative critiques of welfare economics. Political philosophy was reborn with Rawls’s canonical work, providing that ‘systematic political theorising’, the absence of which Murdoch had noted more than a decade before. I propose, however, that perhaps it was a Pyrrhic victory. Rawls’s theory gave priority to classical fundamental liberties yet married them with redistributive principles to mitigate disadvantage. In so doing, Rawls provided a robust justification for the liberal welfare states of democratic capitalist societies throughout the West, and his profound influence, in both theory and practice, continues to this day. Rawls’s defence of equality in the context of capitalist market societies, however, was achieved at the cost of scaling back both the scope and ideals of egalitarian thought, falling far short of the imaginative enterprise sought by Murdoch. For Murdoch, the desire for human equality was a crucial source of the ‘moral energy’ of past socialist movements. Since Rawls, however, although their views are dubbed ‘egalitarian’, few liberal philosophers call for socioeconomic equality per se. Rawls’s Theory of Justice proposes that, if we reasoned about justice in a thought experiment where we do not know our talents, race or class, we would opt for a ‘difference principle’ where inequalities are permitted, but only if they benefit the worst-off. If it turns out, for example, that human motivations are such that incentives are required for the talented to be productive, then so be it: we should pay the talented more than the rest of us. Better, says Rawls, to have a larger aggregate of resources with which to improve the situation of the worst-off – a bigger though unequally divided pie – than equality per se. In The Morality of Freedom (1986), the legal philosopher Joseph Raz complained that equality is an empty concept, susceptible to justifying the absurdity of ‘levelling down’. Levelling down means that, given a commitment to strict equality, widespread poverty is preferred over unequal wealth, even if everyone, including the less advantaged, would be better off under an unequal distribution. Raz’s objection captured a growing sense that, unless one could adduce evidence that, in fact, an unequal distribution would impose hardship on the worst-off, equality per se should not be our goal. 
Accordingly, many liberals went even further than Rawls, taking the view that the question of relative shares is irrelevant; what matters is only whether people have a sufficiency. Even those who agree that it matters how much one person has, compared with another, tend to dispense with equality per se. In Ronald Dworkin’s theory, social insurance is needed to remedy the inegalitarian effects of luck and opportunity, but there will be inequality; it would be absurd, Dworkin says in Sovereign Virtue (2000), to exact the very high taxes required to insure against the likelihood of not being a movie star. In general, liberal egalitarians either start with equality but depart from it, or assume inequality and aspire to mitigate it. All in all, the principle of equal distribution of resources is largely abandoned. Rawls scaled back egalitarianism in another respect, taking the view that principles of justice should concern the distribution of the means to one’s pursuits, without taking an interest in the pursuits themselves. Both liberal political theory and the liberal state should be ‘neutral’ about people’s plans of life, Rawls argued in Political Liberalism (1993). This aversion to invoking considerations about wellbeing is widely shared among liberal philosophers. Indeed, in A Matter of Principle (1985), Dworkin had ventured an egalitarian rationale for political neutrality: the ‘television-watching, beer-drinking’ citizen’s plan of life should not count any less than the plans of life of the intellectual or the aesthete. Market mechanisms, not normally thought to promote equality, were commended by egalitarians for their indifference to people’s choices, their even-handedness towards all plans of life. Liberal egalitarians’ resistance to incorporating ideas of living well in their philosophical doctrines doubtless reflected an understandable unease with the repressive and intolerant strains in US political culture, a fear that bigoted moral views would be enforced on others. The idea that one’s life goes best if led from ‘the inside’, according to one’s own plans and goals, had much to do with the influences on the outside in the US – where ideas of the good life are derived from convictions about the right to bear arms, not being ‘un-American’, the denial of women’s bodily autonomy, Bible-thumping, and repressive notions of salvation. But the result has been an inability to confront the fundamental question of the inequalities in how people live, what Murdoch eloquently called ‘the power to imagine what we know’. Rawls’s ideas about individual choice generated further constraints on egalitarian thought. He stressed that people were responsible for their plans of life and the extent to which fulfilment might come with one plan rather than another. At the same time, Rawls coined the evocative expression the ‘natural lottery’ to capture how people’s talents and temperaments are unequally but arbitrarily allocated and thus not the basis for unequal reward. His focus on the worst-off was a no-strings-attached view, but liberal egalitarians since Rawls have argued that a theory of justice should focus on his theme of arbitrariness and responsibility more precisely. 
Dworkin proposed that inequalities that result from the unpredictable vagaries of ‘brute luck’ are properly the object of egalitarian policy, but inequalities that are due to ‘option luck’, that is, due to people’s choices, are not owed compensation. A community may offer humanitarian assistance to the hapless squanderers of resources, but it is not required to by justice. At issue, Dworkin says, is a principle of responsibility; we cannot expect to be protected from failure if we risk or make poor use of our resources. ‘Luck egalitarianism’, as this concession to inequalities came to be called, proceeded to dominate egalitarian thought, even drawing the approval, surprisingly, of thinkers from the Left, like G A Cohen, who in 1989 congratulated Dworkin for ‘the considerable service’ he had performed for egalitarianism in incorporating ideas ‘of choice and responsibility’ from ‘the arsenal of the anti-egalitarian right’. Such a rationale is a far cry from the ‘general revolt against convention’ that Murdoch celebrated about socialist movements in the past. True, Left-wing luck egalitarians stress the great extent of brute bad luck, to argue that their position actually dictates significant redistribution of wealth. Cohen went so far as to propose that the arena of luck included affinity for expensive pursuits. For example, playing a sport that involves costly equipment (think of the misfortune of being a Canadian hockey parent) should not be considered a matter of mere choice. Moreover, in the case of global justice, factors like climate are examples of bad brute luck meriting significant redistribution (see, for example, its application to global justice by the philosopher Kok-Chor Tan). Yet for all these generous interpretations on the part of Left-wing interpreters, it remained that luck egalitarianism, in one form or another, tended to dominate egalitarian thought. Even the doctrine’s critics, such as Elizabeth Anderson, agreed that a satisfactory egalitarian theory must dispense with ideas of receiving goods without an obligation to produce them. Context is again relevant. The luck-egalitarian creed was born in the ‘New Right’ era of the 1980s and ’90s, which saw the election, with working-class support, of the Right-wing leaders Margaret Thatcher in the UK and Ronald Reagan in the US, and the ensuing move to the Right of social-democratic parties like the Labour Party in Britain, a move that in the years since has arguably been compounded rather than reversed. Perhaps it seemed a good strategy to exclude from one’s doctrine of equality, even if only conceptually, what conservatives called ‘welfare bums’, people whose poverty was deemed to be in some sense their fault. What Murdoch sardonically called the ‘dangerous region of “mushy” thinking’, in all its senses, could then be averted. Though it made an important contribution to understanding the injustice of economic disadvantage, I believe liberal philosophy clipped the wings of the egalitarian ideal. First, let’s return to our original question of equality in the allocation of goods. It is interesting to note that Marx, too, rejected strict equality in distribution, but for reasons unlike those of Rawls. Given the diversity of human needs, Marx argued that equal shares would simply aggravate inequality. Indeed, giving the brawny rugby player the same diet as his diminutive grandmother affords them equal shares, but leaves them unequally fed. 
The idea that true equality involves differential treatment, however, was no mere ‘sufficientarian’ or ‘prioritarian’ position, in the words of Pablo Gilabert and Richard Arneson, respectively. Equal wellbeing is, after all, the ultimate goal in the socialist picture. In calling for a nuanced approach to the problem of economic disparities, Marx was a thoroughgoing egalitarian where liberal egalitarians are not. Rawls permits departures from equality on pragmatic, not moral, grounds, justified only insofar as they benefit the worst-off. But why should there be any such departures? At a time when the gap between rich and poor is especially yawning, the incentive argument seems particularly hollow. Indeed, Left-wing egalitarians like Cohen contend that, when the ‘high-flyers’ insist that their productivity necessitates they keep a greater share to themselves, they are betraying bad faith with the egalitarian project, indeed engaging in a kind of blackmail; this hardly counts as a principle of justice. Cohen invokes the feminist slogan ‘the personal is political’ – how justice requires that individuals be committed to its ideals in their everyday lives. As he evocatively put it: ‘If you’re an egalitarian, how come you’re so rich?’ Incentive arguments take for granted that narrowly selfish, monetary interests will always be the principal motivation for human beings. This is a long way from the idea of ‘true community life’, as Murdoch put it, that was inherent in the ideal of equality for the Left. Liberal preoccupations with restricting eligibility for egalitarian redress conjure up odious Victorian notions of the ‘deserving’ and ‘undeserving’ poor. But what about the undeserving rich, those who are lucky enough to inherit wealth? Moral justification cannot be mustered for cases of good brute luck enriching people, let alone cases where the rich owe their wealth to exploitative behaviour. And think of the many ways in which governments subsidise capitalist companies; there are ‘corporate welfare bums’ as identified by David Lewis, a Canadian social-democratic politician active in the 1970s. It seems odd indeed that egalitarian attention is focused on the so-called irresponsible behaviour of the poor. In any case, people may be making imprudent decisions ‘according to their ability’, as Marx’s communist principle put it. If we reflect on the choices we have made in our lives, be they wise or foolish, and the conditions under which we made them, it is difficult to isolate the effects of luck. Parents, friends and mentors, education, locale and situation all influence our choices; indeed, capacity to choose is arguably itself unchosen. (This was Rawls’s thinking in his notion of the natural lottery: ‘Even the willingness to make an effort, to try, and so to be deserving in the ordinary sense is itself dependent on happy family and social circumstances’, while Keith Dowding invokes ‘relative parenting pushiness’ as an example that tracks cultures, but also diversity within cultures, suggesting it is ‘hard to disentangle luck and responsibility’.) Questions of responsibility are hard to determine in light of what we know about the impact of social class, the culture of the chronically poor, the challenges of initiative and enterprise under straitened circumstances. 
The school of luck egalitarianism has been responsive to criticism, elaborating further the idea of insurance provisions to protect people from the consequences of their bad choices. Yet such remedies raise the question of why choice is being adduced in the first place, even as a conceptual possibility. The most cash-strapped, inadequate systems of socialised medicine in capitalist democracies do not attach conditions of prudence for the distribution of healthcare; no one denies, triages or even moralises cancer treatment to the lifelong smoker, or knee surgery to the extreme skier. Instead of thinking about accounting for disadvantages in a ledger of responsibility or haplessness, the socialist R H Tawney wrote in the 1930s that the equal society should not get mired in the ‘details of the counting-house’. Murdoch’s remark that ‘we have not mended our society since its mutilation by 19th-century industrialism’ is an apt reflection on the mean-spirited attitude threatened by luck egalitarianism, which seems a far cry from socialist ideals of solidarity, trust and generosity. The socialist tradition appreciates the extent of the disadvantages of social class, how they limit the options available to a person, who must sacrifice long-term opportunities for the sake of immediate material needs, whose family background renders some opportunities unimaginable, who is discouraged by teachers who underestimate their abilities. It seems a tragic irony that contemporary liberal theories of egalitarianism seem to embrace not Marx’s communist ideal of distribution according to need but Stalin’s repurposed maxim that ‘he who does not work, neither shall he eat.’ The equal community is best guided instead by a ‘social ethos’, where people undertake a personal commitment to the egalitarian project, contributing as best they can, and displaying a generous attitude to their fellows. In Why Not Socialism? (2009), Cohen illustrates this with the delightful example of a camping trip where there is ‘collective property and planned mutual giving’. The campers’ flourishing requires that all share the fruits of their prudence, capacity and luck, be it knowhow about the best fishing hole, or equipment when it comes to the hapless unprepared camper. (Of course, this ideal is at odds with Cohen’s luck egalitarian view, which Cohen admitted he could not resolve.) I will not tackle further the problems of luck egalitarianism here; but my own human flourishing approach in Equality Renewed (2017) eschews the distinction between option and brute luck as a criterion for the remedy of disadvantage. This brings us to the question of the place of human wellbeing in our understanding of equality. We saw how Rawls and his colleagues reduced the scope of equality in that respect, too. Indeed, liberals like Rawls and Dworkin could be said to promote a kind of juridicalisation of political thought, where legalistic notions of neutrality and personal responsibility barred questions of community and substantive value. In the face of the persisting, profound inequalities of capitalist societies today, it is worth considering a more ambitious vision that better captures what’s at stake when people have unequal income, and that can engage ordinary people in their political commitments. 
Liberal squeamishness about what’s been dubbed a ‘perfectionist’ creed is understandable given, as already noted, intolerant conceptions of the good rampant in US society, but also given historic exemplars like Plato and Friedrich Nietzsche, who contended that the nature of the good was grasped only by the few. In its pursuit of equality, the socialist tradition, in contrast, takes a broad view of the reach and scope of human wellbeing, focusing on goods and resources, respect and equal participation, but also non-alienated, fulfilling and valuable pursuits for all. The socialist critique of capitalism takes seriously not just impoverishment, but the impoverished lives that people are forced to live under conditions of inequality. Marx’s case against capitalism centred on how material deprivation resulted in the affront to the ‘nobility of man’, how alienating work makes people ‘stupid and one-sided’ and allows for the ‘overturning of individualities’. The ideal of communism involved creative labour and community as well as the satisfaction of basic needs. Murdoch remarked decades ago that, for ordinary people, ‘work has become less unpleasant without becoming more significant’. That is still true today. The Victorian aesthete William Morris came to his socialist convictions in his analysis of how capitalism condemned people to lives of ugliness – in their work, but also in their homes, relationships and communities. Socialism, in contrast, would aspire to all living well in surroundings conducive to human flourishing. It was not just Marxists who put human flourishing at the centre of their political aims. Living well undergirds the original rationale for the welfare state, as evident in the British economist William Beveridge’s concern that problems of ‘idleness’ and ‘squalor’ be addressed by postwar social policy. His Labour colleague, the political theorist Harold Laski, also argued that the remedy of inequality would involve a ‘high level of general culture’ since civilisation is ‘a common enterprise which is the concern of all’, enabling people to lead lives of dignity. In contrast, even the most rousing radical critiques of liberal egalitarianism today – such as Anne Phillips’s call for ‘equality without conditions’, or Nicholas Vrousalis’s freedom-focused critique of capitalism – seem to accept the neutralist creed insofar as they eschew the perfectionist dimensions of the socialist tradition. Perhaps they are daunted by what Murdoch decried as the ‘demand for precision’ in postwar philosophy that inhibits bold approaches, compounded by an unease with the possibility of seeming unrealistic or extravagant in one’s claims. It is worth noting, however, that, for all their egalitarian shortfalls, liberal societies in this case too are not cowed by the strictures of liberal theory. Liberal philosophers’ squeamishness about making judgments about human wellbeing is in contrast with how liberal societies take a broad view of their responsibilities to their citizens, prepared to encourage some ways of life and discourage others. Public libraries, parks, galleries and museums all receive state support as a means of enabling people to engage in valuable pursuits, and these policies attract little controversy. 
Human wellbeing pertains to where and how we live, whether we have autonomy and self-realisation in our work, our means of transit, the support we have for raising children, the shape of our leisure time, our physical and mental health. People can fail to flourish because they lack meaningful relationships; alienation and loneliness are rife in our times, where many lack genuine friendship and love, for all the opportunities for Facebook ‘friends’, dating apps and options to sext or hook up. Murdoch anticipated these issues back in 1958: ‘A stream of half-baked amusements hinders thought and the enjoyment of art and even of conversation.’ As we become aware of issues of depression and anxiety, and that many of us, particularly those on the autism spectrum, struggle with social connections, it seems especially urgent that we attend to inequality in wellbeing in all its dimensions. True, some of us cannot help but be sad sacks, with a tendency to cheerlessness. Nonetheless, we should be attentive to how inequality in wellbeing is the result of factors over which society can exert considerable influence. Moreover, an ‘egalitarian flourishing’ view can tackle the problem of responsibility, not to disqualify people from amelioration of their disadvantage, but to assist the more vulnerable so that they can enjoy the flourishing that comes with contributions to society. Once we steer away from the allocation of goods and focus instead on the constituents of flourishing, recent ‘postwork’ literature should prompt us to give up productivist preoccupations and embrace a broad view of worthwhile contribution, be it that of the surgeon, the surfer or the social worker, the intellectually challenged person or the brilliant artist. Inspired by Marx’s ideals of all-round development and socialist community, a flourishing approach suggests a radical answer to a range of egalitarian issues, a robust alternative to the liberal view of political community as playing no role in people’s choices about how to live. In sum, with his canonical treatise on justice, Rawls resuscitated political philosophy, but perhaps he did so by keeping it semiconscious. For in banning controversy about value from the domain of public debate, egalitarian political philosophy became curiously apolitical, burdened by an outlook of modest ambitions and imagination that seems ill-prepared to address the deep inequalities of capitalist societies or to motivate the activism necessary to facilitate social change. Cowed by the gains of the political Right and conservatives’ hostility to utopianism, still permeated by the legacy of scientistic philosophy, liberal egalitarianism is rather thin gruel for the aspiration to a society where people may flourish as equals. Murdoch notes her society’s ‘loss of religion as a consolation and guide’. Indeed, in the search for meaning in the largely secular societies of the West, liberal theory has left a void, leaving working people to find moral purpose in fundamentalist religion, Right-wing populism, xenophobic and authoritarian creeds. Egalitarians can and should do better. Since the COVID-19 pandemic, we have become especially aware of the essential work done in underpaid, precarious jobs, and how the most vulnerable, particularly the elderly, are inadequately cared for. Many on the Left urge that we deliver on our heightened sense of obligations to community as captured in the recent slogan (however disingenuous) that ‘we’re all in this together’. 
Moreover, the exceptional circumstances of the past three years have prompted valuable soul-searching about the constituents of wellbeing, how capitalist society should be reshaped so we no longer spend so many hours in cars and planes, so everyone can take walks in nature, spend more time with family, engage in acts of compassion and kindness. We live in times in which the citizens of prosperous societies are, like never before, unequal in wealth, wellbeing and the opportunity to make meaningful contributions to their communities. Liberal egalitarianism has stepped up to shed light on these problems, but we should look again at the socialist tradition, to ‘go back and explore the other road’ as Murdoch enjoins us, to remind us of the challenging implications of the ideal of equality.
Christine Sypnowich
https://aeon.co//essays/we-should-return-to-the-bold-egalitarianism-of-iris-murdoch
https://images.aeonmedia…y=75&format=auto
Palaeontology
When we think of changes in Earth’s history as changes of dynasty we miss out on understanding how life really works
The worst day in the entire history of life on Earth happened in the northern springtime. On that day, the last of the Age of Dinosaurs, a roughly seven-mile-wide chunk of rock that had been hurtling towards our orbit for millions of years slammed into Earth’s midsection and immediately brought the Cretaceous to a close. The consequences were so dire that survival in the hours immediately following impact was merely a matter of luck. Of course, life wasn’t totally extinguished on that day 66 million years ago. Some species survived, emerging into a transformed world. We can’t help but trace our own history back to this specific moment, the dawn of the Age of Mammals, when fuzzy beasts could finally flourish. Dominant dinosaurs suffered a stroke of cosmic bad fortune, and our mammalian kin inherited a planet where they would no longer have to fear death in reptilian jaws. The image is of a great ecological cast change, different players continuing the evolutionary story. It’s a very appealing distortion. The entire reason we so often fixate on the supposed dominance of the dinosaurs is that we now see ourselves in that position. For more than a century, the decimation of the ‘ruling reptiles’ has been taken as a cautionary tale of what could happen to us – not all that different from pundits who cry that the United States is set to topple like the Roman Empire. The narrative becomes one of power, influence and longevity, one group of organisms above all others deciding the course of entire ecosystems over the span of millions of years. Mass extinctions become examples of winners and losers. Where Tyrannosaurus rex and family faltered, the story goes, our mammalian relatives were victorious. The story says more about the way we interpret the past than what actually transpired; by creating a fairy tale out of a distant prehistoric event, we’ve inflated our sense of importance in the world. We are not bound to that view. We created the image of tyrannical dinosaurs ruling Earth. We can just as easily deconstruct it. The process requires returning to the mass extinctions of the past, not looking for the victorious and the vanquished but considering how entire communities of living things change in the face of unimaginable disaster. Prior to the disaster at the end of the Cretaceous, all of Earth’s mass extinctions were protracted, grinding transformations defined by species disappearing faster than new ones could evolve. Some of those extinctions, caused by active and erupting volcanoes, and the climate-altering gases they belched out, took more than a million years to unfold. The last day of the Cretaceous was different, a cataclysm of unfathomable speed and violence. The flying pterosaurs, the coil-shelled ammonites and all dinosaurs but birds vanished, not to mention deep losses to surviving groups of creatures such as lizards and mammals. No species could have prepared for what was to come, even if they had somehow been granted foreknowledge of the calamity. Within minutes of impact, the ground under the feet of dinosaurs in ancient Montana began to shake from seismic shockwaves emanating from the strike. Only a few hours later, tiny chunks of rock, glass and other debris thrown into the atmosphere by the strike began to rain down all over the planet. 
No single particle had much of an effect, but together the millions of tons of byproduct produced by the impact created so much friction that the result was a horrific heat pulse – hot enough to cause dry forest tinder to burst into flame. Earth’s temperature was set to broil, turning the last non-avian dinosaurs into what could be described as Cretaceous chickens in the oven. The mammals, birds, lizards and other meek creatures that would survive that first day did so by finding shelter underground, little more than a few inches of soil or water shielding them from the global conflagration. And that was merely the first day, followed by three years of a biting impact winter that would almost bring photosynthesis to a halt and test the limits of biological resilience. About 75 per cent of all known species went extinct in a geological snap of the fingers. We often leave the story on the morning after the devastation ebbed, with some bewhiskered mammal sticking its twitching nose out of a dinosaur skull’s eye socket to take in a new dawn free of reptilian horrors. It’s a satisfying story. More than 66 million years removed from the last of those fantastic saurians, we often fill in the gaps with our expectations and assumptions. The asteroid was our ancestors’ deliverance, and through the aeons they pulled themselves up by their primordial fuzzy bootstraps to claim their own dominance over Earth. Dinosaur menageries in museums become bittersweet tributes to creatures that prowl our imaginations yet would have easily erased the possibility of our existence if they had been allowed to keep their clawhold on the planet. Dinosaur decimation was a prerequisite for us to be here and interrogate their bones. Considering one form of life dominant over others is worse than nonsensical. It’s a form of biological chauvinism that says everything about what we project on to nature and nothing about reality. As Stephen Jay Gould pointed out in his book Full House (1996), we may as well concede that Earth has always been in the Age of Bacteria, with animal, plant and fungal life being rare anomalies by comparison. According to this mythos, first dinosaurs ruled, then mammals did, with each evolutionary dynasty powered by some special character to outrace and outcompete other lifeforms, becoming incredibly diverse and widespread. In that iconography, there is no better example of dinosaurian prowess than T rex. Since the time the dinosaur was named in 1905, it’s been taken as the culmination of more than 150 million years of carnivorous innovations. Its very name, ‘king of the tyrant lizards’, feeds us this perception. Nevertheless, we can look at the gleaming, serrated smile of T rex and challenge conventional wisdom: what would this dinosaur have been without its prey species? And what would a Cretaceous magnolia tree be without a bumbling beetle covering itself in pollen at the heart of the tree’s flower? T rex existed as part of an ecosystem, both shaped by and shaping the world around it. The dinosaur could even be said to have been an ecosystem unto itself, a living animal that harboured parasites and bacteria in and outside its body (just like us). The dinosaur was large, impressive and no doubt ferocious, but it was also a living thing at the intersection of various ecological connections. To say the dinosaur ‘ruled’ anything is ridiculous, a form of fossiliferous individuality that ignores broader communities. 
We’ve often ignored these threads in favour of simplicity, as if every surviving species were pitted against the others in a never-ending battle for survival. The extinction of T rex and all other dinosaurs save for the beaked birds was not a frivolous disappearance. It wasn’t the equivalent of a prehistoric apartment the dinosaurs cleared out to let mammals redecorate. A vast array of animals that shaped the world around themselves, as they also shaped the evolution of other species, suddenly vanished. The loss of the dinosaurs and the good fortune of the mammals had deeper ecological consequences for the fate of flowering plants, leaf-eating insects and various other forms of life that often make up the background of these stories. We’ve entertained the idea of shifting power between dynasties for far too long. How the asteroid changed the world isn’t a tale of shifting dominance, but of how communities cope in the aftermath of disaster. Let’s take another look at the world after the impact, not in the heat of asteroid-triggered extinction but as life began to entwine in new ways. Try to put your mind back to the forest primeval, about a million years after the impact, approximately 65 million years ago. You’d likely hear the squawking of birds, the chattering of mammals and the trill of insects in a forest the likes of which the world had never seen before. These woods grow thick, with flowering plants for the first time in their history forming the core of these humid glades, rather than conifers. Tree limbs spread wide and entangle with each other overhead, broad leaves shading the understorey far below. Aside from the odd old-timer crocodile, no animal in this environment gets larger than the size of a German shepherd. That fact alone has fundamentally changed the world. Prior to the impact, the average dinosaur weighed about three and a half tons and was roughly the size of a small African bush elephant. Such immense animals browsed and grazed bushels of vegetation at a time, trampled pathways through the forests, pushed over trees, and left plenty of chlorophyll-packed dinosaur pats to keep prehistoric dung beetles busy. Every choice a dinosaur like the three-horned Triceratops or shovel-beaked Edmontosaurus made altered the landscape in some fashion, from busting up rotting logs inhabited by invertebrates to creating shallow ponds in areas where they frequently churned the soil. Big dinosaurs kept the forests open and clustered together, their appetites and footfalls altering the shape of the forest itself. But now they are all gone, leaving forests to grow thick and tall. The rise of those very forests relied on the few dinosaurs that survived. Birds were just another form of feathery dinosaur that had evolved alongside their relatives since the late Jurassic, about 150 million years ago. Some kept their ancestral teeth, short nubs perfect for gripping crunchy insects or the occasional small lizard. But others evolved to be herbivores, losing their teeth entirely and evolving muscular gizzards to help them break down seeds, nuts and other sturdy plant parts. Because these birds were very small compared with the average non-avian dinosaurs, they were able to find shelter in the crevices of the world, shielding many of them from the heat pulse. And during the impact winter that followed, when much of the world had been denuded of vegetation and small insect morsels, the beaked birds dug into the seed banks held safe in the soil. 
Beaked birds survived while any carnivorous survivors vanished, and the herbivorous birds would end up spreading the seeds they had survived on. Some of the seeds and nuts were busted and broken inside the birds’ digestive tracts, but others surely passed unscathed and were deposited with a gift of guano to begin reseeding the early Cenozoic woodlands. Such changes might have been swamped by the activities of the larger dinosaurs just a million years earlier, but now birds could plant a new kind of forest. And our furry ancestors certainly benefitted from these sweeping changes. Dinosaurs provided the foundation for the so-called Age of Mammals not by stepping aside, but by inadvertently helping to grow an entirely novel garden. During the tens of millions of years prior to impact, ancient beasts did not shiver in the shadow of the dinosaurs as if waiting for an end to the sharp-toothed nightmare. Mammals and their close relatives evolved into a stunning array of forms during the Triassic, the Jurassic and the Cretaceous. There were ancient equivalents of flying squirrels, aardvarks, otters, squirrels and more that evolved right alongside the ‘terrible lizards’. The very first primates even evolved around the same time as Triceratops, a shrew-like animal called Purgatorius that scampered through the trees embodying the form of these earliest members of our own mammalian family. And while their small size is often overplayed – most mammal species alive today are mouse-size, after all – the diminutive statures of prehistoric mammals helped them find hiding places on the fateful day the asteroid struck. Many perished, but those that survived witnessed a world devastated by fire and brought nearly to a standstill by cold, living off the planet’s crumbs until the forests grew back. A million years after impact, then, the world’s dense forests offered the surviving mammals a greater array of habitats than ever before. Mammals might take up a living searching for fruit and insects in the branches of the canopy, clamber along tree bark and branches in search of succulent leaves, chase down prey along the surface of the soil, or even burrow into the dirt itself. Competition for space and food is certainly part of the story, but primarily as a nudge for mammals to open up new niches and ecological interactions. The field was so open that some mammal lineages began to increase in size extremely rapidly, their bodies evolving larger on the surfeit of nutrition these forests offered. Palaeontologists are only just beginning to understand what transpired in the first 10 million years or so after the Cretaceous came to a close. The earliest parts of this time, known as the Palaeocene, are preserved only in patches around the planet, and fossil evidence is sparse. What palaeontologists find interesting and which organisms gain the most attention have roles to play, too. The most mundane discovery about the life of T rex is more likely to get press attention and public interest than a new, strange Palaeocene bird or mammal. Some of these creatures have been known for more than a century but are only just beginning to be understood now as living things rather than static objects in museum drawers. We assumed that the story of what happened after the impact would be straightforward, as simple as survivors filling a void left by saurian giants. We were wrong. 
Just as we have projected our hopes and worries on to the dinosaurs, the emerging image of changing, entangled communities ripples outward to our own time. We are living through an ecological crisis of our own making. The loss of every species, whether documented by science or not, is not just another tally of biodiversity’s losses. When a species vanishes, it leaves a void in its ecosystem. The way those living things uniquely interacted with the world vanishes, nudging adjustments in the ecosystem that once hosted the species. The extinction of a plant might alter nutrient cycling in a patch of forest or what plants a herbivore eats. The disappearance of a carnivore might make prey populations more vulnerable to disease if another predator doesn’t take up its role. A large herbivore’s population crashes and forests grow differently, some plants losing a means to disperse their seeds and others growing thicker in the absence of large feet trampling down trails through the woodland. Evolution and extinction are bound together in these small, often-invisible interactions between species, the connections that continually shape the unique nature of life on our planet. In our present moment, we are not only playing a role in which lineages will survive and which will disappear. Our actions are also cutting through life’s web, affecting entire communities and ecosystems that will test the resilience of more species than we’ll ever count. The history of life on Earth cannot be encapsulated as a balance sheet of losses and gains through time. Nor can our present moment be understood as different groups of creatures ceding the way for each other as life climbs the rungs of progress. The reality, like life itself, is messy. Comprehending what transpired 66 million years ago – or even in this moment – requires that we look beyond the details of what we can discern from a given species in isolation. Every fossil bone we uncover and carefully cradle in a museum grew from nutrition derived from other forms of prehistoric life; and those food sources, in turn, built their tissues from plants that took up essential components from the soils, enriched by the decay of yet other creatures that came before. Wherever we find life, one existence touches another, enmeshed and setting the conditions for what might appear tomorrow.
Riley Black
https://aeon.co//essays/earths-story-is-not-about-dynasties-but-communities
https://images.aeonmedia…y=75&format=auto
Political philosophy
The work of John Rawls shows that liberal values of equality and freedom are fundamentally incompatible with capitalism
Completed in 1910, the renaissance revivalist Mahoning County Courthouse in Youngstown, Ohio would make any city proud. Its Honduran mahogany, terracotta, 12 marble columns and 40-foot diameter stained-glass dome stand testament to the region’s turn-of-the-century success as a moderate industrial power. Across Market Street, the humbler federal courthouse completed in 1995 invokes a then-au courant corporate office-building style: concrete and panelised stone relieved by blue-black glass, with decorative squares and circles scattered here and there. The Thomas D Lambros Federal Building and Courthouse is named for Judge Thomas Demetrios Lambros (1930-2019), native son of Ashtabula, Ohio, who in 1967 was appointed to the federal bench by the US president Lyndon B Johnson. The website of the US General Services Administration remembers Judge Lambros as ‘a pioneer in the alternative dispute resolution movement’ – arbitration, as it is generally known. But the people of Youngstown and the Mahoning Valley might remember Judge Lambros for a different reason. Lambros presided over a fiercely contested lawsuit in 1979-80 filed by 3,500 steelworkers laid off by United States Steel Corporation’s Youngstown Works plant – part of a wave of closures across what we now call the Rust Belt. The lawsuit was an avowedly desperate effort to compel US Steel to sell the company either to the city or else to the workers who, hopefully with federal loans, would continue to operate the plant and keep sending paychecks to the thousands of families depending on them. In an early hearing, Judge Lambros made a remarkable – revolutionary, almost – suggestion to the workers’ lawyers. They might have a shot if they argued that the people of Youngstown had a ‘community property right’ accrued from the ‘lengthy, long-established relationship between United States Steel, the steel industry as an institution, the community in Youngstown, the people of Mahoning County, and the Mahoning Valley in having given and devoted their lives to this industry’. Because steel production had become such a central part of community life, the judge suggested, the community arguably had a right to decide what happened to the steel mill. The suit failed. When called upon to issue a ruling in the Youngstown case, Judge Lambros turned on his own suggestion. There just was no precedent in US law to say that the workers or the people really had such a ‘community property right’. Lambros was torn between his moral sense that they should have one, and his professional duty as a judge to find that the law (then as now) recognised no such right. Youngstown Works shuttered for good. Judge Lambros’s profound ambivalence reflects a contradiction that seems to lie at the heart of liberalism. On the one hand, the promise of a liberal society is of a society of equals – of people who are equally entitled and empowered to make decisions about their own lives, and who are equal participants in the collective governance of that society. Liberalism professes to achieve this by protecting liberties. Some of these are personal liberties. I get to decide how to style my hair, which religion to profess, what I say or don’t say, which groups I join, and what I do with my own property. 
Some of these liberties are political: I should have the same chance as anyone else to influence the direction of our society and government by voting, joining political parties, marching and demonstrating, standing for office, writing op-eds, or organising support for causes or candidates. On the other hand, liberalism is usually uttered in the same breath as capitalism. Capitalism is a social system characterised by the fact that private persons (or legal entities like corporations) own the means of production. Combined with liberalism’s protection of rights and liberties, this means that, just as I get to decide what to do with what I own (a 2004 Hyundai with a busted A/C unit and squeaky wheel bearings), so did the legal entity US Steel get to decide what to do with what it owned: the Youngstown Works. Liberalism’s apparent commitment to capitalism threatens to prevent it from delivering on all that it promises. To see this, it is important to remember that formal political processes do not exhaust the way our society governs itself. One of the main tasks for a society is to organise economic production. We humans are a species that makes stuff. We make tools, dwellings, food, art, culture, more little humans, and much else. Moreover, we usually do this together, as a joint activity. Such cooperative production inevitably produces a division of labour: some hunt while others gather; some fish while others sow; some design artificial intelligence while others squeegee windows at the stop light. When societies industrialise, achieving economies of scale and the capacity to purchase cutting-edge technologies needed for profitable production becomes extremely expensive. So expensive, in fact, that it is possible only for a relatively small number of people or entities to do it. This leads not just to a division of labour, but to a class-stratified society. Some people – capitalists – own the materials or technologies that produce society’s wealth, while other people – workers – have to work for the capitalists in exchange for a wage. In such a class-stratified society, capitalists not only make the important investment decisions that guide society’s overall direction, but they are also effectively private dictators telling their workers what to do, when to do it, what to wear, when to pee, and what to post online. Given liberalism’s defence of the capitalists’ rights to do all this, it is hard to see how liberalism might reliably achieve its goal of bringing about a society of equals in which we all have a share in our collective governance. Hence the contradiction at its heart, and Judge Lambros’s ambivalence. The political-economic background of the Youngstown closure is an object of ongoing controversy among historians, economists and sociologists. All agree that it was part of the phenomenon of ‘globalisation’. But whereas the former US president Bill Clinton – whose administration oversaw the construction of the Lambros Federal Building – could declare in 2000 that globalisation was ‘the economic equivalent of a force of nature’, nobody seriously believes that anymore.
Under US leadership (itself a response to Chinese rivalry), the world is turning toward ‘neomercantilism’, embracing strategies whereby governments protect domestic industries while intervening mightily in markets, imposing carrots and sticks to steer private investors toward targeted economic goals like place-based investments in green technology. This means that the way society organises production has re-emerged as a contested political issue. But it does so in a fractured ideological moment. Liberalism’s hegemony is perhaps at a nadir. Populist authoritarians and ‘illiberal democrats’ have attracted a surprising level of legitimacy and support, while post-liberal ideologies look ahead to new possibilities. Critics on the Left and the Right offer two main visions of the near-future. On the Left, critics suspect that the return of industrial policy might be less than the ‘new economic world order’ its proponents tout it to be, reflecting liberalism’s inability to get to the root of capitalism’s problems. Those who hold this view, like the economist Daniela Gabor, see legislative efforts like the US president Joe Biden’s Inflation Reduction Act (IRA) or the European industrial policy proposed by the French president Emmanuel Macron as merely underwriting private profit-making by using the state to ‘de-risk’ some capital investments, making them safer for capitalists who reap massive rewards with little downside. Some socialists even go so far as to suggest that Biden’s IRA is a regression to a kind of feudalism. On the Right, some so-called post-liberals, like the political theorist Patrick Deneen, hope that an industrial policy focused on restoring blue-collar manufacturing jobs to the US heartland will turn out to be a revolutionary first step toward throwing out liberalism with all its (hypocritical) aspirations for individual liberties and social equality. This dichotomy ignores the possibility of a liberal anticapitalism. This may sound like an oxymoron. Neither liberals nor their critics disentangle liberalism from capitalism (though some historians have begun to). Most liberals even emphasise the happy marriage between the two. Among those liberal egalitarians who stress the redistributive New Deal as liberalism’s moral core, few seriously grapple with big issues of political economy. Liberals advance institutional and procedural solutions – ‘structural change’ to representative processes, expanding voting access, etc – but rarely question the basics of political economy like who owns what and lords it over whom. That makes it all the more surprising that liberalism’s greatest philosophical exponent, John Rawls, developed a sustained, systematic and principled argument that even the most humane, welfarist form of capitalism is incompatible with the possibility of achieving liberalism’s deepest aim: free people living together in a society of equals. These arguments should be much better known. Contrary to a common caricature of his views, Rawls does not reduce politics to technocratic nudges and tinkering with marginal tax rates. Liberalism is a philosophy of the ‘basic structure’ of society. The basic structure includes a society’s fundamental institutions: not only political structures like constitutions (where they exist), but also markets and property rights. Everything is up for moral assessment, not just considered abstractly, but with respect to how different institutions interact with one another and with ordinary human behaviour, over the course of generations.
‘Everything’ here includes the basics of political economy like who makes what and who owns what, and who decides. Crucially, for Rawls, this includes the way society organises the production of goods and services. Focusing on the inequalities and domination that arise from the way capitalism empowers a small group to control how we produce society’s wealth, Rawls argues that no form of capitalism can ever cohere with the liberal ideal of a society of equals. Social equality and basic liberties will always be thwarted by it. Rawls’s corpus is complex and contested. But we don’t need to agree with him on everything. Even if we ditch the larger Rawlsian apparatus and the many tweaks he made on particular topics after the publication of A Theory of Justice (1971), he stated the core of a liberal, anticapitalist political economy, and never abandoned the conviction that a liberal society must overcome capitalism. For Rawls, liberalism revolves around two ideals: society as a fair system of cooperation, and people as free, equal, capable of acts of joy, kindness and creativity; and disposed – if not always without reluctance – to cooperate with one another to flourish. Rawls shows that these ideals lead to principles that we can appeal to in designing, improving and maintaining our basic political and economic structures. Capitalism is an economic system with three features. First, Rawls said it is a ‘social system based on private property in the means of production’. It allows almost unfettered private ownership of not only personal property, but also the highly valuable and productive industrial and financial assets of a society – what Vladimir Lenin in 1922 called the ‘commanding heights’ of the economy. Second, it allocates access to private property primarily via markets. This includes markets in goods, markets in financial products, and markets in labour. That leads to the third feature: most people – workers – try to earn enough to support themselves or their families by selling their labour for a wage to a capitalist who owns the means of production. As a result of this, capitalism is an economic system that produces a class-based society and division of labour. This is what Rawls’s liberal anticapitalism targets, focusing on the obstacles that a class-stratified society of owner-capitalists and workers poses to a genuinely cooperative and emancipatory liberal society. Rawls argues that capitalism violates two core tenets of liberalism: the principles of social equality, and of extensive political liberty. Moreover, reforms that leave the capitalist core in place are unlikely to be stable. Let’s look at each of these in turn.

Social equality

One component of social equality is fair equality of opportunity. Your chances at attaining or succeeding in any valued role should not depend on aspects of your birth or circumstance over which you had little or no control. All current societies fail to meet this: race, gender, religion, disability, sexuality and other circumstances favour the chances of some over others. Likewise, in a class-stratified capitalist society, whether you or your parents own productive assets significantly determines your life chances. So, fair equality of opportunity is unlikely under capitalism. I say ‘unlikely’ because it is possible that a capitalist welfare state could promote equal opportunity by investing heavily in education and healthcare. But even a society that fulfilled equal opportunity would still fall short of the full ideal of liberal equality.
More difficult to specify but infinitely more powerful than equal opportunity is the value Rawls called ‘reciprocity’. This is the idea that it matters that we are, are seen by others as, and see ourselves as, fully participating members of society, on equal footing and status with other participants. Capitalism makes reciprocity impossible because it requires a division of labour that prises apart the ‘social roles and aims of capitalists and workers’. Consequently, Rawls said, ‘in a capitalist social system, it is the capitalists who, individually and in competition with one another, make society’s decisions’ about how to invest its resources and what and how to produce. This makes it hard for workers to see themselves as active participants in directing society because, well, they aren’t (with the limited exception of casting a ballot every few years). This is what the people of Youngstown learned when the owners of US Steel decided to move the factory away. Likewise, today the CEO of McDonald’s can tell his employees that ‘we’re all in this together’ until he’s blue in the face, even though he gets more than 1,150 times what they do per hour and makes all the decisions about how they spend their time. When this is true, ‘society’ simply feels like something we are ‘caught in’ rather than something we are making and sustaining together, Rawls wrote in Political Liberalism (1993). Such capitalists make important decisions on behalf of society, yet their interests diverge from the interests of the working class. This is a form of social domination. Rawls worried that those who do not own the means of production will be ‘viewed both by themselves and by others as inferior’ and will likely develop ‘attitudes of deference and servility’ while the owners grow accustomed to a ‘will to dominate’. This conflicts with a true ‘social bond’ between equals, which calls on us to make a ‘public political commitment to preserve the conditions their equal relation requires’, as he wrote in Justice as Fairness: A Restatement (2001).

Political liberty

Capitalism is also inconsistent with the basic liberal value of political liberty. Political scientists have been arguing for some time that advanced democracies like the United States and those of Western Europe are probably better characterised as oligarchies, since their policies bear almost no relation to the interests of the poor when these deviate from those of the wealthy. The usual suggested solution is to ‘get money out of politics’ by reforming campaign finance rules. But the Youngstown Works story suggests something even deeper. The steelworkers actually enjoyed considerable political support in their fight. They were represented by President Johnson’s former attorney general, Ramsey Clark. Meanwhile, the Youngstown City Council, the Ohio State Legislature, and the US House Committee on Ways and Means all took some action on their behalf. But, as Judge Lambros finally conceded, none of these were a match for the power of capital. Anticipating the economist Thomas Piketty’s claim in Capital in the Twenty-First Century (2013) that capitalist societies ‘drift toward oligarchy’, Rawls argued that economic and political inequality ‘tend to go hand in hand’, and this fact encourages the wealthy to ‘enact a system of law and property ensuring their dominant position, not only in politics, but throughout the economy’.
The wealthy use their dominant position to set the legislative and regulatory agendas, monopolise public conversation, hold political decision-making hostage by threatening capital flight, and engage in outright corruption. Reforming campaign finance rules to keep money out of politics is perhaps an important start in curbing this tendency. But it’s just a start. Rawls was sensitive to Karl Marx’s criticism that liberal rights might be empty or merely formal – naming protections without really providing them. In response, Rawls insisted that rights to political participation must be given ‘fair value’. In A Theory of Justice, he observed that the full policies needed to protect political liberty ‘never seem to have been seriously entertained’ in capitalist societies. The reason is that the conversion of economic inequality to political domination happens quickly. ‘Political power rapidly accumulates’ when property holdings are unequal, and the ‘inequities in the economic and social system may soon undermine whatever political equality might have existed’. Ensuring political liberty therefore requires us not just to restrict the use of money in politics but, in Rawls’s words, ‘to prevent excessive concentrations of property and wealth’ in the first place. As Piketty and his colleagues have shown, today’s stupefying level of inequality has two primary sources: massive income inequality aided by lower top-marginal tax rates, and a high rate of return on capital compared with returns on labour. Compound interest for the rich keeps making the rich richer as the poor get relatively poorer. Rawls argued we need to address both: the first via taxation and wage controls, and the second by altering the ‘legal definition of property rights’. It is easy to overlook how radical this latter proposal is. As the legal scholar Katharina Pistor has shown in The Code of Capital (2019), capitalism relies on the legal definition of property rights. Not all property rights accumulate at the rapid rate of capital, nor confer on their owners as much control over other people. Altering property rights therefore goes near the heart of capital’s power. This could take the form of recognising ‘community property rights’ of the sort that Judge Lambros suggested and then backed off from. Or it could involve separating the capitalists’ ownership rights over factory equipment, say, from the rights to manage how that equipment gets used, reserving the latter rights for the workers who actually use it. Liberalism may protect property of some kind, but not necessarily give the turbocharged property rights that capitalists enjoy today.

Stability

But can’t we just reform capitalism piecemeal along social-democratic lines to alleviate these problems? Rawls says no: reformed capitalism would just swiftly revert to inequality and domination. This is the problem of instability. We must ask of any proposed regime, like a reformed welfare-state capitalism, whether it ‘generates political and economic forces that make it depart all too widely from its ideal institutional description’. To assess a conception of justice or an institutional arrangement intended to satisfy it, we need to consider how the political, social, psychological and economic dynamics it is likely to foster will play out over time.
Central to Rawls’s understanding of stability was Marx’s observation in ‘On the Jewish Question’ (1844) that ‘while ideally politics takes precedence over money’, under capitalism, ‘in fact politics has become the servant of money’. It’s not enough to recognise some domain as ‘political’ and to try to summon the ‘political will’ to change it. Political power – the legislative and policymaking power of the state – cannot simply achieve anything it desires. It is constrained by, among other things, the power of capital. It is vital to understand how the dynamics of capital ownership may thwart proposed political interventions. Many of these are familiar: the regulatory race to the bottom fuelled by the threat of capital flight; ‘market discipline’; creditors imposing austerity and structural adjustment on not-so-sovereign nations. Rawls had little to say about how social organising might challenge capital’s political hegemony, and here perhaps most of all we must look elsewhere for guidance. But he understood that, so long as capital reigns, liberal politics is doomed to be ineffective. Any exertion of political will that leaves in place the political-economic core of capitalism will preserve a class-stratified society based on the unequal ownership of the means of production and a destabilising and demeaning division of labour. At this point, one might wonder how useful all this abstract moral criticism of capitalism really is. On the Right, some may dismiss it as otiose since there is no serious alternative to capitalism, and anyway these moral criticisms haven’t acknowledged capitalism’s primary advantage: its productivity and ability to make more stuff for more people. On the Left, some may suspect that abstract political philosophising is perhaps interesting – like a Fabergé egg, as the Marxist political theorist William Clare Roberts described Rawls’s theory: beautiful, well-crafted, but ultimately useless. Capitalism won’t be overcome by convincing ourselves that it is unjust: that requires revolutionary action. But as advocates of ‘moral economy’, including the legal scholars associated with the Law and Political Economy project, acknowledge, there is an important role for moral criticism of political economy of the type that Rawls and others have furnished. This can provide clarity and focus. Especially when empirical information about economic trends is noisy, moral critique provides a kind of compass. This leaves the question of what liberal anticapitalism looks like. Rawls thought there were two possibly just types of regime. One, a ‘property-owning democracy’, allows private ownership of the means of production, but only on the condition that private capital is held in roughly equal shares by everyone, preventing the emergence of a significant split between classes of owners and non-owners. This requires hefty wealth and inheritance taxes to spread out ownership of productive assets, and a background welfare state to ensure robust ‘human capital’ by providing both education and healthcare. The other type of just regime is liberal, or market, socialism. Under market socialism, the state controls the economy overall, but worker-managed firms are left to compete in a closely monitored and adjusted market.
This is an attempt to harness the allocative efficiencies of markets and the price mechanism within a socialist system that democratises major investment decisions and prevents private accumulation of significant wealth. Rawls saw in the actually existing socialist states of the 20th century an intolerable absence of political liberty. This is why he insisted that a just socialism would be liberal and should to a large extent rely on markets rather than central planning. But the liberal critique of capitalism that Rawls developed in his later work gives us reasons to be wary still of either of these alternative regimes. It is reasonable to fear, for instance, that markets will simply reproduce the destabilising dynamics Rawls himself identified, as Marx had before him. The Marxist philosopher G A Cohen might well have been right when he declared in Why Not Socialism? (2009) that ‘[e]very market, even a socialist market, is a system of predation.’ This is where Rawls’s liberal critique of capitalism leaves us. Not with a silver bullet, but with a moral compass and an agenda. His critique leaves out some important things. Notably, he had little to say about race and ignored the dynamics of ‘racial capitalism’. No reckoning with capitalism is complete without this dimension. Nevertheless, he helped illuminate the important fact that what we need, and many of us want, are decentralised, cooperative forms of economic production that are consistent with the core liberal values of social equality and basic liberties. But we’ve yet to discover at scale how to have this cake and eat it. Rawls’s work provides little help on this, but fortunately we do not need to rely on his theoretical and philosophical approach in isolation. Here we can turn to social science and to the countless examples of activists and visionaries – from Jackson, Mississippi to Preston, England – developing participatory economics, community wealth-building, and other new and richer forms of what the political theorist Bernard Harcourt today calls ‘coöperism’. These social experiments are continuations of the efforts of the people of Youngstown in 1980 to claim a right to make decisions about the management of the factory that sustained that community and was sustained by the labour of the steelworkers who operated it, and the women who supported them. These are among the small-scale efforts that give us some reason to hope for a more equal and socially just world.
Colin Bradley
https://aeon.co//essays/what-can-we-learn-from-john-rawlss-critique-of-capitalism
https://images.aeonmedia…y=75&format=auto
Thinkers and theories
The philosopher understood that learning – of a concept, of ourselves, of each other – is the undertaking of a whole life
When I first read Ludwig Wittgenstein’s Philosophical Investigations, I was a student struggling to make sense of it. Now as I read it on the 70th anniversary of its posthumous publication, I am a teacher struggling to make sense of it. In my job, I teach adults who speak English – or at least have a good grasp of it as a spoken language – how to read and write. And not how to read and write ‘professionally’, but rather how to connect sounds to shapes on a page and vice versa, how to spell the language’s most common words, and how to write a complete sentence. There is a member of my class who, although he can say and use the words we study, and has a good grasp of consonants, will not include vowels when he spells. I will ask him to spell the word ‘went’, for example, and he will spell out ‘wnt’. If I correct him once, this currently makes no difference – the next time, he will spell without vowels just the same. As you may imagine, I can find this very frustrating. Wittgenstein writes of a similar case in one of the central sections of the Investigations. He describes teaching a pupil the series 0, n, 2n, 3n, etc, where n = 2. Only, when the pupil gets to 1000, he writes 1000, 1004, 1008, 1012:

We say to him: ‘Look what you’re doing!’ – He doesn’t understand. We say: ‘You should have added two: look how you began the series!’ – He answers: ‘Yes, isn’t it right? I thought that was how I had to do it.’
– from §185 of Wittgenstein’s Philosophical Investigations (4th ed, 2009)

Wittgenstein then compares this to a case of someone who does not react naturally to a gesture of pointing: someone who looks in the direction from fingertip to wrist instead of following the line beyond the fingertip. We might also think of a cat, staring blankly at a pointing finger. He goes on to suggest that the rules we take for granted as governing all manner of human activity, from mathematics to the grammar of propositions, cannot be explicated by the Platonic tradition of reference to ineffable objects, nor by a subjective ‘interpretation’ at the moment of each instantiation of the rule. Rather, they in a sense rely on shared agreement in natural inclination, or in common practices. Our understandings are just what we do. They are our form of life. (This is, it strikes me, a very teacherly attitude. Every teacher knows that it is no use just having a learner say they understand: we have to watch them do it.) But if you were to read the Investigations expecting to find this conception of meaning and understanding presented as a thesis, logically derived from explicit premises, you would be sorely disappointed. The book is instead composed of a series of remarks, each spinning off from the anxiety of the last. They are not remarks made by a single speaker – rather, Wittgenstein engages a series of imaginary interlocutors in a back-and-forth in response to philosophical stimuli. In so far as there is a single voice of ‘Wittgenstein’ to lead the discussion, it is a voice of questioning, of doubt, of self-correction, and of self-criticism (the interjections come without quotation marks as often as they come with them). Its form is not so much dialogical as polyphonic. In this sense, the Investigations presents almost as a dramatic work. And the drama that takes place is one with which every teacher will be familiar: it is the drama of the classroom. What does it mean to say that the Investigations dramatises the pedagogical moment?
What does it mean to say that the Investigations is controlled by a concern about the method, and indeed the possibility, of teaching? One angle of entry would be Wittgenstein’s idea that the meaning – the ‘essence’ – of a word is to be found not by searching for the object or referent ‘behind’ it, but by looking at its use in the language games in which it is deployed. Wittgenstein says again and again that one of the best language games to study for this is the one in which the word is taught:

In this sort of predicament, always ask yourself: How did we learn the meaning of this word (‘good’, for instance)? From what sort of examples? In what language-games? (§77)

What is the criterion for how the formula is meant? It is, for example, the kind of way we always use it, were taught to use it. (§190)

How am I to explain it? Well, only in the way in which you can teach someone the meaning of the expression … (§361)

So, in this sense, education (particularly foundational education in language) is fundamental to how concepts are used – to their meaning.

… the end of all our exploring
Will be to arrive where we started
And know the place for the first time.
– from Four Quartets (1943) by T S Eliot

Another way of reading the Investigations pedagogically is to situate it biographically as a response to Wittgenstein’s own time spent in various classrooms. In 1929, he returned to Cambridge to begin work on what would become the Investigations, by which point he had come to reject the picture of language explicated in his book Tractatus Logico-Philosophicus (1921). In the intervening years, Wittgenstein had done a curious thing. He had given up his claim to be a philosopher (as well as all of his considerable family wealth) and gone to teach young children in various poor country schools in Austria. His letters tell us how difficult he (and his pupils) found this, and he was shamefully involved in multiple instances of excessive corporal punishment. In one case, he was alleged to have hit an 11-year-old child on the head so hard that he knocked him unconscious. Then, during the 1930s, when he was most involved with the struggle of his new thinking, he became wracked by shame at how he had behaved as a teacher, and travelled back to make a grovelling apology to the children. A first-time reader of the Investigations may be struck by the frequency of the appearance of children. It is possible to read the Investigations as a reflection on how time spent in the hustle and bustle of a working schoolroom transformed his conception of what language really is – and therein what we really are. And even, then, as a kind of confession.

Ludwig Wittgenstein (far right) with his pupils in Otterthal, Austria in 1925. Courtesy Wikipedia

Meanwhile, as he was developing this later philosophy, Wittgenstein held classes in Cambridge to work through his thoughts. He eschewed large lectures, preferring to teach a small class including his favourite five students: Francis Skinner, Louis Goodstein, H S M Coxeter, Margaret Masterman and Alice Ambrose. Wittgenstein grew to place great belief and trust in these students and, in the case of Skinner, fell in love. But he was also tormented by the shortcomings of his instruction, and died despairing that no one could grasp his philosophy. We know also that some of the voicings in the Investigations come directly from these classes and lectures at Cambridge.
The Investigations is therefore immediately the product of the pedagogical struggle. It reads as a teacher searching for the right approach, the right phrasing, the right language, that will unlock the thought for the learner. (Of course, many philosophers claim that their work is not, or never will be, properly understood – Wittgenstein is one of the few who blames this not on the intelligence of his students, but on the limitations of his teaching.) There is perhaps a third, and most important, way of approaching the text pedagogically, though it is deeply related to the others: namely, that Wittgenstein views the teacher-student relationship as exemplary of, and instructive to, the very confrontation of the Self with the Other. In his exegesis of the Investigations – the book The Claim of Reason (1979) – the US philosopher Stanley Cavell ascribes to Wittgenstein a unique ethical-therapeutic reading of the traditional ‘problem of other minds’. Traditionally, this ‘problem’ is characterised as being that we don’t know the contents of other people’s minds as well as we know our own: indeed, we can’t know for certain that there even is a mind there at all – perhaps they could be automata with no inner life, merely ‘feigning’ human behaviour? On this view, the evidence, or criteria, for ascribing a mind to something cannot ever be strong enough to bridge the gap of knowledge. But Wittgenstein asks: if these ‘automata’ are feigning being in pain, for example, how do you know that it is pain that they are feigning? For Wittgenstein, these criteria do not determine the certainty of statements, but the application of the concepts employed in these statements. Our understanding of that concept, our inclination to characterise whatever they are ‘feigning’ as pain, is nothing more or less than our ability to speak the language that we do. But if all the criteria are thus manifested, you are not missing anything. There is not a piece of knowledge (a glimpse ‘inside’ them) that you lack: what is at stake is whether you acknowledge that they are a someone, in pain. As Cavell writes, ‘the slack of acknowledgement can never be taken up by knowledge.’ Analogously, your own mind, your own feelings, are not logically occluded from being grasped by another. This relation is fragile, not in the sense of a metaphysical barrier, but in the sense that some people do fail to acknowledge the humanity of other people. On this reading, Wittgenstein is witness to what Rupert Read calls a ‘proto-Levinasian moment’, wherein a Self finds themselves face-to-face with the Other. In this confrontation, whatever degree of acknowledgment we adopt will constitute an attitude towards them. This attitude is necessarily of an ethical nature, for two reasons. Firstly, because it delimits the scope of one’s obligations to another. But also because to deny another’s humanity involves the simultaneous denial of the very mutuality upon which our attempts to mean, or express, anything at all depend. This in turn implies the absence of meaningful grounds even for constituting ourselves as subjects. As Cavell puts it, in typically tragic register: ‘I am fated to stand to myself in the relations in which I may or may not stand to others.’ It is only by acknowledging the Other that the Self can really begin to be a human being. And what are the examples of such an encounter in the Investigations?
Well, as I’ve pointed out, Wittgenstein’s examples are pedagogical. There is no starker encounter than the one we examined: the case of the deviant pupil (described above, in §185 of the Investigations). As a teacher, Wittgenstein knows that the crux, the jeopardy, of the teacher-student relation obtains when the student does something wrong and unexpected. This pupil will not follow our instructions. They do not see it as we do. What, then, do we do with them? In the lecture notes that anticipate the Investigations, known as the Brown Book, Wittgenstein wryly observes the answer society has tended to give:

If a child does not respond to the suggestive gesture, it is separated from the others and treated as a lunatic.
– from §30 of The Blue and Brown Books (1958)

This, I’m sure you can imagine, is an approximation of the experience some of my learners have had in education their whole lives. They have been excluded, ostracised, medicalised, humiliated and abandoned – in short, unacknowledged. And yet, still they come to my class and try to learn. My learners are living, breathing testaments to the truth that it is only through vulnerability that one can ever come to know anything at all. Furthermore, it is only by reciprocating this attitude – by acknowledging their humanity – that education can take place. If I am to get through to them, I must acknowledge that it is a them I am getting through to, with all the history, struggle, suffering and joy that that entails. And once I have acknowledged their humanity, I find in their eyes my own humanity, my own vulnerability as the teacher. Suddenly, in that moment of jeopardy, I find that my words are somehow not enough – they are not doing what I want them to, what I expected them to. I see that I too am a human being, alone in myself, reaching out with what sounds, symbols and gestures I can muster in order to make a connection. And the human being who is the subject of my attempt, whether they can spell or not, is no less capable than I of judging, acknowledging or rejecting another. My frustration with the learner – particularly in so far as I assert power in response to it – is thus really a projection of my own fear of rejection. In the pedagogical instant, then, what is at stake is not (merely) whether the learner can make themselves understood to me, but whether I can make myself understood to them. We make a mutual step towards communication: towards knowledge of each other. If this fails, what is at stake is our co-intelligibility: our relationship. My methods are thrown back onto me as an ethical problem: as a political problem. Here we might think of the words of Paulo Freire in Pedagogy of the Oppressed (1968):

The raison d’être of libertarian education [as opposed to the banking model] lies in its drive towards reconciliation. Education must begin with the solution of the teacher-student contradiction, by reconciling the poles of the contradiction so that both are simultaneously teachers and students.

This drive towards reconciliation is the animus of the Philosophical Investigations, and the radical heart of its pedagogy. But wait a moment. Didn’t we say that the point of the example of the deviant pupil was that he did not have the same natural reactions as us? That teaching him was like trying to teach a cat? Doesn’t Wittgenstein argue that natural reactions are impenetrable bedrock? Doesn’t he say that meaning is specific to different forms of life, different cultures? Is Wittgenstein not the prophet of incommensurability?
Of relativism? Of ‘postmodernism’? This indeed is how he is often received today, 70 years after the publication of Philosophical Investigations. But that is not my reading. I believe we should read the sections on the deviant pupil not as saying that the possibility of teaching is predetermined, but that the solution to whether or not an individual can be taught won’t be found in the philosopher’s armchair: it will be found in the classroom. Successful teaching (or translation, or cross-cultural communication, or self-knowledge) is not impossible, but it is not a given either. It takes attempts; it takes dialogue. And we cannot give any criteria for its success beyond the testimony of the human beings for whom it is a meaningful practice. Moreover, the barriers to its success that do exist are not metaphysical: they are political. They implicate the teacher and their society as much as they implicate the learner. Success will require the teacher to confront their own failure to communicate: their own teachability. In this sense, Wittgenstein might be said to lead us from the armchair of the sceptic to the doorstep of Freire. My vowelless learner wrote a sentence for me the other day. He didn’t want to show me at first. It didn’t contain any vowels, but it was readable. I read it out loud to him. His face lit up in a way I have never seen before. This was perhaps one of the first times he’d ever used writing to make himself understood. He seemed to relax and become available. When I repeated the sounds he had missed, he was able to rewrite the sentence to include the vowels. The next lesson, he was back to missing the vowels. We make no claim to finality. The learner’s education is an ongoing project. In this, he shares with Wittgenstein a conviction that understanding – of a concept, of ourselves, of each other – is the undertaking of a whole life.
Calum Jacobs
https://aeon.co//essays/learning-for-wittgenstein-is-a-whole-life-undertaking
https://images.aeonmedia…y=75&format=auto
Psychiatry and psychotherapy
Group therapy promised to be both democratic and radical, but it failed to take hold. Has its time finally come?
The Austrian-born psychoanalyst Marie Langer began to think about how psychoanalysis might work in a more collective form in the mid-1950s. She had grown up in the politicised culture of ‘Red Vienna’ and – though trained in Freudian and Kleinian orthodoxies – was increasingly drawn by her communist politics to the emerging field of group analytic therapy. After emigrating to Argentina during the Second World War, Langer began writing on this new kind of therapy, publishing the first book in Spanish on the subject. Looking back on her career in 1983, Langer and her colleague Ignacio Maldonado told an interviewer:

We work in groups not only because in a society that desires the integral development of all, individual psychotherapeutic attention is insufficient, but because problems and mental suffering are generated in groups and it is in group situations that they can best be resolved. Group activity … strengthens solidarity and teaches people to view their pain in social terms and to alleviate it together.

In the long run, Langer believed, group therapy could bring about ‘structural change’. In the mid-1950s, Langer analysed a group of women who shared a common symptom: they’d all struggled to conceive, despite there being seemingly no physiological impediments to their pregnancies. Langer was interested in how these women’s bodily responses might have been conditioned by their social position and fraught feelings towards motherhood and identity. Like Langer herself, many of the women were European exiles who had a complicated relationship to the motherland. Rather than analyse the women individually, Langer wanted these women to view their complex feelings about childbirth through the filter of the relationships they formed with each other in collective analysis. Langer documents how, over a series of weeks, the participants came to identify with one particular woman, whom she calls ‘The Marked One’ (in part on account of her skin depigmentation). The most promiscuous of the women, and a former actress and prostitute who’d had numerous abortions, she was the object of shared hostility and guilt among the group. As the women assumed a group identity, it became clear that their individual feelings of failure or ambivalence stemmed from a shared psychosocial condition; that a collective anxiety about femininity and reproduction pulsed behind the medical establishment’s inability to fulfil their desire to be mothers. Shortly after these feelings were acknowledged in the group, ‘The Marked One’ became pregnant. There’s something a little fantastical, even cultish, in picturing these women working through their anxieties collectively, seeking to remedy infertility through psychoanalytic means. Usually, in group analytic therapy, participants have little in common: symptoms emerge and enter the group dynamic along the way. But here, the symptoms drive the form that the group takes. This is a group anxious about fertility, as Langer herself was, having miscarried while volunteering with the International Brigade during the Spanish Civil War. Langer came to see her miscarriages as psychosomatic; indeed, her group analysis upheld some fairly conservative assumptions about motherhood, sexuality and the nuclear family.
Later, once exiled from Argentina for her political associations, Langer adopted a more politicised model of group therapy that allowed her to question these assumptions (in keeping with other radical groups of the 1970s). In this switch, Langer was influenced by R D Laing and a growing set of countercultural and Marxist analytic movements that championed a more social psychoanalysis as a way to open up the relationship between psychic states and political change. By including more than two participants in analysis, and also swapping the private space of the clinician’s office for the public realm of hospital rooms and community centres, Langer broke with psychoanalytic convention. In 1971, she presented a paper at the International Psychoanalytic Congress in Vienna on the potential of psychoanalysis to bring about collective change, under the provocative title ‘Psychoanalysis and/or Social Revolution’. Both the International Psychoanalytical Association and the Argentine Psychoanalytic Association refused to publish it. The traditional psychoanalytic community was deeply hostile to any therapeutic practice that might breach the ethical framework developed around one-to-one analysis, a framework built on the pillars of patient confidentiality and – transference notwithstanding – the neutrality of the analyst. And so Langer, who was more interested in understanding mental health in social terms, left, along with 22 colleagues, and regrouped under the Argentine Federation of Psychiatrists, which positioned psychoanalysis as a revolutionary force and prioritised group work. Langer forged revolutionary mental health movements in Nicaragua, Mexico and Argentina that aligned themselves with campaigns for political reform, and fostered a vision of psychoanalysis that facilitated progressive social movements. If these progressive group analytic methods remain, by and large, a kooky and peripheral wing of psychoanalysis, might they nonetheless have the potential to bring about the structural change that Langer envisaged? Might group analysis offer a genuine alternative to a bourgeois practice that has been largely the prerogative of a wealthy few? Langer was far from alone in experimenting with group analysis in the middle decades of the 20th century. While visiting England in 1945, the French psychoanalyst Jacques Lacan (who would be expelled from the International Psychoanalytic Association in 1963, in part for brutally truncating the length of the analytic session) noted in almost utopian terms the innovations that had occurred in British psychoanalysis during the Second World War. To play with the psychoanalytic frame, as he saw it, is to radically expand who psychoanalysis is for: ‘group analysis’, he wrote, is ‘a revolution which transports all our problems to the collective scale’. Lacan was thinking, in particular, of how group analysis might, at the moment of the founding of the British welfare state, provide a model of psychoanalytic provision for all – a significant departure from the long-lasting idea that analysis is a privilege afforded only to the middle classes. It would not only have a social dimension (thus making it less shameful for those who don’t come from a class that is expected to get psychic attention) but would be an integral part of social citizenship. 
So inspired was Lacan that he wrote of ‘the miraculous feeling’ of rediscovering Sigmund Freud’s work all over again: the emergence of group analysis as no less than a second chapter in the history of psychoanalysis, every bit as important as its founding at the turn of the 20th century. Freud, of course, never worked with groups – though his endorsement of free outpatient clinics that sought to extend psychoanalysis to the working-class communities of Europe after the First World War suggests an interest in connecting psychoanalysis with social justice. Freud also wrote about the psychology of groups, in reflecting on people’s collective desire for the intoxicating force of a strong leader. And yet, even as he acknowledged that postwar psychoanalysis had done ‘nothing for the wider social strata, who suffer extremely seriously from neuroses’, Freud remained wedded to the strict one-on-one precepts of psychoanalysis he’d outlined in the early 1910s. The group analysis that led to Lacan’s rapturous insights was the result of a series of experiments in British military psychiatry, designed to respond to the need to scale up psychoanalytic provision in the face of wartime emergency. The turn to group analysis was grounded not in any utopian social principle, but emerged instead as a practical way to manage a military population. It was vital, as Wilfred Bion writes in his study Experiences in Groups (1961), to ensuring rehabilitation and keeping up a ‘good group spirit’. Bion remains one of the most influential thinkers in group analysis, and at Hollymoor Hospital in Birmingham – the largest military psychiatric hospital in England during the war – he set out, along with the psychoanalyst John Rickman, to treat neurosis through group therapy in a 600-bed rehabilitation unit. Bion’s aim was twofold: to provide effective psychiatric treatment in a hospital beset by overcrowding, and to encourage the men to take a more active role in their recovery, reminding them they were soldiers rather than patients. By distancing himself from a position of authority (which the analyst naturally assumes in one-to-one analysis), Bion created a situation where the men had to work out a power structure of their own. Bion saw this group structure, in which the analyst is positioned merely as a ‘conductor’, as a way of encouraging a less authoritarian kind of power structure; but it was nonetheless directed towards a larger project of military and social compliance. As group analysis grew increasingly entangled with the state’s capacity to ensure its own health, it became an extension of the administrative arm of the welfare state. As such, its postwar modes of organisational harmony, notably the Tavistock Centre in London, where Bion and his colleague S H Foulkes institutionalised a model of group analysis, would prove central both to the postwar vision of the nation as a ‘therapeutic community’ and to postwar institutional management culture. The German-born Foulkes, who’d joined a consulting practice in Exeter in southwest England in 1939, experimented, along with his colleague Eve Lewis, in leading men’s and women’s therapeutic groups.
‘During the past year,’ Foulkes wrote in a letter to the military psychiatrist J R Rees in 1942, ‘I have introduced a new method of psychotherapy in groups with very encouraging results.’ He surmised that if psychoanalysis works through free association – by allowing the mind to wander – then it should work on a group level, as a ‘free-floating discussion’. Part of this innovation was driven by numbers – the consulting practice covered three counties, and Foulkes notes how the demand to provide therapeutic provision on a mass scale, in wartime conditions, pushed him from his training in psychoanalysis, understood in its strictest form, into the broader realm of psychotherapy. Foulkes had trained in medicine in Frankfurt and was influenced by Herbert Marcuse and the Frankfurt School. When he set up these groups in 1941, he had already spent several years thinking about the inherent possibilities of collective treatment. It turned out that his experiments ‘far exceeded our expectations’, as he and Lewis wrote in a 1945 paper. ‘Not only is it an economy of time for the therapist, while enabling him to devote even more time to each patient, but at the same time it actually intensifies the effect and thus shortens the duration of treatment.’ In an interview for an American documentary on group therapy in 1975, at the height of its popularity in the United States, Foulkes said he was sure that Freud would have said ‘baah’ to the concept of group analysis: ‘He would never have thought at this time that one could do serious therapy in a group.’ Foulkes saw in the crossing of a Freudian and a Gestalt approach (which prioritises a more experiential form of analysis) a creative opportunity, rather than a tension. Foulkes also suggested a different temporal distinction between individual and group analysis. Whereas the former changes the individual through ‘the reconstruction of much of the past’, he explained that, in groups, ‘the emphasis is shifted to the present situation’. Foulkes’s groups started off more or less as discussion groups – as ‘an alternation of separate case-studies’. Initially, as in many accounts of group analysis, silence holds the group in suspension; boredom brews, and participants struggle to form attachments with one another. Then, as the weeks progress, fraught emotions begin to surface: one member of Foulkes’s women’s group reports that for half an hour she had wanted to scream because she knew another member ‘wanted to, and she felt like screaming for her’. She does ‘not sympathise in opinion with’ this woman ‘but felt her emotion like an electric shock. She had to keep stretching herself and stuffing her hands into the couch to stop screaming’. When group analysis works, participants carry, and sometimes express, other members’ emotions so that the person mirrored is faced with a psychic state they have not yet reckoned with. We are used to thinking of emotional relating in terms of empathy, yet group analysis reveals that our relations to one another are far more complex. We depend on one another not simply to be kind, but to allow us to see ourselves; to experience ourselves as social creatures with an array of fraught emotional histories and aggressive projections that we only just conceal in order to get through ordinary daily social interactions.
One of the woman’s chief symptoms is loneliness, a state driven in part by her sense of geographical and social isolation, and her difficulty in speaking (she requires an operation that she has been putting off, but that she gradually, over the weeks, commits to). As she allows herself to participate in the group, her symptoms slowly improve. In the US, group therapy took a very different trajectory. At its extreme, it embraced a variety of almost cultish incarnations that were frowned upon by the analytic community. At the Sullivanian Institute on the Upper West Side in New York, members of an urban cult were offered group therapy alongside the countercultural package of parties, sex and low rent. Group analysis also burgeoned in eccentric, hippyish forms at the Esalen Institute, at Big Sur in California, many of which combined the precepts of Mitteleuropean analysis with some of the wackier elements of existentialism and Zen Buddhism. At Esalen, Fritz Perls, the founder of Gestalt therapy, encouraged workshop participants to project aspects of their personalities into the room where he would address them in an attempt to recoup their splintered, lost selves and so transform them into a Gestalt or whole. Perls’s workshops aimed to awaken greater consciousness, and drew on techniques of psychodrama that originated with Jacob Moreno in Vienna in the 1910s, as well as on Wilhelm Reich’s ideas of bodily liberation. Where Bion and Rickman sought to make the psychoanalyst an invisible participant who allows the leaderless group to generate its own form, Perls took on a much more active role. Like a magician, he would isolate a psychoanalytic block or complex and reveal it before his stunned audience, who become essentially spectators of one another’s inner world. In the 1960s, Perls declared individual therapy obsolete, and threw in his lot with group analysis, explaining that the ‘collective conviction of the group’ can create a ‘safe emergency’ through which ‘the neurotic discovers that the world does not fall to pieces if he gets angry, sexy, joyous, [or] mournful’. The group, especially in its most extreme psychodramatic versions – which include primal screaming – has something of Brechtian theatre about it. It serves not simply as a sounding board for individual psychic ills but as an artificial frame at once stranger and more intense than everyday life. By any reckoning, these group therapies break with the ethical framework of psychoanalysis, creating a situation of ‘professional anarchy’, as Renata Adler noted in The New Yorker in 1967 in her report on the new fad for all things ‘group’. In other words, group therapy is a much more ‘disparate and eclectic’ practice than group analytic work. Adler predicted that, despite its theoretical unruliness, group therapy would find a distinctive place in postwar US life. It would become a routine affair, like going to the gym. But in the immediate years that followed, psychodrama – like free love – fell out of fashion, and group therapy found its popular footing in ‘encounter’ groups, which emphasised processes of ‘being’ and ‘becoming’ and attracted people interested in personal growth and self-actualisation. 
T- or training groups were their industrial equivalent – a style of emotional learning where members of a small group are encouraged to become alert to their own reactions, perceptions and behaviours so that they might work more productively together. They were a kind of analysis without the analyst, as if therapy could be removed entirely from its institutional history and its interest in emotional complexity. Widespread in the 1960s, encounter and T-groups have since dissolved into industrial and social organisational strategies, and the wellness industry. Some of the more cultish, popular forms of group therapy that were prevalent in the 1960s were taken up and reworked by the women’s and Leftist movements of the 1970s. While group therapy often risked charges of ethical and cultural malpractice, it was group therapies’ unorthodox approaches to psychoanalysis that suddenly allowed feminists to find a way into psychoanalysis that did not seem too dependent on what was widely conceived as the patriarchal model of Freudian analysis. The ‘trajectory from feminism into the therapy world was the same for me as for lots of women and community activists of my age. It was a very common way of moving at that time because the personal is political.’ So said the London-based group analyst Sue Einhorn, recollecting that time of ferment and innovation. In the same interview from 2019, Einhorn, who qualified as a group analyst in 1991, laments that talking about politics and group analysis is ‘rather painful for me because I think that group analysis has never really grasped its political focus … and has increasingly betrayed it …’ At the heart of the idea of group analysis, as Einhorn sees it, is ‘a very deep understanding that there’s no such thing as an individual. We’re all part of the social in which we live.’ But this reckoning with the social has been sidelined in the focus on the analytic institution. For Einhorn, ‘the competition between the men in group analysis meant that for a long time it wasn’t a dialogue about developing Foulkes’s ideas, it was about who owned the social unconscious.’ She regrets not owning her own identity as a feminist more in these circles. In many ways, the 1970s was one of the most exciting decades in the history of psychoanalysis because it fostered so many grassroots models and collective forms that came together to imagine a type of analysis that might work for the people and be grounded in social change. In the women’s movement, an analytic professional was sometimes brought in to lead groups. Often, they drew on Freudian, encounter and Gestalt approaches but, at other times, the groups were truly leaderless and productively chaotic. These provisional forms of group analysis modelled a radical vision of what psychoanalysis might be, even if they invariably proved difficult to sustain long-term. What was most innovative about these grassroots models is that they situated individual psychic discontent in a social setting that allowed women to see how their feelings of anxiety and ambivalence might have a social origin and to muster the political tools to transform the world around them. 
In their collection In Our Experience (1988), the editors Sue Krzowski and Pat Land write: ‘It was often a startling experience to identify for the first time a personal shame or anxiety as part of a social reality, and it made changes in many women’s lives.’ The grassroots therapy groups were frequently allied to local community practices, such as nursery provision or women’s shelters, so that analytic group practice could be properly rooted in the community. Such groups were difficult to sustain as the 1980s progressed, amid rent rises and social welfare cuts, and with the idea of the individual becoming ever more central to socioeconomic prosperity. But they had a lasting impact on therapeutic culture, leading to the creation of community-based groups such as the Women’s Therapy Centre (WTC) in London, which ran for more than four decades until it was forced to close in 2019 due to lack of funding. In particular, the WTC offered therapeutic provision to women on very low incomes who found it difficult to use normal mental health channels and who tended to fall through the usual safety nets. As one single mother, who had been living in a refuge for 15 months, explained in an interview dating from 2014, she had felt very lonely before participating in the group. But the group provided her with a collective way to heal, and so the depression and anguish she felt – as a lone mother, in a foreign country, without a home (‘the nightmare’) – had now gone. In spite of the growing crisis in mental health care provision, exacerbated by the COVID-19 pandemic, the psychoanalytic establishment continues to view group therapeutic methods as by and large unorthodox, as diluted or left-field versions of the talking cures: as a form of therapy that is always implicitly less valuable. The hangover associations of groups with cults or fads, eccentricity or even brainwashing linger on too, in the public perception of group analytic therapy, which has ultimately failed to occupy the collective psychic space in the same way that individual and indeed couples therapy have in recent years. Psychoanalysis is a kind of mirror space. A topsy-turvy world. A creative illusion through which, if it is successful – and we have to concede that it is not always – one experiences moments of breakthrough. Long-held assumptions of who we are jostle and shift before us, releasing us from a history that has been imposed on us, and allowing us not only to adapt better to the world around us, but to see how change occurs and the freedom that change confers. Group analytic therapy does not so much break with analysis as offer a different kind of analytic frame, one in which the goals, values and institutional factors of psychoanalysis are less unconscious and more starkly on display. As a result, it always risks being seen as a less sophisticated, more cultish and more anti-institutional form of analysis because it offers its own therapeutic wager: a negotiable, less costly, adaptable context. And yet, for all the outlandish and dubious incarnations that have been practised under its name, group analytic therapy continues to offer the practical resources for a vision that has yet to be realised – a psychoanalysis for the people. This article was written with funding from the Leverhulme Trust.
Jess Cotton
https://aeon.co//essays/why-didnt-group-therapy-become-a-psychoanalysis-for-the-people
https://images.aeonmedia…y=75&format=auto
Nations and empires
Amid the chaos of the First World War, a new pan-Arab empire was proclaimed. It faltered, but its historical lessons remain
In December 2022, Abdullah II, the king of Jordan, gave an interview to the CNN anchor Becky Anderson. Sitting close to the Jordan River, not far from where Jesus is believed to have been baptised, this Muslim ruler expressed his concerns about the status of Jerusalem and the Christians under pressure from the new, extremist Israeli government. He emphasised that the ‘Hashemites’, his family, are the custodians of both Christian and Muslim sites in the holy city. Abdullah II cited his great-great-grandfather Sharif Hussein. It was from Hussein’s time, sometime at the end of the First World War, according to Abdullah II, that the Hashemite custodianship of Jerusalem’s holy sites originates. His ancestor even gave sanctuary to Christian Armenian refugees in Jordan, said the king proudly on CNN. Abdullah II’s remarks presented him as a confident and reassuring monarch but they also refer to a history of modern Arab kingship and the modern Middle East that has mostly been forgotten. Wikipedia in English, for instance, tells us that the custodianship of the Muslim sites in Jerusalem by the Hashemites follows from a ‘verbal agreement’ of Hussein with the Supreme Muslim Council of Palestine in 1924. The Indonesian version of Wikipedia repeats the claims of the English article. The Arabic version, however, tells us about the financial help Hussein gave for restoring the holy sites of Jerusalem and subsequent donations by the Hashemite dynasty for further improvements to the holy city. So, who was Hussein and what’s his relevance today? Sharif Hussein is a legendary figure of the 1910s and ’20s. For some – certainly for Abdullah II – Hussein was the nationalist leader of the ‘Arab revolt’ during the First World War who won the war for the Arabs. In an alliance with Britain, he revolted against the Ottoman Empire in 1916 in order to establish a giant independent state that he called the ‘Arab Kingdom’. Others see him in less heroic terms. They blame him for ‘stabbing the Ottomans in the back’, for failing to stop the partitions decided by Europeans, and for the Zionist settlement of Palestine – so, in a way, for losing the war. The importance of Hussein and his Arab Kingdom today lies in a forgotten experiment in state-formation exactly 100 years ago. Modern states do not originate only from nationalism. Abdullah II’s remarks at the Jordan River evoke Islam as a principle of government and Muslim rulers as protectors of Christians. This use of Islam is very different from what we usually hear about religion in the Middle East – for instance, ‘sectarianism’ (religion-based claims to institutionalised representation within nation states, often erupting in violence) or the fascist brutality of ISIS. But neither should we follow the king of Jordan into a monarchist-nationalist nostalgia. His great-great-grandfather Hussein was not born a nationalist. Here, I tell Hussein’s story as an exercise in unearthing ideas about Muslim government that we can call ‘imperial’. This is important because it was imperial techniques of state-making, not nationalist or egalitarian revolutions, that defined the early 20th century in many regions of the world. Sharif Hussein bin ‘Ali was the scion of an important family from the sacred city of Mecca. Sharif means ‘nobleman’. Individuals who claim that they are descendants of the Prophet Muhammad use the Arabic honorific terms sharif (plural ashraf) and sayyid (meaning ‘master’, plural sada). Tens of millions of Muslims today claim this heritage. 
Saddam Hussein, the Iraqi dictator until 2003, was one of them, for example. The rulers of Morocco, too, are ashraf. (The Saudi kings are emphatically not.) Furthermore, among all ashraf and sada, only the Jordanian ruling family and their relatives are called ‘Hashemites’ publicly, after Hashim, a legendary ancestor of the Prophet Muhammad. So, yes, both Hussein and his great-great-grandson King Abdullah II, sitting next to the Jordan River talking to CNN, are also ashraf, descendants of the Prophet. Yet Hussein was born not in Mecca, but in Istanbul, at the metropolitan centre of the Ottoman Empire, sometime in the 1850s. The Ottoman Empire, a vast, three-continental administration in Europe, Asia and Africa, existed roughly between the 14th century and 1922. This empire was the Mediterranean Muslim superpower. The Ottoman emperor – sultan – assumed the title of the caliph of (Sunni) Islam, too. Today, in its final former territory across Europe and Asia, we find the states of Turkey, Albania, Bulgaria, Syria, Lebanon, Iraq, Jordan, Israel, the Palestinian Territories and Saudi Arabia, as well as Egypt, Libya and Tunisia in North Africa. In 1914, at the threshold of the First World War, its directly ruled population was estimated at around 25 million (at that time, the US population was about 100 million; Austria-Hungary was about 50 million). For the Ottomans (a non-Arab, Turkic Muslim imperial dynasty), the most important ashraf were those in Mecca and Medina, the sacred cities in the Hijaz region of Arabia. Hence the value of Sharif Hussein bin ‘Ali for this Muslim empire. The loyalty of the Meccan descendants of the Prophet meant the symbolic recognition of the Ottoman caliphate. Since their conquest in the early 16th century, the Ottoman sultans usually appointed a sharif to serve as the emir of Mecca, its local ruler. From the mid-19th century, the descendants of the Prophet became closer and closer to Istanbul, literally. Hussein was born in Istanbul because his family branch in exile competed for the emirate of the holy city. He knew Turkish, his wife was Turkish-speaking, and his sons received Ottoman education. Hussein, known in the Ottoman administration as Şerif Ali Paşazade Hüseyin Bey (in Turkish transliteration), became quite an Ottomanised descendant of the Prophet. From the 1870s, the descendants of the Prophet received political roles in the Ottoman imperial capital. Many other more ordinary Arabs from the provinces also became part of the modernising imperial bureaucracy. Hussein and his sons (and the rival sharifian Meccan family members), circulating between Mecca and Istanbul, benefitted from this modern experiment fusing Islam with imperial patriotism. It’s helpful to think of this as an ‘unelected system of representation’, for the sultan suspended the imperial constitution in 1878 and replaced the parliament with these new practices. The ashraf ‘represented’ their regions (in a way, Hussein’s family stood for Mecca and the Hijaz region) but also in general the Muslim community. Many ashraf sat on imperial councils, travelled on steamships and the new railway lines, and so provided a symbolic cover for the empire. After the coup d’état usually known as the Young Turk Revolution to restore the constitution in 1908, Hussein’s sons became elected members of the new imperial assembly. 
And from 1908, Hussein held the imperial office of the emir of Mecca. Being a descendant of the Prophet and an Ottoman imperial notable was a uniquely powerful combination in a city where a growing number of Muslims from all over the world came to perform pilgrimage in the age of steam. No wonder that the European empires (with large Muslim colonies and domains) were keen on gaining Hussein’s attention, and Hussein was also keen to gain their attention, especially the British. Hussein had been loyal to the Ottoman Empire before 1908 but hated the Young Turks and the restored Ottoman constitution. He thought that the Quran should be the only constitution in the empire; and he also feared losing his position as emir of the holy city. In the 1910s, Hussein and his sons made cautious contact with the British consul in Cairo. Intriguingly, in early 1914 Hussein’s son Abdullah asked the British consul to consider a British protectorate over the emirate of Mecca, as the British had done with the subdued Afghan emir. This 1914 intrigue by the Ottoman ashraf of Mecca to switch empires was part of a much more complex momentum of imperial transformation in the Eastern Mediterranean and the Red Sea. We must understand that the logic of the time, despite the popularity of ethnicity- and language-based patriotic ideas, was not to create sovereign nation-states but to transform empires somehow into looser organisations. By the 1910s, many faith- and ethnicity-based groups in the Ottoman Empire demanded reforms to transform the empire into a federation. Bourgeois Arabs were no exception as some Syrians started to imagine a decentralised Ottoman Empire with Arab autonomy. Other Arab groups – for instance, the religious entrepreneur-journalist Sheikh Rashid Rida and his activists, with some European encouragement – imagined a new empire as a Muslim association of emirs, and some other sheikhs even advocated for an Arab caliph instead of an Ottoman one. In many of these 1910s plans, the ashraf had a role and Hussein, as the ruler of Mecca, could personally expect a potential caliphate. European commentators imagined this would-be Arab caliphate as a type of papacy, restricted to the holy cities in the Hijaz. This would have ended the age-old Ottoman system of combining the emperor and caliph titles. In short, the spirit of the time was to create autonomous polities in some sort of federation as a better way to accommodate economic and political demands of ethnic groups, and to challenge the Ottoman leadership of Sunni Islam. And in October 1914, the Ottoman Empire joined the Great War as a member of the Central Powers. Germany, Austria-Hungary and the Ottomans fought together against the Allied Powers, the British-French-Russian alliance. The Ottoman caliph declared jihad on the Allied Powers (not, to be noted, on his own Central Power allies, the Germans and Austro-Hungarians). For the Allied Powers, Hussein, the emir of Mecca, was the most useful symbol against the Ottoman caliph. As a descendant of the Prophet, as an Arab, he was a potential challenger of the Ottoman claim to the caliphate (and, better still, this emir of Mecca had already requested British protection). 
After an exchange of letters with the British High Commissioner in Cairo (this correspondence came to be known as the Hussein-McMahon correspondence), Hussein declared his revolt – the ‘Arab revolt’ – against the Ottoman government in June 1916. Ever since, there has been a debate over what the British promised exactly, what a promise means in informal diplomacy, and whether the British betrayed their promises later. Despite the assurances about a large Arab polity in the correspondence with McMahon, no Allied planners really expected that the emir of Mecca would want something more than a small emirate with the holy cities in the Hijaz. When, in October 1916, Hussein and his sons announced their claim to a giant polity, with Hussein as ‘King of the Arabs’, it took the Allied Powers by surprise. The ‘Arab Kingdom’ was an idea about a new empire stretching from the Levant (what is today Palestine, Israel, Jordan, Lebanon) to the Iraqi regions, even Arabia, thus including most of the Arabic-speaking Asian Ottoman provinces (but not the North African ones). Overcoming their surprise, in January 1917 and later repeatedly, the Allied Powers recognised Hussein as king only over the Hijaz, a small portion of Arabia. But this new ruler and his sons were not satisfied with a kingdom of the Hijaz. They maintained their claims to a much larger state, a new Muslim-Arab empire. This is why, when the sharifian troops entered Ottoman Damascus in October 1918 under the orders of his second son Faisal, many Damascenes understood that they were now in the ‘Arab Kingdom’, subjects of Hussein, a new Muslim sultan. Empire is often a rhetorical term to mean something evil. Think about the empire in Star Wars. But we historians use ‘empire’ as an historical-analytical category of government, whose organising logic differs from the ideal of the nation-state. Empire is a large organisation that uses all available means (violence, dynastic marriage, religion and ethnicity) to establish political and economic claims on diverse regions with diverse peoples. As Jane Burbank and Frederick Cooper call to our attention, empires welcome and embrace ‘diversity’; it is nation-states that require a homogeneous population. Historical empires subjugated and colonised peoples, but the important issue for our purposes is that empire is a different way of subjugating and organising peoples from that of the nation-state. At the end of the Great War, the political visions about the future of what became the Middle East – the Allied agreements about partition, the well-known 1917 British promise of Palestine to Zionists as ‘the establishment in Palestine of a national home for the Jewish people’, Hussein’s Arab Kingdom, some bourgeois Syrian federative visions, and the very much existing Ottoman loyalists – were not about sovereign nation-states. These plans and visions all implied some type of empire. Perhaps the most fitting for post-Ottoman Arabs was a federative polity, with or without a dynasty. The imperial logic of organising peoples and territories dictated the political imaginations up to about 1922. During 1918 and 1919, the sharifian advocates of the Arab Kingdom projected Islam and Arab ethnicity as the founding norms of a new political order. 
From early 1918, the official journal in Mecca, along with Hussein’s sons, called him ‘the Commander of the Faithful’ in Arabic (amir al-mu’minin) while the new king craved the title of caliph. Both the sharifian and British propaganda started to advertise Prophetic descent as an important quality for Muslim rulership. The Arab Kingdom was to be ruled by Hussein and his sons, the descendants of the Prophet Muhammad. Islam, Prophetic genealogy and ethnicity were to serve as the constitutional foundations of Hussein’s Arab Kingdom. We can call this idea of a state a ‘genealogical empire’. Hussein’s genealogical empire was the first of many post-Ottoman Muslim imperial projects in the 20th century. As with Christian, Hebrew and Buddhist imperialisms, there had been various kinds of Muslim empires in history, from the late-antique Muslim-Byzantine caliphates to the last great empires of the Mughals in India, the Qajars in Iran, and the Ottomans in the eastern Mediterranean. In a way, the Arab Kingdom was to contain recycled Ottoman institutions: the caliphate, a monarchy, Islam, the ashraf, and of course the ex-Ottoman peoples, such as Arabs, Turks, Armenians, Jews and Kurds, some of whom were Christians and even Shi‘i Muslims. The Ottoman politics of diversity had to be transformed into a new Muslim framework. Using religion in state-formation is considered today outside of international norms. In 1919, the sharifian makers of the Arab Kingdom had to face the Ottoman Arab urban bourgeoisie who advocated instead some type of federation, perhaps preserving an association even with Istanbul. For instance, in Ottoman Damascus, the sharifian occupiers had to compromise on a constitutional, federative ‘United States of Syria’, in which Faisal, the son of Hussein, was declared king in March 1920. But Sharif Hussein was not a federalist. In his imagination, this unrecognised Syrian princely polity was still part of his larger Arab Kingdom. Besides the Arab federalists and the still-strong Ottoman loyalists, the sharifian imperial project also bumped into the intentions of the Allied Powers. This is the more familiar story about the modern Middle East. The French and British (and Russian and Italian) governments aimed at partitioning the Ottoman provinces. Just think about the Balfour Declaration in 1917, given by Britain’s foreign minister Lord Balfour to the Zionists to establish a ‘national home for the Jewish people’ in Ottoman Palestine, practically a promise for settler colonisation, a typical imperial gesture. The Covenant of the League of Nations in 1919 codified these agreements in the new international system. Hussein continued to be recognised only as the king of the Hijaz. There thus existed in 1919 a split situation – while the ‘Kingdom of the Hijaz’ was a minor Allied Power and as such participated in the Paris Peace Conference, the Hijazi (sharifian) representatives and administrators in the occupied regions projected the idea of the ‘Arab Kingdom’ with full force. And even more complicated was the fact that Hussein’s polity was to be subordinate to the British Empire. For instance, Hussein had no problem with a British appointment of his minister of war and often repeated that the British gave him power over lesser rulers in Arabia. 
Even more importantly, the British treasury financed the Hijazi (the sharifian) army and its occupation administration in Damascus, and in December 1919 the British government gave over the financing of this occupation zone to the French treasury. The French army did not trust Faisal, a would-be sharifian monarch in Damascus, who was too closely tied to his father Sharif Hussein, a potential caliph in Mecca, and to British politics. Besides, the French colonial empire had its own sharifian monarch in Morocco. In July 1920, the French army invaded the internal lands of Syria, expelling the Hijazi sharifian regime and Faisal, the new king of the United States of Syria, and killed the Ottoman Syrian general Yusuf al-Azma. Thus, accidentally, the French army also ended the hopes of the local Syrian Ottoman loyalists of returning to Istanbul’s umbrella. The possibility of a large Arab kingdom was not yet crushed as sharifian troops still held the Ottoman Hijaz railway stations in Transjordan, the mountains above the river Jordan. This is where Hussein’s third son Abdullah arrived in November 1920 to represent his father and establish his own emirate within the sharifian empire. The British planners agreed to this arrangement in April 1921 and at the same time transferred the defeated Faisal to rule a new country, the Iraqi kingdom. Thus emerged a chain of sharifian monarchies (the Kingdom of the Hijaz, the Emirate of Transjordan, the Kingdom of Iraq) in a loose association, under British control. This modular association of three Muslim rulers was still an empire, with Mecca as its centre. In 1921, British officials were astounded when Sharif Abdullah presented them with a constitutional draft of his new emirate that derived his authority over Amman from his father, Sharif Hussein, in Mecca. The operation of recycling the Ottoman Empire into a series of emirates held together by Prophetic genealogy, Islam, ethnicity, a railway, and dynastic claims was the defining project of the Middle East until 1924. Abdullah II’s remarks today about the Hashemite protection of Christians and Jerusalem’s holy places originate in this moment and in this project. Muslim emperors had offered protection to persecuted communities in the past, and possibly Sharif Hussein was also glad to exercise this imperial gesture when his troops found Christian Armenian refugees in the occupied Syrian provinces. Furthermore, as a potential ruler of Jerusalem and a caliph – in fact, in March 1924 he did assume the caliphate in public – Hussein and his sons got in touch with the Jerusalem Muslim, Christian and Jewish communities. A Jerusalem delegation arrived in Amman in March 1924 to acknowledge Sharif Hussein as caliph (and another Jerusalem group to anxiously express their doubts). About this time, the Hashemites started to act as protectors of the holy sites in a symbolic competition (but also cooperation) with interwar Zionists. But the core of the imagined Arab Kingdom – Mecca and the Hijaz – was gone by the end of 1925. Capitalising on the general dissatisfaction with King Hussein’s politics, a new conqueror, Sultan Abdulaziz of Najd in Central Arabia (‘Ibn Saud’) conquered the holy cities and expelled the sharifian family. A new, this time Saudi, kingdom started in the Hijaz. King Hussein lived in exile – he was pointedly buried in Jerusalem near the Al-Aqsa mosque in 1931. So, when today Abdullah II claims protection over the holy sites, in fact he also claims his own ancestor’s grave. 
In this story about the rise and fall of the sharifian Arab Kingdom – although never entirely gone, as Jordan is still with us – we have observed that religion, genealogy, federative ideas, ethnicity and monarchy were fundamental in the local making of modern Arab polities. While the Allied Powers partitioned peoples and regions, there was significant local involvement in the political furnishing of new states. The mixture of constituent fictions was not created by the Allied occupiers; instead, it emerged from how societies that succeeded the old Ottoman order continued to carry out imperial programmes in lieu of radical revolutions. At the same time, these successor societies were to be integrated into new European imperial orders, Greater France and Greater Britain, respectively. Western and Arab politicians, Orientalists, artists and the press further entrenched the essentialisation, racialisation and feudalisation of post-Ottoman Arabs in the 1920s and ’30s. The local and external logics of the imperial imaginary about Arab politics, with religion at its centre, retained their force well into the second half of the 20th century, and, as we can observe in Abdullah II’s interview with CNN, even today.
Adam Mestyan
https://aeon.co//essays/sharif-hussein-and-the-campaign-for-a-modern-arab-empire
https://images.aeonmedia…y=75&format=auto
The ancient world
Passersby could wander at will into grand public libraries in imperial Rome. Could they trust what they found inside?
It’s around 200 CE, in Ephesus, an Aegean city of Greek roots, now a major hub of the Roman Empire. Meandering down marble-paved Curetes Street, a city dweller is lost in the bustle of the town, procuring produce and wares in shops tucked beneath the colonnades, attending the public baths – even a conveniently placed brothel. It all plays out alongside merchants from across the Mediterranean, who disembark from their ships to transport cargoes and conduct business in the great depot between West and East. They make their way past the shrine to the emperor Hadrian and the nymphaeum of the emperor Trajan, bold reminders that the Ephesians, in their prosperity, are now part of the realm in faraway Rome. And there, culminating at the end of this lively thoroughfare at a slight angle, as though gradually revealing itself, lies a theatrical marble-clad façade of elegant Corinthian columns, exquisite reliefs and wordy inscriptions. Up a short flight of stairs, flanked by statues, three large doors offer a glimpse into a single large room, colonnaded and high-ceilinged. Thousands of scrolls are carefully stacked into rectangular recesses in the walls. The doors to the towering Library of Celsus are flung wide open: anyone can enter this shrine to the written word. The scene is millennia old, but hardly alien to modern times. Libraries have, through history, represented the ultimate repository of information and knowledge, and produced some of the most spectacular architecture in the Western world and beyond. Today’s libraries are a direct legacy of the Roman impulse to transcend practicality and invest arenas of knowledge with a sense of scale akin to that of churches – temples to a different creed – with their imposing porticoes and columns, their elaborate ornaments and staircases, their rows of desks and lofty shelves. Yet a library cannot be reduced to the mere embodiment of universal education. The philosopher Michel Foucault once asserted that ‘power and knowledge directly imply one another’ – one cannot exist without the other. Where libraries have been designated as public, free and accessible to all, there has always existed the risk of echo chambers with curated contents and truths. The institution of the public library goes further back than imperial Rome, of course. During the earlier Roman Republic (509-27 BCE), libraries were a private affair, concealed in the properties of the educated elite and off-limits to the masses. They were overseen by a vir magnus (great man), the owner of the house, who would open his collection to his amici (friends), providing an intimate atmosphere for intellectual exchange and cultural influence and posturing. In its later years, the Roman Republic was beset by a series of cataclysmic civil wars and political showdowns that sent it hurtling towards one-man rule. The Republic, with its imperfect checks and balances on political power, had failed; but to the Roman people, defined by their aversion to monarchism and the conviction of liberty, it was inviolable. The first emperor, Octavian, found himself navigating a careful balancing act of titles and epithets without ever styling himself Rex (king): from Princeps Civitatis (first citizen) to primus inter pares (first among equals), all with connotations of authority but not monarchical supremacy. He would settle on Augustus (the venerable one) as his official title. The end of the Roman Republic was never announced – despotism prevailed in the superficial likeness of the Republic. 
Amid this upheaval, the masses were manipulated with imperial cults and vanity projects and placated with bread and circuses. But still there remained a class of educated Republican aristocracy whose raw memories and ambitions had to be reined in. Hoping to resonate with the disaffected Roman public, Julius Caesar had already intended ‘to make as large a collection as possible of works in the Greek and Latin languages, for the public use,’ as the Roman historian Suetonius wrote. Caesar was ultimately beaten to the task by a soldier and politician named Gaius Asinius Pollio, who, by 28 BCE – just a year before Octavian became the Emperor Augustus – used his war plunder to fund Rome’s very first ‘public’ library in the Atrium Libertatis, Rome’s census record building. We can be sure that Pollio was watching then-Octavian closely as his power intensified; indeed, right after becoming Augustus, the first emperor opened his own public library within the Temple of Apollo on Palatine Hill in Rome. This cannot be a coincidence. Pollio, a member of the Republican senatorial order, had snatched credit for gifting the Roman public the first public collection of books. From then on, Augustus had no reason to be diplomatic. He probably populated his growing collections with the books confiscated from the heirs of the generals he defeated in the late Republican wars. If the emperor sought submission, he had to subjugate minds and ideas. An imperial tradition had begun. When Augustus opened his first library, he was, in effect, usurping the role of a Republican patronus (patron) of culture and knowledge, except on a much larger scale. Opening his large libraries to an ever-wider public, he was a vir magnus writ large. The perception of the imperial libraries as ‘public’ resources stood in glaring contrast to the closed collections that continued to operate in private properties. The Latin verb publicare (to make public/release) and its cognates appear throughout Roman literature in reference to these new libraries. The poet Ovid, banished in 8 CE by Augustus to distant Tomis on the Black Sea, marvelled at his visit to the Palatine library, where ‘all that men of old and new times thought, with learned minds, is open to inspection by the reader’. Augustus understood his gesture was laden with symbolic, even revolutionary, significance. He was offering the Roman people access to knowledge that had once been confined by the Republican elite behind closed doors. And as though to accentuate this sense that the library was the emperor’s private space – and its contents his personal property – in which all others were welcome guests, Augustus’ private Palatine residence was connected to the Temple of Apollo and its library by special access. The library itself was an ode to imperial achievement – the atmosphere of the temple seems to have leaked inside. Colonnades resembled those of Latin and Greek libraries that came before; statues alternated with exotic pillars; and its walls were adorned by portraits of famous authors. The earliest known reference to Rome’s public libraries was made by the poet Horace in 20 BCE. And in contrast to Ovid’s sensation of cultural apogee, Horace’s words are fraught with warning: ‘What, pray, is Celsus doing? 
He was warned, and must often be warned to search for home treasures, and to shrink from touching the writings which Apollo on the Palatine has admitted: lest, if some day perchance the flock of birds come to reclaim their plumage, the poor crow, stripped of his stolen colours, awake laughter.’ Glance through the poetic diction, and the implication is clear. The books of the Palatine library are compromised, ‘stripped’ of their substance. We sense a suspicion that must have been widely held in elite circles – better to stick to the ‘home treasures’ of the old-school private collections; they were not yet tainted by the venom of despotism. This wasn’t paranoid. In wrangling control of the elite’s intellectual world, Augustus was intrusive and possessive. To keep the growing number of collections, he appointed his own educated freedmen (manumitted slaves who remained legally and socially distinguished from freeborn Romans): Gaius Julius Hyginus, a polymath scholar and grammarian, took charge of the Palatine, while the library in the Porticus of Octavia was entrusted to the grammarian Gaius Maecenas Melissus. Forever beholden to their liberator and presumably unambitious, these were men Augustus could trust. Meanwhile, in a nod to their new inferior status, he seems to have cast most of the elite from their traditional domain. Only freeborn Pompeius Macer was left to ‘the arrangement of his libraries’, as far as we know. Like any good autocrat, Augustus didn’t refrain from violent intimidation, and when it came to ensuring that the contents of his libraries aligned with imperial opinion, he need not have looked beyond his own playbook for inspiration. When the works of the orator/historian Titus Labienus and the rhetor Cassius Severus provoked his contempt, they were condemned to the eternal misfortune of damnatio memoriae, and their books were burned by order of the state. Not even potential sacrilege could thwart Augustus’ ire when he ‘committed to the flames’ more than 2,000 Greek and Latin prophetic volumes, preserving only the Sibylline oracles, though even those were subject to ‘strict examination’ before they could be placed within the Temple of Apollo. And he limited and suppressed publication of senatorial proceedings in the acta diurna, set up by Julius Caesar in public spaces throughout the city as a sort of ‘daily report’; though of course, it was prudent to maintain the acta themselves as an excellent means of propaganda. We can sense the typical behaviour of an absolutist ruler soothing his anxieties amid the fragility of his new regime – but this was also a calculated attack, a forewarning to anyone who could write, and anyone who, even if indirectly, might cause offence. ‘Strict examination’, the imperial biographer Suetonius surely perceived when he wrote it, was a euphemism for state censorship. Horace suggested the insidious encroachment of imperial truth into the public libraries, but Ovid paid the price. Betrayed by ‘a poem and a mistake’ – with his Ars Amatoria allegedly considered too salacious for the prudish emperor and, more decisively, having found himself entangled at the wrong end of a succession conspiracy – the pitiful poet deplores in his Tristia that, at the Palatine, ‘the guard, from that house that commands the holy place, ordered me to go. 
I tried another temple, joined to a nearby theatre: that too couldn’t be entered by these feet. Nor did Liberty [ie, Atrium Libertatis] allow me in her temple…’ Ultimately, just ‘a short and plain letter to Pompeius Macer’ was enough for Augustus to forbid the publication in his collections of any specific writings – even those composed by the revered Julius Caesar in his youth, including such trivialities as his take on the Oedipus tragedy. And just as examination of the works implied censorship, with the libraries open for gatherings, participation implied supervision – and the chance to sway the crowd. Augustus presented himself to attendees not as a dreaded force but as a generous and genuinely involved patron of the intellectual community. Indeed, writes Suetonius, he gave ‘every encouragement to the men of talent of his own age, listening with courtesy and patience to their readings’ – yet even then, the easily affronted autocrat could not help but take ‘offence at being made the subject of any composition except in serious earnest and by the most eminent writers.’ As the imperial order consolidated itself, the early emperors became more secure in their authority. The logic of appointing freedmen almost exclusively to keep the collections appears to have persisted with Augustus’ successor Tiberius (reign 14-37 CE), though this became less absolute, and acts of censorship, like book burning, declined in frequency. Still, Tiberius was spooked by the Sibyl foretelling ‘civil strife upon Rome’. In the words of the Roman historian Cassius Dio, Tiberius denounced the verses as ‘spurious and made an investigation of all the books containing prophecies, rejecting some as worthless and retaining others as genuine.’ For the astute observer at the time, it would have made for a sombre realisation that imperial policy could be inconsistent and viciously irrational – Virgil’s Aeneid, a foundational epic claiming the destiny of Augustus’ reign, had once served to extol and immortalise the legacy of the first emperor, and his work and image were once received with great approval into the imperial collections. Yet Caligula (reign 37-41 CE) removed the writings and the busts of Virgil, who he said was talentless, and Titus Livius, who was dismissed as careless and verbose. These libraries were not just handy repositories for all literate citizens to enjoy – they presented to the emperors an opportunity to contrive a formative instrument of state opinion and policy, curated to reflect what they wanted public knowledge and truth to be. Eventually, maintenance of public collections became a matter of procedure. When he was roaming about the Palatine one day, Claudius (reign 41-54 CE) is said to have caught the historian Nonianus and his audience off-guard as he recited his work (presumably in the library) when ‘he suddenly joined the company to every one’s surprise’ – a surprise that was, no doubt, laced with honour as much as some trepidation. Intellectual interest and expertise were not even prerequisite for such imperial obligations. The attempt by Domitian (reign 81-96 CE) to restore the burnt-down library built by Tiberius in the Temple of Augustus ‘at a vast expense’ – hurriedly sending his agents to collect manuscripts from all parts, and even sending scribes to Alexandria – is impressive. But this is the same Domitian who also allegedly never bothered himself with ‘the trouble of reading history or poetry, or of employing his pen even for private purposes’. 
So what if the emperor, as patron of the intellectual community, was not intellectually inclined himself? His job was one of maintenance. That maintenance could not escape the imperial Roman tendency to brazen ostentation and excess. A number of libraries sprang up in the capital – Augustus’ collections in the Palatine and the Porticus of Octavia (perhaps built by Augustus’ sister Octavia herself), Tiberius’ libraries in the Temple of Augustus and in the imperial palace on the Palatine, and Vespasian’s (reign 69-79 CE) collection located within or attached to the Temple of Peace. There is even one associated, if tenuously, with the Baths of Agrippa complex near the Pantheon. Even beyond Rome, where in the absence of emperors provincial politicians were handed the prerogative of building institutions, the emphatic Hellenophile Hadrian (reign 117-138 CE) wished to imprint his legacy on the cradle of philosophy (and, almost sardonically, democracy) itself, and constructed his own library in Athens. It was later eulogised by the Greek geographer Pausanias for its ‘hundred pillars of Phrygian marble’ and its rooms ‘adorned with a gilded roof and alabaster stone, as well as with statues and paintings.’ Only later, as though an afterthought, does he mention that ‘in them there are books’. Perhaps at the pinnacle of the imperial project was the Ulpian Library, inserted prominently into the Forum of Trajan in the centre of Rome by the emperor himself in 114 CE. It was distinguished by its twin Latin and Greek book collections, located directly opposite one another, with the colossal 38-metre-high Column of Trajan slipped between them. The separate collections were dominated by their single, high-ceilinged rooms, and flooded with desks and Corinthian columns that adorned their front porticoes and flanked the statues and cabinet niches. The rooms themselves were a luxuriant, polychrome feast of Egyptian granite, Numidian golden/purple giallo antico and Anatolian marble, all hauled on ships from across the Mediterranean and up the River Tiber at what must have been considerable expense. This overwhelming sensory experience was a reflection of the metropolis itself – as magisterial as it was chaotic. And with space at a premium in this dense, unplanned urban fabric, the collections were incorporated as smaller components of other larger complexes of temples, porticoes and forums, concentrated in the ‘historic centre’ of the city rather than dispersed, creating a sort of clustering effect. Rather than drown out the libraries, the association with contexts of religious and civic importance only enhanced their standing in Rome’s sea of monuments – just like that legendary epitome of libraries across the Mediterranean, the Great Library of Alexandria, itself a branch of the Musaeum complex. Reviewing the elaborate masterplan, we cannot ignore one glaring fact: the overwhelming majority of people in the Roman world were illiterate. It’s hard to picture masses of urban dwellers frequenting and availing themselves of these collections. Should we therefore dismiss the level of architectural effort and investment that supported the establishment of these grand ‘public’ libraries as a bizarre waste? They seem even more gratuitous when we unravel some of the architectural impracticalities. 
The esteemed architect Vitruvius had long ago posited that a library should acquire an eastward orientation to provide morning light and good natural protection against damp – however, in the small town of Timgad (in modern Algeria), the architect of the Library of Rogatianus instead opted to position the public entrance to its single book room facing west onto the settlement’s principal thoroughfare rather than eastward onto an unimpressive backstreet. And though housing the scrolls in a single, large book room would have heightened the sense of accessibility, this choice also exposed them to potential ill-doers and thieves, as well as damp – risks that could much more easily have been avoided by smaller upper-floor accommodation. Yet futility was not a flaw: if the focus was on appearances and exhibition, the rules of preservation would not be worth much. In a world of mass, normalised illiteracy, the notion of a ‘public’ library would have carried different connotations – and, as today, we can imagine that libraries must have been public in the most literal sense to varying extents. If collections such as the Ulpian Library or those of Ephesus and Timgad, with their book rooms opening right onto the public space in a direct embrace of passersby, suggest an emphasis on walk-in accessibility, those located within temples and palace precincts, even if supposedly public, would surely have been off-limits to random passersby. Perhaps these operated as ‘public’ repositories, accessible to those with an interest on an appointment basis, not too dissimilar from many contemporary collections. Ultimately, Rome’s ‘public’ libraries were monuments parading as civic institutions, a conspicuous display of imperial benevolence to the Roman people and the victory of the new order – one in which the emperors ostensibly endorsed free enquiry, and where knowledge was liberated from the shackles of the Republican aristocracy. If Asinius Pollio was the one who ‘first by founding a library made works of genius the property of the public [rem publicam]’, it was Augustus and his successors who instilled an ideology of the public ownership of knowledge. And it did not matter if the masses could not read any of it – they could satisfy themselves with the illusion that masked yet another mechanism of autocratic control. Democratic access to the written word was never the intention. The integrity of the library, even in mature, modern democracies, cannot be taken for granted. Just consider the revelation of the American Library Association that, in 2022, in the United States, there were 1,269 documented demands for the censorship of 2,571 titles in school and public libraries, the vast majority of which were works by or concerning the LGBTQIA+ community and people of colour. According to a report from PEN America, in the school year 2021-22, book bans ‘occurred in 138 school districts in 32 states. These districts represent 5,049 schools with a combined enrolment of nearly 4 million students.’ In the UK too, the Chartered Institute of Library and Information Professionals has voiced increasing concerns following a survey in 2022 that revealed a rising incidence of censorship requests. Libraries are easily malleable and inherently vulnerable to conflicts of power, ideological chasms and thought control – the Romans themselves taught us this, and sometimes it seems like not much has changed. 
In a distant echo from the past, as though appealing to our apprehensions, the imperial historian Tacitus, introducing his Histories, betrays an envy of his past Republican counterparts – once power was ‘concentrated in the hands of one man’, he observes, ‘historical truth was impaired in many ways.’ Wistfully, he longs for the past ‘rare good fortune of an age in which we may feel what we wish and may say what we feel.’
Fabio Fernandes
https://aeon.co//essays/romes-libraries-were-shrines-to-knowledge-and-imperial-power
https://images.aeonmedia…y=75&format=auto
Philosophy of religion
His name is now the byword for a fool, yet his proof for the existence of God was the most rigorous of the medieval period
I am not nearly old enough to remember dunce caps, but I do remember a pedagogical illustration of a sad little boy sitting in the corner of a classroom wearing a pointy hat while his peers gaze joyfully at their teacher. My teacher explained that the pointy hat was called a dunce cap, and was used in olden times to humiliate and so punish the dunces, that is, the students who cannot or will not learn their lessons. Our own lesson was clear: we might not have the pointy hats anymore, but only sorrow and ostracisation await children who do poorly in school. Ironically, John Duns Scotus (c1265-1308), after whom the dunces are named, did very well in school, impressing his Oxford Franciscan colleagues so much that they sent him to the University of Paris. His brilliance at Paris eventually earned him the temporary but prestigious post of Regent Master of Theology. His writings, despite their difficulty, were enormously influential in Western philosophy and theology, so much so that universities all over Europe established Chairs of Scotist thought side by side with Chairs dedicated to Thomism. In the 19th century, the Jesuit poet Gerard Manley Hopkins declared that it is Scotus ‘who of all men most sways my spirits to peace’, and halfway through the 20th century the celebrity monk Thomas Merton could say that Duns Scotus’s proof for God’s existence is the best that has ever been offered. This prestigious legacy notwithstanding, as early as the 16th century educated Englishmen were appropriating ‘Duns’ as a term of abuse. In 1587, the English chronicler Raphael Holinshed wrote that ‘it is grown to be a common prouerbe in derision, to call such a person as is senselesse or without learning a Duns, which is as much as a foole.’ But in the same age a bookish person might also be labelled a dunce: ‘if a person is given to study, they proclayme him a duns,’ John Lyly explains in his Euphues: The Anatomy of Wit (1578). Humanist contempt of scholastic methods and style – of which Scotus’s own tortuous texts sometimes read like a parody – is probably an adequate explanation of the unfortunate union of ‘fool’ and ‘studious’ in ‘dunce’. A person must be a fool to waste time reading John Duns Scotus! Scotus remains a polarising figure, but his humanist detractors would be horrified to learn that here in the 21st century we are witnessing a Scotus revival. Philosophers, theologians and intellectual historians are once again taking Scotus seriously, sometimes in a spirit of admiration and sometimes with passionate derision, but seriously nonetheless. Doubtless this is due in part to the progress of the International Scotistic Commission, which has in recent years completed critical editions of two of Scotus’s monumental works of philosophical theology: Ordinatio and Lectura. As these and other works have become more accessible, Scotus scholarship has boomed. According to the Scotus scholar Tobias Hoffmann, 20 per cent of all the Scotus scholarship produced over the past 70 years was produced in the past seven years. This explosion of interest in Scotus offers as good an occasion as any for introducing this brilliant and enigmatic thinker to a new audience. 
Some of Scotus’s theological concerns are bound, at first glance, to seem irrelevant to secular readers, but theology for Scotus was both a subject in its own right and the context in which to engage in distinctively philosophical activity: from the problem of universals to the grounds of moral authority, from the mind-body relation to the relations between mind, word and world, from the intelligibility of religious language to rational proofs of God’s existence, Scotus has something interesting to say in most of the major contemporary subfields of philosophy. Of his life, there is, sadly, not much we can say. Probably he was born in the town of Duns in Scotland, in 1265 or 1266. He got involved in the Franciscan movement as a boy, and his Franciscan superiors sent him to their house of studies in Oxford, perhaps around 1280. There he studied the liberal arts and went on to study theology. He was ordained a priest in 1291. By the early 1290s, he had made his first steps as a professional theologian, lecturing at Oxford on Peter Lombard’s Sentences, a standard textbook of theology that served as a de facto syllabus for theology courses at the universities of Oxford and Paris throughout the 13th and 14th centuries. But he also began what was to be a lifelong side interest in writing on Aristotle, producing commentaries on most of the logical works, and at least beginning commentaries on On the Soul and Metaphysics, which he later finished at Paris. He continued lecturing on the Sentences after his move to Paris sometime before the start of the academic year in 1302. The published versions of these lectures form the bulk of his literary output. We have three distinct versions: the early Lectura, completed and published at Oxford; the middle Ordinatio, started at Oxford; and the later Reportationes, a chaotic collection of student reports on Scotus’s lectures. Of these, the Ordinatio is the most polished and is the closest we have to a complete commentary by Scotus on the Sentences – ‘ordinatio’ itself means, roughly, ‘carefully edited’. In 1303 he was temporarily exiled from Paris for his support of Pope Boniface VIII over King Philip IV in their dispute over taxation of Church properties. It is not known what Scotus did during this exile, but probably he returned to Oxford and may have spent at least part of the time lecturing at Cambridge. After a year, he was able to return to Paris, where, in 1305, he finally earned his doctorate in theology and presided for a couple of years as the Regent Master of Theology. During his Regency, Scotus conducted a ‘quodlibetal dispute’, a formal academic event at which members of the audience could ask the Master questions on any topic whatsoever. Scotus later published a set of Quodlibetal Questions based on this dispute. In 1307, Scotus left Paris and took up the far less prestigious post of lector at the Franciscan house of studies in Cologne. A lector at such a house would have the primary teaching responsibility of the friars residing in that house. Compared with the Franciscan house at Paris, let alone the University of Paris, the Franciscan house at Cologne was a backwater. Why Scotus was sent there is not known. Also unknown is the cause of his untimely death in 1308, about a year after arriving in Cologne. It is, of course, disappointing to have so few details of Scotus’ life.
And yet in this very lack there is a lesson about what Scotus’s life was really about. We do not know why he was sent to Cologne at the height of his Parisian success, but we do know that it is very Franciscan to shun worldly acclaim. Scotus was, after all, a Franciscan friar, and the religious order St Francis founded is officially called the order of the Little Brothers of Francis, as a testimony to the poverty and humility they aspired to. It is easy to imagine Scotus the Franciscan willingly taking on a job in Cologne that would result in less time to write, fewer opportunities to dazzle influential peers in philosophical disputation, and hence less fame and prestige than he would have had by staying at Paris. Given his vocation as a Franciscan friar and a priest, it comes as little surprise that God’s existence and nature, and how we ought to live in light of God, were the central (but not only) topics of Scotus’s philosophical work. But it would be a mistake to think of Scotus’s philosophical efforts as so many attempts to rationalise previously settled dogma – this would be unfair to Scotus, given the extremely high argumentative standards he set for himself. One dogma that he thought philosophy could demonstrate was the existence of God. As a Catholic theologian, he believed by faith that God exists, but he also thought that philosophy, or natural reason, could demonstrate that there is a supreme nature that is the first cause of everything else, is the ultimate purpose for which everything else exists, and is the most perfect being possible. Moreover, this supreme nature has an intellect and will, and so is personal, and has all the traditional divine attributes such as wisdom, justice, love and power. In short, Scotus thinks that philosophy, unaided by theology, can demonstrate God’s existence. His case is elaborate, developed over 30,000 words in his Tractatus de primo principio – a work I recently translated and wrote a commentary on (forthcoming this year with Hackett Publishing Company) – a virtuosic exercise in the high scholastic style. It develops a sort of hybrid argument influenced by both Aristotelian-Thomistic ‘cosmological’ arguments that approach God from the causal structure of the world, and Anselmian ‘ontological’ arguments that try to establish God’s actual existence from peculiar features of the idea of God. It is widely regarded by specialists as the most rigorous effort to prove God’s existence undertaken in the medieval period. But while Scotus was confident that we can know God’s existence and many divine attributes by the unaided work of natural reason, he did not think we can, in this way, know everything that there is to know about God. As a Christian, Scotus believed that God is a ‘Trinity’ of divine persons – three persons sharing the one divine nature. But he did not think that we could know this fact about God apart from divine revelation. He extended this intellectual modesty to other distinctively Christian doctrines such as the resurrection of the dead: he thought that philosophy can show that it is probable that human beings have immortal souls, but that belief in the resurrection of the dead (and so the reunification of souls with bodies) is something believed by faith – not opposed to reason but not discoverable by reason.
While Scotus thought that some of his religious commitments could not be proved by reason, he did not think that his religious commitments contradicted anything that reason could show to be true. In this respect, Scotus is an heir of the long tradition of Christian thought that affirms the harmony of faith and reason. Here Scotus is in lockstep with Thomas Aquinas: both think that God’s existence can be demonstrated but that God’s being a Trinity cannot. Scotus and Aquinas were not in lockstep on every topic, however. One of the most infamous differences between these two great medieval thinkers concerns their views about how our words and concepts work when we try to think and speak about God. Each believed that our thought and language develop from our experience of the world around us. And each recognised that God is not among these familiar objects of experience. So, for both thinkers, it is equally important to offer some sort of theory about how it is that we can think and speak coherently and meaningfully about God using concepts and words tailored to finite, sensible things. Aquinas adopted the view that, applied to God, our concepts and words have only analogous meaning. For example, ‘wisdom’ as applied to God is only analogously related to ‘wisdom’ as applied to a creature, such as Socrates. Scotus offered a slightly different theory. He argued that at least some of our words and concepts have exactly the same meaning when applied to God as they have when applied to creatures – they are ‘univocal’ (same in meaning), not merely analogous. ‘Being’ itself is the most important of these univocal concepts and terms. Scotus thinks that when we say ‘God is a being’ and ‘Socrates is a being’, ‘being’ has exactly the same meaning in the one as in the other. To some, this view is startling, even scandalous. Influential writers like Amos Funkenstein and John Milbank think that Scotus’s doctrine of univocity caused monumental changes to Western society. In The Unintended Reformation (2012), Brad Gregory argues that univocity led to the ‘domestication of God’s transcendence’ and the rise of secularism, an ontological flattening in which God and creatures are metaphysically on par, where God is just one more theoretical entity among many, able to be discarded if alternative scientific theories explain data better than theological alternatives. As the sciences progressed and found less and less need of God, religious belief and practice found itself more and more relegated to a subjective realm of feelings and blind faith. Eventually, the sciences, now operating on totally naturalistic assumptions, were given sole responsibility for describing the world objectively. Whether one welcomes or laments these societal changes, the dunces know that Scotus cannot be responsible for them. To hold that we human beings possess a concept that applies equally to God as it does to creatures does not entail or even remotely suggest that God exists just like creatures exist. Scotus’s controversial doctrine of univocity is, at worst, harmless for theology. To see this, it is important to keep in mind that Scotus’s doctrine of univocity is itself undergirded by a theory of concepts according to which most of our concepts are themselves complex, able to be analysed down into simpler conceptual components. For example, the most general concept we have by which to think about a creature, as a creature, is finite being.
This complex concept does not apply to God. But the complex concept infinite being does apply to God – in fact, it is, according to Scotus, the most adequate concept we have (by natural reason alone) for thinking about God. And infinite being, of course, applies to no creature. But notice that each of these concepts – finite being and infinite being – is complex, and each includes being as a simple conceptual component. So, on Scotus’s view, if something is a finite being, then it is a being; and, likewise, if something is an infinite being, then it is a being. At this simplest conceptual level, we have just one concept of being that applies to God and creatures. There cannot be a greater ‘gap’ than that between finite and infinite – and Scotus affirms that this is exactly the gap that yawns between creatures and God. But this gap has nothing to do with the fact that the concepts finite being and infinite being share the simple conceptual component of being as such. If Scotus’s doctrine of univocity is to be faulted, therefore, it cannot be for failing to mind the gap between God and creatures. Relevant criticism might take issue with his theory of complex concepts that gives rise to his theory of univocity – but that is a topic for philosophy of mind and philosophy of language, not theology. Scotus declares over and over again that God is the highest good, indeed goodness itself, and that God is truth itself. Given his understanding of how our concepts work when we apply them to God – univocally, as we saw above – Scotus did not think that, when we call God ‘good’ or ‘true’, we are in the dark about what God’s goodness and truth amount to. Sure, we cannot comprehend the infinity of God’s goodness, but we can be confident that, if it’s true that God is good, then God’s goodness is intelligible to us. The intelligibility of divine goodness acts as a sort of conceptual constraint to Scotus’s theorising about God’s relationship to morality. In Scotus we find two grounds or sources of moral norms: on the one hand, following Aristotle, Scotus thought that it is evident from the natures of human beings what is good and bad for us, and this sort of ‘natural goodness’ yields a wide range of norms about right or wrong. But on the other hand, Scotus emphasised God’s freedom over the moral order. God’s commands – eg, thou shalt love thy neighbour as thyself; thou shalt not kill – themselves generate moral obligations, and God’s commanding need not track in every way what can be discerned merely by reflection on human nature. Scotus considers the command to Adam and Eve not to eat the fruit of a certain tree in the Garden of Eden – if God had not commanded them not to, it wouldn’t have been wrong. But God’s freedom over morality itself neither negates what we can discover on our own about right and wrong, nor entails that God’s freely instituted moral norms can invert the natural moral order. Scotus’s traditional insistence that human nature is a source of moral norms is itself supported by his broader realism about universals. In the old dispute, realists hold that there is something real, independent of our thinking, about common natures (nowadays more often called universals). Each of us is a human being, and the humanity we share is itself something real, existing independently of anyone’s forming a concept of humanity. Nominalists, by contrast, deny that common natures like humanity have any sort of mind-independent existence. 
For them, there are indeed individual humans, but humanity is merely a concept or word. Duns Scotus is one of the more emphatic realists of the Middle Ages, while William of Ockham, a Franciscan who died four decades after Scotus, is probably the most famous medieval nominalist. Realism about common natures gives rise to a philosophical puzzle that the nominalist need not take up: if humanity is something we all share, what makes us the individuals we are? Put another way, if our collective humanity is one, what explains how there are many humans? It is in answer to this question that Scotus develops his doctrine of ‘haecceity’: each individual belongs to the kind it does due to its common nature, but is the individual it is due to its haecceity. ‘Haecceity’ literally means ‘thisness’. It is that feature, unique to each of us, that makes each of us some particular human being. Every other type of property that a thing can have – colour, shape, size, duration, place, and so on – is in principle shareable by something else. Therefore, these shareable properties cannot explain our individuality. So Scotus innovates, inventing (or discovering) an entirely new kind of entity: a property that, at most, one thing can have. Your haecceity is that feature of yours that only you can have. To see how radical this theory is, consider Thomas Aquinas’ own answer to the question about what individuates things that share a common nature. Aquinas thought that each individual has a particular chunk of matter of a certain quantity, and this chunk of ‘quantified matter’ serves to individuate individual things. So you and I share humanity in common, but I am I because of my matter, and you are you because of your matter. There is something wholesome and simple about Aquinas’ theory, but Scotus criticises it on the grounds that, even if we suppose that you and I cannot share the same matter at the same time, it remains that matter itself, even some particular quantity of matter, is shareable (even if only at different times) and so is unsuitable for making an individual thing to be the very individual it is. Scotus’s haecceity really is a new kind of thing in the history of metaphysics: something real, something that really characterises the thing that has it – but something that is entirely unique to its bearer. Scotus’s doctrine of haecceity is yet another of his views in which some have discerned world-historical significance. In A Secular Age (2007), Charles Taylor, inspired by Louis Dupré, said that Ockham the nominalist and Scotus the realist share a focus on individuality that gives ‘a new status to the particular’, and marks ‘a major turning point in the history of Western civilisation, an important step towards that primacy of the individual which defines our culture.’ I confess I am often tempted to make sweeping historical conclusions about the medieval figures I work on. If I could believe them, I might think my research is more important than it is, and conduct my work with extra vigour. In a Taylorian spirit, for example, I might say that Ockham and Scotus, along with their predecessor Aquinas, with the focus on individuals these three share, gave rise to the primacy of the individual that defines our culture.
Or, in the same spirit but with a greater sense of boldness, I might say that Aquinas, with his materialistic answer to the problem of individuation, along with Scotus and Ockham, who believed in the existence of matter, together ushered in the pervasive materialism of contemporary science and culture. Of course, it would take a reckless frame of mind to believe either of these assertions: the connections drawn between Aquinas, Scotus and Ockham are insufficiently robust to unite them as common causes of the historical events attributed to them. But that’s the point: a theory of nominalism is about individuals in some sense (since it asserts there are only individuals) and so, too, a theory of haecceity is about individuals in some sense (since it asserts an individuating entity in addition to the common nature). But these theories are about individuals in radically different senses, just as Aquinas’s materialistic solution to the problem of individuation is about matter in a sense radically different from the sense in which, say, Thomas Hobbes is a materialist about human minds. Therefore, they should not be lumped together as common causes of the same historical event. Ockham’s denial that there is such a thing as human nature does seem like the sort of denial that would affect the way ordinary people live their lives, if it ever came to influence them. The same can be said of Scotus’s affirmation that there is such a thing as human nature. But it would be rather surprising – and a mere accident – if the denial and affirmation of exactly the same view had exactly the same influence on how people live their lives. As a Scotus scholar, I welcome this century’s revival of interest in Scotus. But a more fruitful way to indulge that interest, especially for those just starting their intellectual journey with Duns Scotus, is simply to try to take him on his own terms, engaging first-order questions of philosophy and theology with Scotus, and resisting the storyteller’s urge to situate this or that feature of Scotus’s thought within a narrative that explains why we are where we are now. It really is just as possible for a person of the 21st century as it was for a person of the 14th to wonder whether God exists, or whether universals are real, or whether objective morality requires a divine lawgiver. When we ask these questions now, we’re asking the very same questions they were asking then. And, thanks to the efforts of the dunces who for centuries have kept alive Scotus’s memory, editing and transmitting his texts, and writing papers and books trying to explain his thought, we can welcome Scotus into our own puzzlings over these and other perennial questions. At the speed of philosophy, 1308 is not so very far away after all.
Thomas M Ward
https://aeon.co//essays/duns-scotus-was-no-fool-but-a-brilliant-enigmatic-thinker
https://images.aeonmedia…y=75&format=auto
Thinkers and theories
Twin forces marginalised the women of early analytic philosophy. Correct those mistakes, and the next generation benefits
A couple of years ago, the library of the University of Groningen in the Netherlands was subject to a massive reclassification. Hundreds of books were provisionally placed higgledy-piggledy on the shelves, atlases leaning against poetry collections, folios of sheet music wedged between a tome on malaria treatments and a study of birds in the Arctic. In the midst of this jumble, one of us was preparing the valedictory lecture that would mark her official retirement as professor of philosophy. After two hours of thinking and writing, it was time for a break and a leisurely look at the miscellany of intellectual effort on the shelves. A bright blue book drew attention. It was the fourth volume (the rest were nowhere to be seen) of A History of Women Philosophers (1995) edited by Mary Ellen Waithe, which deals with female philosophers in the 20th century. Upon inspection, it contained not only essays on thinkers such as Simone de Beauvoir and Hannah Arendt, but also a chapter on a completely unknown English philosopher, E E Constance Jones (1848-1922). The authors of this chapter, Waithe and Samantha Cicero, argued that Jones had solved Frege’s Puzzle two years before Gottlob Frege himself had done so.
Emily Elizabeth Constance Jones (1916) by John Lavery. Courtesy Girton College Cambridge/Wikipedia
This was by all accounts a spectacular claim. Frege, the German mathematician and philosopher born in the same year as Jones, had been the major inspiration for Principia Mathematica, the bible of modern logic that Alfred North Whitehead and Bertrand Russell published between 1910 and 1913. Frege’s grand aim was to find a foundation from which the whole of number theory could be derived. In carrying out this project, however, he encountered a philosophical problem. How to account for the fact that an equation like 2 x 2 = 1 + 3 is informative, whereas 4 = 4 is not? It is not just that the symbols on both sides of the identity sign are different. After all, in 7 = VII the symbols on either side of the identity sign differ, but the statement is not informative in the way that 2 x 2 = 1 + 3 is; it simply represents the number seven in two different symbol systems. In later work, Frege used a non-mathematical example to illustrate his problem. Why is the statement ‘The morning star is the evening star’ informative, whereas ‘The morning star is the morning star’ is not? Since both ‘the morning star’ and ‘the evening star’ refer to the planet Venus, both sentences seem to say nothing more than that Venus is Venus. Frege solved the problem in his paper ‘On Sense and Reference’ (1892). He argued that the meaning of a term like ‘morning star’ is not just its reference (Venus), but also contains another component – the sense – which is the way in which the reference is given to us, in this case as a star that appears in the morning. ‘The morning star is the evening star’ is informative because the references of ‘morning star’ and ‘evening star’ are the same, while their senses are different. In fact, it took the Babylonians quite some time to discover that this star that appears in the morning is the same heavenly body as the star that appears in the evening. ‘The morning star is the morning star’, on the other hand, is trivially true – for the Babylonians as well as for us. Waithe and Cicero discovered that Constance Jones was struggling with a problem similar to that of Frege, for she wanted to know: why is the statement A is B significant while A is A is trivial?
Waithe and Cicero argued that in 1890 – two years before Frege wrote his classic paper – Jones had published a solution that was basically the same as Frege’s. For any scholar in analytic philosophy, this was breaking news. Both of us have long been teaching the history of analytic philosophy, one of us for more than 30 years. We have taught countless students how, at the University of Cambridge, Bertrand Russell and George Edward Moore revolted against traditional logic and traditional philosophy, thereby founding what became known as analytic philosophy. We have described how, in the 20th century, analytic philosophy branched out in two different directions, a formal one that led to Ludwig Wittgenstein’s Tractatus Logico-Philosophicus (1922), the Vienna Circle, and W V Quine’s naturalised philosophy; and an informal one consisting of the ordinary language philosophy associated with J L Austin, Gilbert Ryle, and the later work of Wittgenstein. Nowhere did we mention Constance Jones. We simply did not know about her, much less did we suspect that she could have anticipated that crucial building block of analytic philosophy, Frege’s distinction between sense and reference. When we subsequently read Jones’s work ourselves, we found that the story is a bit more nuanced than what we had gathered from the chapter by Waithe and Cicero. There are similarities between Jones and Frege, but also some salient differences. It is not just that Jones’s approach is simpler than Frege’s, dealing only with elementary sentences such as ‘A is B’ – there are differences that cut much deeper than this. Frege’s distinction between sense and reference (in German: Sinn and Bedeutung) does not coincide with Jones’s more traditional distinction between what she calls ‘determination’ and ‘denomination’, and later ‘connotation’ and ‘denotation’, or ‘intension’ and ‘extension’. The extension of the predicate term ‘is red’, for example, is simply the class of all red things in the world. The Fregean Bedeutung of this term is, however, a concept, more particularly a mathematical function. And while Jones’s ‘intensions’ are properties of real or imagined things, Fregean Sinne (senses) constitute an objective realm separate from any actual or fictional world. (For details on the differences, see the chapter ‘E E Constance Jones and the Law of Significant Assertion’ by Jeanne Peijnenburg and Maria van der Schaar, forthcoming in the Oxford Handbook of British and American Women Philosophers in the Nineteenth Century, edited by Lydia Moland and Alison Stone.) None of this alters the fact that Jones was completely forgotten, even though she had been a very active and respected member of the philosophical community. From 1884 to 1916, Jones taught Moral Sciences at Girton, the first residential college for female students in the UK, where she became Vice-Mistress and later Mistress. Her specialisation was logic: she wrote four books on the subject and many articles in leading philosophical journals such as Mind and Proceedings of the Aristotelian Society. Although her work is firmly rooted in the old Aristotelian syllogistics, it is in some respects surprisingly modern. At a time when logic was generally seen as being about subjective laws of thought, Jones anticipated later developments by staunchly asserting that logic was objective.
Moreover, her problem-driven approach and remarkably clear style make her work different from the florid prose of some of her contemporaries and more akin to the later analytic tradition. In 1892, she became a member of the Aristotelian Society. Four years later, she was the first woman to address the Cambridge Moral Sciences Club, and established philosophers such as F C S Schiller, W E Johnson and Bernard Bosanquet engaged in public discussions of her work. Then why was she forgotten? The history of 20th-century philosophy is largely shaped by handbooks, textbooks, companions or anthologies. By the choices they make, by the texts they rely on, historians, editors and educators influence our ideas about who are and who are not important philosophers. Jones’s name is not in the handbooks. Why not? Perhaps it was due to the supremacy of modern mathematical logic, which reduced the old Aristotelian logic that Jones uses to a mere special case. The fact that Russell was personally exasperated by Jones and her Victorian mindset, describing her in a letter to Ottoline Morrell as ‘motherly’ and ‘prissy’, may not have helped either. But, whatever the precise causes, Jones does not deserve to be consigned to oblivion. The case of Constance Jones is one of what we may call historiographical marginalisation: although she was a prolific and respected writer during her lifetime, her work never entered the canon because historians and textbook authors for some reason chose not to include it in their overviews. There are also cases where the marginalisation is historical: a philosopher’s significance is insufficiently recognised by her contemporaries. An example of historical marginalisation is the reception of work by the German philosopher, physicist and mathematician Grete Hermann (1901-84). After the dawn of quantum mechanics at the beginning of the 20th century, physicists and philosophers were baffled by its spectacular empirical successes. How could an essentially indeterministic and counterintuitive theory be so effective? Was the world really that weird? Following Albert Einstein, many people suspected the existence of ‘hidden variables’ that, once discovered, would reveal that quantum mechanics was deterministic after all. Their hopes were dashed in 1932, when the mathematician John von Neumann seemingly proved that any theory about hidden variables is incompatible with quantum mechanics. The quantum mechanical structure, he argued, is such that it simply does not allow the addition of variables that would enable us to identify deterministic causes, on pain of becoming inconsistent. But he had a challenger. In a paper of 1935, Hermann showed that von Neumann’s argument was flawed. The source of difficulty is an assumption he makes about a sum of noncommuting operators. Von Neumann was right that this assumption holds in quantum mechanics, but he failed to see that it may well be false in an extended theory, encompassing both quantum mechanics and the new or hidden variables. Hermann explained that this failure made his proof essentially circular. Her voice, however, was not heard. Thirty years later, the Irish physicist John Bell independently criticised von Neumann on similar grounds, and the subsequent experimental check of his findings earned Alain Aspect, John Clauser and Anton Zeilinger the Nobel Prize in 2022. 
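For readers who want the technical nub of the dispute, the assumption at issue can be stated in one line (a schematic gloss in modern notation, not a quotation from Hermann or von Neumann). Von Neumann’s proof requires that expectation values be additive for any admissible state, even when the observables A and B are represented by noncommuting operators:

\[ \langle A + B \rangle = \langle A \rangle + \langle B \rangle . \]

Ordinary quantum-mechanical averages do satisfy this, but, as Hermann argued and Bell later repeated, nothing forces the hypothetical states of a hidden-variables extension of the theory to satisfy it as well; assuming that they must is what makes the proof circular.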
Although Hermann’s argument against von Neumann was mentioned by Max Jammer in his standard work The Philosophy of Quantum Mechanics (1974), and by David Mermin in a paper of 1993, it received little attention at the time. This changed in 2016, when Guido Bacciagaluppi and Elise Crull discovered an unpublished manuscript by Hermann in the archives of the English theoretical physicist Paul Dirac. As it turned out, in 1933, one year after von Neumann’s book, Hermann had sent a paper of 25 pages to Dirac, explaining the flaw in von Neumann’s argument. Dirac never responded. It is, however, no exaggeration to say that the history of 20th-century physics would have been different if he had, and if the papers by Hermann had been noted earlier. Historical and historiographical marginalisation occur in all times and places: they arise in arts, sciences, and in all corners of philosophy. While generally lacking justification, the causes of marginalisation are strong and manifold, ranging from the political, social, cultural or even personal. More women than men were affected by it, and the history of analytic philosophy is in this respect no exception. In our recent book Women in the History of Analytic Philosophy (2022), we collected the metadata of articles published in the main outlets for analytic philosophers in the first half of the 20th century. In particular, we looked at all the 3,288 articles that appeared in six philosophy journals between 1896 and 1960: Mind, The Monist, Erkenntnis, Analysis, Journal of Symbolic Logic, and Philosophical Studies. In 99.6 per cent of the cases, that is, in 3,274 articles, we were able to identify the gender of the authors. We found that, on average, only 4 per cent of these 3,274 articles were authored by women. Most of these women, 70 in number, are presently forgotten, as is illustrated by recent meetings of the Society for the Study of the History of Analytical Philosophy. Only four of the 246 papers presented at meetings of this society in the period 2015 to 2019 were about female philosophers – less than 2 per cent. In practice, it is often hard to separate historical and historiographical marginalisation, for they typically go hand in hand. If work by female authors is not much read or cited by contemporaries, historians will be disinclined to include it in their textbooks. And if these female philosophers’ views are not discussed in textbooks, anthologies or introductions, they are less likely to be studied by the next generation of philosophers.
Susanne K Langer photographed by Richard Avedon. Courtesy the Smithsonian National Museum of American History
A prominent example of the interplay between the two types of marginalisation is the reception of work by Susanne K Langer (1895-1985), one of the first to use the term ‘analytic philosophy’ in print. Langer was an American logician and a student of Whitehead, the co-author of the aforementioned Principia Mathematica. Whitehead had worked at the University of Cambridge in the UK his entire career but had taken up a position at Harvard University in Massachusetts in his 60s. This move greatly stimulated the dissemination of logical analysis in US philosophy, and Langer was among the most active proponents of the new approach. In 1964, she recalled having been part of a small group of students ‘who looked forward to a new philosophical era, that was to grow from logic and semantics’.
After completing her PhD, Langer actively contributed to the spread of the new ‘analytic’ philosophy. She published a number of papers on Principia Mathematica, wrote one of the first American logic textbooks, and co-founded the Association for Symbolic Logic, the first international society for logicians. In the beginning, Langer’s work was much respected by her colleagues. Her first books and papers were frequently discussed by analytic philosophers, both in print and in private discussion groups. Members of the celebrated Vienna Circle studied her work in the early 1930s and saw her as one of the major representatives of the analytic approach in the US. (For details, see the chapter ‘Susanne Langer and the American Development of Analytic Philosophy’ by Sander Verhaegh in our book.) Then, Langer published what would become her most influential work: Philosophy in a New Key (1942). It sold more than half a million copies and has been cited in the academic literature almost 10,000 times. The book is a plea to expand the scope of logical analysis. Until then, analytic philosophers had used the new logic to analyse science, philosophy and language in general. But Langer suggested applying it to a broader range of phenomena: abstract paintings, sculptures, symphonies, rituals, dreams and myths. All these things, Langer argued, are complex symbols with an internal structure and are therefore suitable subjects for logical analysis. Much as we can investigate the logical form of propositions such as ‘2 x 2 = 1 + 3’ and ‘The morning star is the evening star’, we can analyse the logical structure of J S Bach’s Air on the G String and Piet Mondrian’s Composition with Red, Blue, and Yellow. In the years that Philosophy in a New Key went through reprint after reprint, Langer’s work began to be ignored by her former analytic companions. In advocating the study of art, myths and rituals, Langer had proposed research topics that many analytic philosophers relegated to the realm of the irrational. While her colleagues were reconstructing the foundations of probability, arithmetic and quantum mechanics, Langer was studying subjects that were taken to be expressions of emotions and feelings. As a result, there was hardly any discussion of her book within the analytic community, despite her rising fame outside it. Even analytic colleagues who were demonstrably influenced by her book, such as Quine, failed to cite it. By the time that analytic philosophers started to compile anthologies and took the first steps towards documenting the history of their own discipline in the late 1940s, Langer’s work was pushed into the background: it was not mentioned, not even her contributions to the development of logic and analysis in the first phase of her career. Today, Langer is well-known among philosophers of art, but her role in analytic philosophy has been forgotten. In recent years, quite a lot of attention has been given to the ways in which sociopolitical and other external factors shaped the development of analytic philosophy. Were it not for the grim political situation in the 1930s, members of the Vienna Circle would not have immigrated en masse to England and the US. And were it not for the amenable climate at US universities, where rigour and clarity had become key virtues across the humanities and social sciences, their logical positivism would not so quickly have caught on.
Even demographic factors played a role. When the first ‘baby boomers’ started to enter college, in the 1960s and ’70s, many departments had turned analytic, and profited from the explosive growth of higher education, creating more and more jobs for analytically minded philosophers. Textbooks on analytic philosophy tend to present its development as a more-or-less continuous line, where key figures respond to one another: Russell reacting to Frege, Wittgenstein and Rudolf Carnap to Russell, Quine to Carnap, and so on. This way of telling the history has been very effective: it is not unusual to find that, at a conference on the history of analytic philosophy, more than half of the papers are about Frege, Russell, Wittgenstein or Carnap. But the actual spread and growth of analytic philosophy is of course richer, more varied and more complex than is suggested by the stylised and regimented narratives that authors of textbooks are necessarily bound to relate. Like the development of any other historical movement, the development of analytic philosophy is full of interesting details that not only fail to match, but even contradict and undermine the general textbook outline. Had scholars given these details more attention, we might have enjoyed a broader and intellectually more diverse canon. For then we might have seen that the development of analytic philosophy was not only driven by purely philosophical arguments, but also by political, sociological and cultural circumstances, some of which made it difficult for particular academics, such as women, to be heard. We are not suggesting that a broader recognition of the consequences of historical and historiographical marginalisation will lead to a completely novel canon or a radically new history of the tradition. What happened happened: we cannot go back in time and undo the processes that pushed female philosophers into the periphery. We will have to deal with the facts, even if we do not like them and believe they were preventable. It is a fact that only a small percentage of the publications in analytic philosophy were written by women. And it is also a fact that most of them were junior academics and therefore relatively young. Even if women were allowed to get a degree and were able to make it to the vanguard in a male-dominated intellectual climate, they often stopped publishing when they got married. This is why the 70 female authors we identified were responsible for just 131 publications in the journals we investigated, less than two articles per person on average. Only a very small number of women, such as Jones and Langer, had the time and the opportunity to build a comprehensive philosophical research programme. What we are saying is that historians can play a role in correcting the omissions, oversights and even downright mistakes our predecessors made in writing about (or worse, not writing about) the contributions of female philosophers. For there is an ‘internal’, purely philosophical point to be made. Although external factors influenced its development, analytic philosophy is more than the product of sociopolitical and cultural circumstances. In documenting the history of analytic philosophy, there is something to be right or wrong about. Hermann’s discovery really was a significant contribution to the debate about the existence of hidden variables, even if her colleagues and later historians failed to see it.
And Langer really did play a major role in the development of US analytic philosophy, even though her name is missing from companions and anthologies on the subject. It is true that, until the 1960s, only a few women actively contributed to the development of analytic philosophy, but many of them had ideas that are worth studying. In examining and re-assessing their work, we will be able to discover interesting but forgotten theories, proofs and arguments, shed new light on the development of the tradition, and contribute to a richer, more diverse and philosophically more fertile canon.
Jeanne Peijnenburg & Sander Verhaegh
https://aeon.co//essays/the-lost-women-of-early-analytic-philosophy
https://images.aeonmedia…y=75&format=auto
War and peace
Anne Frank’s diary is one of thousands of desperate, secret and vivid journals each bearing witness to the reality of war
‘I have the feeling that I am an unofficial reporter covering a shipwreck,’ wrote the Dutch Jewish journalist Philip Mechanicus on 29 May 1943, from the Westerbork transit camp on the sodden soil of Drenthe in the northeastern Netherlands. He’d been a prisoner at the camp since the previous November, after he was arrested for appearing in public in Amsterdam without the yellow Star of David affixed to his jacket. Mechanicus was a 54-year-old seasoned journalist, foreign desk editor for the national newspaper Algemeen Handelsblad, who had written from Indonesia, Russia and Palestine as a foreign correspondent. It may have been his reputation that did him in – someone recognised him on the street and informed on him. After his arrest, he was sent to Amersfoort Polizei-Durchgangslager, a German punishment camp, where he was apparently tortured. The details are not known but, when he arrived at Westerbork two weeks later, he weighed 80 pounds and both his hands were broken. ‘Gradually, I have developed the notion that I wasn’t brought here by my persecutors, but that I took the trip voluntarily to do my work,’ he continued. One hand, at least, had healed enough so that he could write. ‘I’m busy all day long, without a second’s boredom, and sometimes I feel as if I have too little time. Duty is duty; work ennobles.’
A page from Philip Mechanicus’s diary, 28 May 1943. Courtesy Wikipedia
Philip Mechanicus, late 1930s. Courtesy Wikipedia
He scribbled his words into a thin school exercise book with a blue cover that he’d got from the camp school. It was one of 15 such notebooks that he would use to jot down his daily impressions of life at Westerbork during the 17 months that he remained at the camp, before being deported ‘on’ to Bergen-Belsen and ultimately to Auschwitz, where he was shot and killed. Mechanicus was aware of his predicament; he managed to avert deportation for almost a year and a half, and during that time he produced what is undoubtedly the most valuable eyewitness account of Westerbork camp in operation, a record of daily life for the tens of thousands of Jews temporarily housed there, before they were shipped off to their deaths.
Rare footage of deportations from Westerbork transit camp to the east
Today, Mechanicus’s diary is one of more than 2,100 in an Amsterdam collection held at the NIOD Institute for War, Holocaust and Genocide Studies, housed in the underground archives of a grand, doublewide mansion on the Golden Bend of the Herengracht Canal. The NIOD collection didn’t come together by accident. It was part of a concerted effort to collect, preserve and potentially publish the personal correspondence of ordinary citizens living through the occupation. The idea to do so was hatched simultaneously by Loe de Jong, a Dutch Jewish journalist in exile in London, who worked for Radio Oranje, the broadcast station for the government in exile, and a group of local Dutch scholars led by the economics and social history professor Nicolaas Wilhelmus Posthumus, who had already established a few archives of social movements. More than a year before the war ended, De Jong had convinced the exiled Dutch Cabinet to establish a study centre of the occupation; it would open its doors as soon as the war ended. On 28 March 1944, Gerrit Bolkestein, the Dutch minister of education, arts and sciences, addressed the nation on Radio Oranje, in a speech that De Jong had written for him.
Loe de Jong at work in London in 1942. Courtesy Wikipedia
‘History cannot be written on the basis of official decisions and documents alone,’ said Bolkestein to his countrymen back home. ‘If our descendants are to understand fully what we as a nation have had to endure and overcome during these years, then what we really need are ordinary documents – a diary, letters.’ It was a relatively new notion that personal documents could illuminate history. Scholars of the early 20th century, above all, valued ‘objectivism’, a concept developed by the 19th-century German historian Leopold von Ranke, who sought to turn ‘historiography’ into a scientific discipline; this required ridding it of its moral dimension. Ranke argued that facts were central to objective history-writing and, to maintain a scholarly distance from facts, historians should eliminate personal bias and take a neutral attitude. But, between the two world wars, this notion of ‘objectivism’ was already losing its grip. Official documents kept by the Germans as part of their notoriously meticulous record-keeping project, for instance, were naturally subjective in their advancement of Nazi aims. A more accurate way to differentiate between subjective and objective documentation would be through the prism of power. Sources considered ‘objective’ were typically associated with the dominant power elite; documents like diaries and letters, oral histories and first-hand witness accounts, by contrast, were often deemed suspect because they were tainted by experience. In their book Testimony: Crises of Witnessing in Literature, Psychoanalysis and History (1992), the psychiatrist Dori Laub and the literary critic Shoshana Felman argued that the Holocaust was an ‘event without a witness’ because anyone who had witnessed the Nazi concentration camp system first-hand could no longer be regarded as sane. The victim’s exposure to a brutal and delusional ideology ‘eliminated the possibility of an unviolated, unencumbered, and thus sane, point of reference’.
German police round up Jewish men at Jonas Daniël Meijerplein, a square in Amsterdam, in February 1941. Courtesy Wikipedia
But in the midst of the Second World War, a period of extreme propaganda, totalitarian media control and widespread rumour – what the theorists term ‘epistemic instability’ – the individual voice began to emerge as a counterweight to the dominant public narrative. That voice, in turn, formed a chorus of testimony. A personal narrative about the Warsaw ghetto soup kitchen, for example, would combine with another, like a poem about the carousel outside the Warsaw ghetto, to make a social memory – a memory of a group of people whose stories were ignored, disregarded or forgotten. The French philosopher and sociologist Maurice Halbwachs suggested that these small, atomised memories of a communal experience formed a ‘collective memory’ – a term he coined between the two world wars – that could be a history that ran counter to the dominant historical narrative. Individuals remember, and then the group constructs the memory for the whole. In the Netherlands, Posthumus pioneered the notion that an individual’s voice could contribute to the construction of history as a whole. In 1935, he established the International Institute of Social History in Amsterdam, as the Nazis rose to power in Germany.
His aim was ‘to acquire archival treasures from the possessions of the hunted and the defrauded’ in a time of ‘political crisis and persecution’. Almost immediately after the Nazis invaded Holland, Posthumus began collecting source material from the citizen’s point of view. His National Bureau for War Documentation, secretly launched in a Utrecht café, was up and running by 1944. Others were thinking in much the same way. Before the 1942 liquidation of the Warsaw Ghetto, a group of writers, journalists and archivists led by the Polish Jewish scholar Emanuel Ringelblum collected as much material as possible – photographs, memoirs, diaries, poetry, letters, children’s drawings – and buried it underneath the ghetto. Today that extraordinary trove, Oneg Shabbat, is probably the world’s largest recovered archive of Jewish prewar and wartime documentation. Similar collections were discovered from the ghettos of Vilna, Białystok, Łódź and Kovno. ‘In hundreds of ghettos, hiding places, jails, and death camps, lonely and terrified Jews left diaries, letters and testimony of what they endured,’ says the historian Samuel D Kassow in his book Who Will Write Our History? (2007). ‘For every scrap of documentation that surfaced after the war, probably many more manuscripts vanished forever.’ The workers for the Oneg Shabbat realised, he wrote, that they ‘might be writing the last chapter of the 800-year history of Polish Jewry’. Isaac Schiper, a leading Polish Jewish historian of the interwar period, understood the value of these materials not just for telling the Jewish side of the story, but for establishing the future of history. ‘Everything depends on who transmits our testament to future generations, on who writes the history of this period,’ he told his fellow inmate at Majdanek concentration camp, not long before he was killed. ‘Should our murderers be victorious, should they write the history of this war, our destruction will be presented as one of the most beautiful pages of world history, and future generations will pay tribute to them as dauntless crusaders. Their every word will be taken as gospel. Or they may wipe out our memory altogether, as if we had never existed…’ There was a danger, too, that the cry in the dark would not be heard, according to Schiper. ‘[I]f we write the history of this period of blood and tears – and I firmly believe we will – who will believe us? Nobody will want to believe us, because our disaster is the disaster of the entire civilised world …’ The civilised world was jettisoned in the Holocaust but, by collecting witness testimonies, first-hand memoirs and other personal artefacts attesting to the lives of those who would soon die, some hope could be extended to Jewish communities facing extinction. A new form of ‘history of the present time’ arose in the aftermath of the First World War, wrote the Egyptian French historian Henry Rousso in The Latest Catastrophe (2016), out of a need to explain the vast destruction of civilian populations, attacks on noncombatants, massacres of prisoners of war, and demolition of nonstrategic urban centres. ‘The terrible question had to be confronted: how to preserve the memory of the dead and disappeared without sepulchers, how to come to terms with the collective losses, give meaning to events that seemed beyond the reach of reason?’
A handmade diary by Clara Lefkowitz Kempler at the Sömmerda slave labour camp. Courtesy the USHMM
The Second World War was not merely a military conflict but ‘an extraordinary assault on civilians’, wrote the historian Peter Fritzsche in his book An Iron Wind: Europe Under Hitler (2016), a work that relied heavily on first-person documentation, including many diaries. The war’s ideological violence played out in urban centres, in public squares, on public transportation, and inside businesses and homes. Often, it was characterised by civilian betrayals among neighbours, even within families. ‘The war erased whole horizons of empathy,’ according to Fritzsche. It fundamentally altered human relationships and, as such, it was intimate, personal and close to home. The result was that a great number of citizens felt compelled to write about these experiences, for themselves, and for future generations. ‘Across Europe diarists recorded the conversations and rumours they heard and the impressions they gathered,’ wrote Fritzsche, and many of those writings survived. Historians of the postwar era recognised that they had a role in shaping the new collective memory, as a way not just to record events, but to transform human behaviour, to try to heal society. This new way of writing recent history emphasised what Carolyn J Dean calls the ‘moral witness’, voices of survivors who could speak on behalf of the dead, because the dead had a lesson to impart to humanity. In his Aeon essay about the German historian Reinhart Koselleck, Stefan-Ludwig Hoffmann writes: ‘Dismantling the concept of history and coming up with a new theory of how histories actually unfold – chaotic, contingent, messy and ferocious, yet with discernible patterns – was therefore the most important task for historians.’ Koselleck had been trained as a Hitler Youth, sent by the Nazis to the Eastern Front, and survived Stalin’s Gulag, emerging as a pivotal thinker of the postwar era. He understood that, in the aftermath of two calamitous world wars, humanity needed new forms of history writing. A form of history writing that was objective in the extreme could lead to the dangerous formation of ideologies – and, as the wars had shown, ideological differences could lead to catastrophic social rupture. Koselleck argued for an open-ended discourse between the objective and subjective. The professional historian who reconstructs history ‘impartially’ can claim the domain of objective truth, but, as Aleida Assmann wrote in 2010, individuals also have a right to claim their own subjective truths, drawn from specific, distinctive and authentic memories. By forcing these different types of truth into conversation with one another, historians could attempt to close the gap. After the Second World War ended, these historians and theorists tried to implement their new notions of the history of the present moment, quite quickly. The Dutch government officially founded the National Bureau for War Documentation (later renamed NIOD Institute for War, Holocaust and Genocide Studies), on 8 May 1945, just three days after Liberation. People from all walks of life arrived to donate their notebooks, scrapbooks, collections of battered loose pages, index cards dug up from holes in the ground, unsent letters, drafts of memoirs, personal photographs, and notes scribbled on Monopoly money and cigarette rolling papers.
The NIOD’s founders actively solicited materials through a radio and poster campaign, too, and went door to door asking people to submit their personal documents. De Jong, who was appointed director of the NIOD in October 1945, personally travelled across the country soliciting submissions, including from former collaborators, from leaders of the Dutch Nazi party and from Hanns Albin Rauter, the head of the German police in the Netherlands. Materials could also be dropped at the central office on the Herengracht, or at additional bureaus in The Hague and even in Batavia, the capital of the colony then known as the Dutch East Indies, now Jakarta in Indonesia. The Netherlands was the first country to actively preserve such materials from the war era, a pioneer in focusing on the individual, civilian, subjective experience of the occupation, but many other European countries quickly followed suit, including France, Italy, Austria and Belgium. ‘Everywhere in Europe, often at the impetus of the state and on the margins of the academic world, history institutes and committees were created with the mission of collecting documents and testimony and of producing the first histories of an event that had only just ended,’ Rousso wrote. De Jong set up a Diaries Department at the NIOD in March 1946, led by his deputy director, A E Cohen, who strove to ensure that ‘all categories of diaries’ would be represented among those preserved. This meant he wanted journals written by farmhands and schoolteachers, wealthy landlords and poor ragpickers, Nazi sympathisers and communists – people from all walks of life. They ‘need not be many but they should be various’, Cohen wrote. The collection was not only amassed but also curated. The NIOD would read and review each submitted diary and decide whether or not to keep it, copy it, or return it. Whether retained or not, each diary handed in to the NIOD received a number. By the end of the 1950s, it had logged #1001. Anne Frank’s diary was assigned #248, when the NIOD noted efforts to obtain the original, but it didn’t come into their collection until 1980, when Otto Frank died and bequeathed all of his daughter’s manuscripts and three photo albums to the Institute. First published in Dutch in 1947 as Het Achterhuis [‘The Secret Annex’]: Dagboekbrieven van 14 Juni 1942–1 Augustus 1944, and later translated into English as Anne Frank: The Diary of a Young Girl (1952), it subsequently became one of the most translated books in the world, a defining personal document of the Second World War. Frank’s diary alone proved the theorists correct: to make an impact, history must be told from a subjective perspective.
Anne Frank at the 6th Montessori School, 1940. Courtesy Wikipedia
Anne Frank’s diary. Courtesy Wikipedia
Frank’s diary stands alone as a work of literature, but it is also part of a larger collective memory created by the entire diary collection at NIOD, and part of the story of the Dutch Jewish community who died in the Shoah. While Frank’s diary falls silent when she was arrested on 4 August 1944, the rest of her journey is filled in through the writings of others who followed a similar path to the death camps. Mechanicus gave us a view of Westerbork, where Frank and her family would spend just under a month, before they were put on the last transport to Auschwitz on 4 September 1944.
His journals were smuggled out of the camp somehow, to his ex-wife, Annie Jonkman, a non-Jewish woman who lived in Amsterdam with their daughter, Ruth, a resistance fighter. Thirteen of his 15 journals survived the war and, once Jonkman learned of the NIOD’s campaign to collect wartime diaries, she hand-delivered her late ex-husband’s to De Jong; it was added to the collection as #391. ‘How grateful we must be to him for not wanting to be anything more than the man who goes about with his notebook, noting down events from day to day,’ wrote the Dutch historian Jacques Presser, in his prologue to Mechanicus’s diary, published in Dutch as In Dépôt (1964); in English, Waiting for Death (1968). ‘In fact, [he was] a war correspondent setting down his record while his life was constantly in danger, although he hardly ever seemed to realise it.’ The writings of those sinking, shipwrecked, submarined diarists were a hope in a bottle The first surviving diary entry from Mechanicus’s life in Westerbork was dated Friday, 28 May 1943, in a journal labelled #3, as two previous journals were lost. His second was about the ‘shipwreck’. ‘We sit together in a cyclone, feeling the ship leaking, slowly sinking,’ he elaborated on 29 May, ‘yet, we’re still trying to reach a harbour, though it seems far away.’ The shipwreck metaphor was employed frequently by Jewish diarists to describe their sense of impending catastrophe. The Oneg Shabbat archivist Rachel Auerbach wrote that the ‘scenes of panic’ in the Warsaw Ghetto will ‘vanish with the sinking ship, or with a burning house from which nobody manages to escape, or from a coal pit at the time of an explosion, when the bodies of the miners are buried alive’. The Polish Jewish poet, lyricist, journalist and actor Władysław Szlengel, who organised cultural activities in the Warsaw Ghetto, also described the inhabitants as trapped sailors in a submarine accident. ‘When Jews compared themselves to trapped miners or shipwrecked sailors, they emphasised the fact that their physical connection to the rest of the world had become broken,’ writes Fritzsche in An Iron Wind, while at the same time affirming ‘their existential connection to the readers who would pore over their last words.’ Oskar Rosenfeld, an Austrian Jewish playwright and journalist in the Łódź ghetto, observed that the writings of those sinking, shipwrecked, submarined diarists were a hope in a bottle: ‘They did not know where [their records] would be washed up nor by whom it would be read.’ Sometimes their words, at least, did reach another shore – the beaches of another time, the future. Together, their words provide us with a collective memory, of a society on the brink. Their jottings, scribblings and cries in the dark give us not just an understanding of the Jewish community at the edge of disaster, but of a continent sinking under the weight of its own hatreds, cruelties and self-delusions. In other words, the fate of ‘the entire civilised world’. If only these testimonies were enough to convince the civilised world to refrain from genocide and crimes against humanity. Despite ample evidence of atrocities of the past, and cries of ‘never again’, we have, since the Second World War, seen many more cataclysmic events that require us to learn from eyewitnesses – victims, perpetrators and bystanders. 
Historians and sociologists are still recording testimonies from the 1994 genocide in Rwanda and the 1995 Srebrenica massacre of Bosniaks (Bosnian Muslims); they are attempting to collect eyewitness accounts from the ongoing violence against the Rohingya in Myanmar. And who knows what kind of accounts will emerge from the Uyghur population and other mostly Muslim ethnic groups now being held as captives in Xinjiang, northwestern China. Ukrainians both inside and outside their home country, still under attack a year after Russian bombardments began, have turned to modern tools to record their personal accounts – online diary blogs and podcasting. The urge to record such moments of crisis – to reveal the personal in the public realm – persists. Mila Teshaieva, a Berlin-based Ukrainian artist who returned to her home country to record a diary of daily life in the embattled country, said that wherever she went among people under bombardment, in Bucha and Borodyanka, she found people writing diaries. ‘I met a number of people, very simple people, who actually never write diaries, but in this time, especially in places under occupation, they started writing diaries,’ she told me. ‘It was not because they wanted to keep a record, but because they were thrown out of their lives. They hear explosions and gunshots, and Russian tanks are rolling past. They needed to make some sense of their daily lives.’ Diaries help them do that.
Nina Siegal
https://aeon.co//essays/thousands-of-desperate-vivid-diaries-remain-from-occupied-europe
https://images.aeonmedia…y=75&format=auto
Stories and literature
Enheduana is the first known named author. Her poems of strife and upheaval resonate in our own unstable times
About 4,200 years ago, the area we now call southern Iraq was rocked by revolts. The once-independent Sumerian city states had been brought under one rule by the legendary king Sargon of Akkad. Over the course of what modern historians call the Old Akkadian period, the reign of Sargon and his successors reshaped the newly conquered cities in countless ways: old nobles were demoted and new men brought to power, old enemies were defeated and new standards of statecraft imposed. The Sumerian world grew much bigger and richer, but also more unstable. Discontent with the new empire festered, provoking a steady stream of uprisings as the cities attempted to regain their independence. One such revolt is depicted in a fascinating poem known as ‘The Exaltation of Inana’. Besides being a poetic masterpiece in its own right, ‘The Exaltation’ bears the distinction of being the first known work of literature that was attributed to an author whom we can identify in the historical record, rather than to an anonymous tradition or a fictional narrator. The narrator of the poem is Enheduana, the high priestess of the city of Ur and the daughter of Sargon. According to ‘The Exaltation’, she was cast into exile by one of the many revolts that plagued the Old Akkadian Empire. Tablets inscribed with The Exaltation of Inana in three parts. Old Babylonian period, c1750 BCE. Courtesy of the Yale Babylonian Collection We do not know for sure whether the poem was written by the historical Enheduana herself, as a literary retelling of a real event, or by a later poet writing in her name, in the ancient version of a historical fiction that was meant to celebrate the famous high priestess. What we do know is that the poem conveys a sense of what it would have been like to live through a period of profound turbulence, whether as a personal account or an echo preserved in the cultural memory of later periods. That is one reason why the works attributed to Enheduana – which, besides ‘The Exaltation’, comprise four other poems – continue to speak to us, millennia after their composition. As the world grows increasingly unstable, we have much to learn from this ancient priestess. The poems do not merely register the reality of an historic upheaval – they go one step further by turning that instability into a cosmic insight, an occasion to reflect on what the world is really like. They contain, compressed within their often-cryptic verses, the germ of an ancient philosophy of change. If the poems seek to turn radical change from a transient political phenomenon into a universal principle, they do so primarily by exalting the goddess Inana, better known by her Babylonian and Assyrian name Ishtar. Two of the poems attributed to Enheduana, ‘The Exaltation’ and ‘The Hymn to Inana’, seek to elevate Inana to the head of the Sumerian pantheon, eclipsing the older male gods, Enlil and An, who rule the cosmos according to most other Sumerian texts. Tablet inscribed with The Hymn to Inana. Courtesy the British Museum, London Inana is often described as the patron deity of war and love, and that is arguably true. But in Enheduana’s poems, she appears more precisely as the goddess of change. The destruction of war and the passion of love are alike in that they are both powerful forces that overwhelm and transform human lives, and it is this sense of sweep that defines Inana in all her manifestations. To describe the force of this sweep, Enheduana’s poems often recruit metaphors from the natural world, such as storms and wild animals. 
In ‘The Exaltation’, she is told: You are like a flash flood that gushes down the mountains … Meanwhile, ‘The Hymn’ describes her as a hawk that swoops to feed on other gods. Another common source of metaphors in these poems is war, but this is not war as the playground of heroes that we know from the epic genre. For Enheduana, the battleground is not a place where men may prove their might: humans appear in her poems not as the agents but only as the victims of warfare, as when the poet sings of the soldiers who are led away in chains while the wind fills the squares where they danced … War is depicted as an impersonal force that is powered by Inana’s merciless will: hatchets crush heads, spears eat flesh, and axes are drenched in blood … But the savage warrior who grinds the skulls of her enemies to dust is just one of Inana’s personas. In other texts, she appears as an innocent young girl pining for her lover, the shepherd god Dumuzi. Another of the poems attributed to Enheduana, ‘The Temple Hymns’, is a collection of 42 short odes to the Sumerian gods, temples and cities. Here, we find two hymns to Inana in her sensuous guise, as the goddess of desire and bodily delight, as well as one hymn to Inana in her guise as a terrifying warrior who ‘washes her weapons for battle’. This split in Inana’s persona is neatly captured in the planet that served as her sign in the skies: Venus. Because it is so close to the Sun, Venus can be seen by the human eye only just before dawn, when it rises before the Sun on the eastern horizon, and at dusk, when it lingers after the Sun on the western horizon. One cannot follow the planet’s path through the heavens, only observe it in these two opposite positions: east and west, sunrise and sunset. That sense of total contradiction is a perfect astronomical metaphor for Inana’s character. The hymnic praise of Inana was a venue for thinking about the instability of the Old Akkadian period for two reasons. One was political. Inana was the patron deity of Sargon’s empire, so Enheduana’s devotion to the goddess carries a clear political overtone: in elevating Inana, the poems are also implicitly elevating the imperial regime that Inana was thought to support. The other has to do with the goddess’s character. Just like Inana, the Old Akkadian period came to be associated in Sumerian and Babylonian culture with constant upheavals and change. Not just in Enheduana’s poems, but across a range of other historical sources, the Old Akkadian period is depicted as a time of legends and larger-than-life drama, not unlike the special place held by the Viking raids or the Golden Age of Piracy in the Western historical imaginary. Sargon and his grandson Naram-Sîn, who succeeded to the throne in 2254 BCE and brought the empire to its zenith, were remembered for millennia as near-mythical figures: Sargon as a paragon of power, Naram-Sîn as a hubristic despot who brought disaster upon his people. These depictions are often dismissed by modern historians as the mythmaking of later periods, but no one would doubt that the Old Akkadian period was a time of great change. Since the early 3rd millennium BCE, the area that is now southern Iraq had been dotted by dozens of city states, each with its own local deity, dialect and administration, and, though the cities were part of a wider network of trade, conflict and cultural exchange, they remained separate statelets. 
Sargon’s army swept through the cities: Kish, Uruk, Eridu, Nippur, Larsa, Umma, Lagash, Girsu, Isin, Eshnunna, Sippar, Eresh, and many more besides. Over the following century, the Old Akkadian kings worked hard to bring the cities into line, suppressing local differences to establish an imperial standard of administration. The nobles who had ruled the cities for centuries were replaced by new elites, drawn from the army and propelled to previously unknown heights of power. The world must have felt larger than ever before. Soldiers from the flatlands of Iraq made their way into the mountains of Turkey and Iran, traders journeyed as far as Afghanistan to fetch precious stones, and the art of the time shows the influence of Egyptian styles. The ports of Ur would have witnessed the arrival of exciting, exotic wares such as lapis lazuli, ivory, carnelian, cat’s eye, jasper, diorite and serpentine. And these are just the objects that survive in the archaeological record. There were also the foreign foodstuffs, drinks, clothes and perfumes that the citizens of Ur had never seen before. The influx of wealth and materials in turn allowed for major advances in technology and art. The best-known example of both these developments is a bronze statue, traditionally thought to depict Sargon, that vividly displays the sophistication of the court’s artists as well as the technical skill of its smiths. Mask of Sargon of Akkad. From Nineveh, c2250 BCE. Courtesy Wikipedia The newly empowered especially liked to commission cylinder seals, which were used as a form of personal identification, much like a signature today: one would roll one’s seal over a document to sign it. But the seals also became markers of prestige, as their intricate patterns reflected the status of their owners. The Old Akkadian stonecutters were pushed hard to satisfy their new patrons: the seals of this period are tiny, gorgeous works of sculpture. Akkadian cylinder seal depicting the goddess Inana resting her foot on the back of a lion, while Ninshubur stands in front of her paying obeisance, c2350-2150 BCE. Courtesy Wikipedia But all this power, wealth and artistic expression was tightly concentrated around the king’s court. Dissatisfaction among the old elites was huge, as shown by the constant revolts of the period. This resistance reached a pitch under Naram-Sîn, in what is known as the Great Revolt, when the king had to suppress nine uprisings in a single year. On doing so successfully, Naram-Sîn declared himself a living god – the first king in ancient Near Eastern history to do so. No wonder, then, that he gained a reputation for hubris. Later poets would turn his claim upside down: the story of his nine victories became the story of his nine defeats. Naram-Sîn’s kingdom also had to contend with a slower, but ultimately deadlier, threat. The climate was changing. For reasons that remain unclear to geologists, much of the world went through a severe drought at this time, in what is known as the ‘4.2-kiloyear event’ (that is, an event of uncertain nature that happened 4,200 years ago). Whatever its exact cause, the drought hit especially hard in the Old Akkadian Empire, causing famine and waves of migration. Eventually, the hard-pressed state could bear no more: under Naram-Sîn’s son Shar-kali-sharri, the Old Akkadian Empire collapsed, leaving behind a complex legacy of legends and changes. 
It is a perfect metaphor for the period’s place in history that, during this time, the signs of the cuneiform script turned 90 degrees anticlockwise, as they became more abstract and easier to write. Even writing underwent its own literal revolution. We can only guess at what an average smith in Ur or a shepherd in Akkad felt about the Old Akkadian Empire’s rise and fall. Even for the elites, their lives are most often captured only by brief snapshots, such as the cylinder seals that attest to their existence and little else. That is part of the intrigue that clings to Enheduana’s poems. ‘The Exaltation’ seems to offer a personal account of the political drama of the Old Akkadian period, told by the daughter of the emperor herself. Enheduana was not so much an eyewitness to the insurrection as the eye of its storm. However, we must bear in mind that the poem is, at best, a literary reworking of a real experience (if it was written by Enheduana herself), and at worst a reconstruction by much later writers of what the rebellion might have been like (if it was written by others in her name). The story of Naram-Sîn’s nine victories that were turned into nine defeats by later poets should remind us that cultural memory is not a reliable guide to what actually happened. Still, the poems are clearly a literary response to the Old Akkadian period, written either in the thick of it or as a later meditation on its legacy and, as noted, that legacy lasted for centuries. Sargon and Naram-Sîn were still very much alive in Babylonian memory when cuneiform culture died out in the last centuries BCE. That would be enough to make them a fascinating source for understanding this turbulent period, but I would argue that ‘The Exaltation’ and ‘The Hymn’ go on to make the turbulence they are describing the foundation of a specific kind of cosmic insight. When faced with social disturbance, one can choose to intellectually suppress it, explaining it away as a momentary aberration of a stable world order, or one can turn it into the foundation for a new worldview. I believe that, in Enheduana’s poems, we see the latter strategy pursued with striking insistence: ‘The Exaltation’ and ‘The Hymn’ are dedicated to a goddess whom she portrays as: … a raging, rushing flood that sweeps across the land and leaves nothing behind. In making this claim, I am inspired by the Popol Vuh, the myth of creation of the K’iche’ Mayans, which was put down in writing around the year 1550, in the wake of the arrival of European colonisers that brought the Indigenous populations to the brink of annihilation. In Emergency (2022), his fascinating study of the Popol Vuh, the poet and literary critic Edgar Garcia shows that, rather than offering a resistance to, or a condemnation of, the Western violence, the text seems to do something much more subtle: it folds the experience of colonialism into a cosmic rhythm of crisis and creation, of emergencies leading to the emergence of new possibilities. The cataclysmic transformation that the Mayan peoples underwent during the 16th century is thus cast as part of a universal principle of repetitions and interruptions that, as far as we can tell from the Popol Vuh, already structured the K’iche’ worldview. Change, even catastrophic change, was not viewed as a conceptual threat that needed to be overcome, but as an occasion to see the world more clearly. And the same is true of Enheduana’s poems. 
In elevating Inana, ‘The Hymn’ makes two points. The first is that Inana controls everything around her. This is conveyed by three myths that the poem relays in a miniature format. It tells of how Inana destroyed Ebih, a mountain that failed to pay respect; of how she terrified the god An into sharing his temple with her; and of how she changed the gender of her ritual devotees, turning men into women and women into men. People, gods, even the landscape: all are subject to Inana’s powers of transformation. The second point is that Inana is subject to no control but her own. As the poem repeatedly stresses, no order can be imposed on her actions: She overturns what she has done; nobody can know her course. Even her fellow deities have no hope of predicting her decisions: She confounds the assembly of the great gods with her advice, nobody knows why. Change emanates outwards from Inana, but she herself remains untouched by the attempts of others to change her. Having established these two points, ‘The Hymn’ breaks into a long litany that outlines Inana’s contradictory attributes: To destroy and to create, to plant and to pluck out are yours, Inana … To turn brutes into weaklings and to make the powerful puny are yours, Inana. To reverse peaks and plains, to raise up and to reduce are yours, Inana … To make small or majestic, weak or huge, to withhold and to give freely are yours, Inana. To bestow the rituals of kings and gods, to obey and to mislead, to speak slander, to lie, to gaud, and to overstate are yours, Inana. The list goes on for some 60 lines, stretching our minds with its length and many contradictions to give us a glimpse of what a divine force of constant change might look like. ‘The Hymn’ and ‘The Exaltation’ seek to elevate Inana to the head of the pantheon, over and above the traditional rulers of the gods, An and Enlil. But, in so doing, they are not just making a point about religion, they are also making a claim about cosmology. If the world is ruled by this kind of goddess, what does that tell us about the world? What would it mean to live in a universe governed by the embodiment of capriciousness, conflict, contradiction, chaos and complexity? Enheduana’s poems summon a world of radical impermanence, in which nothing around us – neither the mountains on the horizons nor the genders of our own bodies – can be taken for granted. In these poems, confusion is not an intellectual failure to grasp the world, but an appropriate response to the world as it really is: if the figure of Inana should inspire anything in us, it is intellectual humility. The nature of the universe is neither single nor predictable, since the universe is ruled by a goddess who is herself deeply divided and constantly changeable. Many religions have sought to depict the realm of experience as a changeable surface beyond which lies a transcendent, eternal truth in which we may find comfort. But no such transcendence or mental tranquillity is to be found in the poems attributed to Enheduana. The only truth established in ‘The Hymn’ and ‘The Exaltation’ is that change, as personified by Inana, is supreme. To live in this world, according to Enheduana, is to live with the knowledge that the future may differ radically from the present, just as the present of the Old Akkadian period differed radically from its past. This vision of the world may seem a far cry from how we mostly make sense of it in our modern age. 
Academics of all stripes are united in their search for regularities that help explain the world we see around us, from chemical laws through psychological patterns to social systems such as capitalism or the patriarchy. In other words, when we try to understand a given phenomenon today, we most often do so by exploring the structures that shape it, meaning that the change and confusion embodied by Inana can seem like a rather naive way of explaining the world. But there are signs that our increasingly changeable times are being met with an increasing interest in change as a strategy for understanding the world. In their book Impermanence (2022), a team of British, Danish and Australian anthropologists promote what they see as an emergent theory of flux within the social sciences. This budding body of research draws inspiration from non-Western philosophies such as Buddhism – which has reflected on the significance of change and radical impermanence for millennia – to reframe familiar topics within anthropology and sociology. The question that propels that book, and the academic movement behind it, can be summarised as follows: how does a focus on change affect our perception of our objects of enquiry? Do social phenomena (the book explores examples such as migration, museums and alcoholism) look different when they are seen as being in a continuous state of flux rather than as determined by semi-static social structures? How does one stay sane at a time like this? How does one live at a time like this? Of course, one swallow does not make an academic summer. But, even in popular culture, one can detect a rising interest in philosophies that explicitly base their worldview on constant change. A key example is Stoicism, which promotes detachment and equanimity in the face of a world that it sees as inconstant and chaotic. According to Google Trends and Google Ngram – which track the mentions of a given word in, respectively, Google searches and texts uploaded to Google Books – interest in Stoicism is on a steady rise, which may signal that people around the world feel a growing need for intellectual guides to turbulent times. And who can blame them? I grew up in what now seems like a historical bubble of relative calm. The Western world around the turn of the millennium was sufficiently free from war, ideological conflicts and global pandemics for pundits to speak confidently of a lasting world order. Francis Fukuyama declared ‘the end of history’ and, while I was painfully aware of the injustices that plagued much of the globe, my corner of it, at least, felt safe. Now, having just turned 30, I confront a very different reality. Before me, before us, lie the catastrophes of the climate crisis and the social upheavals required to reduce emissions and adjust to extreme weather. Then there is COVID-19, which is unlikely to be the last global pandemic of this century, as no serious measures were put in place to prevent a recurrence of similar diseases. Then there is the war in Ukraine, and the economic turbulence it has brought; and the ongoing divisions that plague Western democracies and seem to grow deeper with each passing year. And so on. The list is familiar, almost clichéd, but each entry in it contains real horror. By the end of my lifetime, the world is overwhelmingly likely to look even more unstable than it does today. Collectively, we will have gone through a period of transformation that is all but unmatched in human history. 
I do not know how to feel about that, because it is not the sort of thing I was raised to feel any way about. How does one stay sane at a time like this? How does one live at a time like this? Enheduana does not offer clear answers to these questions, and I would not hold her up as a model for good living in difficult times. But her poems fascinate me in part because they describe, with dazzling intensity, a world where change is the norm. These are poems from, and about, unstable times. That is one reason I am drawn to them: I want to understand what it means to live in such a world, because I will probably have to.
Sophus Helle
https://aeon.co//essays/ancient-sumerian-poetry-turns-instability-into-cosmic-insight
https://images.aeonmedia…y=75&format=auto
Thinkers and theories
Ambedkar was not only a politician, but a profound thinker whose philosophy of democracy challenged the caste system
When one thinks of American pragmatism, one often puts too much emphasis on the American part. It might even stunt our enquiry, irrevocably fixating on thinkers such as John Dewey, William James, and Jane Addams. But there is more to the story of pragmatism than what happened in the United States around the turn of the 20th century. Pragmatism itself was a flexible, loosely allied approach to thinking that held few maxims in common other than the idea that our theorising and arguing ought to come from lived experience and ought to return back to experience as the ultimate test of its value. Its advocates such as Dewey greatly affected nations such as China through his teaching and lecturing, leading us to see that pragmatism has a global narrative connected with it. Is there a similar tale to be told about pragmatism and its interactions with India? Portrait of John Dewey (1932) by Samuel Johnson Woolf. Courtesy the National Portrait Gallery, Washington, DC Any narrative of pragmatism’s influence and evolution in India will centre on Bhimrao Ambedkar, a student of Dewey’s at Columbia University in New York. Some might recognise Ambedkar (1891-1956) as a chief architect of the Indian constitution in the 1940s. Others might recognise him as the indefatigable leader of India’s ‘untouchables’ (now denoted by the self-chosen label ‘Dalit’), given his constant advocacy for the rights of those oppressed by the complex and long-rooted caste system. Ambedkar himself was a so-called untouchable, which only fortified his commitment to seeking justice in the law and in social reforms for India’s most vulnerable populations. At the end of his life, he channelled his frustration at the prevailing caste consciousness within Hindu society into a conversion effort that tried to convince his fellow Dalits to convert away from Hinduism and into a more egalitarian Buddhism. On 14 October 1956, just weeks before he died, he led what was at the time one of the world’s largest voluntary mass conversions. This event held in Nagpur featured Ambedkar, his wife Savita, and an estimated 500,000 Dalits converting to Buddhism. For reasons such as these, Ambedkar was voted the ‘greatest Indian’ in post-independence India in a poll that included more than 20 million votes being cast. Ambedkar was not merely a political figure or leader. He was also a philosopher. One can see the evidence for this in the reconstructed Buddhism that he advanced in his final years, coalescing in his rewritten ‘Buddhist Bible’, The Buddha and His Dhamma, which was completed just before his death on 6 December 1956. In this book, Ambedkar reconstructed the narrative of the Buddha, de-emphasising traditional formulas such as the four noble truths, and foregrounding poverty, injustice and the building up of social communities. In short, he reconstructed the Buddhist tradition and its myriad texts to show how it could function as a social gospel, or an engaged philosophy that could even meet the growing waves of those inspired by Karl Marx and Russian communism in the 1950s. The intersections of Ambedkar’s political activism and his philosophical acumen were vividly displayed earlier in his life. The 1930s was a period marred by Ambedkar’s conflict with the powerful symbol of the Indian independence movement, Mahatma Gandhi. 
Having lost faith in the Hindu tradition as amenable to social reform, Ambedkar grew so disillusioned that, by 1935, he proclaimed in a speech that, although he was born a Hindu, ‘I solemnly assure you that I will not die a Hindu.’ By 1936, he was very loudly criticising Hinduism and its holy texts, and imploring his fellow Dalits to convert away from Hinduism to escape their oppressive status as ‘untouchables’. In an infamous speech – undelivered because of its explosive criticisms of sacred Hindu shastras (holy texts) – Ambedkar argues that the caste system is harmful not only because it is oppressive, but most importantly because it destroys the unity and respect among members that are essential to democracy. Ambedkar’s activism on the specific issue of caste oppression was underwritten by a full-throated philosophy of democracy. Ambedkar (middle row, far right) with colleagues at the London School of Economics and Political Science, London, 1916-17. Courtesy Wikipedia His thought was creative and powerful. But no thinker springs fully formed onto the intellectual scene. Where did Ambedkar find the inspiration for the image of democracy he was going to construct – and then employ – in his fight for social justice in India? Ambedkar was one of the most well-read Indians of his period, possessing a personal library of around 50,000 books at the time of his death. But he was also one of the most highly educated Indian leaders of his day, with academic and legal credentials from institutions such as Columbia University, the London School of Economics, and Gray’s Inn. One of the most intriguing things about Ambedkar is that he pursued much of his Western education in the US, and not just in the well-known universities so many of his Indian (and upper-caste) compatriots frequented. Ambedkar went to Columbia in 1913-16 and was exposed to one-of-a-kind progressive intellectuals bent on using academic research to change societies and practices for the better. At Columbia, Ambedkar would study intensively with the US economist and taxation specialist Edwin R A Seligman; the Russian-born economic historian, ornery anti-Marxist and amateur gardener Vladimir Simkhovitch; and perhaps the most impressive philosopher of the day, John Dewey. In his later years, Ambedkar remembered all these figures but Dewey and his pragmatism stood out. While Ambedkar was making his way back to Columbia in 1952 to receive an honorary degree, Dewey died of pneumonia. Distraught, Ambedkar wrote to his wife Savita from New York lamenting the fact that he missed the chance to see his beloved teacher: ‘I was looking forward to meet[ing] Prof Dewey.’ Ambedkar’s letter then revealed what Dewey had meant to him: ‘I am so sorry. I owe all my intellectual life to him. He was a wonderful man.’ The courses Ambedkar took from Dewey gave the young Indian reformer a powerful overview of pragmatism Ambedkar had accomplished so much, both in the millions of words he wrote or spoke to intellectual or general audiences, and in the political and social activism he pursued. But what sort of intellectual debt did he owe Dewey? While many have noted this intriguing letter, none have truly explored the historical and intellectual relationship between Dewey and Ambedkar. This is a shame, since both are intellectual giants in their own rights, and their confluence can show us what Ambedkar saw as valuable in Dewey – and how Ambedkar’s own pragmatism extends the pluralistic tradition of pragmatism. 
Ambedkar was not just an activist – he was also a philosopher. The philosophy he advocated was a form of pragmatism fitted to the concerns of democracy amid social divisions such as those of caste. While taking classes at Columbia University, Ambedkar stumbled into Dewey’s classroom. He shouldn’t have been there – the young Indian had signed an agreement in 1913 with the Maharaja of Baroda, his financial supporter, that he would study only finance and sociology at Columbia. But Dewey had a profile that would have been difficult for Ambedkar to resist in his coursework. The US pragmatist was at the top of his game in the 1910s, engaged in the philosophical work that would inform his book Democracy and Education (1916). Dewey was also hard at work creating institutions such as the American Association of University Professors in 1915 – with figures such as Ambedkar’s advisor, Seligman – dedicated to protecting academic freedom at US universities. Dewey’s philosophy was also making its mark on the US scene. By the time he had joined Columbia’s faculty, he had already gained fame with his early writings from his time at the University of Michigan and his operation of the ‘Laboratory School’ at the University of Chicago, an example of how Dewey’s practical experiments informed his philosophical writings. By the time Ambedkar heard Dewey at Columbia, Dewey had left his older philosophically Idealist vocabulary behind and was engaged in exploring the interaction of community and experience in human life. Resisting the older emphases in much European philosophy toward the unchanging and certain, Dewey revelled in an ever-changing and uncertain world. Dewey’s thought emphasised this important nexus of experience. It merged his two guiding lights – G W F Hegel and Charles Darwin – into a vision of philosophy as doing justice to the lived qualities of experience, as well as the human capacity for reflection or enquiry. Our powers of reason were not godly or divine, but they came from and returned to courses of experience that called for our engaged attention to reconstruct them. Dewey’s philosophy dovetailed with his work on education and pedagogy, as they both saw the human as a habit-bearing being that could bring these habits to bear on experience that offered more problems than resources. He saw the power in our ability to intelligently change the habits of self and other to become better adapted to our social and natural environments. Dewey’s philosophy aimed to theorise the world so as to enable us to better adjust to it or to adapt it to our needs. His thought was oriented at reconstructing ourselves and our communities, more so than simply to describe the truths of the world. The courses that Ambedkar took from Dewey in psychological ethics and political philosophy gave the young Indian reformer a powerful overview of pragmatism. He saw Dewey as extending and enlarging the tradition of philosophy that William James (1842-1910) and Charles S Peirce (1839-1914) had helped to shape, and that contemporary figures such as Jane Addams (1860-1935) and Mary Kingsbury Simkhovitch (1867-1951) were both theorising and putting into practice in the social sphere. There are many stories to be told about Ambedkar, but there is one that has yet to be fully explored. It is a story of influence, imagination and emancipation. It is the story of Ambedkar as a pragmatist. What does it mean to consider Ambedkar as a pragmatist? 
Does it mean we are somehow capturing his essence, and excluding the other important labels often attached to him and his life story? In short, no. Just as Ambedkar can be described as a Buddhist and as a politician, he can also be described as a pragmatist. Each of these labels gives us a way of understanding and foregrounding certain facts and themes in his life story; no label captures everything or is the ‘final’ descriptor of who one is. It is in this spirit that Ambedkar can be talked about as a pragmatist, meaning that he and his thought developed to some extent in reaction to other pragmatists such as Dewey. Thinking about Ambedkar as a pragmatist highlights certain themes in his approach and his thought that we may not have appreciated before. It also makes sense of the newly discovered archival evidence that I explore in my book The Evolution of Pragmatism in India (2023), indicating that Ambedkar sought to combine Dewey’s views on democracy with Buddhism as early as 1914. Ambedkar’s reception of Dewey in forming his own philosophy – his own pragmatism, as I see it – is complex. Dewey inspired Ambedkar to evolve a sort of pragmatism that targeted caste oppression, but which built up a vision of democratic social systems that allowed individuals to matter. Dewey gave Ambedkar ideas, ideals and even methods to experiment with or even resist. He saw in the philosophy espoused by his American teacher a source of novelty and creativity. Through his courses with Dewey, and in the many books by Dewey that he continued to purchase and annotate into the 1950s, we can see how Dewey’s pragmatism was an important touchstone or inspiration for Ambedkar. It was not something he would blindly copy or duplicate. Instead, it became a resource and a source of motivation to do certain things in certain ways once he returned to India. In a methodological sense, it showed him the value of reconstruction. Dewey had problems with the quest for certainty among philosophers ranging from Plato to Immanuel Kant to many of his contemporaries; Ambedkar felt a similar constriction when it came to the claims to timelessness and divine certainty made on behalf of the sanatan (eternal) tradition stemming from the ancient Vedas. Ambedkar saw this same tradition as underwriting the customs of caste that had divided Indian society and oppressed individuals like him for thousands of years. There was nothing Ambedkar could do in this lifetime to remove the stain of untouchability in the eyes of others The pragmatist commitment to philosophy as a way not just to grasp the eternal truths of the world and hold on, but instead to purposefully change or reconstruct it, struck a chord in Ambedkar. One common thread across all the disparate parts of his intellectual and practical life was the idea that he should not remain content with the world as received by him or his surrounding culture. He felt the command to change this world, and to change those that might have power over it, through his activism, his political manoeuvres, and even his impassioned speeches. The world for Ambedkar was what we could make of it, and he saw a path to reconstructing it in a more just manner that would erase the sort of hate and suffering he felt as an ‘untouchable’. But reconstruction must aim for something. What did Ambedkar’s selective and creative pragmatism aim for as its goals or ends? What sort of moral ideals did it strive to realise? 
One of the recurring themes in Ambedkar’s harsh criticisms of caste throughout his life was that this graded social system suppressed the ‘human personality’ of those in ‘lower’ castes. It limited the occupations that individuals could pursue, the clothes they could wear, and even the paths they could travel, to birth status. It was at birth that one received one’s special mix of traits or potentialities from past lives, as Ambedkar saw the caste system play out in his life. He was an untouchable because of his birth placement, one that resided at the very bottom of the graded hierarchy of caste groups, and one that most other ‘higher castes’ saw as ritually polluting. There was nothing that Ambedkar could do, at least in this lifetime, that would remove the stain of untouchability in the eyes of others enraptured by these customs. For Ambedkar, this was an affront to the worth of the individual. Drawing from Dewey’s early works – especially his essay ‘The Ethics of Democracy’ (1888) – Ambedkar came back from his education in the West and argued that caste customs hurt the ‘growth of personality’ and developed only ‘the personality of the few at the cost of the many – a result scrupulously to be avoided in the interest of Democracy.’ Each person was unique in their mix of impulses, drives and interests, and the best sort of society would help individuals create and recreate themselves with their social engagement. All he saw with caste was a restraining and limiting of what roles and talents an individual could develop. Ambedkar would often refer to his battle against the caste system – epitomised by his hatred for the practice of untouchability – as ‘a battle for the reclamation of human personality, which has been suppressed and mutilated by the Hindu social system.’ For Ambedkar, as well as for young Dewey, society worked best when it offered freedom and opportunity for each individual to develop as a valued member of a community. Democracy became the philosophy that facilitated this evolution of each person beyond strictures of separated classes or castes. Ambedkar’s philosophy orbits around another recognisably pragmatist commitment – the idea that communities matter in both science and ethics. Indeed, Ambedkar would maintain that vital senses of democracy went beyond the overtly political. Democracy was a habit for Ambedkar, as well as for Dewey, and not just a formal way of decision-making among elected officials. Throughout his writings, Ambedkar was fond of echoing Dewey’s phrase from Democracy and Education, saying that: Democracy is not merely a form of Government. It is primarily a mode of associated living, of conjoint communicated experience. It is essentially an attitude of respect and reverence towards fellowmen.Later in his life, Ambedkar would refer to democracy as a way of life. All of this pointed to the central idea that social democracy was an ideal to be achieved in our everyday experience. But what can we make of social democracy as a habit or a way of life? For Ambedkar’s pragmatism, democracy became more about how we interact with our fellow community members or citizens than about constitutions and voting exercises. Political institutions and laws are important, of course, but what was of primary importance for Ambedkar – and his teacher, Dewey – were the customs and habits that animated us in our myriad interactions with our friends and foes in social experience. 
Part of Ambedkar’s philosophical genius lies in how he reworks this idea of deep democracy into a normative framework used to critique caste. He saw caste as both a group custom and as an individual habit of how one reacts to others. It was inherently and essentially divisive. Caste habits, and caste labels, told Ambedkar and his fellow community members how they should value and act toward each other. In his own case, it led others to exclude or limit contact with him. If democracy meant the formation of groups where each individual mattered, caste was, Ambedkar surmised, inherently antidemocratic. But on what standard ought we judge systems and communities as a whole? This was a problem for the sort of philosophy that young Ambedkar heard in Dewey’s courses. Dewey would advise that ideals and moral values came from within a historical or community context. He was reluctant to appeal to sources of transcendental certainty, like God or pure reason, to settle matters. Ambedkar appreciated this intuition, but he needed something more than an appeal to culture or a tradition. For him, the problems of India were inherently connected to a millennia-old stratification of its communities into a hierarchy of occupation and value-determining classes based upon birth. As he would put it in an early publication, Indian society under caste was a tower with many floors but no stairways upon which one could ascend. He criticised Russian communism for achieving equality through violent means that sacrificed the liberty of many Ambedkar knew that much of this caste superstructure was grounded on claims to ‘timeless’ or divinely revealed matters in holy texts. Like Dewey, he could not appeal to moral certainties to counter other divine truths. But his pragmatic approach became speckled with constant appeals to three values – liberty, equality and fraternity. These were the values of the French Revolution that Ambedkar heard in Dewey’s course in March 1916, perhaps for the first time. Later in his life, Ambedkar would make overt efforts to translate these terms into Buddhist concepts. But the trichotomy remained. These terms were not tethered to one culture – including French – but instead became semi-transcendent values that could be used to critique any community as to its adherence to the democratic ideal and the value of developing the personality of each individual. Justice was the tense balance among these three valued aspects of individual and communal experience. The power of these values, seen across Ambedkar’s speeches, as well as the preamble to the Indian constitution he took a heavy hand in drafting, was that they revealed the problems with caste and with potential solutions such as communism. Ambedkar would bring these values to bear to show how caste customs functioned to destroy the liberty and equality of those judged ‘untouchable’. He would also, later in his life, criticise the communism he saw in Russia for achieving equality through violent means that inevitably sacrificed the liberty of many and the sense of fraternity among the opposing groups in society. The way to make individuals matter must focus on their equal treatment and their ability to freely direct their lives. It also must result in the creation of a community characterised by shared interests and mutual respect, a state of affairs so central to the often-overlooked value of fraternity. 
Ambedkar’s philosophy is an anti-caste philosophy but, in drawing upon the pragmatist ideal of deep democracy, it became something even more encompassing – a philosophy of democracy. Talking of Ambedkar’s pragmatism is a way of highlighting a constellation of important ideas and ideals from Dewey, retasked for the Indian context. It also allows us to see Ambedkar as a global philosopher, one concerned with caste and with other problems that undo the search for democratic communities. What results is something absolutely unique in the pragmatist tradition: an evolution of social democracy that brings new insights into the problems of oppression and division, and a creative way to reconstruct society through a tense and ever-changing balance of freedom, equality and fellow-feeling among all those who share a common fate together.
Scott R Stroud
https://aeon.co//essays/what-ambedkar-learned-from-dewey-and-brought-to-india
https://images.aeonmedia…y=75&format=auto
Art
When it comes to our complicated, undecipherable feelings, art prompts a self-understanding far beyond the wellness industry
He is standing in front of an old, intricately decorated urn in a museum, looking at the images etched into its surface, when he begins to wonder: What leaf-fring’d legend haunts about thy shape Of deities or mortals, or of both, In Tempe or the dales of Arcady? What men or gods are these? What maidens loth? What mad pursuit? What struggle to escape? What pipes and timbrels? What wild ecstasy? These lines come from the opening of ‘Ode on a Grecian Urn’, written by John Keats in 1819. Across the poem’s five wandering and acutely detailed stanzas, Keats chooses not to seek an understanding of the urn in front of him through research or historical data; instead, he observes and imagines through questions and narratives. A person etched into the surface is playing a pipe under a tree – music that can’t be heard, Keats muses, and a tree that will never shed its leaves. Nearby, two lovers are frozen while leaning in for a kiss. To the poet, it seems their love is never-ending: though they will never kiss, they’ll never grow old or apart. Absorbed by the figures depicted on the urn, Keats creates an imaginative space, a space for thinking-by-looking. Who are these coming to the sacrifice? To what green altar, O mysterious priest, Lead’st thou that heifer lowing at the skies, And all her silken flanks with garlands drest? What little town by river or sea shore, Or mountain-built with peaceful citadel, Is emptied of this folk, this pious morn? ‘Ode on a Grecian Urn’ gives us a sense of the poet’s mode: he asks question after question about the urn, not to uncover facts or ‘answers’, but rather to sustain his experience of wonder and curiosity. There is something else, too. Keats is not only speculating, inventing and describing; he’s also seeking out the effects of his imaginative engagement with the urn itself. What exactly are these effects? And is there something about this particular mode of engaging with images and objects – with art – that could prove valuable in other contexts? One evening, I tell my friend that I feel as though there is a wall in my mind, blocking me from my own thoughts. And all I’m left with, I say, are feelings I can’t explain. There is no story or reason or event that helps them make sense. I’m left sad, confused. My friend, who is a painter, wonders: well, how will you get around the wall? Maybe you could turn yourself into air, or water. For a moment, we take this image of the wall seriously, imagining its architecture and the logic of its construction: what are its dimensions? What material is it made from? Is it porous? Could water seep through? What was, moments ago, just an analogy for my feelings of powerlessness and uncertainty starts to take shape as an open-ended image – like an abstract painting or a figure etched into an old urn. Though the image doesn’t solve my problems, wandering the space inside it with curiosity starts to change the way I’m thinking. I’m no longer seeking explanations for inscrutable emotions, wondering: ‘Why am I feeling this way?’ I’m now exploring somewhere new, asking: ‘What is this space and how does it work?’ Rather than trying to force an explanation on to my feelings, I explored the ambiguous image of the wall with no predefined outcome. The objectives were open-ended and unarticulated. Yet the effect was transformative. I was reminded of Keats’s urn, and the potential in his mode of questioning and imagining. Perhaps, I wondered, there is a pathway among these urns and walls that leads toward mental wellness? 
But if there was, it was unlike any other pathways I was familiar with. We need more open-ended forms of understanding and reflection – self-help beyond the self The tools that dominate the mental wellness landscape today – from mindfulness apps to certain forms of cognitive behavioural therapy – offer very different approaches. The shared strategy behind these forms of self-help is often defined by a kind of self-surveillance, in which wellbeing emerges from looking inwards. Through practices, prompts and language that encourage this inward focus, these tools aim to create calm, understanding or epiphanies. These are good things. The concern, however, is that being so inwardly attentive – to your behaviour, feelings, bodily changes and social interactions – may lead to hypervigilance or the hyper-articulation of the self. For the philosopher Byung-Chul Han, writing in Psychopolitics: Neoliberalism and New Technologies of Power (2017), this kind of ‘self-optimisation’, driven by ‘the compulsion always to achieve more and more’, can lead to burnout and exhaustion. This may happen when a person becomes overly focused on the self and learns to measure themselves against the pre-determined benefits of a self-help exercise; or when someone is constantly surveilling themselves, parsing their behaviour and thoughts through the limited vocabulary and logic of dominant mental wellness tools. These tools are meant to heal but, for Han, ‘healing’ now ‘refers to self-optimisa­tion that is supposed to therapeutically eliminate any and all functional weakness or mental obstacle in the name of efficiency and performance’. Self-optimisation, he writes, ‘amounts to total self-exploitation’. The alternative? Take pressure off the self by looking elsewhere and engaging outward. Ambiguous art can help with this practice. Such an approach involves learning to observe and question creative work, rather than observing and questioning the self. Unlike self-help strategies that come with their own set vocabularies and categories of wellness, a practice of imaginatively looking at art involves the viewer developing their own ideas and vocabularies as they grapple with an image or object they are encountering for the first time. Today, as wellness fatigue begins to set in and the facile promises of self-help become more difficult to trust, perhaps we need more open-ended forms of understanding and reflection – self-help beyond the self. An alternative mode, based on the practice of ‘reading’ images, objects and other artwork, might shift focus from the self to that which is less easily accessed or understood. Art has the power to hold our attention, draw us away from ourselves, and keep us looking closely at something we don’t entirely understand. Learning to explore something unfamiliar and ambiguous, by wielding our imagination and curiosity, is like developing a kind of muscle, which could prove useful to other aspects of our lives. Perhaps the muscle is what Keats called ‘negative capability’: the ability to withstand doubt or uncertainty, remain open to that which is not readily understandable, resist the urge to explain away what we don’t comprehend, and to accept the impossibility of a singular conclusion. Figuring ourselves out doesn’t need to begin and end with the self, or with the surveilling tools of mental wellness. It can just as easily start by looking at a painting. 
While walking through the halls of the permanent collection at the Blanton Museum of Art in Austin, Texas, I come across a large abstract painting. Though I’m standing in front of it, I have little idea what it depicts. The title, Alchemist, is a clue. I look at the painting as though it is a dictionary, searching for the meaning of ‘alchemist’ in its colours, shapes, textures. Above me, on the top right of the canvas, there seems to be a blue face, and next to the face is a dark object that could be a tree – or is it the black screen of a heart-rate monitor at a hospital? I go with hospital. It suits the expression on the face, with its eyes looking up, as though lying in a hospital bed. The face looks a little like that of Claude Monet’s wife, from his painting Camille on Her Deathbed (1879). The brush strokes in Alchemist are large, irregular and overlapping, making the scene more troubled, confused, chaotic. Was there really a face? A tree? A hospital scene? Perhaps these shifting forms are the transmutation, the ‘alchemy’ that the title refers to. Alchemist (1960) by Philip Guston. Courtesy the Blanton Museum of Art, the University of Texas at Austin; gift of Mari and James A Michener As I spend time with Philip Guston’s Alchemist (1960), I follow these questions, allowing myself to remain unsure of what I’m seeing, while staying in conversation with that uncertainty. However, rather than becoming frustrated or bored, I find that following the deeper pathways through my imagination makes the uncertainty itself interesting. This is the negative capability muscle at work. With patience, the stamina for this kind of practice increases. Camille on Her Deathbed (1879) by Claude Monet. Courtesy the Musée d’Orsay, RMN-Grand Palais/Patrice Schmidt; gift of Mme Katia Granoff, 1963 This painting, like many other kinds of art, refuses to explain itself directly or clearly. The familiar becomes unfamiliar. This is precisely what the practice requires: ambiguity. It is because the image is not immediately understandable – thanks to Guston’s use of materials, scale, form and visual language – that it piques our curiosity, drawing our attention and inviting questions. Ambiguity, in its various forms, produces the conversation between a viewer and an artwork, allowing our imagination to assign different narratives, ideas or feelings to the work. Ambiguity in the literary arts has a similar power. One way this is expressed is through the fragment, a literary form found in the writing of James Joyce, Roland Barthes, Maggie Nelson and others. The fragment is used to produce a kind of disorientation in readers and to push them to make sense of sentences or phrases that seem disjointed. Like an abstract painting, the literary fragment asks readers to grapple with several possible meanings that lines of text could have, to ask why they have been placed side by side, and to piece together what the resulting narrative could be. For example, on the opening page of Joyce’s A Portrait of the Artist as a Young Man (1916), we find these fragmented lines: His father told him that story: his father looked at him through a glass: he had a hairy face. He was baby tuckoo. The moocow came down the road where Betty Byrne lived: she sold lemon platt. O, the wild rose blossoms On the little green place. He sang that song. That was his song. O, the green wothe botheth. On the first reading, the sentences may seem incoherent, as though there are several missing links between each line. 
Joyce uses this technique, in part, to transport the reader into the mind-world of a child. Words like ‘wothe’, ‘botheth’ and ‘platt’ are not in any modern dictionary – the reader must imagine what the words could mean using the sound, feel and associations of the words as clues, and the surrounding sentences as context. This is how a child might develop understanding while listening to adults having a conversation.

From the perspective of semiotics, all images carry the possibility of multiple meanings

Nelson’s Bluets (2009) abandons even the comfort and sense-making of the paragraph form and, along the lines of Barthes’s A Lover’s Discourse: Fragments (1977), employs a list:

10. The most I want to do is show you the end of my index finger. Its muteness.
11. That is to say: I don’t care if it’s colourless.
12. And please don’t talk to me about ‘things as they are’ being changed upon any ‘blue guitar’. What can be changed upon a blue guitar is not of interest here.

Deciphering Nelson’s work is closer to the experience one has with the abstract painting: as readers, we hold on to the clear images of the index finger and the blue guitar, and attempt to piece them together using the tone of the writing (such as ‘please don’t talk to me’) as a clue to the mood of the story, arriving at multiple possible narratives. For Barthes, writing from the perspective of semiotics, all images carry the possibility of multiple meanings. In his essay ‘The Rhetoric of the Image’ (1964), he describes the terror and promise that comes from this ambiguity:

… all images are polysemous; they imply, underlying their signifiers, a ‘floating chain’ of signifieds, the reader able to choose some and ignore others. Polysemy poses a question of meaning and this question always comes through as a dysfunction … Hence in every society various techniques are developed intended to fix the floating chain of signifieds in such a way as to counter the terror of uncertain signs …

In other words, the multiple signs (or units of meaning) that are present in any artwork – visual, literary or otherwise – are what allow us to wonder about the work by reading it in different ways as we consider multiple interpretations. Even a seemingly non-ambiguous object, like a stop sign, has the potential to generate different ideas. Though these kinds of objects have one dominant meaning that we understand immediately, looking in a wandering and open-ended way can bring out less obvious meanings and ideas. When Barthes writes about the desire to ‘counter the terror of uncertain signs’, he is referring to the way in which ambiguity is generally done away with in our daily interactions. The stop sign is meant to mean one thing only. In this case, it is for good reason: signs like this need to be read quickly and uniformly by all, for public safety. Clarity and precision are championed in most communications in society. And yet, this is not how we usually feel or operate within our inner worlds. The practice of looking, of building a negative capability, involves developing the ability to withstand the ‘terror’ of uncertainty. It also presents us with the possibility of re-looking at things that we feel we have understood well – the possibility of seeing in new ways, re-imagining objects that we assumed carried singular, obvious meanings. It is the middle of the night. I wake up from a bad dream. I feel uneasy. I know it was a dream but in the pit of my stomach I am carrying feelings for which I don’t have full, linear explanations.
My thinking might not be clear or comprehensive. The ability to tolerate this – or even be curious about the experience, and not collapse under its confusing, shapeless, nameless ambiguity – is not dissimilar from the experience I had with Guston’s Alchemist. Waking from a bad dream or confronting a work of abstract art are both examples of moments when we are presented with the uncertainty of our feelings or experiences. By learning to dwell on the uncomfortable images in these moments, by reflecting and imagining, we might begin to intuit, or tolerate, what is wrong. This practice of looking does not prioritise academic or historical perspectives on art. It is divorced from the artist, the industry and the formal study of the arts. By paying attention to the form, title and other perceptible ‘clues’ in the work, this practice is primarily interested in using the intuitive, sensory, suggestive and aleatory to engage in conversation with a creative work. The point is not to develop an answer, an interpretation that ‘settles’ the ‘question’ of the painting, or to intellectualise the work in terms of form, style, history or the concerns of the artist. Rather, in this practice, a piece of art or writing becomes a test or opportunity for working one’s imagination – an exercise in making associations. The focus stays within the frame of the image or text. The art object, which cannot speak, can contain only a collection of polysemous signs that become visible when pulled out of the canvas by the viewer’s imagination. The ideal engagement is not one that dispels that which is challenging in a creative work, but rather one that builds tolerance for exploring the ambiguous through Keats’s negative capability. It’s like the interpretation of a dream, which can be full of ideas but not answers. It is not an escape from understanding but rather the strategy that makes understanding possible Another idea crucial to maintaining this open-ended engagement is ‘suspension’, borrowed from the literary critic Gayatri Chakravorty Spivak’s reflections on ways of reading. In her essay ‘Righting Wrongs’ (2004), Spivak describes good reading practice as ‘suspending oneself into the text of the other’, but ‘suspension’ doesn’t only mean lingering in the uncertainties of a text (or artwork). For Spivak, what needs to be suspended is a reader’s conviction that they are ‘necessarily indispensable’ to the ultimate meaning of a text or a work of art. In other words, open-ended engagement with creative work does not involve ascribing meaning by asking ‘How does this painting feel relatable to me and my world?’ Instead of making a work bend toward ourselves, Spivak insists we bend ourselves toward it by practising ‘patient reading’ and asking the work for a response. She acknowledges that it ‘cannot speak’, but that this shouldn’t stop us seeking a response from the ‘distant other’ – whether it’s a painting, a sculpture, a film, a photograph, a text, or any other ambiguous, creative work. This suspension maintains the focus on the object one is looking at, rather than on the self. It is not an escape from understanding but rather the exact strategy that makes understanding possible. Of course, suspending the self is difficult. We can look only through our eyes and imagine using the associations or references we’ve been exposed to. But it’s not impossible. Suspension emerges when I consider a painting to be a world of its own; different to my own world, yet one that I can still peer into. 
It emerges when I show restraint towards how I assign my own experiences or feelings to the meaning of the painting, and instead allow the painting to lead me through its myriad meanings. In other words, the painting is not a mirror of my emotions and ideas, where I might find connections to my own experience. It is part of its own world. And in the process of observing, imagining and deciphering – of seeking a response from the ‘distant other’ – I begin to build my capacity for uncertainty, ambiguity and the inarticulable. Why is this valuable? The point is not to distract or escape the self through the imagination, but to find ways of keeping the unknown tolerable, and even interesting. Crucially, in addition to creating a sense of personal wellbeing, this practice of looking or reading, by engaging with something outside the self with an interest and desire to figure it out, can potentially build a critical empathy in our approach to the world beyond us and the Other. The effect that works of art can have on people has been the focus of a long, ongoing conversation within the visual arts. In the book Again the Metaphor Problem and Other Engaged Critical Discourses About Art (2006), Liam Gillick remarks to his fellow conceptual artists John Baldessari and Lawrence Weiner, and the curator Beatrix Ruf:

People either functionalise the work, instrumentalise it, or use it as a metaphorical structure. The truth is that the work is none of these things alone. The object is neither just functional nor is it exactly a metaphor of the idea of a place for something to happen. It has potential, it is in a constant state of ‘becoming’.

Gillick’s quote calls to mind a series of works that Baldessari began in the mid-1980s, in which the faces of people in black-and-white photographs have been obscured with coloured dots. Through this move, Baldessari manipulates the meanings of the photographs, pushing us to see the images in new, counterintuitive ways. We’re invited to make sense of the figures’ obscured expressions, and to consider what the bright colours and perfect circles add to the story. In one of these works, Cutting Ribbon, Man in Wheelchair, Paintings (Version #2) (1988), an awkward raised elbow, possibly jabbing the person standing nearby, takes on heightened meaning. The friendly sun-yellow dot that covers the jabber’s face suggests the action is innocent or all in good fun – a narrative we layer on top of the photo, like a sticker. As Gillick says, the image ‘has potential’; it holds many narratives that surface one after the other, depending on what we notice and the questions we ask. Sustaining an engagement with art on these terms can produce organic, unpredictable, generative experiences. The stakes of extending this practice go beyond leisure or entertainment. Through this kind of looking and thinking, we have the power to change how we approach our experiences or know other worlds. And in today’s era of inward hypervigilance, open-ended engagements with art, bound by no predefined outcomes, can help us seek understanding beyond the self, free from surveillance. Baldessari believes that ‘[t]he public can make what they want’ of his work. Gillick agrees – art is unstable. ‘[T]here is a constant negotiation of the terms of critique as much as negotiation of the thing itself, the signified thing, the subject,’ Gillick says.
‘The work is about constantly negotiating the terms of engagement.’ That negotiation can work wonders for our bad dreams, and our complicated, undecipherable feelings as well.
Aparna Chivukula
https://aeon.co//essays/it-is-art-not-apps-that-helps-us-with-our-complex-feelings
https://images.aeonmedia…y=75&format=auto
Human rights and justice
Thousands of Indigenous children suffered and died in residential ‘schools’ around the world. Their stories must be heard
Between 1890 and 1978, at Kamloops Indian Residential School in the Canadian province of British Columbia, thousands of Indigenous children were taught to ‘forget’. Separated from their families, these children were compelled to forget their languages, their identities and their cultures. Through separation and forgetting, settler governments and teachers believed they were not only helping Indigenous children, but the nation itself. Canada would make progress, settlers hoped, if Indigenous children could just be made more like white people. In 1890, this curriculum of forgetting was forcibly taught in the few wooden classrooms and living quarters that comprised Kamloops Indian Residential School. But in the early 20th century, the institution expanded, and a complex of redbrick buildings was constructed to accommodate an increase in students. In every year of the 1950s, the total enrolment at the ‘school’ exceeded 500 Indigenous children, making this the largest institution of its kind in Canada.

Plan of Kamloops Indian Residential School, 1917
View of Kamloops Indian Residential School, date unknown

Today, the redbrick buildings are still standing on the Tk’emlúps te Secwépemc First Nation’s land. You can still look through the glass windows and see the old classrooms and halls. You can walk the grounds, toward the site of the former orchard or the banks of the nearby river. And you can stand over the graves of 215 children who died right here, at Kamloops Indian Residential School. Some never saw their fourth birthday. You might think the Kamloops ‘school’ and its unmarked graves are an isolated and regrettable part of Canadian history, which we have now moved beyond. But that is a lie. Those 215 graves are part of a much larger political project that continues to this day. When the burial sites at Kamloops were identified in May 2021 using ground-penetrating radar, news of the ‘discovery’ spread through international media. First-hand accounts of former students and Indigenous community members began to spread, too, and it soon became clear to the wider world that the ‘discovery’ was really a confirmation of what Indigenous peoples in Canada had known for generations. As Rosanne Casimir, the current Kúkpi7 (chief) of Tk’emlúps te Secwépemc, explains it, the search for bodies was a deliberate attempt to verify a knowing:

We had a knowing in our community that we were able to verify. To our knowledge, these missing children are undocumented deaths … Some were as young as three years old. We sought out a way to confirm that knowing out of deepest respect and love for those lost children and their families, understanding that Tk’emlúps te Secwépemc is the final resting place of these children.

The testimonies from survivors and their descendants were met with expressions of shock and disbelief from settler Canadians: how could this have happened? Why didn’t we know anything about this? But the knowledge was no secret. It was publicly available in institutional records; it was in the testimonies of Indigenous peoples; and it was in 20th-century reports made by government officials. We didn’t just choose to forget; we participated in a grand project of forgetting.

Evelyn Camille, 82, a survivor of Kamloops Indian Residential School, beside a memorial to the 215 children whose remains were discovered there; 4 June 2021.
Photo by Cole Burston/AFP/Getty

During the past decade or so, I have been finding out what I can – as a white British psychologist with longstanding interests in education and social justice – about this forgetting and the attempts made to forcibly assimilate Indigenous peoples through residential ‘schooling’. I am grateful beyond measure to the Indigenous peoples from Canada and elsewhere who have generously shared their experiences and stories with me over the years. Very often, their parting advice to me has been something along the lines of: ‘You should educate your own people about this.’ This essay is my most recent attempt to do so.

Abuses didn’t take place only in the dim and distant past

Yes, I’ve been honoured and privileged to have had Indigenous survivors of ‘educational’ systems, and their descendants, share their experiences and perspectives with me. But hearing the truth directly isn’t the only way for settlers and Europeans to learn and remember. The records are there, filled with the stories of those left to drown in the wake of settler colonisation. So, what does that say for our apparent ‘shock’? What does our ‘surprise’ really mean? These questions become more confronting when we accept that abuses didn’t take place only in the dim and distant past. Consider this 1998 testimony of Willie Sport, who was a student of Alberni Indian Residential School in British Columbia in the 1930s:

… I spoke Indian in front of Reverend Pitts, the principal of the Alberni school. He said: ‘Were you speaking Indian?’ Before I could answer, he pulled down my pants and whipped my behind until he got tired. When I moved, he put my head between his knees and hit me harder. He used a thick conveyor belt, from a machine, to whip me. That Principal Pitts was trying to kill us. He wouldn’t tell parents about their kids being sick and those kids would die, right there in the school. The plan was to kill all the Indians they could, so Pitts never told the families that their kids had tuberculosis. I got sick with TB and Pitts never told anyone. I was getting weaker each day, and I would have died there with all those others but my Dad found out and took me away from that school. I would be dead today if he hadn’t come.

Abuses took place well into the 20th century. The revelation of the burial sites at Kamloops and the ensuing ‘shock’ of settler Canadians show that forgetting – in the form of unlearning, concealment, or deception – is an integral part of the very system that killed those children and erased them from settler memories.

Frank E Pitts and Nellie Pitts, principal and matron of Alberni Indian Residential School (c1930s)
Boys at Alberni Indian Residential School (c1930s)
Alberni Indian Residential School (after 1939)

This forgetting is nothing new. It is part and parcel of the European colonial project. It enabled such endeavours as the ‘discovery’ and ‘claiming’ of territory, the physical slaughter of Indigenous populations, and the attempts to forcibly assimilate Indigenous peoples by interring their children in residential institutions. However, deception has been used against European populations, too – the forgetting that accompanies forced assimilation goes both ways.
When frameworks for dispossession become entrenched through educational, social and political systems, settler states can compel their citizenry to ‘forget’ the horrors of colonisation, to deny that these things ever happened, and to aggressively demand that others join them in this deliberately cultivated collective ‘amnesia’. Settler ‘forgetting’ isn’t just a lapse in memory. It inherits an older impulse: the intentional annihilation of Indigenous knowledge systems. It’s epistemicide. For centuries, Indigenous peoples around the world have known that their children were taken away, that great harm was done to those children, and that their families and communities suffered. From the late 1800s to the late 1900s, roughly 150,000 First Nations, Métis and Inuit children were interned in residential ‘schools’ in Canada. At almost the same time, Indigenous children around the world faced similar experiences, including Māori children in New Zealand; Aboriginal and Islander children in Australia and the Torres Strait Islands; Sámi, Inuit and Kven children in the Nordic countries; and Native children in the United States, among others. Many survivors have shared their memories of these experiences, and their lasting effects: My soul was damaged. These are the most barren and fruitless of my learning years. They were wasted, so to speak, and a wasted childhood can never be made good.– Anders Larsen, a Sámi teacher, reflecting on his days as a residential ‘school’ student in Norway in the 1870s They just started using English, you could only – you could not use any other language … It’s like I had to be two people. I had to be Nowa Cumig, I had to be Dennis Banks. Nowa Cumig is my real name, my Ojibwa name. Dennis Banks had to be very protective of Nowa Cumig. And so I learned who the presidents were, and I learned the math, and I learned the social studies, and I learned the English. And Nowa Cumig was still there.– Dennis Banks, leader in the American Indian Movement, describing his arrival at the Pipestone Indian Boarding School, Minnesota, in the 1940s, in the documentary series ‘We Shall Remain’ (2009) I was sent out to work on a farm as a domestic … [I]t was a terrifying experience, the man of the house used to come into my room at night and force me to have sex … I went to the Matron and told her what happened. She washed my mouth out with soap and boxed my ears and told me that awful things would happen to me if I told any of the other kids … Then I had to go back to that farm to work … This time I was raped, bashed and slashed with a razor blade on both of my arms and legs because I would not stop struggling and screaming. The farmer and one of his workers raped me several times … I was examined by a doctor who told the Matron I was pregnant … My daughter was born [in 1962] at King Edward Memorial Hospital. I was so happy, I had a beautiful baby girl of my own who I could love and cherish and have with me always. But my dreams were soon crushed: the bastards took her from me and said she would be fostered out until I was old enough to look after her. They said when I left Sister Kate’s I could have my baby back. I couldn’t believe what was happening. 
My baby was taken away from me just as I was from my mother.– Millicent D, an Aboriginal woman, describing her experiences at Sister Kate’s orphanage, Western Australia, in the 1960s, as part of the report ‘Bringing Them Home’ (1997) ‘Of one school on the reserve, 75 per cent were dead at the end of the 16 years since it opened’ Most Europeans and settlers have not attached any importance to first-hand Indigenous knowledge and experience because these accounts do not serve the colonial project. But some did pay attention, and they were horrified. As early as the 1920s, government officials in Canada and the US had raised serious concerns about the appalling conditions that existed in the ‘schools’ by faithfully (and statistically) documenting what they had observed. In a government report entitled The Story of a National Crime (1922), the Canadian physician Peter Bryce (who here refers to himself in the third person) noted that: For each year up to 1914 he wrote an annual report on the health of the Indians, published in the Departmental report, and on instructions from the minister made in 1907 a special inspection of 35 Indian schools in the three prairie provinces. This report was published separately; but the recommendations contained in the report were never published and the public knows nothing of them. It contained a brief history of the origin of the Indian Schools, of the sanitary condition of the schools and statistics of the health of the pupils, during the 15 years of their existence. Regarding the health of the pupils, the report states that 24 per cent of all the pupils which had been in the schools were known to be dead, while of one school on the File Hills reserve, which gave a complete return to date, 75 per cent were dead at the end of the 16 years since the school opened.The US statistician Lewis Meriam recorded similar concerns in a report titled ‘The Problem of Indian Administration’ (1928): The survey staff finds itself obliged to say frankly and unequivocally that the provisions for the care of the Indian children in boarding schools are grossly inadequate … At the worst schools, the situation is serious in the extreme … The term ‘child labour’ is used advisedly. The labour of children as carried on in Indian boarding schools would, it is believed, constitute a violation of child labour laws in most states.However, as Bryce noted, these reports were ignored or never published. When he attempted to publicise his findings, he was persecuted and forced into early retirement. Strategies of concealment and silencing continued. The last residential ‘schools’ for Indigenous children in North America only closed their doors between 1995 and 1998 – seven decades after the Bryce and Meriam reports. What were the avowed purposes behind the global spread of Indigenous residential ‘schools’? And why was so much time, money and energy spent on building and operating these educational systems? One of the more direct explanations of the mindset that justified residential ‘schools’ appears in a speech given by Canada’s first prime minister, John A Macdonald, to the House of Commons in 1883: When the school is on the reserve the child lives with its parents, who are savages; he is surrounded by savages, and though he may learn to read and write his habits, and training and mode of thought are Indian. He is simply a savage who can read and write. 
It has been strongly pressed on myself, as the head of the Department, that Indian children should be withdrawn as much as possible from the parental influence, and the only way to do that would be to put them in central training industrial schools where they will acquire the habits and modes of thought of white men.

Macdonald was repeating ideas that had become widespread and were uncontroversial at the time. These ideas were also echoed by Richard H Pratt, a US army captain who founded Carlisle Indian Industrial School in 1879 in Pennsylvania after ‘transforming’ 72 Indigenous prisoners of war who were in his charge. In 1892, Pratt gave a now-infamous summary of his educational philosophy:

A great general has said that the only good Indian is a dead one … In a sense, I agree with the sentiment, but only in this: that all the Indian there is in the race should be dead. Kill the Indian in him, and save the man.

The ‘kill the Indian, save the man’ dictum (as it became known) was positioned as philanthropic at the time because it seemed to mark a progressive transition from the policy of killing to ‘saving’ through educative assimilation. Pratt’s biographer described him as the ‘red man’s Moses’. But what is obvious in the words of both Macdonald and Pratt are the ideas that informed them: that the white man knows what is best for Indigenous children – better than their own families and communities – and what is best is assimilation, against which the ‘contaminating’ influences of family, culture and tradition must be held at bay. The intention was to play out these ideas until the very end. Before a parliamentary committee in 1920, Canada’s Deputy Superintendent of Indian Affairs, Duncan Campbell Scott, explained the state’s ultimate goal: ‘Our object is to continue until there is not a single Indian in Canada that has not been absorbed into the body politic.’

Girls at Kamloops Indian Residential School (date unknown)
Kamloops Indian Residential School (date unknown)
Children at Kamloops Indian Residential School, 1931
A classroom at Kamloops Indian Reservation School (date unknown)

Today, some might wonder why separating children from their families and cultures could have seemed like a good idea as recently as a century ago. We forget that these practices of separation and forgetting weren’t incidental, or historical accidents. Instead, they were deliberately positioned as progressive and philanthropic. And residential ‘schooling’ hasn’t been the only means by which these progressive and philanthropic practices of separation and re-education have been implemented. Forcible adoptions and ‘care’ systems – as in the ‘Stolen Generations’ in Australia and the ‘Sixties Scoop’ in Canada – show that those ‘progressive’ practices have continued and diversified. The fact that Indigenous children today are disproportionately represented in care systems worldwide suggests that these practices are still being implemented.

Peoples of colour were left in an intermediate position between Europeans and the animal world

The ideas that informed the Indigenous residential ‘school’ systems did not spring up in a vacuum. They are part of a longer history of Eurocentrism – whether informed by Christianity, Social Darwinism or today’s neoliberalism – in which peoples of colour are deemed to be ‘less than’ or ‘Other’.
One of the sources of this way of thinking was the papal bull Inter caetera (1493), issued by Pope Alexander VI, which deemed non-Europeans pagans whose souls would be damned without intervention from Christians. The Inter caetera explicitly allowed ‘full and free power, authority, and jurisdiction of every kind’ to colonising Europeans, effectively permitting the dispossession, enslavement and mass murder of Indigenous peoples. The papal bull is part of the so-called ‘doctrine of discovery’, a set of legal and religious precepts that certain European nations saw as giving them carte blanche to colonise the world. For many Indigenous people in Canada, the Pope’s 2022 apology for the abuses at the country’s residential ‘schools’ was meaningless without the Roman Catholic Church rescinding the doctrine of discovery. In an interview on CBC News in 2022, the Cree singer-songwriter, activist and educator Buffy Sainte-Marie said:

The apology is just the beginning, of course … The doctrine of discovery essentially says that it’s OK if you’re a [Christian] European explorer … to go anywhere in the world and either convert people and enslave, or you’ve got to kill them … Children were tortured.

In reconciliation efforts, the damaging legacy of the doctrine has rarely been acknowledged – and neither have other tools of colonisation, such as the electric chair used on Indigenous students at St Anne’s Indian Residential School in Ontario, Canada. As Sainte-Marie said:

[The Canadian Museum for Human Rights] want my guitar strap and they want handwritten lyrics … happy, showy things. But I want them to put the damn electric chair right there and to actually show people the doggone doctrine of discovery.

In March 2023, following decades of pressure from Indigenous peoples, the Vatican issued an official statement formally repudiating the doctrine of discovery. Hailed by some as evidence of progress, the statement did not indicate that the doctrine had been (or would be) rescinded, and even suggested that the legacy of suffering ascribed to it was the result of misinterpretations and ‘errors’. Whether the Vatican’s statement constitutes, or could ever constitute, the type of rescindment that would be meaningful to Indigenous peoples is less than clear. The doctrine of discovery, however, wasn’t the only piece in the puzzle of ideas that informed the Indigenous residential ‘school’ systems. As far as the colonial project went, the major practical result of the philosophical struggles between science and religion during the 17th, 18th and 19th centuries was the partial replacement of theological dicta with a pseudoscientific justification of Eurocentric might and right that allowed Europeans to falsely biologise cultural, historical and economic differences. In the 19th-century European mind, most of the links of the medieval Christian ‘great chain of being’ were still very much intact. God and the angels might have been lopped off the top, but peoples of colour were left where they had always been, in an intermediate position between Europeans and the animal world. In the settler states that emerged from European colonisation, governing powers attempted to create a sense of national security by positioning the emerging nation state – whether it was New Zealand, Australia, Canada or the US – as a single, unified people. The traditional motto that appears on the Great Seal of the United States, E pluribus unum (‘Out of many, one’), reflects the settler dream of a unified country.
In the case of the US, this unification threatened to come apart during the Civil War in the early 1860s but was consolidated through the fantasy of ‘Manifest Destiny’ (the belief among settlers that, having reached the ‘promised land’, it was their duty to settle the continent from coast to coast). In reality, what ‘manifested’ was a long and bloody war with Indigenous populations for territory. Following the physical slaughter, dispossession and subjugation of Indigenous populations around the world, survivors were to be assimilated. Education became the prime mover in these assimilative efforts with ‘school as the battlefield and teachers as frontline soldiers’, in the words of the Norwegian historian Einar Niemi. For Europeans in the late 1800s, notions of genetic determinism (stemming from Social Darwinism) began to be understood differently. A new idea was growing: though the ‘inferiority’ of Indigenous populations was probably ‘in the blood’, as scholars at the time believed, patterns of ‘savagery’ might be unlearned. Re-education was seen as the way to give younger generations their ‘best’ chance of living in the new society. You can almost hear the ‘progressive’ 19th-century Christian saying: Who knows, they might even become as good, civilised and enlightened – almost – as we white people. Residential ‘schools’, like the one in Kamloops, relied on more than the informing ideas of Eurocentrism or Social Darwinism. They were also built on mechanisms of assimilative reform developed when new institutions – workhouses, reformatories and industrial schools – emerged in England (and other European nations) from the 1700s following the criminalisation of poverty and nomadism through the Poor Laws. The Canadian-born sociologist Erving Goffman considered them to be ‘total institutions’: [A] social hybrid, part residential community, part formal organisation; therein lies its special sociological interest … In our society, they are the forcing houses for changing persons; each is a natural experiment on what can be done to the self.What is ‘total’ in a total institution is the unidirectionality of power. This can be seen in psychiatric hospitals, leprosariums, nursing homes, orphanages, sanitaria or religious retreats, penitentiaries, prisons, poor houses, prisoner-of-war camps, or Indigenous residential ‘schools’. A 1994 report to the Royal Commission on Aboriginal Peoples – written by the Haudenosaunee activist, psychologist and professor Roland Chrisjohn, an expert on residential institutions in Canada, and his colleagues – described what these total institutions can do to the self: Whether it was preparing prisoners for their eventual release into society, novitiates for service to a religious order, inductees to follow without question the orders of their superior officers, or victims of genocide to submit with minimal resistance to their destruction, the point of total institutions was the total war on the inner world … and the reconstitution of what was left along lines desired, or at least tolerated, by those in power.Carceral archipelagos smoothed the way for philanthropic methods of reform to become techniques of ‘purification’ By their very nature, total institutions play up their roles as philanthropic sites of social good and reformation while playing down their role as sites where ‘enemy’ populations are confined, abused and sometimes murdered. 
In Europe, beginning around the 18th century, the hard line between helping and harming often dissolved as societal norms became institutionally enforced. Outsiders were seen as hostile to social progress. Europe’s ‘enemies’ included the psychiatrically ‘ill’, the intellectually and physically disabled, the children of the poor, women who were deemed sexually promiscuous (such as the ‘fallen women’ of Ireland), Indigenous populations around the world, religious minorities and any other disempowered ‘minorities’ who found themselves Othered by the state’s political and ideological whims. The emergence of workhouses, reformatories and industrial schools as state-commissioned carceral archipelagos smoothed the way for philanthropic methods of reform to seamlessly become techniques of ‘purification’ and destruction. The ‘good work’ initiated and undertaken by powerful agencies in societies (often through total institutions) erases life-worlds, which extends the social power of these institutions far beyond their walls. And those of us who have been fortunate enough to remain on the outside of those walls are compelled towards a genuine or cultivated ignorance of what happens inside. We explain or rationalise the abuses that occur. We forget. By the late 19th century, settlers wanting to address the ‘problem’ of Indigenous populations were armed with a surety of cultural superiority, a guiding principle of assimilation via education, and an institutional model. These tools seemed to elude any form of scrutiny. What could possibly go wrong? On the inside of residential ‘schools’, the great chance being offered to (or more accurately, enforced upon) Indigenous children often didn’t look that great. Assimilation, as Pratt and others understood it, meant separating a child from their environment and erasing the ‘Indian’ inside them. Indigenous children were expressly forbidden to speak their own languages, wear their own clothes or jewellery, keep their hair long, keep their own names or, indeed, to express anything of their pre-institutional identities and cultures. In 2008, the NPR journalist Charla Bear reported on the experiences of Bill Wright, a Patwin elder, who was sent to the Stewart Indian School in Nevada in 1945, aged six: Wright remembers matrons bathing him in kerosene and shaving his head … Wright said he lost not only his language, but also his American Indian name. ‘I remember coming home and my grandma asked me to talk Indian to her and I said: “Grandma, I don’t understand you,” Wright says. ‘She said: “Then who are you?” Wright says he told her his name was Billy. “Your name’s not Billy. Your name’s TAH-rrhum,” she told him. And [Wright] went: ‘That’s not what they told me.’In the residential ‘schools’, physical abuse, often under the guise of castigations for the most minor of transgressions, could be extremely brutal. In 1995, Archie Frank told the Vancouver Sun what happened in 1938, when his friend and fellow Indigenous student Albert Gray, then aged 15, was caught stealing a prune at Ahousaht Indian Residential School in British Columbia: The day after he got strapped so badly [by the school principal, Reverend Alfred E Caldwell] he couldn’t get out of bed. The strap wore through a half inch of his skin. His kidneys gave out. He couldn’t hold his water anymore … They wouldn’t bring him to a doctor. 
I don’t think they wanted to reveal the extent of his injuries.

Archie and another friend had tried to look after Albert by bringing him food and changing his urine-soaked sheets but, after lying in bed for several weeks, Albert died. Reverend Caldwell was also accused of causing the death of a girl called Maisie Shaw, who died at Alberni Indian Residential School in 1946 after Caldwell had kicked her down a flight of stairs. He was also named as having sexually assaulted another girl, Harriet Nahanee, who was sexually abused by the school’s administrators for years. No charges were ever brought against him.

‘We owed our unspeakable boarding schools to the do-gooders, the white Indian-lovers’

Horrific sexual abuse in the ‘schools’ has been widely documented and seems to have been commonplace. In some institutions, the children became the involuntary subjects of medical experiments. The goal of the ‘schools’ was a total transformation through re-education. This typically took place through labour training intended to prepare graduates for menial work. As the Sicangu Lakota activist and author Mary Crow Dog explains it in her book Lakota Woman (1990), those who graduated were trained to occupy the lowest occupational and social rungs in settler society:

Oddly enough, we owed our unspeakable boarding schools to the do-gooders, the white Indian-lovers. The schools were intended as an alternative to the outright extermination seriously advocated by generals Sherman and Sheridan, as well as by most settlers and prospectors over-running our land … ‘Just give us a chance to turn them into useful farmhands, labourers, and chambermaids who will break their backs for you at low wages.’

The system was, in the US historian David Wallace Adams’s words, an ‘education for extinction’. If those words seem strong, that’s only because we forget. As Bryce and Meriam documented in their (ignored) 1920s reports, hunger, disease and neglect were rife in residential institutions for Indigenous children. Death rates were horrific. In May 2015, the chair of the Truth and Reconciliation Commission of Canada, Justice Murray Sinclair, estimated that at least 6,000 Indigenous children died while in the residential ‘school’ system, which would mean that the odds of dying were around the same for Indigenous children in residential ‘schools’ as for Canadian soldiers in the Second World War. However, the figure of 6,000 may be a considerable underestimate. Coverage across CBC News in the months that followed the Kamloops recovery reported more than 1,300 potential unmarked burials at nine locations, and there were 139 Indigenous residential ‘schools’ in Canada. Furthermore, in those first five months, the National Centre for Truth and Reconciliation had documented 4,118 children who died in the residential ‘schools’, with less than a fifth of the records having been worked through. For centuries, Indigenous peoples have had no option but to live with the consequences of assimilation via ‘education’. This includes abuse, separation (and ongoing disconnection) from family, the loss of cultural identity and language, intergenerational trauma, and a layered history of unresolved grief. Significantly, the ‘discovery’ of burial sites at Kamloops and elsewhere in Canada has coincided with the nation’s supposed engagement in processes of ‘truth and reconciliation’ with Indigenous populations. Even in these endeavours, settler populations continue to privilege their own knowledge.
The idea that settlers and Europeans have superior insights into the ‘best interests’ of Indigenous peoples has remained largely intact. Even attempts to apologise or seek ‘reconciliation’ are informed by a desire to draw a line and move on. Land return and Indigenous sovereignty are never on the table. The United Nations Declaration on the Rights of Indigenous Peoples delimits Indigenous ‘sovereignty’ to near-tokenism in its insistence on the preservation of the territorial integrity of nation states. Instead, what Indigenous peoples are being asked to reconcile with is loss, thus cementing the colonisation process. In many truth and reconciliation processes, and in gestures of apology on the part of nation states, there seems to be an attempt to skip forward to reconciliation without taking the necessary interim steps of accountability and justice. Yet again, the wars we need to consider are not only the ones that have historically been fought against Indigenous bodies, but also the wars that continue to be waged on Indigenous and settler memories. What is at stake in these wars is a specific kind of loss. It’s not only the suffering and deaths of Indigenous people, or the loss of land and language. It’s something more fundamental: genocide. Some Indigenous leaders in Canada felt that one of the most important outcomes of the papal visit in 2022 was the Pope’s subsequent recognition that what occurred was indeed, in his words, ‘genocide’. Following the papal visit, it became clear that the word ‘genocide’ apparently confuses many people. How many people know that the Indigenous populations of the Americas declined by 90-98 per cent since 1492? If one were to ask for an example of genocide, it is likely that most people would respond with the Shoah, the holocaust committed against Europe’s Jewish populations in the 1930s and ’40s. It is also likely that they would tell you that 6 million Jewish people died in the course of those atrocities, and that those 6 million comprised two-thirds or more of Europe’s Jewish population. Today, there remain those who deny (sometimes publicly so) that such appalling actions ever took place. Repulsive in the extreme, this politically motivated historical revisionism has meant that, as of 2021, some 25 European countries as well as Israel have laws that address the phenomenon of Holocaust denial. But how many people know that the Indigenous populations of the Americas declined by between 90 and 98 per cent in the four centuries following the landing of Christopher Columbus in the Caribbean in 1492? How many of them would know that the 1948 UN Convention on the Prevention and Punishment of the Crime of Genocide accurately describes the experiences of Indigenous peoples at the hands of European colonists and settlers? How many of them would know that the man who coined the word ‘genocide’, the Polish lawyer Raphael Lemkin, described the process as having ‘two phases; one, destruction of the national pattern of the oppressed group; the other, the imposition of the national pattern of the oppressor’? Following Lemkin’s description, all acts of settler colonisation should be understood as genocidal. But the sad reality is that using the term ‘genocide’ to refer to colonial and settler-colonial actions against Indigenous populations is still hotly disputed. 
Indeed, the New York Post marked the anniversary of the ‘discovery’ of the burial sites at Kamloops with an article in which participants in the project of forgetting were given free rein to ‘debunk’ the finding as the ‘biggest fake news story in Canada’. Make no mistake, the wars on Indigenous and settler memories continue, and their perpetrators are finding new ways to wage them in the 21st century. This essay, too, will become part of that war. What would it mean to be on the right side of that war? For Roxanne Dunbar-Ortiz, an American historian and activist, and Jack Forbes, a Powhatan-Renapé and Delaware-Lenape historian, it means accepting a specific and necessary form of responsibility. Building on Forbes’s ideas, Dunbar-Ortiz writes: [W]hile living persons are not responsible for what their ancestors did, they are responsible for the society they live in, which is a product of that past. Assuming this responsibility provides a means of survival and liberation.Being on the right side of that war, and taking responsibility for the society we live in, demands finding a way out of the forgetting mindset. More than half a century ago, the Scottish psychiatrist R D Laing encapsulated much of what I think is at stake in his book The Politics of Experience and the Bird of Paradise (1967): It is not enough to destroy one’s own and other people’s experience. One must overlay this devastation by a false consciousness inured … to its own falsity.Exploitation must not be seen as such. It must be seen as benevolence. Persecution preferably should not need to be invalidated as the figment of a paranoid imagination, it should be experienced as kindness … The colonists not only mystify the natives … they have to mystify themselves. We in Europe and North America are the colonists, and in order to sustain our amazing images of ourselves as God’s gift to the vast majority of the starving human species, we have to interiorise our violence upon ourselves and our children and to employ the rhetoric of morality to describe this process.In order to rationalise our industrial-military complex, we have to destroy our capacity both to see clearly any more what is in front of, and to imagine what is beyond, our noses. Long before a thermonuclear war can come about, we have to lay waste our own sanity. We begin with the children. It is imperative to catch them in time. Without the most thorough and rapid brainwashing their dirty minds would see through our dirty tricks. Children are not yet fools, but we shall turn them into imbeciles like ourselves, with high IQs if possible.Can we say that Laing’s observations do not ring true today? Through our genocidal and epistemicidal actions, we Europeans and settlers tried to ‘brainwash’ Indigenous peoples and their children, but those who survived saw through our ‘dirty tricks’, and despite our ‘educational’ systems, we failed to turn them into ‘imbeciles with high IQs’. Our epistemicidal actions have also produced successive generations of European and settler ‘imbeciles … with high IQs’, thus compounding the colonial project through self-mystification – we’ve brainwashed ourselves. In my view, the burial sites at the residential ‘schools’ should force settlers and Europeans to question and challenge our cultivated amnesia, our continued obviation and obfuscation of truths and, above all, our ignorant and arrogant attempts to compel others to forget with us. But learning the facts is just the beginning. 
The Swedish author and journalist Sven Lindqvist stressed this point in the opening pages of Exterminate All the Brutes (1992), his exploration of colonisation and genocide:

You already know enough. So do I. It is not knowledge we lack. What is missing is the courage to understand what we know and to draw conclusions.

If we cannot do this – if we cannot find the courage to face the truth – then surely we have abandoned, or lost forever, whatever tenuous claim we might have held to progressive humanity. The burial sites of Indigenous children at residential ‘schools’ in Kamloops and elsewhere are some of the most recent reminders of the urgent, long-overdue necessity to do things differently.
Steve Minton
https://aeon.co//essays/we-must-not-forget-what-happened-to-the-worlds-indigenous-children
https://images.aeonmedia…y=75&format=auto
Religion
It took a tremendous effort to distinguish early Christianity from the finely tuned world of pagan beliefs and rituals
Christianity developed in a world with a well-articulated understanding of a multilayered and hierarchical universe that was, above all, animated. Most inhabitants of the ancient world envisioned cosmic energy as alive, meaning that the essence of physicality, spirituality and ethics rested in a host of supernatural sentient beings. Among those beings were demons who dwelt in the space between the earth and the Moon. In the mid-2nd century CE, Justin Martyr explained the role of demons in Christian thought. The sons of God succumbed to intercourse with human women, and they begot children called the Nephilim (meaning giants). The progeny of the Nephilim were demons. These demons enslaved the human race, sowing wars, adulteries, licentiousness and every kind of evil. All the pagan gods, Justin warned, were, in fact, demons who haunt the earth. The North African bishop Augustine offered a different genealogy. He identified demons as the rebel angels who fought alongside and suffered the same fate as Lucifer (also known as Belial, Beelzebub, the Devil, Satan, and the ‘Day Star’) whom God cast out of heaven after he mounted a failed rebellion. Both pagan and Christian ideologies envisioned demons in prominent roles but, for pagans, demons could be both good and bad. They resembled deities in that they shared in their immortality, but they were also subject to obnoxious, irrational cravings. Demons were positioned between humans and gods, and could act as guardian angels. Demons were corporeal, though of a material much lighter than, and superior to, the human form; they could move faster than mortals, read thoughts, and slip in and out of spaces impossible for the human body to occupy. For the Church, all demons were malevolent. Christians saw demons as shape-shifters who copulated promiscuously with human beings, controlled the weather, sickened their victims, flew through the atmosphere, impersonated the dead, predicted the future, and were always to be feared. The 4th-century Christian writer Lactantius wrote:

Because these spirits are slender and hard to grasp, they work themselves into people’s bodies and secretly get at their guts, wrecking their health, causing illness, scaring their wits with dreams, unsettling their minds with madness.

It is important to note that in the 4th century, when he wrote, the notion of a super-demon, that is Satan or ‘the Devil’, had not yet developed. Until the high Middle Ages (c1050-1200), Satan was just one more demon, albeit a particularly nasty one. Augustine was the most instrumental of the Church fathers in articulating the theology governing the relationship between human beings and demons. Miracles are allowed by God and wrought by faith, not by incantations and spells. Marvels not performed for the honour of God are illicit sorcery accomplished by the deceitful tricks of malignant demons. Magic took place when humans trafficked with demons in order to carry out particular deeds such as divination, casting spells, love magic, raising storms, and astrology. Demons feasted on the smoke, incense and odour of blood rising into the clouds from animal sacrifices. They craved blood, so, in order to lure demons, people mixed gore with water or offered up burnt sacrifices. This exchange created a contract by which humans could enlist demons to do their bidding. Feasting on sacrificial flesh in cultic ceremonies was not the only way to attract demons.
Any ritual activity that resembled pagan worship, such as honouring idols, casting spells or worshipping in the outdoors – regardless of intention – was magic. The Christian clergy had to be ever vigilant that the people under their care were not inadvertently interacting with demons. In its attempt to distinguish itself from the many cults and belief systems that formed a veritable mosaic in the ancient world, early Christians had to confront demons, the magic they facilitated, and the contumely of other religionists. That was an awesome task because magic was ubiquitous. One of the earliest undertakings of Christian apologists was to counter slurs against Jesus and his apostles that they were nothing more than charlatans taking advantage of the superstitious disposition of the ignorant. Pagans slung insults at Christians for passing off tricks as miracles. The 2nd-century pagan philosopher Celsus referred to Christian miracles as masquerades for scandalous ‘trickery’, less impressive than the stunts of jugglers who performed in the marketplace. Nothing filled demons with dread and kept them at bay like a sanctified church The foundational metaphors of Christianity and paganism differed and conflicted with one another. The importance of place emerged for Christians as they crafted a new identity and a way to express it through ritual. Pagans looked to the natural world for meaning. Christian identity, on the other hand, was manifest in human-made consecrated structures such as churches and shrines. The new place of worship had to be one where demons did not feel welcome. When Christians established consecrated sites (the settings of ritual), they were often competing with pagan holy places that abounded in the world of nature – spots near lakes, beneath trees, at hallowed rocks, and in forests. Although Near Eastern and Mediterranean religions were temple-oriented with a sophisticated concept of enclosed ceremonial, the common person did not, as a rule, enter the hallowed domain, and most popular ritualistic, religious activity took place in the fields or outside the temple precinct – in short, out of doors. Christians created a new kind of space where demons dared not tread and in which continuity with old rites and the worldview they stored were thwarted. These churches provided a clean slate on which Christians could write in the language of ritual. The building became a symbol for the new religion. It was more than just a different location from those frequented by pagan celebrants and inhabited by their demonic deities. It was a new concept of place particular to Christianity – cleansed of demons, consecrated to that special creator god who does not inhere in his creation (trees, rocks, springs) and should not be worshipped through it. Nothing filled demons with dread and kept them at bay like a sanctified church. The motif of demons fleeing in terror from a consecrating bishop was familiar in late antiquity when the fight against idolatry was a matter of openly confronting pagan cults. In the 3rd century, Gregory the Miracle-Worker prayed at the local temple, and the next morning the temple warden could not induce a lingering demon to enter. Christian structures were fortifications against demons. Christian and pagan symbols also diverged in regard to shrines of the dead found in cemeteries outside the city walls. Christian and pagan approaches to death differed starkly. For pagans, the grave was a feared, polluted and haunted space from which the living recoiled. 
Early Christians fashioned a new kind of hallowed place where the dead and the living commingled, and these shrines were protected from the infiltration of the insidious demonic powers swirling around the tombs because they were protected by the supervision of the Church. In his 4th-century Life of Constantine, Bishop Eusebius advocated that pagan temples built over Christian holy sites be demolished and replaced by Christian shrines. He lamented that the emperor Hadrian and ‘a tribe of demons’ had defiled a Christian sacred place by building a temple to ‘impure’ Aphrodite over Christ’s tomb and had proffered ‘foul sacrifices there upon defiled and polluted altars’. The distinctive Christian approach to death emerged as a central feature in the competition with pagans for cultural dominance. Despite the radical differences in pagan and Christian notions of mortality, there were also similarities, and these frustrated the new religion in its effort to establish itself as unique. Necromancy in the ancient world pertained to the practice of calling the dead back to life for the purpose of learning the future. Pagan works portray contact with the dead as ghoulish and repugnant, but, if approached gingerly and undertaken for desirable ends, it was justified. Revivification of the dead was a major feat that required concentrated syncopation with cosmic powers, and such collaboration was realised and made safe through carefully executed rituals. For example, in his novel The Golden Ass, the 2nd-century pagan philosopher Apuleius relates a story of the corpse of Thelyphron, whom the Egyptian prophet Zatchlas temporarily revivifies so that the deceased can solve a mystery regarding his sudden demise. Thelyphron had recently married, but he died shortly afterward. As his funeral procession winds through the streets of a city in Thessaly, the rumour goes out that his wife had killed him by the use of poison and the ‘evil arts’. She protests, and the crowd settles the matter by asking Zatchlas to recall the spirit from the grave for a brief time and to reanimate the body as it was before his death. Zatchlas agrees. He begins the resurrection by placing a herb on the cadaver’s mouth and on his chest. Then the priest turns to the east and prays silently to the majestic sun, asking that the corpse be granted a momentary reprieve. The irritated dead man comes to life and complains that he was already being ferried over the river Styx; he asks why he had been dragged back among the living and begs to be left to return to his rest. The shade then confirms that his wife murdered him. In this case, the motive for interaction with the dead was worthy and accomplished with a careful, simple rite and a silent prayer. To pagans, Christian practices seemed mordant and cannibalistic A different and chilling case of pagan necromancy comes from the 1st-century Roman historian Lucan. In this story, Lucan describes the craft of Erictho, a medium who summons a spirit from the grave to reveal to the consul Pompey (who’d died in 48 BCE) the outcome of his impending battle with Julius Caesar (who’d died in 44 BCE). 
Lucan writes: [S]he chose a corpse and drew it along with the neck noosed, and in the dead man’s noose she inserted a hook … Then she began by piercing the breast of the corpse with fresh wounds, which she filled with hot blood … [Erictho mumbled:] ‘I never chant these spells when fasting from human flesh’ … She raised her head and foaming mouth and saw beside her the ghost of the unburied corpse … [T]he dead man quivered in every limb; the sinews were strained, and he rose, not slowly or limb by limb, but rebounding from the earth and standing erect at once. The tale of Erictho captures the pagan horror of necromancy and the repulsion they felt toward not just magic but mortality. The scene bespeaks the ugliness of death, which Romans found anathematic and polluting. This dread shaped pagan views of Christians, who seemed to savour the dead. They frequented burial grounds, celebrated death days, held up martyrs as role models (cherishing their body parts), and circulated stories of Jesus as a heroic figure because he could bring the deceased from the grave. This pursuit of intimacy with the dead repulsed pagans. They suspected that initiates to the new religion engaged in eating human flesh when, during the Eucharistic ritual, they consumed the body and blood of the dead Jesus. To pagans, Christian practices seemed mordant and cannibalistic. Many people in late antiquity saw Jesus and his followers as necromancers. This perception brought forth persistent denials from some of the best minds of the Patristic era. In one respect, pagans were right, Jesus had redefined death, and Christians did approach the deceased differently than their polytheistic neighbours. Whereas most pagan cults dreaded, shunned and burned the dead, Christians formed tender and mutually beneficial relationships with the spirits (and, in some cases, the material remains) of those who ceased to exist on a mortal plane. Rather than ostracising the dead beyond the city limits, by the 2nd century, Christians sought out the remains of their loved ones. The idea that the dead could live again was a central tenet of Christian belief. Following his resurrection, Jesus assured humanity that they could have eternal life. In the Gospel of Matthew, Jesus invests the disciples with the power to emulate his miracles, including resuscitating the dead. In the Gospel of John, Jesus revivifies Lazarus who had been gone for four days: [He] cried with a loud voice: ‘Lazarus come out.’ The dead man came out, his hands and feet bound with strips of cloth, and his face wrapped in a cloth. Jesus said to them: ‘Unbind him; let him go.’ For Christians, it was easy to distinguish between Jesus reviving a dead man for purely charitable purposes and the practice of fiends such as Erictho dragging a slain soldier back from Hades for mantic designs, revenge and personal gain. Erictho brought the soul back to the world against its will, not for its own benefit but to assuage the fears of those who engaged her services. The work of Erictho was avaricious, bloody and unnatural. The shade shrank from its former body and entered it only when threatened, and then with great pain. The unfortunate soldier did not receive the gift of life, but an agonising and bitter jolt back to an unwanted consciousness. The resurrection Jesus undertook was unguarded, altruistic, loving and selfless. The Healing of the Blind Man and the Raising of Lazarus, first half of the 12th century, Spain. 
Courtesy the Met Museum, New York Erictho used rituals involving plants, poisons, cannibalism and spells, while in John’s gospel the rite is a simple, controlled word formulation. The same could be said of the ritual performed by Zatchlas, however a distinction can be drawn between Jesus’ revivification and that by the pagan priest. Zatchlas brought the dead man to life for the purpose of telling the future, and the motive was just, but, by Christian reckoning, the act was demonic in that the priest was seeking information beyond human ken. Jesus’ favour to Lazarus, on the other hand, was a miracle done by the Lord – Jesus expected nothing in return. Magic is antipodal to miracle because of the source of power that actualises each. However, distinctions between miraculous resurrection and necromantic revivification were not clear-cut. Jews and pagans routinely represented Jesus as a magician Accounts of non-Christian revivification plagued Christian religionists. Stupendous miracles constituted a vital component of Christianity’s claim to authenticity, and the fact that many pagan holy men claimed to bring people back from the grave fed into the rivalry between the fledgling faith and dominant pagan cults. In the early 4th century, a provincial governor named Hierocles, seeking to defame Jesus and the Christian movement, wrote a treatise about Apollonius of Tyana, a Pythagorean magus who lived in the 1st century and was reputed to have miraculous powers to heal the sick, predict the future, and raise the dead. Hierocles compared Apollonius and Jesus, to Jesus’ disadvantage. He cast Jesus’ miracles as conjuring and cheap stunts – the kind any street magician could pull off. In his treatise, Hierocles describes a resurrection by Apollonius that closely resembles Jesus’ miracle. On one occasion, Apollonius revives a maiden who is being borne to the grave, simply by touching her and speaking a few words, very similar to the way in which Jesus raised the lifeless Lazarus. Neither Apollonius’ nor Jesus’ acts required grandiose rites or ritual substances such as saliva, blood or hairs. Jews and pagans routinely represented Jesus as a magician, and non-Christians commonly compared the marvels of Apollonius with those of Jesus. As late as the 4th century, Augustine alluded to the fact that some praised the miracles of Apollonius along with those of Christ. The sting in the comparison was that Christians considered Apollonius’ powers to be demonic and Jesus’ to be miraculous. Early Christians bristled when others censured them for necromancy, certainly because the efficacy of the necromantic art rested on demons of the lower air, but also because they sought to distinguish themselves from the many other religions and belief systems in the ancient world. Christian authors worked tirelessly to defend Jesus specifically and Christians generally against accusations of maleficium (malignant magic). Throughout the Early Middle Ages (c500-1000), Christian writers insisted that the power of their holy men and women rested not on demons that lurked between the Moon and the earth, and not on elaborate rites, but on faith, simple Christian rituals, and ultimately on God alone. Elaborate rituals equated to demonism. In an early Christian text called the Recognitions, the apostles repeatedly find themselves in situations where they are forced to defend Jesus and themselves against charges of magic. 
According to one story in the text, James sends Peter to Caesarea to refute the magician Simon Magus who is claiming to be Jesus Christ. The character Niceta questions how it is possible to distinguish between Jesus’ miracles and claims to divinity as put forth in the Gospels from those that Simon Magus and false prophets generally proffer. The answer to Niceta’s question emerged from an unexpected quarter. In Matthew and Luke, the virgin birth demonstrates Jesus’ preeminent and singular authority over other itinerant preachers and healers. According to the Patristic interpretation of these two gospel passages, the virginity of Mary was the critical sign that Jesus was not just another prophet, but the Christ called Immanuel. That Jesus was born of a virgin, thus fulfilling Old Testament prophecy, was the most demonstrable evidence of his godhood. Christians promoted this argument, at least in part, because the ancient world was full of holy men, prophets and magicians who could perform wonders, including raising people from the grave; this was in no way a unique claim. But the fulfilment of an ancient prophecy involving a virgin birth separated religion from common sorcery. Christians walked a tightrope on the issue of revivification. The earliest Christian theologians were univocally in harmony with their pagan neighbours on the evils of using (or trying to use) the deceased either for fortune-telling or to exploit the power of death’s liminal state for nefarious purposes. Dealings with reanimated corpses involved the worst sort of traffic with demons. Yet Jesus and his closest male followers resuscitated the deceased, and all Christians honoured the spirits and bodily remains of departed saints and fostered friendly relationships with these special dead. In the end, through sermons from the pulpit and private correction in the confessional, Christian intellectuals were able to convince converts that Christian resurrection was different from necromancy. At the same time that the clergy expressed ambivalence about ritualism because of its association with paganism, the Church was developing its own vocabulary of pious rites that all Christians could employ in place of those pagan customs that flirted with the demonic. Tracing the sign of the cross, baptism and exorcism all had the specific virtue of keeping demons at bay. One of the symbols that was easiest to manipulate was the ritual signing of the cross. In keeping with the general prejudice of the early Church against elaborate rites, signing with the cross was simple and employed casually. Crossing as a sign or symbol was a referent to the resurrection of Christ and the salvation of humankind, and it left no room for demonic infiltration like other signs might, in fact quite the opposite; the act of signing with the cross was meant to ward off demons. Beginning with the earliest Church literature, Christians were enjoined to ineffectuate evil and ensure the protection of persons and property by signing with the symbol of the cross instead of employing other superstitious apotropaic procedures. 
In his On the Military Garland, the 3rd-century theologian Tertullian writes: At every step and movement, at every entering and exiting, in dressing, in putting on shoes, at the bath, at the table, while lighting candles, when lying down or sitting, whatever we are doing, we mark our forehead by the sign [of the cross]. In his lectures for Lent, the 4th-century Bishop Cyril of Jerusalem says that the cross is ‘a terror to devils … For when they see the Cross, they are reminded of the Crucified; they fear Him who has “smashed the heads of the dragons”.’ The basic initiatory rite of Christianity was baptism, which acted as a foil to demonic infiltration and was rich in evocative and introspective rituals. It is a good case study for seeing how the early struggle for identity was waged on the field of ritual. A central component of the ‘rebirth’ inherent in baptism was renunciation of devils. Demons resided in water and frequented watery places, so the purifying power of the font challenged demons head on. The baptismal sacrament incorporated an exorcism, an explicit renunciation of Satan, and a command that ‘all evil demons depart’. The repudiation amounted to an abandonment of wrongheaded ritual; the catechumen was to say: ‘I renounce you, Satan, and all your service [displays or rituals] and all your works.’ Rather than drawing on demonic power, these Christian usages combated it. They were palliative and a counter to magic-ridden pagan rites, while exorbitant ceremony and complicated machinations with gaudy objects (all absent from baptism) were offensive to early Christians’ sense of the proper approach to God. In Latin, the word ‘health’ (salus) can also mean salvation, and, since soundness of the body and the soul were interwoven, spiritual and physical wellness continued to be expressed in the language of healing. The clergy and the saints were thought to administer the most effective medicine in the form of prayers, blessings and miraculous cures. Secular physicians were a suitable second choice, but magic was never an acceptable option for healing. To receive bodily cures from magic imperilled the soul and was ultimately self-defeating, even if it worked in the short run. The early Church was particularly sensitive about pagan facility with medicine because pastors felt it was critical for their flocks to understand that, although other gods (demons) could heal the body, only Christ, working through his designated vicars, could make the whole person sound – body and soul – and perpetuate that wellness into the next world. The earliest Christian writings use the discourse of healing to describe the benefits of the new religion and cast Jesus or the Church as ‘physician’. In some contexts, this characterisation was metaphorical, but it was just as often literal. Prayer, penance, supplication of saints and pious living were thought to be genuinely curative. Augustine wrote: Just as physical medicines, applied by humans to other humans, only benefit those in whom the restoration of health is effected by God, who can heal even without them. He submitted that both the mind and the body can be ‘cleansed’ best by Christ, who is a better physician than doctors or sorcerers. The very name of Jesus, when spoken, vanquished demons and ensured healing. Tertullian affirmed that all mastery and power over demons came from naming the word ‘Christ’.
In the field of therapeutics, the Christian struggle against magical superstitions was long-lived. It was not easy for the new religion to suppress age-old remedies that were generally applied in intimate and quasi-private settings: the home and the monastery. The time-honoured feel of traditional paganistic cures and the texts that transmitted them added legitimacy to the rites that had kept people safe for generations. The Church’s sought-after ownership of health provoked a rivalry with pagan cults, because certain of the deities had always been healers. The most renowned of the healing deities was the Greek god Asclepius. Of all the healing cults, his sect posed a particularly competitive challenge to Christians in the fierce rivalry over healing. Justin Martyr maintained that demons introduced the ‘myth’ of Asclepius to challenge Jesus’ prowess as a healer. Justin claimed that the Devil so feared Jesus’ popularity that the ‘Evil One’ brought forth Asclepius to imitate the gospels and cheat men of their salvation. Christianity was ultimately successful at establishing itself as the only legitimate religion in the Roman world. However, the struggle for supremacy was protracted and hard fought. The Church was met with the challenge of facing down an ancient, finely chiselled and much beloved cultural system of which demons and magic were a part. Christianity’s success was due, in part, to the development of a new and thoroughgoing system of rituals responsive to its own worldview.
Martha Rampton
https://aeon.co//essays/early-christians-struggled-to-distinguish-themselves-from-pagans
https://images.aeonmedia…y=75&format=auto
Art
Noticing first one then many parrots, peacocks, owls and more birds in Old Master paintings taught me to truly see the world
I am an accidental birder. While I never used to pay much attention to the birds outside my window, even being a bit afraid of them when I was a child, I have always loved making lists. Ranking operas and opera houses, categorising favourite books and beautiful libraries – not to mention decades of creating ‘Top Ten’ lists of hikes, drives, national parks, hotels, and bottles of wine. My birding hobby grew out of this predilection. Specifically, out of my penchant for writing down the birds I found in the paintings by the Old Masters. Hieronymus Bosch, for starters. Bringing my opera glasses to the Museo del Prado in Madrid, I delighted in sitting across the room and counting the birds in Bosch’s painting, today called Garden of Earthly Delights (1490-1510). The triptych, which visualises the fate of humanity in three large panels, is exploding with birds. So far, my list of Bosch birds includes spiralling flocks of starlings amid posing peacocks and pheasants. Closer to the water are storks, egrets and two kinds of herons. A jackdaw and a jay can be identified near a giant ‘strawberry tree’, below which are two spoonbills. And lurking in the trees are three kinds of owls, serving as signs of heresy. Details of The Garden of Earthly Delights (1490-1510) by Hieronymus Bosch. In his book A Dark Premonition: Journeys to Hieronymus Bosch (2016), the Dutch poet and novelist Cees Nooteboom describes seeing Bosch’s work when he was a young man of 21 – and then seeing it again when he was 82. He asks of one picture: How has the painting changed? How has the viewer changed? Am I even the same man now? These are the questions I ask myself while standing in front of a certain picture by Raphael in the Uffizi. The first time I saw the Madonna del Cardellino (c1505-06) was more than 30 years ago. I was 19. My college boyfriend and I had stopped in Europe on the way back from two magical months in India. It was my first time in Italy. And Florence was so damn pretty. Madonna del Cardellino or Madonna of the Goldfinch (c1505-06) by Raffaello Sanzio (Raphael). Courtesy the Uffizi gallery, Florence I vividly recall what a warm day it was, and how overwhelmed I felt by the grand museum. Walking past picture after picture, I turned back to look for my boyfriend, who was trailing behind. And there he was, utterly gobsmacked in front of a painting. So I walked back to look at it too. It was a Madonna by Raphael. A beautiful blonde Madonna, in a rich red dress with her cloak of ultramarine draped over her shoulders, and seated with two babes at her feet. One was holding a goldfinch. Being young Americans, we couldn’t understand any of it. Why were there two baby boys? If the second was John the Baptist, where was the child’s mother? And were those violets and chamomile under their feet? Serious birders sometimes talk about their first bird memory. My own earliest bird-in-a-painting memory was that goldfinch in the painting by Raphael in the Uffizi. Its composition is much like Raphael’s Madonna del Prato (1506), in Vienna – but at the Uffizi, instead of a cross, the children play with a tiny bird. Thirty years later, standing in front of the same painting, I now know the bird symbolises the Christ Child and the Passion. Madonna del Prato or Madonna in the Meadow (1506) by Raphael. 
Courtesy the Kunsthistorisches Museum, Vienna In Catalonia in Spain, there is a wonderful legend that suggests that the jagged and holy mountains of Montserrat rose from the earth at the precise moment that Christ was crucified in Jerusalem – as if the earth itself rose in anger. There was a similar story from the Middle Ages about how the goldfinch received its red spot. Flying down over Christ on the Cross, the bird tried to help Him by picking out a thorn from the Crown – and in this way was forever after splashed with the drop of His blood. In an enchanted world, everything seems to be telling a story. Second marriages are notoriously difficult. My new husband had been wiped out financially and emotionally by his previous marriages (yes, there was more than one). By the time I met Chris, he was barely hanging on to the house, his kids showing varying degrees of alienation. It was impressive that he wanted to try again – and so soon? Not six months after our first date and whirlwind romance, we had done it! I sometimes think we were like survivors of a shipwreck; his life was a wreck, but mine was worse. Of course, we underwent couples therapy and laughed off the obligatory (but serious) warnings about our dim hopes of survival. We were just happy to have found each other; happy to be still breathing; for, as Voltaire said in 1761: ‘[E]verything is a shipwreck; save yourself who can! … Let us then cultivate our garden …’ My first marriage had been to a Japanese man. Having spent my adult life in his country, where I spoke, thought, and dreamt in Japanese, I hoped marrying an American would be easier. After all, we shared a language and a culture. But it wasn’t easier. Marriage is tough in any language. And so, I have tried much harder this time to cultivate shared values and interests – which is challenging when you are married to an astrophysicist! ‘Hunting birds with a bow and arrow?’ Chris wondered I do love watching Chris look at art. He becomes intensely attentive, as if every nerve-ending in his body is switched on. It’s not like he’s trying to figure out the nature of galaxy evolution or doing the complicated mathematics that he does when he’s working. He just stands there before the picture, fully present. Most of the time, I have a hard time understanding what he’s thinking about. I know he can build things that go into space. And that he teaches quantum mechanics at Caltech and can perform multivariable calculus. He can even make a cat die and not die at the same time. This is mainly lost on me, which is why I love looking at art together with him. It’s something we can share, something over which we can linger, in each other’s company. That was how my husband and I started going on what we call our ‘art pilgrimages’. From the very beginning of our marriage, we spent enormous amounts of time standing side by side silently looking at Old Masters. Sometimes we might talk a bit, hold hands, and exchange a knowing smile, but mainly we stood there silently soaking it all in. Shortly after getting married, I took Chris to the Getty Museum, in Los Angeles. I was excited to share my favourite picture in the collection, Vittore Carpaccio’s Hunting on the Lagoon (c1490-95). The museum acquired the painting in 1979, from the collection of the Metropolitan Opera basso Luben Vichey and his wife. Hunting on the Lagoon (c1490-95) by Vittore Carpaccio. Courtesy the Getty Museum, Los Angeles Hunting on the Lagoon shimmers with atmospheric effects. 
Painted in azurite, yellow ochre and lead white, there are touches of costly ultramarine used for the sky and mountains, while vermilion is used on the servant’s jacket. Hunting on the Lagoon depicts a group of aristocratic gentlemen hunting from a small boat on the water. ‘Hunting birds with a bow and arrow?’ Chris wondered. Looking carefully, you can see they are shooting clay balls at what appear to be grebes. I tell him that it was apparently the custom to hunt birds in this way so as not to damage their pelts. ‘But what about those dark birds with the serpentine necks sitting one to a boat?’ he asked. I watched his eyes move to the same birds posing on pylons in the water. ‘Unmistakably cormorants.’ And the theory is, I tell him, that the birds were used for hunting fish. In Japan, you can still see this traditional way of fishing, called ukai. I am always so excited to share something of my life in Japan with Chris, even though it was in the days before we met. King James was known to have kept a large, and costly, stock of cormorants in London, which he took hunting I tell him how I watched this kind of fishing years ago. ‘It was at night by lamplight on boats that ply the Kiso River, in Aichi Prefecture.’ The birds, held by what seem to be spruce fibre leashes, were trained to dive for ayu sweetfish and deliver them back to the fishermen on the boats, I say, wishing I could show him. ‘Do you think the custom came to Europe from Japan?’ he wonders. I think it arrived from China, though that story might be made up. In 17th-century England, King James I was known to have kept a large – and very costly – stock of cormorants in London, which he took hunting. Looking at the painting, however, I thought the practice I’d seen in Japan had been altered almost beyond recognition. During the Renaissance, the lagoon in the painting must have been jam-packed with fish and mussels and clams and birds. A perfect place to spend an afternoon. But those men, with their colourful hose, with their bows and clay balls, are clearly no fishermen. It was then that Chris noticed the strange, oversized lilies protruding from the water in the foreground of the painting. It took him long enough to notice, I thought. Those flowers have driven art historians crazy for generations. ‘Don’t tell me,’ he said, ‘There must be another picture? One with a missing vase, right?’ Right he was! There is an even better-known painting by Vittore Carpaccio, Two Venetian Ladies (c1490-1510), hanging in the Museo Correr in Venice. We went to see it a few years later. And, sure enough, there is a pretty majolica vase sitting on the wall of the balcony, which seems ready and waiting for those lilies. The two works (painted on wooden panels) fit together, one on top of the other. Two Venetian Ladies (c1490-1510) by Vittore Carpaccio. Courtesy the Museo Correr, Venice/Wikipedia Before this was figured out, art historians believed the two bored-looking ladies to be courtesans. One of the reasons for thinking this was the two doves sitting on the balustrade, which are ancient symbols of Venus and romantic love. But the ladies are also shown sitting next to a large peahen, symbols of marriage and fidelity. Looking bored, with their tall wooden clogs tossed to the side, they were declared by art historians to be courtesans. Definitely courtesans. 
Like pieces of a puzzle, the matched set of paintings has now convinced art historians that these ‘ladies’ are in fact wives of the ‘fishermen’, who are themselves no longer believed to be fishermen but, rather, aristocratic Venetians out hunting waterfowl for sport on the lagoon. A great painter of dogs, Carpaccio was even better at birds. Beyond his doves, grebes and cormorants, he is perhaps best known for his colourful red parrots. According to Jan Morris writing in 2014, the Victorian art critic John Ruskin was much taken with Carpaccio’s menagerie. At the Ashmolean Museum in Oxford, there is a small watercolour drawing that is a copy of Carpaccio’s red parrot, made by Ruskin in 1875. Calling it a scarlet parrot, Ruskin wondered if it wasn’t an unknown species, and so decided to draw a picture of it in order to ‘immortalise Carpaccio’s name and mine’. Drawing of a Red Parrot and Plant from Carpaccio’s ‘Saint George Baptises the Selenites’ (19th century) by John Ruskin. Courtesy the Ashmolean Museum, Oxford It might be classified as Epops carpaccii, he suggested – Carpaccio’s Hoopoe. Chris and I were delirious to have found each other. Grateful for this chance to have our spirits reborn, we celebrated by taking multiple honeymoons that first year. And without a doubt, the most romantic was the trip we took to Venice – on the hunt to find Carpaccio’s red parrot, which, happily, one can see in the place for which it was originally commissioned: in the Scuola di San Giorgio degli Schiavoni. Baptism of the Selenites (detail, 1502) by Vittore Carpaccio. Courtesy Wikipedia Today, when introducing foreign visitors to Venice’s scuole, tour guides will sometimes compare the medieval confraternities to modern-day business associations that carry out philanthropic activities, like the rotary club. That is probably not far off the mark. Carpaccio’s great narrative cycles were created to adorn the walls of these scuole. The pictures were not merely to decorate, but there to tell stories relevant to the confraternity. Perhaps the best known of these are two of the paintings commissioned by the Scuola di San Giorgio degli Schiavoni. The red parrot that Ruskin adored is still there in one of the paintings, the Baptism of the Selenites (1502). Baptism of the Selenites (1502) by Vittore Carpaccio. Courtesy Wikipedia Chris and I barely made it in time before the small scuola closed for the day. It was hot and the air heavy in the dark interior. When the author Henry James visited the Schiavoni in 1882, he complained that ‘the pictures are out of sight and ill-lighted, the custodian is rapacious, the visitors are mutually intolerable …’ However, then he magnanimously added: ‘but the shabby little chapel is a palace of art.’ Flannery O’Connor loved her peacocks, calling them the ‘king of the birds’ Eventually locating the parrot, we marvelled at how often such exotic birds can be counted in religious paintings from the Renaissance. We assumed they must be prized like the tulips of Amsterdam during the Dutch Golden Age of paintings, coveted and displayed for their rarity. I learned only later that it was also because they were a symbol of the Virgin birth. Art historians suggest that this is due to an ancient belief that conception occurred through the ear (and parrots can speak…?) 
Another more interesting explanation is something found in the Latin writings of Macrobius, who said that when it was announced in Rome that Caesar’s adopted nephew Octavian was triumphant at the Battle of Actium in 31 BCE, at least one parrot congratulated him with: ‘Ave Caesar.’ This was seen as prefiguring the Annunciation and Ave Maria. In another painting in the scuola, Saint Jerome and the Lion (1509), Carpaccio has drawn what looked to us like an entire bestiary – including a beautiful peacock that seems to be trying to get as far away from the lion as it can. Saint Jerome and the Lion (1509) by Vittore Carpaccio. Courtesy Wikipedia Peacocks always remind me of Flannery O’Connor, who lived on a farm in Georgia with ‘forty beaks to feed’. She loved her peacocks, calling them the ‘king of the birds’. No matter how her family complained, she remained firm in her devotion. Recently re-reading her essays in the posthumous collection Mystery and Manners (1969), I learned that the Anglo tradition is very different from the Indian one, when it comes to peacocks. In India, they are viewed as symbols of love and beauty, while Europeans typically associate peacocks with vanity and pride. This notion stretches all the way back to Aristotle, who remarked that some animals are jealous and vain, like a peacock. That is why you find them aplenty in Bosch’s paintings. A warning against the pride of vanity. O’Connor knew that the peacock was a Christian symbol of resurrection and eternal life. Others concurred. The ancient Romans held that the flesh of the peacock stayed fresh forever. Augustine of Hippo tested this with the flesh of a peacock in Carthage, noting that: ‘A year later, it was still the same, except that it was a little more shrivelled, and drier.’ Thus, the peacock came to populate Christian art from mosaics in the Basilica di San Marco to paintings by Fra Angelico in the Renaissance. Perhaps this is one of the reasons I came to love peacocks so much; as, after all, I was experiencing my own kind of resurrection of the spirit with Chris. The late German art historian Hans Belting wrote about the exotic creatures found in Bosch’s triptych. Belting’s interpretation is interesting, as he views the middle panel – the eponymous Garden of Earthly Delights – as being a version of utopia. By Bosch’s day, the New World had been ‘discovered’ by Europeans – and, indeed, the painting can be dated because of the New World pineapples seen in the central panel. When Christopher Columbus set sail to the Indies, he believed, like many of the theologians of his time, that an earthly paradise existed in the waters antipodal to Jerusalem, just as Dante Alighieri described. But what is Bosch trying to say? I don’t think anyone really understands. What we do know is that the triptych was never installed in a church – but was instead shown along with exotic items in the Wunderkammer of his patrons. Albrecht Dürer, my beloved painter of owls and rhinos, visited Brussels three years after the completion of Bosch’s painting but said not one word about it in his copious journals. Was he disappointed? Scandalised? Belting thinks his silence speaks volumes, and he describes Dürer’s astonishment when visiting the castle and seeing the wild animals and all manner of exotic things from the Americas and beyond. There was a reason why the Europeans of the time called the Americas the New World, instead of just the ‘new continent’. 
For this was a revelation, not just of new land, but of sought-after minerals, like gold and silver. It was a new world of tastes. From potatoes to tomatoes and chocolate to corn, the dinner tables of Europe would be transformed in the wake of Columbus’s trip. There were animals never seen in Europe, like the turkey and the American bison. And hummingbirds. How wide-eyed those Europeans must have been. In 1923, Marcel Proust wrote that: ‘The only true voyage of discovery … would be not to visit strange lands but to possess other eyes.’ And this was how I felt coming back to California after two decades in Japan. It was also how I felt during the early days of the COVID-19 pandemic, when time took on a stretched-out quality. To feel oneself slowing down was also to discover new eyes – to begin to savour the seasons changing, the birdsong, or the peaceful sound of the rustling leaves in the palm trees. To listen to the loud rustle of the grapefruit tree just before a huge, round fruit falls smack onto the ground was like a revelation the first time I heard it. And how did I reach 50 years old and never once hear baby birds chirping to be fed – like crickets! The lockdowns became a time for me to see the world with new eyes. And it continues, wave after wave. It was during that time when our ‘birdwatching in oil paintings’ obsession, mine and Chris’s, was transformed into real-life birding. The pandemic, and lockdown, changed everything. When restrictions lifted, rather than taking off to museums in Europe, we travelled to Alaska, where we spent weeks traipsing across the tundra in Denali National Park. So often looking down at my feet, I’d marvel at the wondrous tangle of green and yellow lichen; of moss and red berries; and at a variety of dwarf willow and rhododendron, none more than an inch tall. It created a beautiful pattern, like a Persian carpet. Enchanted, I wanted to take off my shoes and feel the spongy earth between my toes. When was the last time I had walked anywhere barefoot? Even at the beach, I usually keep my shoes on. And not only that, but I had never in my life walked off-trail, much less traipsed across tundra. When I was young, I once camped along the Indus River, in India, but that was so long ago. How had I become so alienated from wild things? Life is, after all, constantly shuffling the deck, with each moment precious and unique. All those heightened moments we experienced in our favourite paintings are precisely what the great artists were celebrating. The perfect unfolding of now. And what was true in the paintings was also true out in the world. Birding alone and then later in groups, we have savoured those moments when a bird is spotted, and we all grow instantly quiet. Frantically training our binoculars on the object, it seems we are all frozen in a great hush. With laser focus, we attune ourselves to the bird, on a hair’s breadth of losing it, aware of the tiniest flitter, flutter and peep. It is enchantment. And through this, I have felt a little of how birds must have exerted power over the Renaissance imagination too. I continue to marvel at these free creatures of the air, symbolising hope and rebirth, messengers from distant lands, inhabitants of a canvas of beauty and life in this great garden of earthly delights. The two Carpaccio paintings were reunited last year in an unprecedented exhibition, Vittore Carpaccio: Master Storyteller of Renaissance Venice, at the National Gallery in Washington, DC. 
It was the first time they were displayed together since 1999, when they were both on show in Venice.
Leanne Ogasawara
https://aeon.co//essays/noticing-the-birds-in-great-paintings-taught-me-to-see-the-world
https://images.aeonmedia…y=75&format=auto
Stories and literature
In the face of an inscrutable, indifferent universe, Pessoa suggests we cultivate a certain longing for the elusive horizon
An elusive point sits on the horizon. A deep yearning stirs within to move closer to this point, perhaps in search of the unknown, perhaps in search of questions without answers. It is a yearning that will never be fulfilled. It is a point never reached. This yearning is the all-too-human inclination for our lives to somehow be different than they are, and for the universe not to be indifferent to our cares and concerns. In her essay ‘The Blue of Distance’ (2005), the US author Rebecca Solnit associates this point never-reached with the colour blue. She writes: For many years, I have been moved by the blue at the far edge of what can be seen, that color of horizons, of remote mountain ranges, of anything far away. The color of that distance is the color of an emotion, the color of solitude and of desire, the color of there seen from here, the color of where you are not. And the color of where you can never go. When combined with the longing for something absent, for something that simply can’t be, this is saudade, a Portuguese expression for a state akin to melancholic longing. A complex emotion where a melancholic grey seeps into the distant blue. Lacking any easy English translation, saudade seems to be an emotion that can be expressed only through poetry or other evocations of its melancholic longing. Whereas nostalgia is a longing for something that once existed, a person or place or experience that lives in our memory, saudade encompasses a longing for something that never was, something not attainable. Within the yearning, a sense of incompleteness exists, a feeling of loss for something we never actually had. We want, for example, to connect to the divine, to the universe, in a personal and meaningful way. We long to find meaning in our existence and our experiences – and the meaning we tend to attach to the confusion and loss we feel when this fails to happen is of some sort of providential punishment or karmic backlash. No matter how we attempt to make sense of what we experience, the indifference lingers, an unsettling realisation that nothing, ultimately, matters. We long for the things we do and say to make a difference, for the universe to respond to our call in a way that is just and kind. But it simply can’t. How can we still find solace living in such a world, where indifference is all there is, to reach a place where our yearning has not disappeared but yet has, in some way, been transformed? In her essay ‘“Saudade” and “Soledad”: Fernando Pessoa and Antonio Machado on Nostalgia and Loneliness’ (2007), the Lusophone scholar Estela Vieira provides a possible solution. She writes: ‘Saudade’ in Pessoa is a lot more related to loneliness since the absence of others is what causes the painful feelings regularly associated with nostalgia. Yet the absence is itself a creative presence populated with imagined others a lot more real than the emptiness of reality. Like all feelings, loneliness for him is nothing more than one of the sources of creation. Fernando Pessoa (1888-1935) lived what was in many ways an astonishingly modern, transcultural and translingual life. He was born in Lisbon, the point of departure for Vasco da Gama’s voyage to India as commemorated by Pessoa’s forebear, the poet Luís de Camões. Pessoa grew up in Anglophone Durban in South Africa, acquiring a life-long love for English poetry and language. 
Returning to Lisbon in 1905, which he would never again leave, Pessoa set himself the goal to travel throughout an infinitude of inner landscapes, to be an explorer of inner worlds. He published very little during his lifetime but left behind a renowned trunk containing a treasure trove of scraps, on which were written some of the greatest literary works of the 20th century, mainly in Portuguese but also substantially in English and French. Pessoa wrote poems under a variety of heteronyms, the ‘virtual subjects’ of his imagination; and also, importantly, a novel, or rather the anti-novel, The Book of Disquiet (1982), whose protagonist, Bernardo Soares, ruminates in detail on the meaning of being. Vieira’s interesting idea is to fashion a link between saudade and Pessoa’s creation of a coterie of heteronyms, virtual other selves through which he could live a multiplicity of imagined lives. If saudade is a melancholic yearning for something the universe will never provide, perhaps the very absence to which it draws our attention can be a creative opportunity, an empty space that Pessoa seeks to fill, and the invention of heteronyms is his way to fill it. And yet this doesn’t quite work. In The Book of Disquiet, for instance, Pessoa has his novel’s putative protagonist Soares – the literary vehicle through which he explores the idea of saudade – say: What I confess is unimportant, because everything is unimportant. (All quotes are from the Richard Zenith translation of The Book of Disquiet, 2002.) The indifference of the universe is not, then, a creative opportunity but instead directly confronts us with the fact that nothing we might try to create is of any importance. If what Soares expresses here is true, we find no solace in inventing heteronyms, or anything else. If everything is unimportant, then all we do is unimportant too. This is only to reinforce the sentiment Soares expresses earlier in the same passage, saying: These are my Confessions, and if in them I say nothing, it’s because I have nothing to say. Pessoa isn’t finding solace in his creations, or even in his confessions, but in his acceptance of their unimportance, an acceptance that the universe is indifferent to anything he creates. In the voice of Soares, Pessoa himself says as much: Ah, no nostalgia [saudade] hurts as much as nostalgia for things that never existed! The longing I feel when I think of the past I’ve lived in real time, when I weep over the corpse of my childhood life – this can’t compare to the fervour of my trembling grief as I weep over the non-reality of my dreams’ humble characters, even the minor ones I recall having seen just once in my pseudo-life, while turning a corner in my envisioned world, or while passing through a doorway on a street that I walked up and down in the same dream. My bitterness over nostalgia’s [saudade] impotence to revive and resurrect becomes a tearful rage against God … In saudade, which Zenith translates as ‘nostalgia’, our yearning then begins to take on a new kind of melancholy – one that takes away any kind of comfort we might have found in our attempts to seek connection with an indifferent universe, one in which the longed-for blue point on the horizon turns a blue-grey. Saudade is woven throughout Pessoa’s work. It reflects that he has found solace in the understanding that there is no meaning, that he has accepted everything as it is, an acceptance of the indifference of the universe. 
Pessoa has Soares say: The inscrutability of the universe is quite enough for us to think about; to want to actually understand it is to be less than human, since to be human is to realise it can’t be understood. Perhaps, then, the idea is that finding acceptance in indifference is a way to move closer to being fully human. Rather than the sufferings of loneliness, it is more helpful here to think about the process of grieving. Moving through denial, anger, bargaining and depression can lead us, eventually, to an acceptance that we cannot change the one we have lost and that their absence is now a permanent feature of the universe we inhabit. We try so hard to deny the indifference of the universe by finding meaning in religion, spiritual practices or our experiences. When these meaning-making efforts stop making sense, then anger, bargaining and depression surface, sometimes all at once when the wave of shock at the world’s indifference subsides. As we emerge on the other side of these emotions, a glimpse of hope appears in the form of acceptance. We can still yearn for something to fill the empty space in what feels like an incompleteness of life, but that yearning takes on a new purpose: it exposes a new humanness that before was obscured. The Dutch photographer Nanouk Prins is one who finds this connection between saudade and grief, and many of her photographs have a blue-grey tone. Photo by Sarah Seymour Here is how Pessoa (as Soares) makes the link between grief and meaninglessness: In these times of acute grief, it is impossible – even in dreams – to be a lover, to be a hero, to be happy. All of this is empty, even in our idea of what it is. It’s all spoken in another language that we can’t grasp – mere nonsense syllables to our understanding. Life is hollow, the soul hollow, the world hollow. All gods die a death greater than death. All is emptier than the void. All is a chaos of things that are nothing. If, on thinking this, I look up to see if reality can quench my thirst, I see inexpressive façades, inexpressive faces, inexpressive gestures. Stones, bodies, ideas – all dead. All movements are one great standstill. Nothing means anything to me … And in the bottom of my soul – as the only reality of this moment – there’s an intense and invisible grief, a sadness like the sound of someone crying in a dark room. In this movement of grief, Pessoa arrives at a self-awareness, an acceptance of the world’s inevitable insouciance. This allows him a kind of clarity to see the meaninglessness and inexpressiveness of everything. And it is within this clarity and stillness that solace is finally found. The longing hasn’t vanished but is now truly seen and accepted without giving it meaning or importance. He does not claim understanding, but instead embraces just how things are. To find solace here without the yearning for meaning is to find stillness and to experience it as what it is to be fully human. The French philosopher Simone Weil develops the idea of acceptance in a particular direction. ‘At the centre of the human heart, is the longing for an absolute good, a longing which is always there and is never appeased by any object in this world,’ she writes in her ‘Draft for a Statement of Human Obligations’ (1943). 
In her posthumously published Waiting for God (1950), Weil finds resolution in a new concept of attention, attention as ‘waiting’ not as ‘searching’: In every school exercise there is a special way of waiting upon truth, setting our hearts upon it, yet not allowing ourselves to go out in search of it. There is a way of giving our attention to the data of a problem in geometry without trying to find the solution, or to the words of a Latin or Greek text without trying to arrive at the meaning, a way of waiting, when we are writing, for the right word to come of itself at the end of our pen, while we merely reject all inadequate words. Rather than straining ourselves in a supreme effort to find answers, to achieve goals, to reach destinations, we should instead – and this is equally difficult – learn to wait. Waiting means making oneself receptive, and being ready to recognise a truth when it shows up. We must, in other words, stop searching for meaning or for the things that will satisfy our melancholic longings, and instead accept that all we can do is wait, with an open and ready heart, for such truths as there are to turn up. In The Book of Disquiet, Pessoa writes: All that we love or lose – things, human beings, meanings – rubs our skin and so reaches the soul, and in the eyes of God the event is no more than this breeze that brought me nothing besides an imaginary relief, the propitious moment, and the wherewithal to lose everything splendidly. Pessoa acknowledges the relief in finding meaning, but also that it is not real. The true solace is in this acknowledgment, this acceptance of its fleeting nature on the other side of grief. If, as we have argued, the visual vocabulary of colour is as appropriate as that of taste in describing the quality of an emotion, and if blue is the colour of solitude and desire, somewhere on the distant horizon of understanding, then perhaps the solace in saudade, the unfulfilled melancholic longing, is where blue begins to turn shades of grey – a colour called Payne’s grey, of landscapes much further away, a sombre atmosphere filled with distances, a blue-grey of shadows, storm clouds, and winters with no end. Payne’s Grey 14 (from series of 14, 2007-08) by George Shaw. Courtesy the Anthony Wilkinson Gallery, London, private collection, and © the artist Payne’s Grey 2 (from series of 14, 2007-08) by George Shaw. Courtesy the Anthony Wilkinson Gallery, London, private collection, and © the artist The colour that the US novelist Henry Miller attributes to the wintry streets of Paris is surely not so dissimilar to that of the Lisbon streets along which Pessoa walked. As Miller wrote in a letter to his friend, the novelist Emil Schnellock, in March 1930: It is winter and the trees do not obscure the sky. One can look between the naked boughs and observe the colors changing from rust and purple to lilac, to Payne’s gray and then to deep blue and indigo. Along the Boulevard Malesherbes, long after the crepuscular glow of the evening, the gaunt trees with their black boughs gesticulating, stretch out in infinite series, somber, spectral, their trunks vivid as cigar ash. Where is the Seine? I inquire at intervals. Tout droit, monsieur, tout droit. The Seine: a destination never found but perpetually longed for. 
Some scholars have suggested that the Portuguese term saudade derives from the Arabic word saudā, which, according to the dictionary of Hans Wehr, means ‘melancholy, sadness, gloom’. And yet the colour associated with this emotion is black, which is the root meaning of saudā, and what black lacks is the blueness in Payne’s grey, the blue of longing, which offsets the greyness of melancholy alone. Where the black of sadness and gloom – saudā – meets Solnit’s distant blue, colours blend into Payne’s grey. If saudā describes a nothingness beyond death, and the distant blue is the impossible hope of reaching what is out of reach, such as finding meaning in the meaningless, the grey of saudade is the acceptance of never reaching what sits over the horizon. It is a place we can never go back to – or, in the instance of finding meaning in an indifferent universe, a place we cannot reach. Photo by Sara Seymour Only the Portuguese have a single term for an emotion that is, nevertheless, arguably universal. More than merely loneliness or nostalgia or homesickness, saudade instead evokes a melancholic yearning for something absent, something that perhaps never was and never will be, but still haunts one’s psychological life in one’s memory and desire. A sense of loss for that which one never had; the anticipation of a future that will never be. Nowhere in literature and philosophy has this quintessential feeling been better studied than in The Book of Disquiet, Pessoa’s brilliant if enigmatic book. The question for us is this: how should one live a life in the face of such a feeling? How does one find, if not meaning, then at least solace? To our knowledge, only Vieira has provided a possible answer, linking saudade in Pessoa to his creation of heteronyms. Yet this does not do justice to the feeling, a feeling of being at sea in a life without horizons. Nor does it do justice to Pessoa, for whom solace is, rather, to be found in an acceptance that this is, after all, the human lot. In the end, saudade’s closest cousin isn’t loneliness but grief, and the solace we must hope for is akin to that of coming to terms with a loss that can never go away. Saudade permeates our desire for truth beyond what we can possibly know. In her book The Rock That Is Higher: Story as Truth (1993), the US author Madeleine L’Engle longs for a garden of Eden that she is certain once existed. She writes: We are all strangers in a strange land, longing for home, but not quite knowing where home is. We glimpse it sometimes in our dreams, or as we turn a corner, and suddenly there is a strange, sweet familiarity that vanishes almost as soon as it comes. In literature the longing for home is found in many stories of paradise, of the forgotten place where we once belonged. L’Engle yearns for something that is no more, but that she deeply believes in. Her hopeful language reflects that she has not gone through a grieving process to reach the state of acceptance that what she longs for never did exist. Her point on the horizon is still blue, but out of reach, not quite yet a Payne’s grey. In some translations of saudade, there is a beauty or enjoyment in the longing for what was. 
And in some passages of The Book of Disquiet, Pessoa’s indifference comes across as something like delight, as though he has found beauty in saudade, an enjoyment in his longing for the non-existent: The sweetness of having neither family nor companions, that pleasant taste as of exile, in which the pride of the expatriate subdues with a strange sensuality our vague anxiety about being far from home – all of this I enjoy in my own way, indifferently. Even these moments of enjoyment or bliss are fleeting. As our emotions shift while we move through the grieving process, the beauty we may see in it or experience does as well. In her essay ‘The Blue of Distance’, Solnit touches on this: If you can look across the distance without wanting to close it up, if you can own your longing in the same way that you own the beauty of that blue that can never be possessed? For something of this longing will, like the blue of distance, only be relocated, not assuaged, by acquisition and arrival, just as the mountains cease to be blue when you arrive among them and the blue instead tints the next beyond. Solnit’s views here begin to reflect those of Pessoa’s indifference – saudade without beauty or sadness or joy. The beauty of the blue on the horizon vanishes or moves further away as we move closer to it. The more we try to find meaning in the meaninglessness, in the indifference, the further away from us it becomes. When the moments of happiness or beauty in our longing fade back to indifference is when the blue on the horizon shifts to shades of grey. It is in the grey, not the blue, where we find solace in the indifference of the universe.
Jonardon Ganeri & Sarah Seymour
https://aeon.co//essays/how-to-find-a-strange-solace-in-the-indifference-of-the-universe
https://images.aeonmedia…y=75&format=auto
Environmental history
How the trees of China – fir, camphor, ironwood and nanmu – were used to build an empire that lasted for centuries
In 1676, the Qing navy engaged a fleet of the Zheng kingdom, a trading empire based in Taiwan that controlled sea lanes from Japan to Southeast Asia. Despite regarding the Zheng state as little more than a pirate organisation, a Qing captain had to respect the size of its fleet, describing its ships as ‘uncountable, we saw only masts like a forest.’ Others have written at length and in detail on the political and military aspects of the Qing-Zheng war. To me, the sheer scale of this naval battle requires us to pay attention to other places far from the coast and, indeed, to other species entirely. After all, the trees that ended their lives at the bottom of the Taiwan Strait began their lives in the mountains of southern China and Taiwan. Some had been planted by human hands mere decades before they were felled; others had fallen as seeds below their ancient ancestors, and themselves grown into multi-century-old giants before suffering two deaths – the first by axe, the second by cannonball. A standard Qing warship. Images courtesy the Staatsbibliothek, Berlin Naval timber was a strategic good in 1676, and for centuries before and after. In this era before widespread use of coal and oil, before uranium, lithium and cobalt, governments needed wood for nearly everything: firewood and charcoal to fuel smelters; timber for temples and palaces, shops and factories, bridges and dikes; but, most of all, they needed wood for ships. In an age of sail – of growing trade and explosive episodes of naval warfare – ships were necessary for exchange, defence and power projection. Exposed to repeated cycles of wetness and dryness, their hulls rotted rapidly; their masts and spars were often damaged in storms or battle; and because they were used under dangerous conditions, many ships ended up at the bottom of the sea. Finding new trees, large trees, with the right structural characteristics to make rot- and cannon-resistant hulls and replace storm-damaged masts and spars was a constant concern. Yet, as shipbuilders understood, naval timber was not a single commodity. A proverb recorded in the early 20th century reflected this reality: ‘To build a ship to last for son and grandson, [use] fir, cypress, catalpa, camphor and nanmu’ (it rhymes in Chinese: yao zao zisun chuan, shan bai zi zhang nan; 要造子孫船、杉柏梓樟楠). Timbers were assigned to different uses based on their structural characteristics. In 16th-century Nanjing, government ships were built primarily of fir and nanmu – both species used for planking and masts. In Guangdong, some ships were hulled with ironwood – a tropical wood as hard as its name suggests. In 18th-century Fujian, warships had fir hulls; cabins and rails of camphor; a paper mulberry rudder; and a miscellany of smaller components built of various other species. But for almost all these components, the do-everything wood was China fir – a fast- and straight-growing conifer that is relatively resistant to both rot and insects, and that was used for everything from hull planking to masts. Regardless of the tree species, shipyard officials used fundamentally similar procedures to turn each log into standard-sized components. By the late 1540s, the Ming shipyards in Nanjing established standard sizes for planking – one Chinese pole (zhang; 丈) by one Chinese foot (chi; 尺) by one Chinese inch (cun; 寸) – approximately 3 metres by 30 cm by 3 cm. They worked with customs officials to mark these logs before they even reached the shipyards. 
To make processing easier, each incoming log was branded with one character for each foot in circumference. Officials compiled tables for how many standard boards could be cut from each log of a given dimension. If logs came from trees that had not grown absolutely straight – with hollows, bends, knots or twists, all flaws in the eye of the carpenter – they were discounted in proportion to how many boards were lost. Some timber merchants even contracted to provide the large timbers required for masts and spars as a package, including one six-foot log, three five-foot logs, and three four-foot logs. Logging of nanmu timber in southwest China; a flying bridge; dragging logs. Images courtesy the Library of Congress. Standardisation continued over time. In the 17th century, a new formula calculated timber prices based on the estimated volume of an idealised conic log. These formulas were developed by Guo Mingzhu, the eldest daughter of a timber merchant, and only later adopted by the state, which used them until the 1950s. In 18th-century Fujian, another set of standards divided fir logs into four grades with colourful names like ‘big lucky wood’ (daji mu; 大吉木) and ‘high-seas wood’ (gaoyang mu; 高洋木). Officials compiled dimensions and processing instructions for each part of each grade of warship, providing specifications to the sawyers, joiners, caulkers and painters working the wood, and recording worker-days to two decimal places of precision. Yet shipyard specifications belied the far more complex reality of obtaining timber, not to speak of the conditions in which the trees grew. Most conifers – including the ever-present fir – were grown on plantations. Most broadleaf hardwoods – including camphor, nanmu and ironwood – were not. Fast-growing species that thrived in cleared land and tolerated human interventions in their life cycles worked well as tree crops. Slower-growing species, or trees that preferred shade and responded poorly to disturbance, could reliably be cut only from natural-growth forests, often forests of substantial age. Yet both types of tree communities were in relationships with human communities; they were just different kinds of relationships. Let us start by considering the fir plantation. The alliance between fir and foresters was one of the most successful trans-species partnerships The China fir is actually not a close relative of true firs (Abies species). Its Latin name – Cunninghamia lanceolata – derives from an English botanist who spent only a short time in the region. In China, where it is the dominant timber tree south of the Yangtze, it is classified using the same term – shan; 杉 – also used for true firs, and for several other structurally similar conifers. For these reasons, I prefer to simply call it ‘fir’. Like several other small genera of conifer, it diverged from the rest of its family in the early Jurassic, and survived the ice ages in southern China’s minimally glaciated river valleys. While the earliest human interactions with the species are opaque to history, it is likely that firs were grown on small estates by the 1st millennium CE. But it was in the 2nd millennium that humans created a new set of relations with the tree. The 11th and 12th centuries in China witnessed a growing pace of urbanisation, trade and some of the largest-scale naval warfare to that point in history. These dynamics drove up demand for timber, and southern landowners began planting firs in larger numbers.
In 1149, a new policy allowed them to claim these trees as their permanent property by surveying the uneven boundaries of plots rather than calculating their area through a simple rectilinear formula. In some areas, registered acreage doubled almost overnight, most of the new area representing forest plantations. By the mid-1500s, mountainous areas across four southern provinces featured substantial acreage of taxable forests, most of which were tree plantations. By the late 18th century, fir trees had spread into the upper reaches of the Yangtze River, the smaller watersheds draining to the southeast coast, and northern portions of the Pearl River watershed. The fir tree is now so widespread throughout this region that it is nearly impossible to determine its point of origin. Given the rapid spread of plantations across southern China, we might see the alliance between fir and foresters as one of the most successful trans-species partnerships of the past millennium. But other features of the relationship call this conclusion into question. It turns out that China fir sprouts new shoots from the trunk of a cut tree – a rare feature among conifers. These slips, transplanted across the landscape to this day, are clones rather than sexual offspring of their mothers. Because the lifetime of plantation-grown firs is often as short as 30 years, they are not just clones, but young clones. Many tree plantations were managed corporately, with shareholders buying and selling the rights to future timber harvest. To shareholders, clonal, rapid-maturing firs were the ideal, fungible investment. But this raises the question: was the multi-province assemblage of fir plantations an interspecies alliance or a clone army of child-soldiers? And indeed, if fir clones were the perfect foot soldiers for southern China’s early corporate overlords, what relationships defined the interactions between people and the other trees that ended up dockside? Let us turn next to nanmu (楠木), a broadleaf evergreen native to the Yangtze River’s interior tributaries. To builders, nanmu was, in many ways, a superior version of fir. It grew straight and tall, relatively quickly, and featured a particularly attractive grain when cut. Unlike fir, nanmu does not appear to have been planted on any scale. The logs floated out of the mountains came from trees that had grown to maturity in mixed forests featuring many other species of plants and animals, including humans. But unlike the early fir-planters, these people were generally not full-time farmers. Instead, they depended on a mix of small-scale farming, hunting, gathering and herding. Nanmu is now widely used as a shrine tree throughout the region, and may have been venerated by many of these agro-silvo-pastoralists. They also traded forest products – including nanmu timber – with outsiders. Medieval Chinese records speak of ‘wood guests’ (muke; 木客) who came out of the mountains to trade timber for textiles and metal tools. Chinese merchants were to remain in the river valleys and allow their counterparts to transport the trees out of the mountains. The story of nanmu took a sharp turn in the late 14th century, when the armies of the Ming dynasty pursued a Mongol prince into the upper reaches of the Yangtze River watershed. This area had been integrated into the Mongol empire in the 13th century, but was largely ruled by ‘native officials’ (tuguan; 土官).
As Ming armies penetrated the region to remove this rival, they likewise granted titles to non-Han rulers, recognising a degree of Indigenous sovereignty as long as these native officials in turn recognised the authority of Beijing. Part of this relationship entailed presenting timber to the court, at first on a relatively small scale. Then, in 1406, the third Ming emperor, Zhu Di, greatly scaled up the extraction of tribute. A junior son of the dynastic founder, Zhu Di had deposed his nephew to seize the throne, and had few compunctions about projecting power into the borderlands. He sent dozens of officials and thousands of forced labourers to cut and ship hundreds of thousands of nanmu trunks to Beijing. There, he built palaces on an unprecedented scale, entombing the majesty of ancient trees in lacquer and erecting them to support the enormous roofs of the Forbidden City. Just a few large trees could purchase a promotion This scale of logging could not be maintained and, after Zhu Di died, the expeditions were cancelled. But when palaces burned down in the 16th century, logging was reopened. Logging supervisors looked everywhere for nanmu: the estates of imperial princes, the graveyards of prominent families, mountains at less cultural and geographic distance from Beijing. They complained constantly of the conditions at the frontier: malaria, attacks by Indigenous groups, starvation, and the extreme difficulty of extracting enormous trees from their remote mountain homes. But, more than anything, they relied on timber tribute. Indigenous rulers could submit timber – valued at 3,000 silver taels for the highest-ranking ‘pacification commissioners’ – in exchange for promotion and elaborate regalia, including the four-clawed dragon (mang; 蟒) that represented the highest rank given to people outside the imperial family. This was, in essence, a ‘trees-for-titles’ scheme. Timber tribute probably represented the top end of the nanmu marketplace, a market that also supplied timber for shipbuilding. Based on modern data on nanmu growth rates, trees of six-foot circumference – the largest grade used at the shipyards – would have been about a century old, and worth about two silver taels. But extrapolating to larger sizes, a 12-foot-circumference log from a 200-year-old tree might be valued at 70 taels or more. Just a few large trees could purchase a promotion, while nanmu of lesser size could be sold to the merchants who supplied the shipyards. While trees-for-titles offered the court a way to lessen the expense of palace-building, it also initiated an unsustainable competition among the native potentates. Several times, the granting of mang robes to one ruler inspired his or her rivals (there were several female rulers in the region) to a spate of violent competition to find and log the best remaining nanmu. The most extreme case came in the 1590s, when Yang Yinglong, ruler of the Bozhou Pacification Commission, raised an army said to number 140,000 soldiers. Ultimately, Yang’s army was defeated by an even larger force, he immolated himself rather than face capture, and his domain was integrated under direct Ming control. While there were multiple overlapping factors leading to his uprising – including a conflict over sending troops to fight Hideyoshi in Korea – the timber tribute was clearly significant. Long after Yang Yinglong’s death, the descendants of his former subjects venerated him for protecting them from Beijing’s excessive demands. 
The Bozhou uprising represented the beginning of a violent period in southwestern China lasting nearly a century. But as soon as the Qing dynasty established control in 1681, they ordered officials back into the mountains to cut timber. These officials soon reported that the largest trees were gone, or unreachable from the rivers. As the historian Meng Zhang has shown, timber tribute was then transformed into a system of licenses granting merchants the first choice of timber on the market, but, unlike in the Ming, the only species specifically noted in the tributary regulations was fir. Supplying timber tribute to the court was nonetheless a money-losing proposition, so compradors were allowed to underwrite their official business by buying and selling their own logs on the side. Meanwhile, the Indigenous settlements of southwestern China – including people now identified as Miao, Yi and members of several other ethnic groups – were increasingly beset by settlers from the lowlands of eastern China. By 1700, the largest native offices had largely been eliminated; in the following decades, the Qing targeted the ‘Miao frontier’, one of the last large areas resisting direct control from Beijing. By the late 18th century, Miao villagers in eastern Guizhou began to plant fir trees to make up a growing share of their timber exports. To prevent interethnic violence, Beijing granted several Miao towns a joint monopoly on timber sales out of the region, a position they defended well into the 19th century. Further west, in the lands of the Nasu and Nuosu people, uprisings likewise led Beijing to establish ethno-ecological frontiers between Chinese settlers in the lowlands and non-Han groups in the uplands. But increasingly, the fringes of Sichuan, Yunnan and Guizhou began to look more like eastern China, with dense agriculture in the river valleys, and fir-based silviculture extending to the lower slopes of mountains. The distinct agro-silvo-pastoral lifeways that had protected, venerated and logged nanmu from old-growth forests were on the decline. Now let us turn to a tree native to a region south of the nanmu heartland: ironwood (Mesua ferrea – tielimu; 鐵力木). In Guangdong and Guangxi, ships built almost entirely of ironwood were renowned for their durability. Outside this region, ironwood was rare enough that it was mostly used for ships’ rudders. By the mid-1500s, a tariff station was established on the West River to tax ironwood as it came out of Guangxi. Like fir and nanmu, ironwood was graded in standard sizes ranging from one to two poles long and one to six feet in circumference. Like Yunnan and Guizhou, much of 16th-century Guangxi was controlled by Indigenous potentates, known to Beijing as ‘native officials’, so it is possible that the ironwood trade was comparable with the nanmu timber tribute, although the state of research is currently hazy. The ironwood tree population suffered in the face of rampant logging; its decline was even more precipitous than that of nanmu. In the 1530s, an ironwood-planked ship cost 500 silver taels; by the 1570s, the price had doubled, even as the cost of fir ships remained roughly constant. It grew so scarce that, in 1629, officials were dispatched to Vietnam to buy just two poles of ironwood; in 1663, another group of buyers found it only on a Dutch ship coming from Southeast Asia. As shipyard officials reported greater difficulties obtaining ironwood at market, gazetteers show a clear decline in the availability of these trees in the interior. 
In the early 1600s, Yangchun County in Guangdong had ‘lots of ironwood’ but, by the turn of the 18th century, it reported ‘a little ironwood … but none that fills an arm span; if you want larger materials, they can only be found in Guangxi.’ Other counties that previously exported ironwood simply reported that ‘there is no longer any ironwood’ or ‘ironwood is now rare.’ By the late 1750s, Pehr Osbeck, a Swedish botanist and disciple of Carl Linnaeus, reported that anchors were still made of ironwood, but Cantonese ships were now planked with fir. By the 20th century, even rudders were made from lychee or camphor. As of 2007, ironwood was so rare that the compilers of Flora of China considered it unlikely to be native to the region. A shipyard in Taiwan. Courtesy the National Palace Museum in Taiwan These three timbers from the Chinese interior – fir, nanmu and ironwood – were all used to build ships on the coast, including those that fought in the Qing-Zheng wars of the 17th century. Yet these wars themselves opened another front in the assault on the forests of Eastern Asia. In 1683, the Qing navy defeated the Zheng fleet’s ‘masts like a forest’ and seized control of Taiwan, or more properly, southwestern Taiwan – the rest of the island was inhabited by a variety of Indigenous groups who relied primarily on hunting and gathering. As the Qing solidified control of the island, 18th- and 19th-century Taiwan came to feature its own system for supplying timber to the shipyards. Two generations after the Qing conquest, in 1722, the government established a line of 514 stones to divide the western plains inhabited by Han Chinese farmers from the eastern mountains that were home to Indigenous Taiwanese hunters. Yet just three years later, the Qing built official shipyards in southwestern Taiwan, which required timber from the eastern interior, an area where logging was explicitly forbidden to Han settlers. The court’s solution was to establish another office, the ‘military works foreman’ (jungong jiangshou) with exclusive logging rights in Indigenous territory. As in the southwest, this system featured both a border between ethnic groups, and an institution that could cross the border to supply timber to the shipyards. Gifts of rice, salt, wine, pigs, guns and gunpowder were exchanged for permission to log While a variety of trees grew in Taiwan’s forests, its conifers were too deep in the mountains to be accessible to shipyard officials. In fact, Taiwan’s shipyards imported the fir and pine used for hulls and masts from the mainland. Instead, the foreman focused on the most-accessible desirable timber – camphor (Cinnamomum camphora), which grew in the hills just beyond the Han-Indigenous frontier, and was used to build cabins and the other upper structures of the ship. But, like the timber tribute in the southwest, supplying camphor timber to the shipyards was a money-losing proposition. 
To enable the office of works-foreman to pay for itself, the foremen were given a monopoly not only on timber, but on other forest products, including the highly profitable business of refining camphor hearts into terpene crystals used for medicine and ritual purposes: ‘All the camphor timber produced is purchased by the … materials office,’ noted one essay from the Danshui Gazetteer, ‘each furnace household that boils camphor in the mountains of the interior is also controlled by the materials office.’ If logging in 16th-century Sichuan and Guizhou operated under a ‘trees-for-titles’ system, Taiwan’s 18th- and 19th-century camphor monopoly was a ‘timber-for-terpenes’ arrangement. Yet the military works-foreman represented only the Qing side of the trade. In addition to paying taxes to the official materials office, furnace households relied on intermediaries, often Indigenous people acculturated to Chinese ways, Han men married to Indigenous women, or their mixed-race offspring. These go-betweens negotiated ‘mountain fees’ with the less-acculturated Indigenous people of the interior, including gifts of rice, salt, wine, pigs, guns and gunpowder, in exchange for permission to log and make camphor crystals. As the historian Faizah Zakaria has written, shared feasting ‘made kin of potential enemies’. From one perspective, the Han-Indigenous border established in the early 18th century enabled both sides to negotiate to their own advantage. From another, the limited flow of goods allowed both the Qing state and Indigenous Taiwanese to frame the exchange in terms that they understood – the former as an official monopoly, the latter as kinship sealed with gift exchange. From the perspective of the shipyards, fir, nanmu, camphor and ironwood were strategic goods. But the wood that ended up sawn into boards and nailed into hulls did not begin its life as commodity timber – at least not all of it did. To be sure, most fir trees were cut as slips and planted in uniform plots alongside their clonal sisters, their express purpose to end their lives sawn into boards and nailed to a hull. But nanmu, camphor and ironwood – and dozens of other tree species – did not respond as happily to the pressures of forest clearance and plantation. Most of these trees began their lives in ancient woodlands below their parent trees, surrounded by mixed canopies of other species. Some were venerated by the human communities living in and around their forests, others were largely ignored until they were decades, or even hundreds of years old. But then, unforeseen and unprecedented demand drove people to search out the oldest and largest of these trees – the very individuals that would previously have been objects of worship – and fell them to make ships’ hulls and imperial temples. Indigenous forests were targeted as resources for radical political campaigns Perhaps because of this escalated demand, the 15th through 18th centuries saw the institutionalisation of an old practice: drawing lines between farm and forest, and between the domains of Chinese states and those of non-Chinese peoples. These were not impermeable borders. In every place where Beijing marked a frontier limiting Han colonisation, it also created an institution with the exclusive right to cross that frontier: the timber tribute, the military works-foreman. 
While less visible in the historical records, the parties on the other side of the border – whether Yang Yinglong in Bozhou, or Indigenous villagers in Taiwan – created their own systems for managing the trans-border exchange. For hundreds of years, these semi-permeable boundaries protected the lifeways of people who depended on the forest for hunting, gathering and shifting agriculture. They also protected the other species that inhabited these forests, including trees such as nanmu, camphor and ironwood. In the 19th century, even the semi-permeable boundaries that provided some provisional level of protection for non-Han communities and wild-growth forests began to fall. Facing unprecedented pressures from Han settlers, Miao communities at the heart of the nanmu-growing region revolted against Beijing twice, in 1795-1806, and in 1854-73. Ethnic tensions in Guangxi and Guangdong, the former ironwood heartland, gave birth to the Taiping movement, a millenarian rebellion turned civil war that swept across much of southern China in the 1850s. In response to colonial pressures from Britain and Japan, the Qing opened the mountains of Taiwan to Han settlers in the late 1870s and ’80s, only to transfer the island to Japanese control in 1895, as spoils of the first Sino-Japanese War. Chinese Ship (Tosen Zu) with Listing of the Sea Route from China to Japan (c1850). Courtesy the Brooklyn Museum Over time, the Japanese Empire, and later the Republic of China (ROC, or Taiwan), worked to survey, classify and control both Indigenous forests and Indigenous people on the island. After a period of de facto independence following the fall of the Qing in 1912, the southwest faced another sometimes-violent recolonisation under the People’s Republic of China (PRC), its people surveyed and classified into the 56 official ethnic groups, its forests targeted as resources for radical political campaigns. It is only in the past few decades that both the PRC and the ROC have turned to new forms of conservation. In the meantime, the importance of naval timber declined, as iron warships displaced wooden ones in national navies, and private Chinese shipbuilding moved to Southeast Asia, where both wood and labour were less expensive. States and corporations still value forests, but largely for different purposes – as plywood and laminates; paper pulp; chemical precursors; or to grow industrial crops such as rubber. Even structural timber is now valued more by its volume than the special characteristics of different tree species. The fates of the trees reflect both the old and new realities. Ironwood is now all-but-extirpated from China, except where reintroduced from Southeast Asia. Camphor grows patchily throughout an extensive territory, often as street trees or in temple complexes, but is far less prevalent in the broader landscape. Nanmu’s range is now a fraction of the territory formerly controlled by native offices in the middle and upper Yangtze River drainage. And fir – the clonal foot-soldier of merchants and empires – is everywhere.
Ian M Miller
https://aeon.co//essays/the-tree-as-foot-soldier-of-chinese-merchants-and-empires
https://images.aeonmedia…y=75&format=auto
Information and communication
A good conversation bridges the distances between people and imbues life with pleasure and a sense of discovery
Good conversation mixes opinions, feelings, facts and ideas in an improvisational exchange with one or more individuals in an atmosphere of goodwill. It inspires mutual insight, respect and, most of all, joy. It is a way of relaxing the mind, opening the heart and connecting, authentically, with others. To converse well is surprising, humanising and fun. Above is my definition of an activity central to my wellbeing. I trace my penchant for good conversation to my family of origin. My parents were loud and opinionated people who interrupted and quarrelled boisterously with each other. I realise that such an environment could give rise to taciturn children who seek quiet above all else. But, for me, this atmosphere was stimulating and joyful. It made my childhood home a place I loved to be. The bright, ongoing talk that pervaded my growing up was overseen by my mother, a woman of great charm and energy. She was the maestro of the dinner table, unfailingly entertaining and fun. We loved to listen to her tell stories about what happened to her at work. She was a high-school French teacher, a position that afforded a wealth of anecdotes about her students’ misbehaviour, eccentric wardrobe choices, and mistakes in the conjugation of verbs. There were also the intrigues among her colleagues – how I loved being privy to my teachers’ peccadillos and romantic misadventures, an experience that sowed a lifelong scepticism of authority. My mother had the gift of making even the smallest detail of her day vivid and amusing. My father, by contrast, was a very different kind of talker. A scientist by training and vocation, he had a logical, detached sort of mind, and he liked to discuss ideas. He had theories about things: why people believed in God, the role of advertising in modern life, why women liked jewellery, and so on. I recall how he would clear his throat as a prelude to launching into a new idea: ‘I’ve been thinking about why we eat foods like oysters and lobster, which aren’t very appealing. There must be an evolutionary aspect to why we have learned to like these things.’ Being included in the development of an idea with my father was a deeply bonding experience. The idea of ideas became enormously appealing as a result. And though my father was not an emotional person – and, indeed, because he was not – I associated ideas with our relationship, and they became imbued with feeling. Perhaps my family was exceptional in its love of conversation, but all families are, to some extent, learning spaces for how to talk. This is the paradox of growing up. Language is learned in the family; it solidifies our place within it, but it also allows us to move beyond it, giving us the tools to widen our experience with people very different from ourselves. My family inculcated in me a life-long love of conversation – of sprightly, sometimes contentious, but always interesting talk that allowed me to lose myself for the space of that engagement. My pleasure in conversation has led me to think about the activity at length, from both a psychological and philosophical perspective: what makes a good conversation? What role has conversation served in history? What does talk do for us, and how can it ameliorate aspects of our current, divided society, if pursued with vigour and goodwill? Sigmund Freud began his groundbreaking work as the father of psychoanalysis by postulating that his patients’ symptoms were physical responses to traumatic events or taboo desires dating from childhood. 
He found that if these people could be encouraged to talk without inhibition – to free-associate on what they were feeling – they would eventually find the source of their problems and the cure for what ailed them. With this in view, he made talk central to his therapeutic method – hence, the ‘talking cure’. Although many of Freud’s theories have since been refuted, the talking cure has endured. Clinical psychologists still recommend talk therapy as a treatment for both generalised anxiety as well as more severe mental health issues. And though Freud’s talking cure is not, by any stretch, a real conversation – the patient talks, the analyst listens and strategically intervenes – the phrase ‘talking cure’ strikes me as a useful one in referring to the nature and use of conversation in our lives. The need for conversation is one that many people have not fully acknowledged, perhaps because they have not had occasion to do enough of it or to do it well. I am not suggesting that, in conversing, we serve as each other’s therapists, but I do believe that good talk, when carried on with the right degree of openness, can not only be a great pleasure but also do us a great deal of good, both individually and collectively as members of society. For me, one particularly useful concept derived from Freud’s talking cure is the idea of transference. In the course of therapy, Freud found that some patients felt that they had fallen in love with their therapists. Since he believed that all love relationships recapitulate what occurs within one’s family of origin, he saw these patients’ infatuation as a repetition of earlier, intense feelings for a parent that could now be analysed and controlled – directed toward more productive and transparent ends. A relationship can be over once consummated in sex. But friendships are never over after a good conversation I think this idea is relevant to our understanding of conversation as an important activity in connecting with others. Putting aside the familial baggage that Freud saw as accompanying transference, a deep sense of affection seems to be, always, part of good conversation. Surely readers can identify with that welling up of positive feeling – that almost-falling-in-love with someone that we engage with in an authentic way. I have felt this not only for friends and even strangers with whom I’ve had probing conversations but also for whole classes of students where it can seem that the group has merged into one deeply lovable and loving body. If love can be understood as important in conversation, so can desire, another element central to Freud’s thought. Sexual desire has its consummation in the sex act (a form of closure that accounts for why a poet like John Donne, among others, used ‘death’ to refer to sexual climax). Conversation, by contrast, does not consummate; it merely stops by arbitrary necessity. One may have to get across town for a meeting, pick up a child from school, or generally get on with the business of life. Such endings are in medias res, so to speak, or mid-narrative. I find it interesting that a relationship can sometimes be over once the partners have consummated it in sex. But friendships are never over after a good conversation; they are sustained and bolstered by it. The search for satisfaction by our desiring self seems to me at the heart of good conversation. We seek to fill the lack in ourselves by engaging with someone who is Other – who comes from another position, another background, another set of experiences. 
Everyone, when taken in a certain light, is an Other by virtue, if nothing else, of having different DNA. To recognise this difference and welcome it is the premise upon which good conversation is built. Conversation also helps us deal with the human fear of the unexpected and the changeable. Talk with others allows us to practise uncertainty and open-endedness in a safe environment. It offers exercise in extemporaneity and experiment; it deconverts us from rigid and established forms of belief. There is no better antidote for certainty than ongoing conversation with a friend who disagrees. Good conversation is an art that can be perfected, and the best way to do this is to converse regularly with a variety of people. As the fat man says to Sam Spade in Dashiell Hammett’s novel The Maltese Falcon (1930): ‘Talking’s something you can’t do judiciously unless you keep in practice.’ The next best thing to practising conversation is reading those authors whose writing seems to channel the spirit of good conversation or give insight into its mechanics. ‘How can life be worth living … which lacks that repose which is to be found in the mutual good will of a friend? What can be more delightful than to have someone to whom you can say everything with the same absolute confidence as yourself?’ wrote the lawyer and orator Marcus Tullius Cicero, who lived in ancient Rome in the 1st century BCE. Expanding on the subject more than 1,500 years later, in the 16th century, was Michel de Montaigne, whose pioneering work in the personal essay form is, in its intimate and meandering style, a tribute to his love of conversation. ‘[I]f I were now compelled to choose,’ he writes in the essay ‘On the Art of Conversation’, which addresses the subject directly, ‘I should sooner, I think, consent to lose my sight, than my hearing and speech’. One feels the pathos of this statement, given that Montaigne lost his most cherished friend, Étienne de La Boétie, at an early age and never ceased to mourn that loss. Indeed, some feel that the loss of La Boétie, by depriving Montaigne of his companion in conversation, accounts for the Essays, written to fill that void. The 18th century was a great age of conversation; Samuel Johnson, Jonathan Swift, Oliver Goldsmith, David Hume, Joseph Addison and Henry Fielding are among the venerable authors of the period to provide commentary on what they considered to be important for good talk. The Literary Club in London, frequented by many of these luminaries, is said to have been organised in 1764 to keep Johnson from succumbing to depression – through conversation, among other things. Conversation was one of the activities that an aspiring gentleman was expected to learn The book The Words That Made Us (2021) by Akhil Reed Amar, on the founders of the American Republic, makes the point that the American Revolution was successful in mobilising disparate people to its cause as a result of long and probing conversations among constituents across the colonies. The British were fated to lose the war, Amar argues, because George III refused to listen to, let alone converse with, his American subjects. In the 19th century, especially in the United States where shaping the self alongside shaping the country became something of a national obsession, conversation was one of the activities that an aspiring gentleman was expected to learn.
We see the publication of numerous etiquette books during this period, with titles like Manners for Men (1897); The Gentleman’s Book of Etiquette and Manual of Politeness (1860); and Hints on Etiquette and the Usages of Society: With a Glance at Bad Habits (1834) – all of which give guidance on conversation, though mostly of a utilitarian kind. In the 20th century, the most notable figure in conversational self-help was Dale Carnegie, who created an entire industry out of teaching aspiring social and business climbers based on his most famous book, How to Win Friends and Influence People (1936). Carnegie began writing and giving courses in the 1910s, and his business survived him to grow into an empire (‘over 200 offices in 86 countries’, according to Forbes magazine in 2020) with supporting textbooks, online resources, newsletters and blogs that boast the tag line: ‘Training options that transform your impact.’ The message dovetails with the US myth of upward mobility and getting ahead. Carnegie’s self-improvement programmes have an offshoot in the self-realisation movements of the past few decades. A deluge of books in recent years link conversational skills to creative and relationship goals. Having surveyed the abundant literature on conversation over the past two centuries, I find myself particularly charmed by a short but entertaining work, The Art of Conversation (1936) by Milton Wright. The book is full of citations from philosophy and literature, with thumbnail sketches of the ancient symposia and the ‘talkers of Old England’, while also exhaustively outlining conversational scenarios. In one case, the author describes a wife explaining to her husband how he should converse over dinner with his boss about his love of fishing and pipe smoking (Wright gives a verbatim account of the wife practising the conversation in advance of the dinner). In a chapter on ‘developing repartee’, Wright gives minute instruction on how to come up with a clever thought and insert it into conversation, advising: It must be prompt. It must seem impromptu. It must be based upon the same premise that called it forth. It must outshine the original remark. The author advises practising imaginary scenarios so as not to suffer l’esprit d’escalier (carefully defined for the reader: ‘you think of the scintillating remarks you could have made back there if only you had thought of them’). The book has sections on using flattery, seeking an opinion, and how to ‘let him parade his talents’. The book’s erudition combined with its unadorned acknowledgment of human vanity is charming. It is perhaps no coincidence that Wright reminds me of Baldassare Castiglione and Niccolò Machiavelli in his tone; they too were writing at a high point of their civilisation, and were both astute about human nature yet optimistic about how the individual could rise through deliberate study and strategy. And yet, even as Wright explains the levers by which one can manipulate others to become a ‘successful’ conversationalist, he ends on a surprisingly moving note that undercuts his own lessons: ‘If … you can forget yourself, then you have learned the innermost secret of the art of good conversation. All the rest is a matter of technique.’ I love this book for its unabashed willingness to put forward this contradiction. One can make one’s conversation better by following certain instructions about listening well and employing choice opening gambits, transitions and techniques for putting one’s partner at ease; one may even practise ‘repartee’.
But the secret to conversation, that of forgetting oneself, cannot be taught. It is akin to the double bind that psychologists refer to when someone tells us to ‘be spontaneous’. The admonition goes against the grain of what is involved: a state of being that happens by being swept up in the ‘flow’ of the moment. Ideally, one would want to converse with someone who is open and trusting, curious and good with words. But this is not always the case, and it often takes ingenuity and persistence to jump-start a good conversation. It is also a mistake to write off others simply because they don’t share your politics, religion or superficial values. While it is true that partisanship has become more pronounced in recent years, I don’t think this is irreparable. Probing and spirited engagement can break apart ossified patterns of thought and bring to bear a more generous and flexible view of things. I have experienced the exhilaration of having an insight in the course of a conversation that didn’t fit with my pre-existing ideas, and of connecting with someone I might otherwise have written off. Most of us fear talking about important subjects with people we know disagree with us, much like we fear talking to people about the untimely death of a loved one. And yet these conversations are often, secretly, what both parties crave. We discover new elements in our nature as we converse Finally, there is the creative pleasure of conversation. If writing and speechifying can be equated with sculpture (where one models something through words in solitary space), conversation is more like those team sports where the game proceeds within certain parameters but is unpredictable and reliant on one’s ability to coordinate with another person or persons. Words in conversation can be arranged in infinite ways, but they wait on the response of a partner or partners, making this an improvisational experience partially defined by others and requiring extreme attentiveness to what they say. Also, like sport, conversation requires some degree of practice to do well. The more one converses – and with a variety of people – the better one gets at it and the more pleasure it is likely to bring. Since conversation is, by definition, improvisational, it is always bringing to the fore new or unforeseen aspects of oneself to fit or counter or complement what the other is saying. In this way, we discover new elements in our nature as we converse. Over time, we incorporate aspects of others into ourselves as well. One could say that in the flow of conversation the distance between self and Other is temporarily bridged – much as happens in a love relationship. It is sometimes hard to recall who said what when a conversation truly works – even when people are very different and stand ostensibly on different sides of issues. Conversation is both a function of and a metaphor for our life in the world, always seeking to fulfil a need that is never fulfilled but whose quest gives piquancy and satisfaction, albeit temporarily and incompletely, to our encounters. In good conversation, there is always something left out, unplumbed and unresolved, which is why we seek more of it. Adapted, in part, from Talking Cure: An Essay on the Civilizing Power of Conversation by Paula Marantz Cohen, published by Princeton University Press, 2023.
Paula Marantz Cohen
https://aeon.co//essays/a-good-conversation-relaxes-the-mind-and-opens-the-heart
https://images.aeonmedia…y=75&format=auto
Human evolution
Our childhood is preposterously long compared to other animals. Is it the secret to our evolutionary success?
The average human spends at least one quarter of their life growing up. In the careful calculus of the animal kingdom, this is patently ridiculous. Even most whales, the longest of the long-lived mammals, spend a mere 10 per cent or so of their time growing into leviathans. In no other primate has the mathematics gone this wrong but, then again, no other primate has been as successful as we are in dominating the planet. Could the secret to our species’ success be our slowness in growing up? And if so, what possible evolutionary benefit could there be to delaying adulthood – and what does it mean for where our species is going? The search for the secret to our success is at the heart of anthropology – the study of humans and their place in the world. This most narcissistic of disciplines piggybacked on the fascination for cataloguing and collecting the entirety of the world that rose up during the colonial expansions of 18th-century Europe and the growing popularity of ‘natural laws’ that explained the workings of the world in terms of immutable truths, discoverable to any man (and it was largely open only to men) with the wit and patience to observe them in nature. Early anthropology collected cultures and set them end on end in a line of progress that stretched from fossils to frock coats, determining that the most critical parts of Man – the secrets to his success – were his big brain and his ability to walk upright. Everything we are as a species was taken to be a result of our canny forebears playing a zero-sum game against extinction, with some monkey-men outbreeding some other monkey-men. In this grand tradition, we have Man the Hunter, Man the Firestarter, Man the Tool Maker, and the other evolutionary archetypes that tell us the reason we are the way we are is because of a series of technological advances. Mother and Child (1883) by Christian Krohg. Courtesy the National Museum, Oslo. However, about 50 years ago, anthropologists made a shocking discovery: women. Not so much that females existed (though that might have taken some of the old guard by surprise), but rather that they could do quite interesting research, and that the topic of their research was not, inevitably, the evolution of Man. It was the evolution of humans, women and children included. New research reframed old questions and asked entirely new ones – ones that did not assume what was good for the gander was good for the goose, and that there might be more drivers to our evolutionary history than the simplistic models that had come before. Among these new ideas was one that had been consistently overlooked: the entire business of reproducing our species is absolutely off-the-charts weird. From our mating systems to maternal mortality to menopause, everything we do with our lives is a slap in the face to the received wisdom of the animal kingdom. After all, the pinchpoint of evolution in any species comes at reproduction. Making more of your species is how you stay in the game and, judging by the numbers, we are far and away the most successful primate ever to have walked the earth. Pioneering researchers such as Sarah Hrdy, Kristen Hawkes, and many others of this new generation finally thought to ask: is it something about the way we make more humans that has made us the species that we are? Our unlikely childhoods begin well before gametes meet.
As part of our social organisation, humans have a specific type of mating system, a form of reproduction that scaffolds the relationships between animals in our society in a specific way, with specific aims. Despite a tendency by a certain insidious strand of pseudo-intellectual internet bile to use pseudo-scientific terms such as ‘alpha males’ and ‘beta males’ for human interactions, our species is in fact rather charmingly non-competitive when it comes to mating. While it may be difficult to believe that humans are largely tedious monogamists, our pair-bonded nature is a story written in our physical beings. Not for us the costly evolutionary displays of the male hamadryas baboon, who grows his fangs to some 400 per cent the size of his female relatives’ in order to show off and fight for mates. (Male human fangs are, in fact, slightly bigger than females’ – but only by about 7 per cent, which is nothing in animal terms.) Furthermore, in animals with more competitive strategies for mating – ones where there is any extra advantage in remaining coupled, depositing sperm, or preventing other couplings from happening – evolution has provided an array of genital morphologies ranging from penis bones and spikes to outsized testes. Humans lack distinction in any measure of genitalia so far studied, though it is worth noting that most anthropologists have chosen to focus on male genitalia, so surprises may remain in store for future research. This physical lack of difference between sexes sets up a social system that is, in animal terms, weird: pair bonding. Virtually no other animals reproduce in pair bonds – only about 5 per cent, if you discount birds, who do go for pairing in a big way. But an outsize proportion of primates opt for this monogamous arrangement, about 15 per cent of species, including, of course, our own. There are a variety of evolutionary theories for why pair bonding should appeal so much to primates, including maintaining access to females that roam, supporting offspring, or increasing certainty about paternity. One prominent theory is that pair-bonded males have less motivation for infanticide, though as the anthropologist Holly Dunsworth pointed out in her Aeon essay ‘Sex Makes Babies’ (2017), this does suggest a type of understanding in primates that we don’t always even ascribe to other humans. Other theories point to female roaming requiring a pairing system so mating opportunities aren’t lost whenever she moves on. Pair bonding has emerged perhaps as many as four separate times in the primate family, suggesting that the motivation for the invention of the mate may not be the same in all monkeys. What does seem clear is that humans have opted for a mating system that doesn’t go in as much for competition as it does for care. The evolution of ‘dads’ – our casual word for the pair of helping hands that, in humans, fits a very broad range of people – may in fact be the only solution to the crisis that is the most important feature of human babies: they are off-the-scale demanding. A trivial question about furniture logistics is in fact a huge impediment to our species’ successful reproduction Our babies require an intense amount of investment, and as a species we have gone to staggering lengths to give it to them.
As placental mammals, we solved the limitations placed on babies who are gestated in eggs with a fixed amount of resources by capturing the code of an RNA virus in our DNA to create the placenta: a temporary organ that allows our embryos and foetuses to draw sustenance directly from our bodies. As humans, however, we have gone a step further and altered the signalling mechanisms that maintain the delicate balance between our voracious young and the mothers they feed off. Our species’ pregnancies – and only our species’ pregnancies – have become life-threatening ordeals specifically to deal with the outrageous demands of our babies. Gestational diabetes and preeclampsia are conditions virtually unknown in the animal kingdom, but common killers of pregnant humans thanks to this subtle alteration. Babies grow to an enormous size and plumpness, and they’re so demanding that the resources in one body aren’t enough to sustain them. They emerge into the world with large brains and a hefty 15 per cent body fat, but still unripe and unready. The question of why we have such large but useless babies – unable to cling like other primate babies can, eyes and ears open but with heads too heavy for their necks – is one that evolutionary theory has long treated as a classic moving sofa problem. As posed by the author Douglas Adams, or the popular TV series Friends, the moving sofa problem asks the question: how do you get something big and awkward through a small and awkward space? Our babies have very large heads, and our mothers quite narrow pelvises, and what seems a trivial question about furniture logistics is in fact a huge impediment to the successful reproduction of our species: this makes human birth dangerous, and mothers die giving birth at a far higher rate than in any other species. Classically, this was viewed as an acceptable trade-off between competing evolutionary demands. This is what the anthropologist Sherwood Washburn in 1960 called the ‘obstetrical dilemma’: the dangerous trip down the birth canal is necessitated by our upright posture and the tight fit required by our big brains. This widely accepted theory provided functional explanations as to why male and female hips were different sizes and why our births are so risky. Until recently, it was thought that humans had in fact developed a mitigation of this size mismatch in a unique twist performed by the baby as it travels through the birth canal, forcing the baby to emerge facing away from the mother rather than towards her front. There is one problem with this particular explanation: we are not the only species to sneak in a twist at the end of our grand pelvic-canal dive – in fact, we’re not even the only primates. Research by Satoshi Hirata and colleagues has shown that even chimpanzees, who have ‘easy’ births, do the twist. Even the pelvis size and shape differences we identified as critical in human evolution turn out to be less-than-unique. Many animals have differences between male and female pelvises that surpass those of humans, without having difficult births. Shape difference might be something that is far more ancient in the mammal line. For human hips, variation tracks many factors, such as geography, rather than just male/female divides. But human babies really do have a terrible time coming into the world, above and beyond other species, due to that tight fit. So what gives? The answer may be in that glorious pinchable baby fat.
Having precision-engineered our offspring to siphon resources from their mothers in order to build calorifically expensive structures like our big brains and our chubby cheeks, we have, perhaps, become victims of our own success. Our babies can build themselves up to an impressive size in the womb, one that comes near to being unsurvivable. But the truly fantastic thing is that, having poured so much into our pregnancies, after we hit the limit of what our babies can catabolise from their mothers’ bodies, they are forced to emerge into the world still fantastically needy. For any mammal, survival after birth calls for the magic of milk, and our babies are no different, but here we find another very unusual feature of humans: our long childhood starts with cutting off infancy early. Even accounting for differences in size, human babies are infants on the breast for a far shorter time than our closest relatives. Breastfeeding can go on for four to five years in chimpanzees and gorillas, and eight years or more in orangutans. Meanwhile, babies in most known human societies are fully weaned by the age of four, with a lot of agricultural societies past and present opting to stop around age two, and many modern states with capital economies struggling to get breastfeeding to happen at all, let alone go on for the WHO-recommended two years or more. After the first few months, we start complementary feeding, supplementing our babies with solid foods, including the rather unappealing pre-chewed food that seems to nonetheless support not just human but all great-ape infants as they grow. Our fat, big-brained offspring require a huge investment to support the amount of brain growth required in our babies’ first year, but they don’t – and can’t – get what they need to build the adult 1,200 g brain from milk alone. This is where those pair bonds come in handy. Suddenly there are two food-foragers (or chewers) to hand, which is convenient because we kick our babies off the breast quickly – but, once they’ve moved from infancy into childhood, there is yet another surprise: we let them stay there longer than any other species on the planet. Luckily for science, there is a timer built into our bodies Childhood in humans is extended, by any measure you care to use. We can look at the 25-odd years it takes to get to physical maturity (in fact, the tiny end plate of your clavicle where it meets the sternum doesn’t fully finish forming until your early 30s) and compare it with our nearest relatives, to see that we have slowed down by a decade or more the time it takes to build something great-ape sized. To find a mammal with a similarly slow growth trajectory we have to look to the sea, at something like a bowhead whale. A bowhead whale, however, which will top out at about 18 metres and around 90 tonnes, is on a trajectory of growth well beyond a piddling human. We can look at our markers of social maturity and find they are even more varied. Our individual cultures tell us very specifically when adulthood is – ages of legal responsibility, for instance, or the timing of major rituals – and these might hover near our physical maturity or they might depart from it entirely. Perhaps the most clear-cut definition describes childhood in terms of investment: it is the period when you are a net resource sink, when other people are still investing heavily in you.
One of the most fascinating things in the study of humans is our ability to extend our lens back, beyond the borders of our species, and look at the adaptive choices our ancestors have made to bring us to this state. We look at the shape of fossil hips and knees and toes to learn how we came to walk upright; we measure skulls and jaws from millions of years ago to see how we fed our growing brains. Palaeoanthropology allows us to reconstruct the steps that brought us here, and it is where we can find microscopic tell-tale signs of the journey that carried us into our extended childhood. Cast in three parts: endocranium, face and mandible of the Australopithecus africanus specimen Taung child, dating back around 2.5 million years, discovered in South Africa. Courtesy Wikipedia There are a handful of juvenile fossils in the hominin record, a very small proportion of the already vanishingly small number of remains from the species living over the past 3 to 4 million years that form the family tree that led to humans. Two of these, the Taung child and the Nariokotome boy, provide some of the best evidence for how our species evolved. The Taung child is an australopithecine dating back about 2.5 million years, and the Nariokotome boy belongs to Homo erectus, about 1.5 million years in our past. Looking at the teeth and skeletons of these fossils, we see that the teeth are still forming in the jaws, and the bony skeleton has not yet taken its final form. If our ancestors grew like modern humans – that is, slowly – then the absolute chronological age they would have been at that stage of development would be about six and 12 years old, respectively, though it would be younger if they grew more rapidly, like apes. Luckily for science, there is a timer built into our bodies: a 24-hour rhythm recognisable in the minute tracks left by the cells that form dental enamel that can be seen, perfectly fossilised, in our teeth, and a longer near-weekly rhythm that can be seen on the outside of teeth. When we count the growth tracks of enamel in the Taung child’s teeth, we can see they were closer to three than to six, and the Nariokotome boy only about eight. Our long childhood is a uniquely evolved human trait. There is one more adaptation at play in the support of our needy offspring that should be accounted for: the utter unlikeliness that is a grandmother. Specifically, it is the almost unheard-of biological process of menopause, and the creation of a stage of life for half of our species where reproduction just stops. This is outrageous in evolutionary terms and it occurs only in humans (and a handful of whales). If the goal is to keep the species going, then calling time on reproduction sounds catastrophically counterintuitive, and, yet, here we are, awash in post-reproductive females. Why? Because, despite the denigration many older women face, women do not ‘outlive’ their sole evolutionary function of birthing babies. If that was the only purpose of females, there wouldn’t be grandmas. But here they are, and ethnographic and sociological studies show us very clearly that grandparents are evolutionarily important: they are additional adults capable of investing in our needy kids. If you remove the need to invest in their own direct offspring, you create a fund of resources – whether it is foraged food, wisdom or just a pair of hands – that can be poured into their children’s children. All the unique qualities of human childhood are marked by this kind of intense investment. 
But that raises the big question. If ‘winning’ evolution looks like successful reproduction, then why would we keep our offspring in an expensive holding pattern for longer than necessary? It is only when we start to consider what this extension is for that we get close to understanding the evolutionary pressures that brought us to this state. And we actually have quite a good idea of what childhood is for, because we can see the use that other animals put it to. Primates have long childhoods because you need a long time to learn how to be a better monkey. The same principle applies to social species like crows, who need to learn a complicated series of social rules and hierarchies. We, like monkeys and crows, spend childhood learning. Growing up human is such an immensely complicated prospect that it requires not only the intense physical investment in our big brains and high-fat bodies but an extended period of care and investment while our slow-growing offspring learn everything we need them to learn to become successful adults. The cost of this investment, 20 to 30 years’ worth, is staggering in evolutionary terms. A long childhood is our greatest evolutionary adaptation. It means that we have created needy offspring, and this has surprising knock-on effects in every single aspect of our lives, from our pair bonds to our dads to our boring genitals to our dangerous pregnancies and births and our fat-cheeked babies and even that unlikely creature, the grandmother. The amount of time and energy required to grow a human child, and to let it learn the things it needs to learn, is so great that we have stopped the clock: we have given ourselves longer to do it, and critically, made sure there are more and more investors ready to contribute to each of our fantastically expensive children. What’s more, as humans, our cultures not only scaffold our evolution, but act as bore-drills to open up new paths for biology to follow, and we find ourselves in a position where the long childhood our ancestors took millions of years to develop is being stretched yet further. In many societies, the markers of adulthood are increasingly stretched out – for the most privileged among us, formal education and financial dependence are making 40 the new 20. Meanwhile, we are taking time away from the most desperate among us, placing that same education out of reach for those foolish enough to be born poor or the wrong colour or gender or in the wrong part of the world. A human child is a rather miraculous thing, representing a huge amount of targeted investment, from mating to matriculation. But given the gulfs in opportunity we are opening up between those that have and those that do not, it would benefit us all to consider more closely the childhoods we are investing in, and who we are allowing to stay forever young.
Brenna Hassett
https://aeon.co//essays/why-have-humans-evolved-to-have-a-long-journey-to-adulthood
https://images.aeonmedia…y=75&format=auto
Food and drink
How French cuisine became beloved among status-hungry diners in the United States, from Thomas Jefferson to Kanye West
Fresh off a public breakdown after making antisemitic Tweets and threats, divorcing a Kardashian, and earning 60,000 votes standing for the US presidency, Ye (né Kanye West) is an intriguing figure in global pop culture. Nothing if not complex, he has won 21 Grammy awards for his music performances and productions while maintaining an ever-growing, and increasingly horrifying, hall of shame. Who can forget him interrupting Taylor Swift’s 2009 MTV Video Music Awards win with the now memetic phrase ‘Imma let you finish’, after which President Obama deemed him a ‘jackass’? Or West piping up in 2018 to claim that 400 years of slavery was a ‘choice’ that Black people made; or his assertion two years later that Harriet Tubman ‘never actually freed the slaves’; or his recently proclaimed appreciation for Adolf Hitler? What follows here isn’t a redemption arc: you can have great cultural insight and still be a bigot. That said, amid all the ‘White Lives Matter’ T-shirts and public performances of mental breakdown, West’s canny affinity for using food as a symbol for status in the United States has gone largely unnoticed. In his catalogue of 11 studio albums, West includes more than 140 references to foods, drinks, cooking and eating. As Food Studies professors, we became interested in what he was trying to express through these references. Many of them namecheck brands West admires. There’s Grey Poupon (‘Yeezy, Yeezy, Yeezy, this is pure luxury/I give ’em Grey Poupon on a DJ Mustard, ah!’); Nobu restaurant and Whole Foods grocery (‘Huh? I swear my whole collection’s so cruise/I might walk in Nobu wit’ no shoes/“He just walked in Nobu like it was Whole Foods”’); also fast-food outlets (‘Beggars can’t be choosers, bitch, this ain’t Chipotle’ and ‘Closed on Sunday, you my Chick-fil-A’). The food rap that made the most impact on US popular culture, as measured by thousands of memes and tweets, comes from West’s track ‘I Am a God (Feat. God)’ on the album Yeezus (2013). To wit: ‘In a French-ass restaurant/Hurry up with my damn croissants.’ Expediency in service is only what a deity might expect – though many people on the internet could relate. The best of the memes superimpose West’s face over a croissant, or over concertgoers waving at him from the foot of a stage, pastries in hand. Others set him alongside Napoleon Bonaparte or the Pillsbury Doughboy. The memes caught the attention of W David Marx, a former editor at The Harvard Lampoon, who in 2013 wrote a satirical response on behalf of a fictitious ‘Association of French Bakers’ chiding West for failing to consider the time it takes to achieve excellence in boulangerie. Marx’s letter, posted on Medium, went viral, and was picked up by Time magazine and USA Today, and prompted subsequent apologies for mistaking Marx’s parody for fact. ‘The Association of French Bakers does not exist,’ wrote Alexander Aciman in Time. Yet mocking West is too easy. He is, after all, a contemporary cultural provocateur. The implicit logic of those meme-makers and critics, bent on turning his rap into a joke on its creator, suggests they took West at face value, as if he were merely complaining that an American Black man in a French restaurant doesn’t belong or is somehow out of place. But West was, in essence, exposing a politics of food consumption that, in tying French food to class status in the US, and concerning itself only with dominant Eurocentric and colonial foodways, silences the labours of indigenous chefs, chefs of colour and queer chefs. 
There’s powerful evidence that French food still aligns with class – which in the US can never be separated from race. So, when West tsks about French-ass restaurants, he reminds us that such covert forces of class allegiance persist, and that they need critiquing. West may or may not have visited Thomas Jefferson’s neoclassical plantation Monticello, in Charlottesville, Virginia. But, there, historical tourism nuts can purchase Dining at Monticello (2005), a 200-page tome that catalogues the foods and wines that Jefferson brought back to the US when he served as minister to France. It includes Jefferson’s handwritten recipe for Biscuit de Savoie, a cake created in 1358 in honour of the Holy Roman Emperor. The recipe may be in Jefferson’s handwriting, but Jefferson did not work in his own kitchen. His meals at Monticello, and later in the White House, were prepared by the enslaved James Hemings, who had been the property of Jefferson since he was eight, and who was brought to Paris for the purpose of serving his master. Hemings had been the property of Jefferson’s father-in-law, who was also Hemings’s father – making him the half-brother of Jefferson’s wife, Martha. And Hemings was the older brother of Sally Hemings, who also travelled to France under Jefferson’s direction and later bore at least six of Jefferson’s children. It must be said of this complicated family tree and its attendant power dynamics that consent would have been impossible, as enslaved women had no legal protections against unwanted sexual advances. It is also true that James Hemings, an enslaved Black man, was the first American on record to train as a chef in France. It was his work, meeting Jefferson’s insistence on the excellence of French cuisine, that set into motion the attitudes Kanye West had in his sights – namely, that a French restaurant is where classy things happen. Despite recent efforts of people inside and outside the restaurant business to make room for more chefs of colour, more female chefs, and more acknowledgment of the excellence of cuisines once deemed too ‘exotic’ to be mainstream, French dominance of the US culinary imagination in terms of high-end, high-class dining endures. Instagram reels and TikTok feeds show that posting images of oneself eating French food and, even better, eating French food in France, indexes social status. Next to predictable tourist shots of an ice-cream cone held aloft in front of the legendary Berthillon Glacier on Paris’s Île Saint-Louis, or a fork dipping into the steak tartare at Brasserie Lipp – which featured prominently in Ernest Hemingway’s posthumous memoir A Moveable Feast (1964) – foodie influencers such as ‘312 Craves’ share #frenchfood from fashionable hotspots around the US. In our Food Studies courses, there’s always a student or two eager to tell us of the magic of French baguettes, discovered while studying abroad, and who note that ‘over there, flour is less processed.’ (They happen to be right.) Like other students across the US, many of ours have binge-watched the TV fantasy Emily in Paris (2020-), which dominated quarantine-era viewing by fetishising brasseries, Champagne and omelettes au fromage. Even if the beret-sporting Emily is as clichéd as driving a Citroën 2CV, there is a measure of charm in her observing, with more than a dollop of romantic homage, that ‘The entire city looks like Ratatouille.’ Emily in Paris. 
Courtesy Netflix This reification of French food by US media merely highlights how, in a country that recently crowned the burrito bowl the most popular dish (followed by the trusty burger), French cuisine is still the epitome of high culture. As Nikita Richardson, senior staff editor for The New York Times food section, put it: ‘We just can’t quit the French.’ It turns out that there are reasons for this that even Jefferson cannot claim. They have to do with how ‘French cuisine’ came into being and, most importantly, became canonised. Of course, French people ate ‘French food’ before the French Revolution, but there was no such thing as a ‘French chef’ for anyone but the noble courts before 1789. Prior to that, Paris was home to fewer than 50 taverns meant for common travellers. The aristocracy entertained on their own estates, courtesy of their cook-servants. Sofia Coppola’s film Marie Antoinette (2006), set in the final years of the Ancien Régime, imagines the high-end fare that the queen consumed in a montage scene showing Marie and her court snacking on dozens of platters of cakes and confections – pink macarons, bite-sized redcurrant tarts and strawberry ladyfingers decorated with edible pink flowers. This was a time when peasants (imagine Marie’s chambermaid) ate mostly bread – not brioche made with eggs and milk, as Marie got, but dense loaves made with the far cheaper barley, oats and buckwheat. It’s no wonder that the ensuing grain shortage was a major cause for the revolution. One toppled monarchy, a new constitution and 30 years later, there were approximately 3,000 restaurants serving as many as 100,000 Parisians per day. These establishments entertained a new class of Parisian, who had a little money and a lot of ambition. Around this time, the world’s first celebrity chef was enjoying success in his career. Marie-Antoine Carême came from a very large, very poor Parisian family; by age 10, he was on his own, illiterate and broke. To sustain himself, he signed a six-year contract to wash dishes at a tavern called Fricassée de Lapin, on the edge of Paris. By 16, he’d landed a job at a pastry shop near the Palais-Royal. Carême blossomed under his new boss, who recognised his talent and encouraged him to learn to read and write. Young Carême spent his free hours in the library, poring over architectural drawings, then returning to the shop to reproduce them in sugar and pastry. The boss began marketing Carême’s creations as banquet centrepieces, and within two years Carême had amassed such a following that Bishop Talleyrand, Napoleon’s chief diplomat, hired him to work under his personal chef – a landing spot that allowed Carême to meet other cooks of royal regimes. From Le pâtissier royal parisien (1815) by Marie-Antoine Carême. Images courtesy the Bibliothèque Nationale, Paris In 1815, Carême left Paris to work in London, as head chef for George, Prince of Wales. He stayed three years, during which he wrote his first book, Le pâtissier royal parisien, a 482-page manual with sweet and savoury recipes and accompanying schematic line drawings. 
Published in 1834, the introduction is nothing short of prophetic: This work … will throw additional lustre on our national cookery so long and so justly esteemed by foreigners … attributed to the well-known fact that our modern cookery has become the model of whatever is really beautiful in the culinary art … and the art of French cookery, as practised in the 19th century, will be the pattern for future ages. In many ways, Carême was born at the right time. Post-revolution France was flourishing economically, creating conditions that allowed restaurants to cultivate ‘regulars’ who could afford to eat out often. Another change was technological: the printing press meant Carême’s recipes and drawings could be mass produced on the cheap. And, in a stroke of branding genius, the chef was canny enough to include a portrait of himself in his book, which helped cement him as an authority of record – and helped cement French cuisine as an important subject. Carême’s archive established the very basis of French cuisine – the ‘mother sauces’, mutable formulas that, with the entry-level know-how, create the architectural foundations of a multitude of dishes. That these sauces – béchamel, velouté, Espagnole, and tomato (Hollandaise was included a few years later) – were the invention of, among others, a 17th-century chef named François Pierre La Varenne mattered less than that they were inscribed authoritatively by Carême. In time, his cookbooks became more than guides: the food historian Alan Davidson described Carême’s mother sauces as ‘an intellectual platform for cooks to redefine their professional status’, because they broke down complex practices into practical steps and in the process became canonical. (Incidentally, ‘mother’ is also the word used for the moulds necessary to make many beloved French cheeses, including brie and Comté, as well as the term for the biofilm composed of cellulose, yeast and probiotic-dense bacteria that develops on fermenting liquids such as vinegar.) No one helped spread the word of the canon more than Auguste Escoffier, who, in the decades following Carême’s career cooking for the nobility, focused on improving everyday restaurant experiences. Escoffier was born in a village on the outskirts of Nice in 1846, where, as a teenager, he apprenticed in his uncle’s restaurant, and later reported that he was often bullied. At the time, cooking was not a profession held in high social regard: it was a hot, thankless job that typically involved disordered workspaces and even uncleanliness. Young Escoffier was drafted into the army, where he became enamoured of the military brigade system, the adoption of which changed not only his life but the lives of millions of future restaurant workers. In keeping with this military brigade system, Escoffier reorganised the canteen kitchen according to a chain of command, with the chef as ‘the general’ directing the campaign and various sub-stations flowing from his own, each with its own commander. Escoffier’s hierarchy professionalised the atmosphere of the kitchen, and many food scholars argue that its emphasis on hygiene and teamwork produced a new pride in the profession. In 1890, Escoffier formed a partnership with the Swiss hotelier César Ritz. Their first joint project was the Savoy Hotel in London, and their goal was to transform it into the place in the world to experience culinary excellence. 
From there, they moved to the Ritz Hotel in Paris, then the Carlton Hotel in London, repeating a pattern of setting up chains of command in the kitchens (in the process, they were also convicted of fraud in the form of taking kickbacks from restaurant suppliers, a reputational stain each managed to overcome). While managing smart hotel dining, Escoffier advocated a system in which the food comes out of the kitchen as the guest requests it, in the order in which it should be eaten – this was his idea of meal pacing. He also partnered with Cunard, consulting with chefs on menus for their luxury ships, thereby ensuring that French food was what the international set should demand. Escoffier was also prolific, writing more than 5,000 recipes; even today, his book Le guide culinaire (1903) remains a bible for students enrolled in cooking schools around the world. Into every kitchen Escoffier entered, the mother sauces and the brigade system followed, opening up the pleasures of French cuisine to the world, but also putting them into a kind of coffin. Every canon has its drawbacks: by nature, it is fixed. Yet despite its rigidity, the French culinary canon allows for homage and suggests that tribute itself is a kind of innovation. Enter Julia Child, who in the mid-20th century singlehandedly and successfully promoted middle-class American interest in French cuisine. Child was a native Californian, who trained in the finishing-school style of Le Cordon Bleu and was equipped with a flair for communicating her passion. Her two-volume book Mastering the Art of French Cooking (1961, 1970) demonstrated that the many strictures of French food were accessible to home cooks in the US. Having fallen for the French reverence for food after following her husband to Paris, Child believed she could ‘translate’ its pleasures to Americans and improve their lives in the process. Thanks to the post-Second World War economic boom, the average US consumer had money to spend and a social ladder to climb. Child’s cooking show The French Chef, which aired on Boston public television in 1962 and then nationally for the next 10 years, demystified Carême and Escoffier while creating room for failure: ‘Always remember: if you’re alone in the kitchen and you drop the lamb, you can always just pick it up. Who’s going to know?’ she told viewers, in the signature warbly high voice that made her caricature-fodder for comedy impresarios, as well as a national treasure. Child’s teachings nonetheless came with a cost. The culinary historian Betty Fussell describes her own slavish adherence to the formulas that Child popularised as an exercise in which she ‘spent time and money mastering the art of competitive cooking’. She spoke for legions of American women who were willing to undertake such ardours for social status. And yet Child’s influence endures. Her roast chicken recipe is an American classic – reprinted time and again in newspapers, and a staple of instructional YouTube videos and TikTok feeds – while her stuffed eggplant has become a culinary litmus test for aspiring cooks. The food journalist Alan Richman compared the fiddly task of preparing mushrooms for the aubergines’ stuffing to ‘drying laundry by hand’. Only the most dedicated of cooks would bother perfecting it. 
At the same time, Child herself became an American classic, so thoroughly representing the relationship between French food and social aspiration that when Dan Aykroyd sent her up in a Saturday Night Live skit in 1978, audiences across the US knew very well that he was mocking the fancy Frenchified aspirations of an entire nation. One of the more unusual outings for Escoffier’s brigade system in recent years was Disney-Pixar’s animated movie Ratatouille (2007) – a favourite pop-cultural amuse-bouche among our Food Studies students – a familiar bildungsroman set in the context of a French culinary fantasy. The film’s plot follows Remy, a gutter rat who dreams of becoming a chef. Who can forget Remy attempting to smoke a mushroom on a spit over an old woman’s rooftop chimney, only to be struck by lightning that causes his mushroom to puff up like a popcorn kernel? Upon tasting the result, Remy pronounces it to contain an ‘mmmm ZAP’ of flavour with ‘lightning-y’ tang. Yet to become a ‘real’ chef, Remy must mask his identity and hide under the toque of his marionette-like kitchen accomplice, Alfredo Linguini, yanking his hair to operate the man’s arms through each dice and stir. As Remy finds glory cooking at Gusteau’s, the premier restaurant in Paris, he epitomises the film’s central adage: ‘Anyone can cook.’ That Ratatouille is animated tends to obscure the direct reference to Escoffier’s brigade and, more importantly, the realistic labour issues at play in high-end kitchens where so few women are employed. The only female character in the film, Gusteau’s rôtisseur Colette Tatou, explains to Linguini that ‘haute cuisine is an antiquated hierarchy built upon rules written by stupid old men, rules designed to make it impossible for women to enter this world. But still, I’m here!’ Colette’s gender struggles make explicit the tensions in the rat’s own quest for acceptance as a chef in a French kitchen, labouring his way through the embedded hierarchies of the brigade system. In the climactic scene, Remy prepares a ratatouille that wins the heart of the film’s antagonist, the critic Anton Ego, showing that even a simple Provençal preparation – created by a gnarly rodent, at that – can be the apex of good taste. The premise, absurd yet heartwarming, makes it (like Julia Child) ripe for parody. In the award-winning film Everything Everywhere All at Once (2022), set in a multiverse uncomfortably close to our own, the lead character visits a Benihana-style teppanyaki joint, and encounters ‘Raccacoonie’ when she runs into a wild-eyed chef who has a furry ringed tail sticking out of his toque. Having come full circle, from pop culture to the French Revolution and back again, we now find homages to French cuisine across the rap music scene. Since Das EFX’s album Dead Serious (1992) made a reference to the French brand Grey Poupon on their track ‘East Coast’ (‘He’s the don, have you seen my Grey Poupon? Bust this, we roll more spliffs than Cheech and Chong’), waves of rappers have invoked the mustard to signal class status. When Kanye West demands that his server hurry up with his croissants, his point is not merely to acknowledge that many Americans align French food culture with status, but to highlight the way such alignments are traditionally closed off to Black people. 
In his US, ‘French’ equates socioeconomic status with white privilege: the fancy mustard from Dijon, dining in a ‘French-ass restaurant’, breakfasting on croissants – all of this is symbolically off-bounds to minorities. So, when West demands the expedient delivery of his pastry, he effectively yokes the cachet of French cuisine in the US to the standpoint of a Black man from Chicago who has achieved such regular access to it that he can take it down. Perhaps too, in West’s ‘hurry up’, there is a nod to the invisible violence that permeates French restaurants through the brigade system whose structure remains intact in so many high-end eating places today. For all its whimsical Disneyfication in Ratatouille, the degree of bullying in the kitchen brigade – which, ironically, Escoffier designed his system to mitigate – remains unchanged. Kitchen staff are as notoriously overworked and underpaid as ever they were. There’s no better representation of this kitchen exploitation on the screen today than the TV drama The Bear (2022-). Its portrait of a tortured chef who flees his job at a Michelin-starred restaurant in Manhattan to take over his family’s Chicago beef joint amid a crisis reveals the worst kind of abuse the brigades are famous for. ‘You are terrible at this, you are no good at it,’ the head chef hisses in a flashback scene – as our guy tries his best to concentrate on arranging tiny sprigs of fennel and orbs of fish roe atop a bite-sized sliver of grapefruit-pink salmon – silencing a thrumming kitchen staffed with sauciers, chefs de partie and cooks of all ranks. The scene is shockingly brutal, but also shockingly complicit, co-opting viewers who fundamentally believe that the French brigade system produces the most aesthetically perfect food that exists. The intensity of the scene culminates in the top chef practically spitting into his underling’s face: ‘You should be dead.’ It’s so off the charts that it tips into dark comedy. Still, the real message here is that life in the classic brigade system involves routine abuse. Why is this important? Because it asks us to consider how our experiences with food give us a sense of value and confer social membership. The Bear hit home at the same moment that the TV docuseries High on the Hog (2021-) – which traces the lineage of many so-called American foods to Africa and spotlights Black culinary expertise – won a prestigious Peabody award for excellence in storytelling and was hailed by The New York Times as ‘profound[ly] significant’. The uneasy coexistence of these two shows hints that Americans value French food as a symbol of class even as we know that our own nation’s history obscures the real foodways upon which the shaky concept of ‘American food’ rests. The US chef Alice Waters made some attempt to bridge that distance by harnessing locally grown fresh ingredients to promote a vibrant simplicity in place of complicated and time-consuming preparations. Her ‘revolution’ at the restaurant Chez Panisse in California nearly rendered the ‘mother sauces’ obsolete. By the 1980s, Chez Panisse was known across the country for its French-American hybridity, and its success snowballed into the now absurdly popular farm-to-table movement. At the time, it felt as if French cuisine would be relegated to libraries. So how did it come roaring back? It turns out that it never really went away. 
Decades of its perceived cultural supremacy meant that its social signifiers remained embedded in the US collective understanding of class, which is what Kanye West plays to so well in his work. When he demands that kitchen servers hustle with his croissants, he insists that he’s worthy of occupying a space once reserved for those at the top of the social order. ‘I belong here,’ he says. Mobilising the trope of a ‘French-ass’ restaurant being a space – if not the space – by which class legitimises itself, West’s mocking term skewers traditional French elitism by aligning it with another cultural totem – another piece of bling. His meaning is less about ‘I’ll have some of that’ and more about ‘Look at you with your fancy props and pretensions.’ That French food sits outside of US ritual food traditions (Fourth of July barbecues, Thanksgiving and Labor Day picnics) makes it ever riper for a takedown. Of course, other French products exert a serious hold on the aspirational ideals of modern Americans. There’s the enduring appeal of the Birkin bag, and the seductive status of Chanel No 5 – both evidence of the American desire for status in a French-inflected form. Both products are tied to distinct brand histories and traditions designed to appeal to outsiders like us; buying into them, according to this logic, will make us insiders. Like French cuisine itself, these overdetermined luxuries rely on a sense of exclusionary appeal, on the fact that you can’t get real versions of them elsewhere. This trope plays well with Americans who have grown up on the ubiquity of KFC and McDonald’s – places where it takes so little time for customers to get food that robots are now being installed to do the job. When West says ‘hurry up with my damn croissants’, he might as well be talking to one of them. And that’s why he says it. He’s saying that the so-called fancy treats are just as good as the ones we’ve already got. And on this point, a most unlikely person would have agreed. For Julia Child’s guilty pleasure was a McDonald’s French fry.
Kelly Alexander & Claire Bunschoten
https://aeon.co//essays/on-the-united-states-enduring-love-affair-with-french-food
https://images.aeonmedia…y=75&format=auto
Genetics
Despite advances in molecular genetics, too many biologists think that natural selection is driven by random mutations
Since 1859, when Charles Darwin’s On the Origin of Species was first published, the theory of natural selection has dominated our conceptions of evolution. As Darwin understood it, natural selection is a slow and gradual process that takes place across multiple generations through successive random hereditary variations. In the short term, a small variation might confer a slight advantage to an organism and its offspring, such as a longer beak or better camouflage, allowing it to outcompete similar organisms lacking that variation. Over longer periods of time, Darwin postulated, an accumulation of advantageous variations might produce more significant novel adaptations – or even the emergence of an entirely new species. Natural selection is not a fast process. It takes place gradually through random variations, or ‘mutations’ as we call them today, which accumulate over decades, centuries, or millions of years. Initially, Darwin believed that natural selection was the only process that led to evolution, and he made this explicit in On the Origin of Species: If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case. A lot has changed since 1859. We now know that Darwin’s ‘gradualist’ view of evolution, exclusively driven by natural selection, is no longer compatible with contemporary science. It’s not just that random mutations are one of many evolutionary processes that produce new species; they have nothing to do with the major evolutionary transformations of macroevolution. Species do not emerge from an accumulation of random genetic changes. This has been confirmed by 21st-century genome sequencing, but the idea that natural selection inadequately explains evolutionary change goes back 151 years – to Darwin himself. In the 6th edition of On the Origin of Species, published in 1872, he acknowledged forms of variation that seemed to arise spontaneously, without successive, slight modifications: It appears that I formerly underrated the frequency and value of these latter forms of variation, as leading to permanent modifications of structure independently of natural selection. – from Chapter 15, p395, emphasis added. Today, we know in exquisite detail how these larger-scale ‘spontaneous’ variations come about without the intervention of random mutations. And yet, even in the age of genome sequencing, many evolutionary scientists still cling stubbornly to a view of evolution fuelled by a gradual accumulation of random mutations. They insist on the accuracy of the mid-20th-century ‘updated’ version of Darwin’s ideas – the ‘Modern Synthesis’ of Darwinian evolution (through natural selection) and Mendelian genetics – and have consistently failed to integrate evidence for other genetic processes. As Ernst Mayr, a major figure in the Modern Synthesis, wrote in Populations, Species and Evolution (1970): The proponents of the synthetic theory maintain that all evolution is due to the accumulation of small genetic changes, guided by natural selection, and that transpecific evolution [ie, the origins of new species and taxonomic groups] is nothing but an extrapolation and magnification of the events that take place within populations and species. This failure to take account of alternative modes of change has been foundational to popular and scientific misconceptions of evolution. 
It continues to impact the study of antibiotic and pesticide resistance, the breeding of new crops for agriculture, the mitigation of climate change, and our understanding of humanity’s impacts on biodiversity. During the past century, discoveries that have challenged the gradualist view of evolution have been sidelined, forgotten, and derided. This includes the work of 20th-century geneticists such as Hugo de Vries, one of the rediscoverers of Mendelian genetics and the man who gave us the term ‘mutation’, or Richard Goldschmidt, who distinguished between microevolution (change within a species) and macroevolution (changes leading to new species). Their findings were ignored or ridiculed to convey the message that the gradual accumulation of random mutations was the only reasonable explanation for evolution. We can see the absence of other perspectives in popular works by Richard Dawkins, such as The Selfish Gene (1976), The Extended Phenotype (1982), and The Blind Watchmaker (1986); or in textbooks used in universities across the world, such as Evolution (2017) by Douglas Futuyma and Mark Kirkpatrick. However, it’s an absence that’s particularly conspicuous because alternatives to random mutation have not been difficult to find. One of the most significant of these alternatives is symbiogenesis, the idea that evolution can operate through symbiotic relationships rather than through gradual, successive changes. In the early 20th century, American and Russian scientists such as Konstantin Mereschkowsky, Ivan Wallin and Boris Kozo-Polyansky argued that symbiotic cell fusions had led to the deepest kinds of evolutionary change: the origins of all cells with a nucleus. These arguments about symbiotic cell fusions, despite being vigorously championed by the evolutionary biologist Lynn Margulis in later years, did not find a place in evolutionary textbooks until they were confirmed by DNA sequencing at the end of the 20th century. And yet, even though these arguments have now been confirmed, the underlying cellular processes of symbiotic cell fusions have still not been incorporated into mainstream evolutionary theory. The pioneering geneticist Barbara McClintock at work at the Cold Spring Harbor Laboratory, 1947. Photo courtesy the Smithsonian Institution Archives An absence that’s perhaps even harder to explain is why the pioneering work of the cytogeneticist Barbara McClintock, one of the giants of 20th-century genetics, has not been accepted as posing a viable alternative to dominant theories of evolution. McClintock won the Nobel Prize in 1983 for her discovery during the 1940s of rapid genetic changes in maize plants that were definitely not random – changes found not only in maize but, we now know, across all forms of life. After confirmation by molecular geneticists in the 20th century, discoveries like hers should have inspired a radical rethinking of evolution. Instead, these ideas were accepted only among a small circle of geneticists. The scientists of the Modern Synthesis simply could not imagine any other way for hereditary variation to occur besides Darwinian gradualism. And so, for more than a century, natural selection through random mutations has dominated public conceptions of evolution. I became embroiled in the evolution debates in the 1960s, at the beginning of my life as a scientist. 
While doing my PhD research, I isolated genetic mutations in E coli bacteria whose properties differed from standard explanations of genetic variations at the time. According to molecular geneticists in 1965, mutations were supposed to take place only in two ways: through errors in DNA replication limited to just one or two base pairs, or by deletions of longer stretches of the genome. I eventually showed that the puzzling mutations I found in E coli were caused by the insertion of long segments of genetic material, typically more than 1,000 base pairs. I wasn’t the only one to come across these long insertions. Other bacterial geneticists had isolated unusual mutations in different locations in the genome of bacteria, and they turned out to be DNA insertions too. So, in 1976, two colleagues and I organised the first meeting on DNA insertions. During this meeting, it became clear that geneticists working on bacteria, yeast, fruit flies, plants and animals were all studying the same phenomenon McClintock had discovered in her maize plants 30 years earlier. This realisation would profoundly change the way we understood evolution, and it led me to begin thinking of insertions as important evolutionary tools, rather than supposedly harmful ‘junk DNA’ as they were later claimed to be. It was at this 1976 meeting that I first met McClintock. In the early 1930s, she’d discovered that X-rays broke chromosomes, and that maize could repair the damage by joining broken ends together. If the rejoined ends came from the same breakage event, the chromosome was restored to its original configuration, but if those ends came from two different breakage events, the chromosomes were restructured. As McClintock delved deeper into chromosome breakage and repair, she uncovered processes that led to chromosome restructurings and rapid genetic changes in her maize plants. She had discovered biologically mediated genome change, but even more startling results lay ahead. In 1944, McClintock began mating maize plants with genomes configured so that both parental pollen and ovule cells contained broken chromosomes. The result of these experiments created what has been described as ‘a genetic earthquake’ in the fertilised embryos. Many could not produce viable maize plants, and those that could grow to maturity often exhibited variegated patterns of coloration in the stalks, leaves and kernels (see figure below). Maize kernels showing variegated expression of the C kernel pigmentation locus, from McClintock’s first public presentation of her work on transposable controlling elements, 1951. Courtesy of the Barbara McClintock Papers, American Philosophical Society These characteristics were associated with ‘unstable’ genetic determinants at different sites in the plants’ genome. McClintock found that unstable loci carried insertions of genetic material that were unlike any previously discovered. She demonstrated that these ‘controlling elements’, as she came to call them, had previously been dormant in the maize genome and were activated in response to ‘genome shock’ from ongoing cycles of chromosome breakage and repair. Controlling elements were not fixed at a specific site in the chromosomes and, unexpectedly, were able to move or ‘transpose’ from one place to another in the genome. When they arrived at a new location in the genome, they could alter the expression of nearby genetic material. 
This discovery revealed an entirely new mechanism of genetic regulation and variability: maize plants were rapidly changing their own genomes through transposable controlling elements (TEs). Moreover, TE changes were nonrandom in two ways. Firstly, the same DNA element could insert repeatedly at new target sites; and, secondly, TE mobility and mutagenic activity were activated by specific organismal stress conditions. Corn specimen, 1978. Courtesy of the Barbara McClintock Papers, American Philosophical Society Since the 1970s, it has become clear that all living organisms, from bacteria to plants and animals, use TEs as key evolutionary tools. There are multiple types of TEs, including purely DNA-based ‘transposons’ as well as two different types of ‘retrotransposons’, which use RNA intermediates to move to new locations in the genome. Every species has its own characteristic content of different TEs, which can accumulate to very high numbers in the genomes of more complex organisms. The human genome, for example, contains more than 30 times as much TE DNA as it does protein-coding DNA. TEs have played a major role in evolving genome systems for complex properties like immune defences, embryonic development, and viviparous reproduction in mammals. To support his random mutation ideas, Darwin quoted Carl Linnaeus’s dictum ‘Natura non facit saltum’ (nature does not make jumps) several times in On the Origin of Species, but molecular genetics proved that nature does indeed make jumps in cellular genomes – and they’re not random. Nature has invented multiple biochemical mechanisms for those jumps to take place. We might expect that McClintock’s discovery of TEs and their rediscovery across all forms of life would have unleashed serious questions for established views of evolutionary change. Instead, her findings were ignored. My own belief is that the reason for this wilful neglect lies in the basic philosophical foundations of mainstream thinking about evolution, which requires a purely physical explanation for all evolutionary processes. The fact that TEs respond to stress indicates that they are regulated biological entities that play a sensory-guided role in survival and reproduction. The notion of controlled biological processes at the core of organic evolution is plainly incompatible with a purely physicalist explanation, such as random mutations plus natural selection. Genome modifications by transposable elements may be the best-known examples of evolutionary processes that have nothing to do with the gradual accumulation of random mutations, but genome sequencing has revealed many others, equally important. They include the symbiotic cell fusion about 2 billion years ago that introduced the bacterial ancestor of mitochondria into the eukaryote progenitor cell from which all forms of complex life would eventually evolve. They include instances in which fully evolved adaptations were acquired by horizontal DNA transfers across taxonomic boundaries, rather than through vertical inheritance directly from ancestors. They also include the evolution of Lego-like proteins, in which specific regions or ‘domains’ in a protein’s chain structure can migrate between molecules and add new functionalities to the recipient proteins. Finally, they include the recent and actively growing field investigating the multifarious functions of non-coding RNA (ncRNA) molecules transcribed in part from TEs and other repetitive DNA elements. 
The origins of life are still obscure, but we assume it only happened once because all living cells have DNA genomes and use them in similar ways to encode the molecules of protein and RNA that carry out the detailed business of survival, growth and reproduction. That is, all living cells – whether bacterial, archaean or eukaryotic – have a similar genetic structure, which suggests a shared inheritance. Bacteria and archaea have been around for at least 3.4 billion years of Earth’s approximately 4.5 billion-year existence. Both cell types are generally microscopic and have no defined nuclear structure, so they are called ‘prokaryotes’, which is Greek for ‘pre-nucleus’ (‘karyon’ means kernel in Greek). Scientists have known that bacteria are a distinct form of life since at least the 19th century, but it is sobering to realise that we have known about archaea for only 46 years. In 1977, Carl Woese and his colleagues at the University of Illinois identified archaea as a separate form of life based on the sequence of its cellular nucleic acids. ‘Eukaryote’ means true kernel in Greek, and these cells are distinct from bacteria and archaea because they all have nuclei – their ‘kernel’. Many have evolved into macroscopic multicellular organisms, including insects, reptiles, plants and Homo sapiens. Eukaryotes appeared around 2 billion years ago, and we know from DNA sequencing that this important step in biological evolution included a cell fusion, or ‘symbiogenetic’ event, between a particular kind of aerobic bacterium and a particular kind of anaerobic archaeon. The bacterium was the ancestor of the mitochondria that allow our cells and those of other eukaryotes to efficiently generate energy in the presence of oxygen, known as aerobic metabolism. The anaerobic archaeon would have been a microorganism capable of thriving without oxygen. Since their union – a foundational symbiogenetic event – coincided with the appearance of oxygen in Earth’s atmosphere due to oxygen-producing photosynthesis by cyanobacteria, it is not difficult to imagine that the symbiogenesis gave rise to a cell type with enhanced energetic potential. No gradual mutations were involved. After this initial cell fusion there have been ongoing exchanges of DNA sequences between the bacterial genomes in mitochondria and the nuclear genomes of different eukaryotic organisms. Hence, a lot of basic diversity has resulted from biological abilities to transfer and integrate significant stretches of DNA intracellularly. These processes do not occur accidentally. Additional symbiogenetic cell fusions of various eukaryotic cell types with photosynthetic cyanobacteria have been well documented as the origins of several kinds of algae, green plants and other photosynthetic eukaryotes. Clearly, these important groups, on whom our lives depend for the oxygen they produce, also evolved without using random mutations – arguably at the most important stage in their evolutionary history. Symbiotic cell fusions continue to this day. Generally, one cell surrounds and engulfs the other within its membranes and places it inside the cytoplasmic interior of the cell. In eukaryotic origins, it appears that the anaerobic archaeon engulfed the aerobic bacterium that became the ancestor of mitochondria found in virtually all eukaryotic species. In 1953, scientists deciphered the double-helix structure of DNA. 
This breakthrough provided, for the first time, a molecular explanation for how genes encode proteins: the nucleotides in DNA encode the amino acids in protein. Within a decade, it was demonstrated that proteins are encoded when a DNA sequence is transcribed into a messenger RNA (mRNA) intermediate that carries a copy of the genetic sequence to the site of protein synthesis in a cell. Based on instructions in the mRNA, amino acids are sequentially added to a protein chain. This process provided a perfect molecular model for neo-Darwinian gradual evolution in which random changes in the sequence of genomic DNA could alter proteins, one amino acid at a time. Over time, the accumulation of amino acid changes would evolve proteins to carry out new functions. However, the problem with this model is that much protein evolution has not occurred through sequential changes to single amino acids. By the end of the 20th century, as scientists used DNA sequences from different organisms to trace patterns of protein evolution, they came across several surprises. Genomic data showed that some DNA sequences that encode proteins important to an organism’s specific ecological adaptation did not evolve gradually through small changes to DNA sequences that had been present in the organism’s ancestors. Instead, they had been acquired in a fully evolved state from completely unrelated life forms. For example, herbivorous beetles and nematode worms weren’t always able to digest complex plant polysaccharides. They acquired the enzymes allowing them to digest plants in a fully evolved state through horizontal DNA transfer from various bacteria and fungi. Horizontal DNA transfer occurs across all taxonomic boundaries in Earth’s biosphere and in both directions between complex and simpler organisms. For a horizontal transfer to occur, a DNA sequence has to be extracted from one organism and taken up by another. There are multiple biological mechanisms involved in these horizontal DNA exchanges, including viruses, parasites and the uptake of DNA from the environment. But none of them involves the accumulation of random mutations. Furthermore, the horizontally transferred DNA must also be integrated into the genome of the recipient organism, which involves coordinated, nonrandom biochemical activities. Besides uncovering evidence for horizontal transfers, the sequencing of DNA that encodes different proteins also led to the recognition that many proteins contain segments with very similar amino acid sequences. These segments came to be known as ‘protein domains’, and most could be linked to a particular aspect of the protein’s overall function. For example, proteins that control the transcription of DNA sequences into RNA share DNA-binding domains to recognise a shared group of signals in the genome. Moreover, protein domains are often arranged in diverse combinations to carry out different overall tasks. In other words, there is a Lego-like modularity to much of protein evolution. However, this does not align with the neo-Darwinian perspective: the random mutation hypothesis suggests that new protein functions emerge through random changes to single amino acids that make up the structure of protein, which would be an inefficient way of creating new functions. 
The processes of ‘domain acquisition’ and ‘domain shuffling’, often seen in the evolution of organisms as they become more complex, represent far more efficient ways to generate new overall functions than random changes to one amino acid at a time. Like horizontal DNA transfer, ‘domain shuffling’ involves inserting extended segments of protein-coding DNA in various locations in the genome. This means that cells can cut and splice their own DNA molecules, a capability that I call ‘natural genetic engineering’. The multidomain structure of proteins fits nicely with another major discovery of genome sequencing that was initially perplexing. The first sequences of human DNA that were studied showed that many protein-coding ‘genes’ are not continuous sequences. Rather, they are composed of coding sequences called ‘exons’ that are separated by non-coding sequences called ‘introns’. Through a process known as ‘splicing’, cells can modulate expression of their genetic material by removing introns from a gene’s mRNA ‘instructions’ and joining together exons into a coherent coding message. This is another way that cells can ‘engineer’ the structure of proteins. Cells can even combine different exons to generate proteins with different functions. In certain fish, this kind of ‘alternative splicing’ allows them to fashion protein variants for dealing with different stresses and challenges. Alternative splicing is significant because it challenges two of the foundations of neo-Darwinian thinking in the Modern Synthesis: the ‘one gene, one protein’ paradigm, and the concept of the gene as a fixed unit. In the mid-20th century, as the Modern Synthesis was taking shape, evolutionists believed they understood the fundamental relationships between the genome and the organismal characteristics it determined. Each gene encoded a single protein (‘one gene, one protein’) that determined a particular trait and constituted ‘the basic unit of life’, as the Nobel Laureate George Beadle wrote in an article for Scientific American in 1948. This concise unitary vision of genome structure and function was a crucial feature of the Modern Synthesis, and it enabled a theory of natural selection by random mutation to dominate our conceptions of evolution. Its proponents could not have anticipated that, decades later, molecular genetics would discover discontinuous protein-coding sequences and ultimately resolve each genetic locus into an elaborately formatted system of several quite distinct DNA components, rather than a fundamental unitary genetic element as envisioned by Beadle. But even more fundamental complexities in the basics of genome coding remained to be discovered. The ‘central dogma of molecular biology’, first enunciated by Francis Crick in 1958 and restated in 1970, assigned to RNA molecules the primary task of serving as intermediates carrying coding sequence data from the DNA to the ribosomes, where that data is translated into the sequence of amino acids in protein chains. According to this explanation, adaptation took place only through encoded proteins, and it became a puzzle to evolutionary biologists why the protein-coding sequences of the most complex organisms make up such a small fraction of their genomes. In our own genomes, for example, more than 50 per cent of the DNA does not code proteins (our genomes contain only about 1.5-2 per cent protein-coding DNA). 
This gave rise to notions that genomes contained large amounts of ‘junk’ DNA, which was simply reproducing itself in the name of its own ‘selfish’ survival, as popularised by Dawkins’s The Selfish Gene. Notwithstanding the central dogma that proteins execute all the business of living cells, research in molecular genomics has revealed that all cells contain many noncoding RNA molecules (ncRNAs) and, by the late 2010s, the global Encyclopedia of DNA Elements project (ENCODE) found that human cells regulated expression of ncRNAs in the same ways as protein-coding mRNAs. In other words, ncRNAs are controlled and, presumably, biologically functional. They are not ‘junk’. The ENCODE discoveries and subsequent research on ncRNAs have revolutionised our understanding of genome coding in two important ways. Firstly, many of the copies of TEs and other repeated DNA elements found in large complex genomes contribute transcription templates for the so-called ‘noncoding’ ncRNAs, which carry out a wide variety of cellular and developmental regulatory functions. Thus, there is no ‘selfish’ or ‘junk’ repetitive DNA in genomes; all regions of the human genome code for biologically significant molecules. Secondly, while the importance of ncRNAs was completely unexpected, it turns out that they influence all levels of organismal activity. These range from scaffolding the formation of multimolecular complexes in the cytoplasm and facilitating the formation of three-dimensional genome complexes in the nucleus to stimulating the reprogramming of terminally differentiated tissue cells with limited growth potential into pluripotent stem cells. New functionalities for ncRNAs emerge daily, telling us that this class of molecules has enormous structural and functional diversity. The rapidly expanding catalogue of functions shows that, through ncRNAs, genomes encode biologically functional molecules other than proteins. It is possible that ncRNAs even represent a higher level of biological control than proteins. In an intriguing 2013 study, the molecular biologists Gangiang Liu, John Mattick and Ryan J Taft found that the genome content of non-protein-coding DNA relates better to organismal complexity (defined by number of different cell types) than protein-coding DNA, which indicates that more complex organisms have a higher proportion of non-coding DNA. In any case, the discovery of functional ncRNAs in the genome completely undermines the ‘selfish gene’ arguments for evolution by Dawkins and similar thinkers that rely on random mutation and natural selection. While the revelations of molecular biology and genome sequencing document how cells manipulate and utilise their genomic DNA in evolution, they do not tell us about the actual biology of how new species come into being. That knowledge comes from a field that has long been treated by mainstream scientists as a digression from the serious business of evolutionary biology. It turns out that it has been known for at least seven decades that mating between individuals from distinct but closely related species often leads to the rapid formation of new species. In 1951, George Ledyard Stebbins, a leading proponent of the Modern Synthesis, described hybrid speciation as ‘cataclysmic evolution’ to emphasise the speed with which it occurs. 
So-called hybrid speciation has been responsible for the evolution of many crop plants, such as wheat, rice, potatoes, rapeseed and cotton. The evolutionary naturalists Peter and Rosemary Grant and their colleagues have also observed hybrid speciation occurring in real time in the wild among Darwin’s finches on the Galápagos Islands. Genome sequence analysis provides growing evidence that hybrid speciation is widespread in nature. The biological reason may be that interspecific crosses are most likely to occur as a stress response when one of the mating partners comes from a population in such severe decline that it could not find a mate from its own species. Interspecific hybridisation has been important in the evolution of the ability to ferment lager beer When a hybrid forms, it typically has a highly unstable germline genome, characterised by increases in chromosome rearrangements and activation of TE mobility to new genomic locations, as well as an increase in TE numbers within the genome, sometimes resulting in a significant increase in genome size and ncRNA abundance. Since this novel genome configuration can be different from either of the hybrid’s parental species, there is meiotic incompatibility and a barrier to interbreeding, which is a classic feature in the definition of a species. In other words, within a small number of generations, descendants of the initial hybrid constitute a newly evolved species with novel adaptive characters and reproductive isolation. Hybrid speciation has now been documented extensively in eukaryotes ranging from yeasts to higher plants and animals, which means this form of rapid speciation is not an accidental consequence of ‘improper’ mating. Instead, it constitutes a complex macroevolutionary response that has proved adaptive and been stably maintained for the approximately 2 billion years of eukaryotic history. In many cases of hybrid speciation, the novel hybrid genome undergoes a whole genome duplication (WGD), involving the duplication of all chromosomes. WGD does not take place through random mutation but rather by control over cellular reproduction. WGD creates a germline genome with two copies of every chromosome so there are no pairing problems to disrupt meiosis, gamete formation and fertility. In addition, WGD generates extra copies of every genetic locus so that formation of novel protein domain arrangements or TE-mediated recruitment of a genetic locus into a novel genome expression network does not result in any loss of pre-existing functions. It is very likely that hybrid speciation and resulting WGDs have played key roles throughout eukaryotic evolution. Analysis of genomes reveals many duplicated chromosome segments and thus tells us that WGDs have been critical steps in the evolution of yeast, diatoms, plants and animals. In yeasts, for example, interspecific hybridisation has been important in the evolution of novel practical applications, such as the ability to ferment lager beers and Belgian ales. In animals, the ancestral vertebrate genome went through two rounds of WGD since their divergence from more primitive tunicates. The two vertebrate WGDs explain why we and other vertebrates have up to four closely related copies of many highly evolved genetic regions, such as homeobox and major histocompatibility complexes critical to embryonic development and immune defences. By amplifying the repertoire of physiological responses, WGDs contribute to greater biological complexity and adaptive success. 
So how are we to understand Darwin’s legacy today? Darwin was more nuanced in his ideas than his neo-Darwinist followers and was willing to acknowledge hereditary variation ‘independently of natural selection’. In The Variation of Animals and Plants under Domestication (1868), he described a pangenetic theory of inheritance of acquired characteristics by means of what he called ‘gemmules’ – particles that pass from parents to offspring. Today, the inheritance of acquired epigenetic states and their transmission across generations by means of extra-cellular vesicles (the 21st-century equivalent of Darwin’s gemmules) is subject to widespread and vigorous experimentation. In The Descent of Man, and Selection in Relation to Sex (1871), Darwin formulated his theory of sexual selection, which postulates an active evolutionary role for evolving organisms. Clearly, Darwin remained open to new ideas, even when they contradicted his earlier pronouncements. Not so with Darwin’s 20th-century followers. Advocates of the neo-Darwinian Modern Synthesis misrepresented the range of Darwin’s theories of evolution by narrowing them down to just two of his contributions – gradual variation and natural selection – and claimed that this simplification could explain all of evolution. This was an example of what McClintock called the ‘now explanation’, meaning a set of ideas based on contemporary science that is taken as the final and complete understanding of a complex subject, such as evolution. That idea of completeness was implicit in the name ‘Modern Synthesis’ and was often articulated by its advocates. But science is not static. Discoveries such as transposable elements, horizontal DNA transfers, Lego-like protein evolution and the multifarious roles of ncRNAs cannot be anticipated. Evolutionary biologists share a responsibility to prepare their students for inevitable surprises. By turning evolutionary variation from random accidents to biological responses, 21st-century molecular genetics and genomics have revealed that living organisms possess tremendous potential for adaptive genome reconfiguration. For evolution scientists, this revelation poses an important set of obligations. Those obligations include reorienting our studies of adaptive variation towards learning how deeply genome change is integrated with biocognitive sensory responses. This new evolutionary paradigm will require a more organic mode of research that combines genomics, physiology and cognitive science. For some philosophers of science, 21st-century evolutionary biology will require rethinking all the purely mechanical physics-based assumptions they have held about life. Biologists will have to incorporate as foundational a recognition that rapid genome reorganisation is not only a feature of all organisms but, evidently, has proved essential for the survival of life on an ecologically diverse and dynamic planet.
James A Shapiro
https://aeon.co//essays/why-did-darwins-20th-century-followers-get-evolution-so-wrong
https://images.aeonmedia…y=75&format=auto
The ancient world
The name ‘Eutychis’ was etched into a wall 2,000 years ago. Finding out who she was illuminates the dark side of Rome
‘Eutychis, a Greek lass with sweet ways, 2 asses.’ This pithy graffito advertising sex for sale comes from the walls of Pompeii. The ancient Roman city was already an old town when it was destroyed in the eruption of Mount Vesuvius in August 79 CE – and thus preserved for posterity. Located on the Bay of Naples, near the mouth of the river Sarno, the site shows early signs of Etruscan culture, though the area was later settled by Oscan-speaking Samnites, who began the town’s real growth after around 200 BCE. The land around Pompeii was fertile, and the city and region grew wealthy. ‘Eutychis, a Greek lass with sweet ways, 2 asses’; graffito at the entrance to the Lupanar, Pompeii. Photo courtesy Pompeii Sites As Rome expanded its power throughout Italy, Pompeii became a Roman city, though one that retained a diverse population. We can imagine a busy place of some 12,000 people, rich and poor, free and enslaved, of public squares, fountains and gardens, fine houses and poorer dwellings, taverns, shops and workshops, and a stone amphitheatre for the provision of large-scale public entertainment. There would have been a clamour of Oscan, Greek and Latin, and all the activities we would expect from a thriving town – politics, business, love, crime. Graffiti is one of the most exciting kinds of evidence preserved for us by the destruction of Pompeii, because it comes not from the literature of the elite, or the inscriptions of the powerful, but from a wider cross-section of society. The Eutychis graffito gives us a woman’s name, an ethnicity, a price, the hint of a good time to be had – and suggests a seamy side to the ruined town now frequented by inquisitive tourists and keen culture-vultures. It was written on the vestibule wall of a well-to-do house owned by two freedmen, the Vettii, which is perhaps best known to the world for its painting of the well-endowed Priapus weighing his member on a balance against a bag of coins. While brief and to the point, this announcement, calling out to us from nearly 2,000 years ago, can set us on a journey to understanding more about the life of Pompeii’s haves and have-nots. At the same time, it may well leave us with more questions than answers about Eutychis herself and the prostitutes of Pompeii. The Lupanar brothel, Pompeii. Photo courtesy Wellcome Images Pompeii, often seen under the bright sun with hordes of other visitors, does not hide its darker side – in fact, the single purpose-built brothel identified in the city, known as the Lupanar, is one of its most popular attractions. The sexy frescoes are one highlight. Eight can be seen above the doorways of the little cubicles with their masonry ‘beds’. Five or six are female-male sex scenes, another shows a woman standing next to a reclining man as she points at an erotic picture, and the last depicts the god Priapus with two erect phalluses. These show something very basic and timeless that we have in common with ancient Pompeiians – sex – but they also titillate the visitor and sometimes prompt dirty jokes from both guides and visitors. The frescoes presumably indicate the kind of activities that were available to customers, and helped in creating an erotically charged atmosphere. Masonry bed at the Lupanar; erotic fresco from the Lupanar; Priapus depicted with two penises, fresco at the Lupanar. 
All courtesy Wikipedia. Thanks to the graffiti in the brothel, we even know the names of some of the women who worked there: Anedia, Aplonia, Atthis, Beronice, Cadia, Cressa, Drauca, Fabia, Faustilla, Felicla, Fortunata, Habenda, Helpis, Ianuaria, Ias, Mola, Murtis, Myrtale, Mysis, Nais, Panta, Restituta, Rusatia, Scepsis, Victoria, and the daughter of Salvius. Eutychis does not appear in the list, although it might well be that those were working names; some of them appear in graffiti elsewhere in town. They were on display naked with prices, to be perused, evaluated and chosen by the male clientele How would Pompeiians and other visitors have experienced the brothel? We can learn something of this from a fragmentary literary text called The Satyricon, written in the 1st century CE by an elite Roman called Petronius. In one scene, the male character Encolpius has become lost in town. An old woman tricks him into entering the brothel, whereupon: I noticed some men and naked women walking cautiously about among placards of price. Too late, too late I realised that I had been taken into a bawdy-house. I cursed the cunning old woman, and covered my head, and began to run through the brothel to another part, when just at the entrance Ascyltos met me, as tired as I was, and half-dead. It looked as though the same old lady had brought him there. I hailed him with a laugh, and asked him what he was doing in such an unpleasant spot. He mopped himself with his hands and said: ‘If you only knew what has happened to me.’ ‘What is it?’ I said. ‘Well,’ he said, on the point of fainting, ‘I was wandering all over the town without finding where I had left my lodgings, when a respectable person came up to me and very kindly offered to direct me. He took me round a number of dark turnings and brought me out here, and then began to offer me money and solicit me. A woman got threepence out of me for a room, and he had already seized me. The worst would have happened if I had not been stronger than he.’ Petronius was writing comedy, but there is no reason to dispute the incidental details, which can bring the Lupanar to life. His description suggests brothels would be located in more out-of-the-way parts of town and were not necessarily identifiable from the outside; the prostitutes and punters were screened from the outside world by a curtain. It also reveals that people could be enticed or tricked into visiting – presumably chaperones who drummed up trade got a fee. The prostitutes themselves were on display naked with prices, to be perused, evaluated and chosen by the male clientele. Cubicles or rooms could be rented out for private use. The Roman poet Horace wrote about men’s choice of sexual partners in one of his satires, where he is pointing out the follies of some men – especially in hankering after or having affairs with elite or married women. He suggests that prostitutes are a much more sensible choice when a man has need of sex. For one thing, their faces and bodies are visible, he says. In contrast to respectable women, whose bodies were well covered, prostitutes’ clothes could be revealing, allowing the man to view what he might want to buy and use. And, during the encounter, Horace says, a man might call the prostitute by any name – she could be expected to cater better to a man’s fantasies. Horace, at least in character, preferred these women to be fair and natural, smartly turned out, prompt and inexpensive. 
The Satyricon begins to fill in the details of the lives and environment of some prostitutes – those who worked in brothels, at least – but, so far, from the text and the paintings we have something of a light-hearted view of what went on. However, the reality of the women in the brothel, naked and carrying their price placards, was a grim one: their bodies put to use for the profit of the brothel’s owners, their physical and emotional work performed in tiny open cubicles or sex booths. Most of them were slaves, who had little choice in what they were doing, at the mercy of their owners and customers. Poorer free women too were vulnerable and had probably been driven to prostitution by necessity. About a fifth of the women’s names in the brothel indicate they were free. Slavery was an accepted institution in the Roman Empire, and slaves of all kinds – agricultural workers, urban house slaves, labourers, miners, teachers (and prostitutes) – were everywhere. Some few slaves may have lived relatively privileged lives, and others had a degree of independence in their work and life, within the confines of being owned; some could even hope to be made free. But there are also sombre reminders of the less fortunate. Columella, also writing in the 1st century CE, explained how slaves on the farm should be treated and managed. He recommended their constant supervision and physical restraint in chains and in slave prisons (ergastula), as well as keeping control over where they could go, whom they could see, and also their bathing, religious practices and sex. What was the regime like for slave prostitutes in the Lupanar? Slaves often had no space of their own but were simply part of the furniture Enslaved people might be denied a love or sex life of their own but also be forced to reproduce (a source of new slaves), while female slaves might be forced into sex to help control male slaves. Sexual abuse and rape by owners was one particular vulnerability of slaves, female and male, child and adult. Many slaves could expect to be used sexually by their owners as a matter of course, perhaps household slaves in particular given their proximity. In another satire, Horace gives the line: ‘When your organ is stiff, and a servant girl or a young boy from the household is near at hand and you know you can make an immediate assault, would you sooner burst with tension?’ He also wrote an ode on the topic, advising his friend not to be ashamed of loving his slave when Achilles and other heroes did the same. A gold armband from Pompeii hints at this kind of relationship; it carries the legend ‘from the master to his slave girl’. Yet as well as perhaps being an earnest gift, it was also a reminder of who was who in the relationship. Sometimes, slaves may have been able to leverage sex to their advantage. Archaeology furnishes more tangible evidence of the life of the enslaved. The relative absence of slave quarters in elite houses suggests that slaves often had no space of their own but were simply part of the furniture. They may have hidden and slept where they could, including making use of the spaces under staircases and dark and unpleasant cellars. These cellars could have other uses too. At Pompeii, in the Villa of the Mosaic Columns, the skeleton of a slave was found in a cellar with iron shackles anchored into the ground. This seems to have been a slave prison. 
Leg irons were found in a cupboard in the House of the Venus in Bikini, perhaps a half-hidden but lingering threat to coerce slaves into good behaviour. Beatings and whippings and the threat of violence were commonplace means of control and punishment by slaveowners. There was not only the physical pain involved, but also the humiliation of such a personal affront, which the victim was powerless to prevent. Compulsion and coercion were enforced psychologically as well as physically. Perhaps the Lupanar had its own ‘bouncers’, ready to deal with any trouble from customers or from the women themselves. Enslaved people did resist and some did manage to run away. One nameless slave from Bulla Regia, in north Africa, was put in a lead collar inscribed: ‘This is a cheating whore! Seize her because she escaped from Bulla Regia.’ The collar was found with a skeleton in the Temple of Apollo. Nobody knows the name of the woman who wore it. Did she die seeking shelter in the abandoned temple? Was this her first attempt to flee or had she tried before and then been collared? This woman is almost lost in history, unknown apart from the symbol of her captivity. Copper alloy tag that was attached to the collar of an enslaved person, inscribed with a demand to return the wearer to the slave master at his estate in Rome, 4th century CE. Courtesy the British Museum There are some 45 known slave collars. Many mention the name, occupation and location of the slave’s owner, but never the slave. For example, a bronze tag from a slave collar from Rome announced: ‘Hold me because I have run away and return me to the Caelimontium to the house of Elpidius, vir clarissimus, to Bonosus.’ Its other side said: ‘Hold me and return me to the Forum of Mars, to Maximianus the antiquary.’ It seems the tag had been reused. An iron collar from Rome also offers a reward: ‘I have run away; hold me. When you have brought me back to my master Zoninus, you will receive a gold coin.’ So who was Eutychis? Was she a prostitute, an enslaved woman, both or neither? Was she forced into selling herself? Let us think first of her name. Eutychis is a Greek name that roughly translates to ‘fortunate’, and was in use throughout the Greek-speaking world. Although the graffito surely refers to a real person, we don’t know whether it was her real name, or a slave name, or a working name. If she was a free woman working as a prostitute, she may have chosen Eutychis as her pseudonym; if she was a slave, the name may have been given to her by her owner. Renaming a slave with something cheerful and Greek was a common practice of the Romans. This act of renaming an enslaved person could affect them in various ways. It robbed the enslaved person of a key aspect of their own identity and replaced it with a sometimes cruel name that emphasised their status as property. A slave prostitute named ‘fortunate’ or ‘lucky’ was probably anything but. Another slave in the House of the Vettii was called ‘Eros’ (‘desire’). The graffito that mentions him reads: ‘Eros likes to be sexually passive,’ but it was later scratched out. Nowadays, we might see this as an aspect of coercive control, an ongoing rite of abuse and humiliation. However, renaming could also allow the enslaved person to retain a separate, inner ‘core’ identity linked with their own ‘private’ name. An unhappy possibility is that we know ‘Eutychis’ only by the name her owner gave her, and she remains, in this sense, invisible. The second part of the graffito stresses Eutychis’ Greekness. 
Should we read the label of ‘Greek’ simply as a statement of fact, or could it have a different significance? If the graffito were written by her owner or a pimp, then it could be that both the name and the ethnic label were given to increase Eutychis’ appeal by generating a sense of exoticism – although Pompeii had quite a mixed heritage and attracted all kinds of visitors. Were Greek women fetishised as sex objects by some Pompeiian males? The fetishisation and sexualisation of groups of women (and men), based on ethnicity or race, is well known still, so it is not impossible. A prostitute would have had to appear cheerful, coy or aroused, depending on the circumstances On the other hand, perhaps ‘a Greek lass’ would have appealed to Greeks travelling through the town or Greek-heritage Pompeiians, such as Mousaios or Epagathus, named in graffiti from the Lupanar, or Pyrrhus or Chius, known in graffiti from the Basilica at Pompeii. Mousaios’ name was written in Greek rather than Latin, one of a number of Greek graffiti found around the city. Either way, the choice and statement of both her name and ethnicity could have been deliberate business strategies rather than factual biographical details. The advert also stresses Eutychis’ ‘sweet ways’ (moribus bellis). This is found on a number of graffiti in Pompeii. In the Lupanar, we find ‘Restitua with sweet ways’. Other prostitutes, such as Spes, Successa, and Menander, a man, are so labelled. This label suggests a good time in an indirect fashion, perhaps good company as well as a sexual service. Indeed, in her research on the public and private lives of Pompeian prostitutes, Sarah Levin-Richardson reminds us that prostitution could also involve emotional as well as physical labour. A prostitute would likely have had to run the gamut of emotions, appearing cheerful, coy or aroused, enticing and passionate, pliable or dominant, depending on the circumstances. This contrasts with some of the other graffiti around the town that elaborates on the specific services offered, especially fellatio and sex. Finally, what of the price of two asses (copper coins)? It seems low; the same price as a loaf of bread. Recorded prices for prostitutes varied, but most tend to be from one to five asses. Two asses seems to have been a common price at Pompeii and may have been a kind of standard low rate. Price could reflect a number of factors. It would be calculated to attract a high volume of customers, or it may reflect the age and perceived appeal of the woman. It could have been used as a punishment. Perhaps there was an element of haggling possible once a punter had expressed an interest. Independent prostitutes may have had little means to protect themselves and enforce payment. We should also consider what the location of the graffito adds to the story. The text was written on the left wall of the vestibule of the House of the Vettii. The Vettii are usually thought of as brothers, Aulus Vettius Conviva and Aulus Vettius Restitutus, who were freed slaves who took on the name of their former master, Aulus Vettius. Their names were on bronze seals found near a large wooden strongbox in the house, along with a ring with the initials ‘AVC’. In graffiti on the outside of the house, Conviva is named as an Augustalis, a priestly position open to former slaves, while Restitutus urges voters to support one Sabinus. 
Whether they were brothers, father and son, friends or coworkers can’t really be known, but they seem to have had some wealth and privilege, including their well-decorated town house. The Priapus painting emphasises the Vettii’s wealth, as Priapus was a god of fertility – besides the member and the money, the painting shows his basket of fruit on the ground. Two of the ‘funny’ stories associated with him, related by the poet Ovid, have him failing to rape the goddess Hestia and a nymph called Lotis. Priapus in the House of the Vettii. Photo courtesy Carole Raddato The other paintings in the House of the Vettii may tell us more about the life of the house and its inhabitants, and shed light on Eutychis. One careful analysis by Beth Severy-Hoven focuses in particular on the sets of paintings found in a pair of reception rooms, called ‘n’ and ‘p’. The rooms and their decorative scheme seem to mirror each other. Room n has two violent paintings. On the back wall is a depiction of the punishment of Pentheus, who was killed by the female worshippers of Dionysus. The naked Pentheus is falling, arms spread wide as the women attack him from all sides. On the wall on the right, the punishment of Dirce is shown. She is tied, naked, under a bull to be trampled to death, a fate she had planned for another woman, Antiope. From the House of the Vettii: The Punishment of Pentheus. Courtesy Wikipedia. The Punishment of Dirce. Courtesy Wikipedia. Room p shows the punishment of Ixion at the back, who was tied to a fiery wheel for eternity for trying to rape Hera, and the punishment of Pasiphae, made (for her husband’s crimes) to lust after a bull, on the left. Each room, then, shows the torture of a female and a male character from mythology. Those in room n were punished by humans, and those in p by the gods. Severy-Hoven argues that we can read the decoration from the perspective of the Vettii being freed slaves and from a perspective of power. As we already noted, enslaved people were there to be used sexually and were likely to be punished in numerous ways, which may have been something the Vettii themselves experienced. The Vettii’s choice of these images of punisher and punished to decorate their public dining rooms would call to mind the distinction between master and slave – both for themselves and for their slaves. As Severy-Hoven puts it: ‘When the owner selected these images of eroticised torture, he was inscribing his own power to punish or to enjoy onto his very walls.’ Was Eutychis a house-slave of the Vettii? It is possible. One clue from that graffito itself is that it may first have read ‘verna’ rather than ‘Graeca’ – that is ‘homeborn slave’ rather than ‘Greek lass’. If she was, it is not unlikely that she would have been a target of sexual advances by the Vettii and their friends. Did the Vettii also pimp out their house slave Eutychis? One ancient novel, The Golden Ass by Apuleius, tells of a young woman called Charite, who has been kidnapped by bandits. When she is caught trying to escape, one of the bandits suggests selling her to a brothel or to pimps in a nearby town. As he says to the bandits, ‘seeing her servicing men in a whorehouse will be sweet revenge for you’. Sex in various ways could be a punishment. Daily life as a slave in the House of the Vettii may have been a constant fight to remain unmolested and unpunished, a fight to retain a degree of control and dignity. The house may have been home to Eutychis, but it was no brothel like the Lupanar. 
However, one thing that has puzzled scholars about the House of the Vettii is the existence of a little room behind and only accessible from the kitchen, known as x1. This room is of interest because it is decorated with three large erotic paintings, yet it is hidden away in a service area. Was this the place where Eutychis had sex with customers, making more money on the side for the Vettii? It is possible, but such an activity might not have benefitted the owner’s public reputation. Or were the paintings a gift to a good cook or other slave, who had the use of the room? Alternatively, were they a reminder to the enslaved staff, who may have slept there, of their sexual availability and vulnerability and their lack of power in the house? Another idea is that such rooms, of which there are perhaps six at Pompeii, were private ‘sex clubs’, rooms that conjured up the brothel atmosphere but were not open to the public. Perhaps Eutychis was required to serve her masters in this grim fantasy. On the other hand, was the Eutychis graffito a real advert at all? We can’t rule out the possibility that it was simply a malicious daubing, in a way familiar from subway graffiti today. The Greek satirical writer Lucian, writing in his Dialogues of the Courtesans in the 2nd century CE, mentioned that graffiti could be used as a joke, to mislead or to stir up trouble between lovers. Maybe Eutychis was a real Greek girl, perhaps even with sweet ways, but she need not have been a prostitute or even a slave. A rejected suitor or jilted lover might have written it in a place that she, and those who knew her, would see it. Perhaps this implies she did have some kind of connection with the House of the Vettii, but we cannot say for certain if she was a house-slave there. The single graffito we began with has taken us on a journey through some of the darker aspects of life in Pompeii: the grittiness of the Lupanar, the ever-present threat for the enslaved of sexual assault and violence, being chained to the floor of a basement in pain and terror, being owned and being used. It is difficult to conjure these horrors while visiting the sun-baked town with its busloads of bright-shirted and good-natured tourists, or marvelling at the beautiful art and architecture in glossy books. We will never really know for sure about Eutychis, beyond the fact that there was a woman attached to the name. We may never know what life in the House of the Vettii was really like for its inhabitants, either. But we can keep trying to read the evidence to find the stories that bring the lives of Pompeii’s less fortunate into the light.
Guy D Middleton
https://aeon.co//essays/what-pompeiis-ruins-say-about-its-enslaved-prostituted-women
https://images.aeonmedia…y=75&format=auto
Thinkers and theories
Demonised by the political establishment for his radical, dissenting views, this 18th-century Welsh polymath deserves better
The year 2023 marks the tercentenary of the birth of the Welsh polymath Richard Price – dissenting minister, mathematician, moral philosopher, and author of influential tracts on the American War of Independence and the French Revolution. Yet he is all but forgotten. This cultural amnesia is all the more striking when you consider that his obituary in 1791 predicted that this so-called ‘Liberty’s Apostle’ would be remembered alongside Thomas Jefferson, Lafayette and George Washington. Portrait of Dr Richard Price (1784) by Benjamin West. Courtesy the National Library of Wales Price may be familiar to those with an interest in the 18th century and English Dissent in particular, and perhaps to those with an interest in the history of moral philosophy of that period, but beyond these circles little is known of someone who, in his lifetime, was held in equal standing with Edmund Burke. Indeed, the fact that Burke felt compelled to respond forcefully to Price’s sermon ‘A Discourse on the Love of Our Country’ (1789) is indicative of his reputation. Contrary to the oft-recited history, it was Price’s text and not Burke’s Reflections on the Revolution in France (1790) that began the Revolution Controversy, a seminal debate in modern political thought. In terms of his ethics, Price was notable as a figure who challenged the prevailing moral sentimentalism (the view that our emotions ground our ethical judgments) of those such as Francis Hutcheson and David Hume. Together with his political ideals, which captured much of the radical worldview in the late 18th century, Price’s body of work is representative of a richness in our intellectual heritage that is often overlooked in Britain and beyond – by a mainstream narrative that cleaves to the predominance of empiricism and liberal utilitarianism. Price, and his remarkable contributions across a range of areas, can be fully appreciated only in light of the religious and social milieu that he occupied, one that is embodied by the term ‘English Dissent’. It reflects both the standing of the Protestant denominations that stood outside the Anglican Church, and their positioning in terms of the reformist agenda that issued from their peripherality, and of which Price would become a leading exponent as a Unitarian minister at Newington Green in north London, where he took up residence in his mid-30s. Born in 1723, Price was brought up in his native Wales in a dissenting community of a very different kind, at Tynton farm in the Garw Valley (the village of Llangeinor stands there today). His family had close ties with Samuel Jones, who was part of an emerging Puritan movement in Wales during the English Civil War, but who was forced to sequester with the Restoration. With the support of Price’s grandfather and others, Jones was able to establish a meeting house in the Garw Valley that would continue the tradition in the spirit of an orthodox Calvinism that Price himself would come to thoroughly reject. Indeed, this became a familial theological conflict, captured most symbolically in the story of the father, Rhys Price, happening upon his son Richard reading the work of the Anglican cleric Bishop Samuel Clarke, and throwing the offending book into the fire. Rhys Price in fact removed his son from the Dissenting Academy he attended in Pentwyn, west Wales, because of his concerns about the ideas he was being exposed to by another, more radical non-conformist Samuel Jones. 
The young Price was sent to Talgarth under the tutelage of the renowned Vavasour Griffiths, but it seems the student’s head had already been turned. When his father died, and his mother soon after, Price saw to it that his sisters were well looked after, and then – as so many Welsh before and after – he followed other members of his family to London to seek, if not his fortune, then a flourishing future in the metropolis. There he was soon fortunate enough to be appointed chaplain for George Streatfield and his family, and became assistant to Samuel Chandler (an important figure in the history of English Dissent) at the Old Jewry Meeting House. While ‘dissent’ in its earlier form was a term used to designate the whole gamut of Protestant denominations and sects that rejected the authority of the Church in England – from Puritans to Quakers to Levellers – the English, or Rational, Dissenters were a specific group that emerged during the 18th century, and whose core values tended to coalesce around a particular set of ideas and principles that were, in the broadest term, progressive in nature. This was in no small part a reflection of their material situation, marginalised as they were by the Test Acts that effectively rendered them second-class citizens, and that made political reform an obvious priority. Their ideals also emerged from a form of rational religion that aligned their Christian faith and belief with the scientific revolution of the time, rather than with the orthodoxy of the Church of England. A striking example of this is to be found in the hypothesis that Bayes’s probability theorem (which should perhaps be called ‘Bayes and Price’s theorem’) was inspired in part by a desire to establish mathematical proof for the existence of miracles, in response to Hume’s sceptical arguments in his notorious essay ‘Of Miracles’ (1748). Price was Thomas Bayes’s literary executor, and he was elected a fellow of the Royal Society for his work in amending, expanding and bringing the theorem to light. Bayes’s work in probability has had an immense influence on areas involving statistical inference, including in the development of the internet and of artificial intelligence. It was in this dissenting milieu that Price found himself at home and began to exert influence. He became a focal point for the dissenting community when he took up his role as minister in the Newington Green Meeting House. By that time, both Streatfield and Price’s uncle, who had welcomed his nephew to London, had died. So Price was making his way as an independent man in the world, having married Sarah Blundell, who was, somewhat surprisingly, a lifelong member of the Anglican Church. Also during this time, Price’s defining work in ethics, A Review of the Principal Questions in Morals (1757), was published. Wollstonecraft had a great affection for Price who had taken her under his wing and promoted her cause A further defining relationship for him was with Joseph Priestley, another of the key figures in the dissenting movement. The Welsh historian Iwan Rhys Morus has recently discussed Price and Priestley in the context of their ties with another important contemporary, Benjamin Franklin. Morus elucidates how the three sought to bring science into the service of their wider dissenting agenda, and interestingly struck upon the theme of the three as thinkers from the periphery who brought innovation, if not revolution, from their respective origins beyond the imperial capital. 
Morus explains how this disrupts typical assumptions about the flow of knowledge from the metropolitan core. Mary Wollstonecraft (c1797) by John Opie. Courtesy Wikipedia The idea of science deployed in the service of a progressive political agenda reflects a wider culture that had a broader impact on contemporary culture, as well as an appeal beyond its context of religious dissent. We can see this in the way that Mary Wollstonecraft, who was not herself a dissenter, made connections with key figures in the movement and found an affinity in their manners, their progressive ideas and their treatment of women. At Newington Green, she established her school for girls and began to develop her social critique that would eventually find expression in the Revolution Controversy and her book A Vindication of the Rights of Woman (1792). Before that publication, however, came A Vindication of the Rights of Men (1790), which was in part a defence of Price against Burke’s attacks, though it is usually portrayed as solely directed at Burke. Wollstonecraft had a great affection for Price who had taken her under his wing and promoted her cause both financially and practically by introducing her to the publisher Joseph Johnson. Despite Price’s secular influence, it is important not to lose sight of his religious core, not only because it informed his ideas in politics and ethics, but also because of the way in which it inspired his relentless activity. In particular, his tireless work in the field of insurance – he worked as an actuary, and published important papers on the mathematics of life assurance and annuities – is an example of how his concern for the welfare of others was driven by his religiosity. Given Price’s deep Christian faith and his religious values, the ethical perspective conveyed in his Principal Questions is somewhat unexpected. Price rejects the idea that what is good issues directly from God’s will, and instead develops what the Welsh philosopher Walford Gealy regarded as an early version of the ‘autonomy of ethics’ associated with the influential 20th-century philosopher G E Moore. Broadly speaking, this is the idea that there exists a self-sustaining moral order that informs what is good or bad, independent of God or nature. This makes Price’s ethical outlook closer to his rationalist contemporaries from the Continent than to that of the British empiricists, for he argued that we are able to perceive the moral quality of any action on the basis of our understanding. That is to say, the mind is an independent source of knowledge: we are endowed with a moral understanding that allows us to perceive directly the moral quality of human action. The empiricists, broadly speaking, regarded good and bad as secondary qualities that do not signify anything in the objects themselves, but rather their effects upon us, particularly their emotional affects. In contrast, Price adopted a form of moral objectivism, believing that the moral quality resides in the action itself and that we perceive this through use of our understanding. Price’s moral philosophy can be seen as a critical response to the sentimentalism of empiricists such as Hume and Hutcheson. As far as Price was concerned, their approach, which foregrounded emotional reactions, rendered morality a subjective and psychological matter, subject to individual whim, whereas Price believed that moral judgments occupied the same realm as mathematical truths, and are universal, permanent and unconditional. 
In this way, the good is not a quality that can be accounted for or described in relation to, or through reference to, other objects or perceivers. Price’s politics are built on his ethics with an emphasis on freedom, virtue and knowledge The emphasis here on our innate ability to perceive the good in and for itself anticipates not only Moore but the moral intuitionists of the 20th century such as the Scottish philosopher W D Ross. It, in some ways, also anticipated aspects of Immanuel Kant’s philosophy, in particular his argument that moral duties arise directly from our ability to perceive the good and the bad. The comparison with Kant is interesting and intriguing. He and Price were contemporaries born only a year apart, and similarities in their moral perspectives – particularly their rejection of consequentialist and utilitarian approaches – are striking. While Price never came close to Kant’s detailed and painstaking treatment of the various aspects of philosophy – he was too busy involving himself in the practical debates and activities of the day – his political tracts are his best known, and provide the outlines of a republican cosmopolitanism that is in line with the cosmopolitanism that Kant developed. In Kantian fashion (to put it anachronistically), Price’s politics are built on his ethics with an emphasis on freedom, virtue and knowledge. In Price’s pamphlets, which supported the American War of Independence and the revolution in France (he died in 1791 before the outbreak of the Terror), a radical republican view emerges. The best known of these is ‘A Discourse on the Love of Our Country’ (1789), published at the outbreak of the French Revolution. In it, Price expresses support for the revolution on the grounds that it represents the spirit of England’s Glorious Revolution of 1688, when James II was deposed and replaced by William of Orange. Price’s sermon was also a mature statement of the most important aspects of his political philosophy, presenting arguments and themes that still resonate today. The first of those, one that often goes unrecognised, is that Price arguably provided one of the earliest statements of civic nationalism, in declaring that by ‘our country’ we mean ‘that body of companions and friends and kindred who are associated with us under the same constitution of government, protected by the same laws, and bound together by the same civil polity.’ As a member of a community marginalised by the Test Acts, and as a Welshman, and very likely Welsh-speaking, hailing from the Celtic fringe, Price articulated a capacious idea of nationality that could offer a fuller sense of membership for those outside the established elite. His ideal of civic nationalism was part of a wider set of ideas around nationalism that embodied two key principles: first, that love for our country should not mean that we regard ourselves as superior to others; and, secondly, that all nation states should conduct themselves in the spirit of cooperation and not competition. Nationalism can be valid and legitimate only when it is held in check by reason and by sympathy for our fellow human beings. It’s striking how sceptical Price was about the effects of power on those who possess it These admirable principles lay the basis for Price’s cosmopolitanism. He was among those Enlightenment thinkers who believed in a peaceful worldwide federation, a model for which in his view was offered by the new emerging federation in America. 
Price’s belief in virtue and freedom as the basis for politics would also lead him to advocate for the abolition of slavery and hold the Americans to account on these matters. The republicanism of Price is another defining aspect of his politics, and as the political scientist Nicole Whalen recently argued, this laid the basis for a proto-anticapitalism that advocated a distributive equality in contrast with the sort of society envisaged by Adam Smith. Moreover, what is particularly striking is how sceptical Price was about the effects of power on those who possess it, calling to mind the basic anarchist critique – that power corrupts. Connected with this, Price emphasises that we as citizens must be active and not passive, and that it is our duty to challenge our leaders and to hold them to account. This is not only because that is the only way to ensure good governance, but also because this kind of participation in our community is what it means to be human. We are social beings. It is both our privilege and our duty as humans to contribute in this way. Despite his move to London, Price never lost his Welshness. He was a religiously driven nonconformist with energy and intelligence of a type that would come to dominate his homeland, and lay the political basis for what is often referred to as the ‘rebirth of a nation’ in the latter half of the 19th century. In the wider context of the United Kingdom, however, the dissent Price embodied would remain just that – an intellectual and political spirit that was a counterpoint to a dominant culture entrenched in privilege and conservatism. The British establishment worked hard to demonise and marginalise Price and his form of politics in his lifetime. The widespread ignorance today of Price both as a thinker and as a public intellectual is a mark of their success.
Huw Williams
https://aeon.co//essays/remembering-the-18th-century-radical-dissenter-richard-price
https://images.aeonmedia…y=75&format=auto
Economic history
The French idea of the good life doesn’t always make rational economic sense. So much the worse for traditional economics
Apparently, the Carthusian monks who distil the herbal liqueur Chartreuse have been struggling to maintain a work-life balance. Sales of the drink, which totalled $30 million in 2022, continue to bankroll the order. The brothers’ vows of solitude and silence have not prevented the company that hawks their wares, Chartreuse Diffusion, from building a global luxury brand. Nonetheless, in January 2023 the wholesaler announced their decision to limit supply despite robust demand. What would prompt a firm to commit the cardinal sin of leaving money on the proverbial table? The monks’ stated motives were two-fold. Although they expressed a concern with the environmental impact of their operation, the deeper impetus was spiritual. As the order reminded frustrated tipplers, they first and foremost are in the business of contemplating eternal mysteries. Well beyond Chartreuse, or even Dom Pérignon, France has long been associated with a specific version of the good life, from haute cuisine to haute couture. In the global imagination, the French excel not only at putting quality before quantity, but also in distributing the finer things more widely than their Anglophone counterparts. The French are as famous for their national healthcare system, month-long vacations and 35-hour working week as they are for bread, wine and cheese. That bosses are disallowed by law from requiring their employees to read or respond to work emails after business hours only reinforces the perception that life is meant to be savoured. How, then, should one approach decisions that appear to exceed the limits of economic rationality? The real task is not to point out some transhistorical tick of the psyche, or neuroscientific hack, and call it a union-negotiated day. Not only are the politics of such a decision abysmal, but the move also makes for false history. Carthusians might regard themselves as mere pilgrims in the terrestrial sphere, but the corner of the world they presently inhabit sits in southeastern France – a nation that has been roiled in recent months by opposition to the government’s plan to reform the pension system by raising the retirement age from 62 to 64. The two developments are related, and for reasons that are distinctly Catholic as well as French. Indeed, the monks’ willingness to retire from the world of commerce offers a microcosmic glimpse into the motives animating the national fight to retire at an age that allows for enjoying the fruits of one’s labour. The shared struggles of the Carthusians and the French electorate reveal the ongoing significance not of the axioms set down by Adam Smith, but of what one might call economic theology. In the most literal sense, this term of art connotes an attentiveness to the economic elements of theology as well as the theological vestiges in economic science. Think of Max Weber’s Protestant ethic, but with a different cast of characters and constellation of values. In a more concrete sense, economic theology refers to a corpus of works produced by 18th-century theologians writing with a conception of the economy that was far more expansive than typical usage allows. 
In contrast to classical political economy, which enshrines its object as an autonomous, purely natural domain of thought and action, French economic theologians described a range of activities aimed at producing and circulating wealth broadly conceived, from the riches contained in consecrated hosts and the power invested in images of the king to the value projected onto and believed to radiate from commodities. As I argue in The Spirit of French Capitalism (2021), this way of figuring the economy can be seen as underwriting ‘a distinctly Catholic ethic’ that, in contrast to Weber’s schema, ‘privileged the marvellous over the mundane, consumption over production, and the pleasures of enjoyment over the rigours of delayed gratification’. To cite a particularly conspicuous example, Catholic sacramental theology exerted a surprising influence on the reception of John Law’s efforts in the 1710s to introduce a paper national currency in France that would be untethered from gold and silver. Proponents and critics alike depicted banknotes and company shares as promising unfathomable riches. Well before Johann Wolfgang von Goethe’s Faust, a Eucharistic-alchemical complex lent itself to describing these instruments and their myriad effects. Priests-cum-alchemists explicitly likened the philosopher’s stone to the consecrated host. Cartesian clerics such as Jean Terrasson justified the infinite extension of matter with direct references to the sacrament. In defending Law’s reforms, he went so far as to transpose his metaphysical doctrines into an economic theology of money. Like the Eucharist, which Terrasson in 1720 regarded as an ‘efficacious sign’, bills served as active ‘signs of the transmission of real wealth’. Thus, if ‘the banknote never produces specie, which it is not, no bearer of this note can ever suffer a loss, since he has the same claim to the corresponding goods as the first who had received it.’ Paper’s efficacy followed from its dual nature as both visible and transparent – that is, as a means of exchange that not only passively reflected but also brought into being the very existence of wealth. The assignat inspired hopes of financial salvation as well as fears of ruinous speculation The Catholic ethic of circulation and consumption not only emboldened investors in Law’s System. More quietly, perhaps, but even more resolutely, it underwrote the soul’s transactions with God as imagined by theologians, who wrote extensively on the production and distribution of spiritual as well as material wealth. It was put into practice by the millions of French subjects, clerics and laypersons alike, who participated in the sacramental life of the Church in the hope of absolving sin and securing the most sublime of riches – the splendours of eternal life. It enticed consumers to seek pleasure through the acquisition of goods, including devotional objects, and authorised the ex-seminarian turned political economist Anne Robert Jacques Turgot to endow land, and the material realm more generally, with untold productive capacity. It also compelled theologians to denounce what they feared was becoming an idolatrous attachment to things. Louis Genty, a doctor of theology and vice-secretary of the Royal Society of Agriculture in Orléans, plumbed the depths of a dynamic acknowledged by Turgot and enshrined the following century by Karl Marx and Friedrich Engels in The German Ideology. 
‘Capitalists,’ Genty alleged in 1783, ‘devise a million artifices’ to provoke the ‘depraved imagination’ of consumers, which ‘invents without ceasing new means to enjoy every caprice that transforms itself into needs’. The history of economic theology does not end with the collapse of the Old Regime. For instance, the problem of fixing the relationship between political and economic sovereignty propelled the French Revolution from the outset. On 2 November 1789, the new government voted to nationalise ecclesiastical property, which had long been regarded as possessing a sacred character, given its role in supporting worship and assisting the poor. The deputies then proceeded to issue their own paper currency backed by this wealth. Like the notes of Law’s bank 70 years before, to which it was anxiously compared, the assignat inspired hopes of financial salvation as well as fears of ruinous speculation. Despite such risks, the fate of the new regime born of debt depended on the power of financial instruments not merely to represent wealth but to generate it ad infinitum. In the pivotal debates during the summer and autumn of 1790, the Count of Mirabeau promoted the assignat as a national fiat currency by likening it to a ‘philosopher’s stone of finances’. The notes held the power of ‘resuscitating, as if by magic, labour, industry, commerce, and abundance’ throughout the nation. Mirabeau used the language of alchemy not so much for dramatic effect as to illuminate a fundamental point of his monetary theory, that ‘it is currency that creates currency. It is this spur to industry that leads to abundance; it is the movement that animates everything, which restores everything.’ Mirabeau’s colleague Louis-Marthe de Gouy, a member of the finance committee, further developed an economic theology of money reminiscent of Terrasson. Gouy unabashedly claimed that the assignat would effect ‘a prompt transmutation of the state’s debt into a circulating paper note.’ He did not stop there. Paper notes could even constitute the ‘Real Presence’ of all the wealth in France – but especially the lands of the Catholic Church that had been placed ‘at the disposal of the nation’. The choice of words was by no means accidental: throughout the 18th century, the expression always referred to the doctrine of transubstantiation. The case of the assignat suggests that economic theology underwent a series of unlikely mutations. A massive transfer of material and symbolic capital ensued. As the revolution entered its radical phase upon the overthrow of the monarchy in 1792, the popular movement in Paris called on the new republic to commit itself with no less vigour against the ravages of scarcity. The sans-culottes, urban artisans associated with the popular movement in the capital, shared with their 18th-century predecessors a vision of the economy that was general rather than restricted, concerned not with the preservation of scarce resources but the dissipation of superabundance. Jacques Roux – the public voice of the so-called Enragés, a sect of radical egalitarians – effectively revolutionised the Catholic ethic of infinite abundance by modelling the human economy on the divine. Indeed, his rhetoric evinced a patriotic rendering of economic theology. He affirmed that religion is ‘nothing other than an exchange between God and men’ requiring a ‘baptism of blood that sanctifies’ and ‘an ardent charity in which all good works are realised’. 
He went so far as to spur his fellow citizens to virtue in the name not only of liberty and of the constitution, but also of the Eucharist – or, in his words, ‘the pure and stainless host that will be sacrificed at the foot of those altars still red and smouldering with the blood of Jesus Christ’. Macron’s programme signals a fundamental reorientation in how the French government figures value Bowing to pressure from Roux and his acolytes, members of the National Convention set aside their laissez-faire scruples to pass laws that placed limits on both wages and, more crucially, the price at which commodities could be sold. The first Maximum was imposed in May 1793, soon followed by the General Maximum in September. These measures reflected the sans-culottes’ demands not only for bread but also for a host of comestibles not traditionally considered essential. The table of goods prepared for the capital listed ‘various meats’ including ‘beef, veal, mutton, fresh pork, fresh bacon, lard, salted pork, salted bacon, smoked pork, and jambon de Bayonne’. Parisians were no more stinting with drink, calling for both coffee and chocolate from the Caribbean as well as ciders, wines, and eaux-de-vie produced throughout the metropole. Likewise, it was expected that such meals would be concluded with tobacco from Maryland and Virginia. Plus ça change, plus c’est la même chose – the more things change, the more they stay the same. This has been the pessimistic motto of political progressives since it was coined by the journalist Jean-Baptiste Alphonse Karr to express his disillusionment with the short-lived French Second Republic – established in 1848 and usurped by Louis-Napoléon Bonaparte, its elected president and the eventual emperor. If critics are to be believed, the parallels with the present are more apt than one might suspect. Indeed, the Left-wing firebrand Jean-Luc Mélenchon has demanded the establishment of a sixth republic. This new-and-improved regime would finally exorcise the ghost of monarchism thought to possess the current president, Emmanuel Macron. As Macron declared even before his first election: ‘There is an absence in the democratic process … In French politics, this absence is the figure of the king, whom I fundamentally believe the French people did not want to see die.’ Macron’s programme signals a fundamental reorientation not only of French values, but also in how the French government figures value more generally – economic as well as political, cultural, and even spiritual. The head of state, known for a Gaullist if not Napoleonic vision of his office, now finds himself in an embattled position that recalls that of his illustrious predecessors. Facing public outcry and deadlock in the National Assembly, this March Macron authorised the prime minister, Élisabeth Borne, to invoke article 49.3 of the constitution, a mechanism that allowed the government to pass pension reform without approval from the deputies. By law the move triggers a no-confidence vote, which Borne survived by a mere nine votes. All other legal challenges have likewise faltered. Such manoeuvres have done little to silence the millions resisting reforms widely viewed as antisocial and undemocratic. As many as three-fifths of the population opposes the law, and they continue to show up in force. Labour unions have committed to supporting strikes and shutdowns throughout the country. 
Garbage collectors allowed mountains of trash to pile up in the narrow, winding streets of the capital, which protestors were only too willing to ignite like so many burning barricades. Macron and his ministers now face hecklers banging pots and pans during official visits – a form of protest that dates back to the medieval period. Scores of railroad workers temporarily occupied the offices of LVMH, a conglomerate of luxury brands headed by one of the world’s richest men, Bernard Arnault. The firm currently owns, partly or outright, Dom Pérignon, Moët & Chandon, Louis Vuitton and Christian Dior, as well as Stella McCartney’s design house. Lest one think that the new retirement age is a nod to the Beatles’ song on the Sgt Pepper’s album, the French people have declared that they will no longer love Macron when they’re 64. The world has followed their discontent with a mix of beguilement, apprehension and Schadenfreude. Economists in France, the US, and the UK argue that, despite the falling ratio of active workers to retirees (currently 1.7:1), the administration might keep the current system afloat if it would only countenance a tax hike. The reform as currently devised appears to fall short of closing existing deficits by 2030, especially now that Macron has announced plans for further tax cuts. France’s European partners have expressed concern that Macron’s heavy hand will not only undermine popular confidence in domestic institutions but also galvanise scepticism toward the European Union on both extremes of the political spectrum. Such fears are warranted. Macron’s embrace of arguably undemocratic power seems to have played into the hands of Marine Le Pen, the far-Right leader known for courting actual authoritarianism. She can only profit from the erosion of faith in the constitutional order. Curiously, the pension reform has found support from none other than Pope Francis, who made a vague pronouncement on how future generations should not be burdened with public debt to sustain a broken system. The pontiff’s remarks have failed to inspire his brothers in Christ to pump out more Chartreuse. The formula for the drink, based on a 17th-century alchemical manuscript, was first codified in 1764 as a ‘vegetable elixir’. The processes of purification and distillation used to describe Law’s banknotes still infuse this peculiarly French spirit of capitalism, as does the grace of the Catholic ethic. The monks have issued an ultimatum to their customers, and perhaps to themselves, that the pursuit of commercial profit cannot detract from the desire for heavenly splendours. One might even say that the latter serves as the regulative ideal for the former. The botanicals that go into Chartreuse are elevated into a higher form, but the monks do not see it as their duty to flood the market with a liquid that, in contrast to the blood of Jesus, is forever subject to ecological if not manmade scarcity. It remains to be seen whether the monks’ decision will inadvertently drive up the price of their most rarefied commodity. Thomas Piketty argued in Capital et idéologie (2019) that the political culture of the French Revolution bequeathed an unresolved conflict to the French regimes that followed in its wake.
In particular, the abolition of clerical and aristocratic privileges in 1789 promised emancipation through the ‘sacralisation’ of property rights, which gave new impetus to the inequalities that characterise social relations under capitalism. Despite Piketty’s recourse to religious terminology and his stated emphasis on ideological determinants, he passes in silence over the actual arguments made in defence of the divinely sanctioned character of Church lands or of the wealth they were believed capable of generating. His contemporaries – whether on the Left or on the Right, in the academy or in the press – have been no more forthcoming in their analyses. A distinctly French and Catholic ethic of enjoyment, or jouissance, was inscribed into the script of the revolution once the state decided to absorb Church property long believed to have been given by God for the care of souls. Indeed, the origins of this ethic extend deep into the 18th century, and its effects show no signs of abating even now. If work defines one’s early adulthood and middle age, such exertions will be redeemed by a long retirement during which citizens can enjoy the fruits of their labour in reasonably good health and material tranquillity – not only their own, but also the aggregate wealth generated by those still working – without the compulsion to produce forever more. Even in France, where secularism forms part of the national heritage, economic theology remains the coin of the realm. We must examine both sides. Tellingly, Macron does not dispute the central premise of his millions of critics. A televised address this April reminded his recalcitrant flock that austerity is a matter of survival in the global economic order. ‘Gradually working more,’ he said, ‘also means producing more wealth, and we need it.’ Unleashing the productive forces of the French nation will require a ‘new pact’ between workers and the state, one that the president assures will keep his compatriots from having to endure ‘work that does not allow for living well’. Rising scepticism abroad and severe political backlash at home have not cooled another of Macron’s plans, that of establishing Paris as the world capital of the cryptocurrency market. The money will roll right in, we are told, and it will further enhance the productive powers of the French workforce. As the history of economic theology makes clear, what Karl Marx called the fetish character of commodities is not a novel phenomenon, but rather one especially remarkable iteration in a series of attempts to describe the enchanting character of seemingly mundane objects. According to Marx, with the introduction of money as universal equivalent, ‘circulation becomes the great social retort into which everything is thrown, to come out again as a gold-crystal.’ Later he returned to post-Reformation confessional distinctions to code interest-bearing capital, the very quintessence of the modern economy, as Catholic. Here one came face to face with ‘the religious quid pro quo, the pure form of capital … the transubstantiation, the fetishism, is complete’. The founder of scientific socialism, then, could himself be regarded as an economic theologian of sorts who followed in the well-worn steps of early modern predecessors. After all, the laws of political economy continue to demand exuberant faith as much as restrictive calculation. The Catholic ethic sanctions a vision of social solidarity in profusion.
Since the French Revolution, a gospel of enjoyment has informed not only official government policy, but also dissident movements founded by self-professed socialists, who idealised the communalism of the early Christian Church in an attempt to outline an alternative programme of economic modernity. Social democracy thus has origins that are religious and revolutionary in equal measure. As Macron tries to impose reconciliation from above, it remains to be seen how, in France and around the world, the dually Catholic and capitalist ideal of limitless abundance can be squared with the equally pressing demand to glean psychic and spiritual fulfilment in the here and now.
Charly Coleman
https://aeon.co//essays/chartreuse-economic-theology-and-the-french-spirit-of-capitalism
https://images.aeonmedia…y=75&format=auto
Quantum theory
Why does the quantum world behave in that strange, spooky way? Here’s our simple, four-step explanation (no magic needed)
Almost a century ago, physics produced a problem child, astonishingly successful yet profoundly puzzling. Now, just in time for its 100th birthday, we think we’ve found a simple diagnosis of its central eccentricity. This weird wunderkind was ‘quantum mechanics’ (QM), a new theory of how matter and light behave at the submicroscopic level. Through the 1920s, QM’s components were assembled by physicists such as Werner Heisenberg and Erwin Schrödinger. Alongside Albert Einstein’s relativity theory, it became one of the two great pillars of modern physics. The pioneers of QM realised that the new world they had discovered was very strange indeed, compared with the classical (pre-quantum) physics they had all learned at school. These days, this strangeness is familiar to physicists, and increasingly useful for technologies such as quantum computing. The strangeness has a name – it’s called entanglement – but it is still poorly understood. Why does the quantum world behave in this strange way? We think we’ve solved a central piece of this puzzle. Entanglement was first clearly described, and named, in 1935, by the Austrian physicist Erwin Schrödinger. He pointed out that, after two quantum particles interacted, they could no longer be considered independent of each other, as classical physics would have allowed. As the contemporary US physicist Leonard Susskind puts it in the preface to Quantum Mechanics: The Theoretical Minimum (2014), ‘one can know everything about a system and nothing about its individual parts.’ Here’s a simple analogy. If we want to give a complete description of the present state of a two-handed poker game, for example, we just give a description of the two five-card hands. What could be more obvious? But in QM, for some reason, the obvious thing doesn’t work. Schrödinger said that, in general, the quantum description of the two particles is ‘entangled’, and the name stuck. As he puts it: ‘When two separated bodies that each are maximally known come to interact, and then separate again, then such an entanglement of knowledge often happens.’ Schrödinger concluded elsewhere that entanglement is not ‘one but rather the characteristic trait of quantum mechanics.’ Many physicists now agree. Susskind says it is ‘the essential fact of quantum mechanics’, while in his Lectures on Quantum Mechanics (2013), Steven Weinberg writes that it is ‘perhaps its weirdest feature’. The full weirdness of entanglement wasn’t immediately obvious, and Schrödinger himself didn’t quite live to see it. For him, its strangeness was the prohibition it imposed on describing a two-particle system by its parts. He thought that this had important consequences, especially because it debunked what had become the orthodox view of what QM is telling us about the microworld. This orthodox view was the so-called Copenhagen Interpretation, proposed by the Danish physicist Niels Bohr. Bohr argued that it was nonsense to think of quantum systems as having definite properties, before they were measured. Like Einstein before him, Schrödinger thought that entanglement proved Bohr wrong. To grasp the Einstein-Schrödinger argument, consider the two poker hands, now with some of the cards face down, hidden from view. The state of this game can no longer be described in terms of the known cards (the ones turned face up). At least superficially, this looks like entanglement: a full quantum system can’t be described in terms of what’s known about its pieces.
Moreover, when an additional card on one side is revealed, it changes our knowledge about the other hand. If the queen of hearts turns up in the hand on the left, say, then we know that it is not one of the hidden cards in the hand on the right. The same is true for entangled particles. Observing one gives us new knowledge about the other, even if it is a long way away. Einstein and Schrödinger argued that this meant that something is hidden inside these quantum systems prior to measurement – something not fully described by QM, and disallowed by Bohr’s view. They argued that, if measuring a nearby particle teaches us a new fact about a remote particle, this new fact must have existed already, even though the best QM description didn’t include it. The alternative would be that the nearby measurement was changing the remote particle in some way. Schrödinger thought that this was absurd: ‘measurements on separated systems cannot affect one another directly, that would be magic [our emphasis].’ Schrödinger died in Vienna in 1961. Just three years later, the Northern Irish physicist John Stewart Bell argued that, if the predictions of QM are correct, then Schrödinger’s magic actually happens. When we have entangled particles, measurements on one of them can have a subtle effect on the other one, even though they might in principle be light years apart. Bell called this magic nonlocality. These days it is often linked to Einstein’s phrase ‘spooky action at a distance’, though Einstein, too, didn’t live to see Bell’s result. (When Einstein complained about spooky action at a distance, in a 1947 letter to the physicist Max Born, he had in mind a different weird feature of the orthodox interpretation of QM.) The importance of Bell’s argument took some time to sink in. The field had to first shake off some of what Einstein in 1928, writing to Schrödinger, called the ‘Heisenberg-Bohr tranquilising philosophy … so delicately contrived that, for the time being, it provides a gentle pillow for the true believer from which he cannot very easily be aroused.’ But gradually, in the second half of the quantum century, entanglement became one of the major concerns of the field. It is now absolutely central, theoretically, experimentally and, increasingly, technologically. Entanglement is what makes quantum computers different from their classical cousins, for example. A major motivation for this shift was Bell’s work. As the physicist Krister Shalm put it to Quanta Magazine in 2021: ‘The quantum revolution that’s happening now, and all these quantum technologies – that’s 100 per cent thanks to Bell’s theorem.’ Bell had argued that, if the QM predictions were correct, then nonlocality was unavoidable. But were the predictions correct? Answering that question required some very subtle and difficult experiments, involving two-particle systems similar to those that Schrödinger had discussed in 1935. Since they were inspired by Bell’s work, they came to be called ‘Bell experiments’. Most Bell experiments use photons, the fundamental quantum components of light. Pairs of photons are produced together, with their properties entangled in the way that Schrödinger had described. Each photon is sent to one of two physicists, conventionally called Alice and Bob. Alice and Bob each choose one of several available measurements – this is called choosing a measurement setting.
Each measurement produces an outcome, which might be a 1 or a 0, depending on which way the photon emerges from the measuring device. Each run of the experiment thus produces four numbers: the two settings and the two outcomes. Repeated over and over, the experiment generates a long table of results, with these four numbers in each row. Bell realised that these experimental results, as predicted by QM, looked quite strange. So strange, in fact, that with just a few additional assumptions, he could prove that the results were impossible. The primary assumption was that Schrödinger’s magic was not allowed – Bell called this assumption locality. So if QM’s predictions were correct after all, that would be bad news for locality (and good news for magic). It took several decades, but we now know that QM is indeed correct. Some of the most convincing Bell experiments were conducted as recently as 2015. In 2022, nicely timed for the decade of quantum centenaries, the Nobel Prize in Physics was awarded to three pioneers of these experiments: Alain Aspect, John Clauser and Anton Zeilinger. As the Nobel citation put it, the prize recognised their ‘experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science.’ Combined with these experiments, Bell’s analysis seems to imply the kind of magical action at a distance that Einstein and Schrödinger considered absurd. One reason for thinking that it would be absurd was that it would seem to clash with a core principle of Einstein’s own theory of relativity – that nothing could go faster than light. Bell was well aware of this tension, saying in 1984 that there was ‘an apparent incompatibility, at the deepest level’, between QM and relativity. ‘For me then,’ he said, ‘this is the real problem with quantum theory: the apparently essential conflict [with] fundamental relativity.’ Forty years later, this conflict has not been resolved. The work of Aspect, Clauser and Zeilinger and many others certainly confirms that entanglement is real. As Aspect himself put it in his speech at the Nobel Prize banquet: ‘Entanglement is confirmed in its strangest aspects.’ But the experiments don’t tell us what entanglement is, or where it comes from. In that sense, entanglement remains as mysterious as ever. Why is the world put together in this weird way? Our research suggests a surprisingly simple answer. Our recipe for producing entanglement uses just four ingredients. All of these ingredients are available off the shelf (although admittedly, in one case, from a remote corner of the shelf). As far as we know, it has not previously been noticed that they can be combined in this way, to throw new light on the weirdest feature of the quantum world. Let’s start with the main ingredient. Called collider bias, it is well known to scientists who use statistics in fields such as sociology, psychology and medicine. One of the first writers to describe it clearly was Joseph Berkson, a Mayo Clinic physicist, physician and statistician. In the 1940s, Berkson noted an important source of error in statistical reasoning used in medicine. In some circumstances, the selection of a sample of patients produces misleading correlations between their medical conditions.
Simplifying Berkson’s own example, imagine that all the patients admitted to hospital Ward C have similar symptoms, caused by one of two rare infections, Virus A or Virus B. Ward C specialises in treating those symptoms, so all its patients have at least one of these diseases. A few may have both, but everyone on the ward who doesn’t have Virus A is certain to have Virus B, and vice versa. Taken at face value, these correlations might suggest that avoiding one virus causes infection with the other one. But Berkson pointed out that this apparent causal connection isn’t real. It is an artefact of the way the sample has been selected. The patients on Ward C are a very biased sample. In the general population, having a vaccine for Virus A won’t make you more likely to catch Virus B.
[Figure 1: a simple collider]
This means that if a patient on Ward C with Virus A says to himself: ‘I’m on Ward C, so, if I hadn’t caught Virus A, I would have caught Virus B,’ then he’s making a mistake. If he hadn’t caught Virus A then (most likely) he wouldn’t have either virus, and he wouldn’t have been admitted to the ward. This statistical effect is now called Berkson’s bias, or collider bias. The term collider comes from causal modelling, the science of inferring causes from statistical data. Causal modellers use diagrams called directed acyclic graphs (DAGs), made up of nodes linked by arrows. The nodes represent events or states of affairs, and the arrows represent causal connections between those events. When an event has two independent contributing causes, it is shown in a DAG as a node where two arrows ‘collide’. This is shown in Figure 1 above, where being admitted to Ward C has two contributing causes, from the two kinds of virus infection. If we just look at a sample of cases in which the event at a collider happens, we’ll often see a correlation between the two independent causes. It may look like these causes are influencing one another, but they are not. It is a selection artefact, as causal modellers say. That’s collider bias. The correlation stems from the way in which the event at the collider depends on the two causes – in our simple example, it needed one cause or the other. We want to take collider bias in the direction of physics – ultimately, in the direction of the experiments for which Aspect, Clauser and Zeilinger won their Nobel Prize. We want to propose an explanation for what may be going on in those experiments, and other cases of quantum entanglement. We’ll get there via a series of toy examples. For the first of them, imagine that two physicists, Alice and Bob, play Rock, Paper, Scissors. For anyone who doesn’t know the rules of this game, at every turn, Alice and Bob each choose one of these three options, and send their calls to a third observer, Charlie. As in the usual version of the game, rock beats scissors, scissors beats paper, and paper beats rock. Charlie makes a list of the results: Alice wins, Bob wins, or it’s a draw. Suppose that Charlie likes Alice and dislikes Bob. He therefore follows the policy of throwing away most of the results when Bob wins. In the remaining ‘official’ results, Alice wins a lot more often than Bob. The correlation looks the way it would if Alice actually had some influence over Bob’s choice – as though Alice choosing scissors makes it a lot less likely that Bob will choose rock, and so on. If Alice and Bob are far apart, this could look like Schrödinger’s magic.
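To see the artefact concretely, here is a minimal simulation sketch of Charlie’s biased record-keeping (the uniform choices and the 90 per cent discard rate are illustrative assumptions, not figures drawn from any real study):

```python
# Minimal sketch of collider bias in the Rock, Paper, Scissors story.
# Alice and Bob choose independently and uniformly at random; Charlie quietly
# discards 90 per cent of the rounds that Bob wins (an illustrative figure).
# In the surviving 'official' record, Bob's calls look correlated with Alice's.
import random
from collections import Counter

OPTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def charlie_keeps(alice, bob, discard_prob=0.9):
    bob_wins = BEATS[bob] == alice
    return not (bob_wins and random.random() < discard_prob)

rounds = [(random.choice(OPTIONS), random.choice(OPTIONS)) for _ in range(100_000)]
kept = [(a, b) for a, b in rounds if charlie_keeps(a, b)]

# Frequency of Bob's calls, conditional on Alice's call, in the kept sample.
for alice_call in OPTIONS:
    bob_calls = Counter(b for a, b in kept if a == alice_call)
    total = sum(bob_calls.values())
    print(alice_call, {k: round(v / total, 2) for k, v in bob_calls.items()})
# Unfiltered, every frequency would be about 0.33; in the kept sample, the call
# that beats Alice's is under-represented, a pure artefact of the filtering.
```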
But there’s no real Alice-to-Bob causation involved. It is just collider bias at work. Given Charlie’s policy, the event at the collider – whether he retains or throws away the result – is influenced both by Alice’s choice and by Bob’s choice, giving us the same kind of converging arrows as in Figure 1 above. Suppose that in a particular round of the game Alice chooses paper and Bob chooses rock. As in the medical case, Alice would be making a mistake if she says: ‘If I had chosen scissors instead, Bob would probably not have chosen rock.’ The right thing for her to say is: ‘If I had chosen scissors, then Charlie would probably have discarded the result – so my choice may have made a difference to Charlie’s decision, but it didn’t make a difference to Bob’s choice.’ Now to our second ingredient. It is the least familiar of all, although it, too, is already on the shelf, if you know where to look. It doesn’t have an established name, outside of our own work. We call it constraining a collider. We’ll use the Rock, Paper, Scissors game to explain what it is. In the version of the game just described, Charlie could favour Alice only by discarding some results. Let’s see what happens if we rig the game in Alice’s favour, without throwing any results away. In our world, this isn’t going to happen naturally, so for now let’s imagine it happening supernaturally. Suppose God also likes Alice more than Bob, so he tweaks reality to give her an advantage. Perhaps he arranges things so she never loses when she plays the game on Sundays. How does God do it? It doesn’t matter for our story, which doesn’t need to be realistic at this point, but here’s one possibility. In a so-called ‘deterministic’ universe, everything that happens is determined by the initial conditions at the very beginning of time. If God gets to choose the initial conditions, and (relying on his divine foreknowledge) knows exactly what follows from them, he can simply choose the initial conditions so that Alice never loses on Sundays. Readers who prefer a God-free version could imagine that Alice and Bob live in a simulation, and that the artificial superintelligence (ASI) that runs the simulation favours Alice on Sundays. Some serious thinkers have suggested that we ourselves may live in a simulation, so it would be hasty to say that this version is inconceivable. Now we can explain our terminology. In a case like this, we say that God (or the ASI) constrains the collider – just on Sundays, in this version of the story. A collider is constrained if something prevents some of the possibilities that would normally be allowed (such as Bob winning, in our example). To see what difference this makes, think about a round of the game where Alice chooses paper and Bob chooses rock. Is Alice still making a mistake if she says: ‘If I had chosen scissors instead, Bob would not have chosen rock’? It now depends what day of the week it is. This is still a mistake on Monday through to Saturday. On those days, the right thing for Alice to say is: ‘If I had chosen scissors, Bob would still have chosen rock (and I would have lost).’ But Sunday is different. On Sunday Alice can’t lose, so if she had chosen scissors, Bob could not have chosen rock. Let’s suppose that Alice knows that the game works this way. Perhaps she figured it out after years of experiments, and now makes a comfortable living as a gambler, working one day a week.
From her point of view, it looks like she can control Bob’s choices (though only on Sundays). By choosing scissors, she can prevent Bob from choosing rock, and so on. With a constrained collider, then, we would have something that looks a lot like causation across the collider, from one of the pair of incoming causes to the other. True, it would be a very strange kind of causality. For one thing, it would work the other way, too, from Bob to Alice (though less happily, from his point of view). By choosing rock on a Sunday, Bob could prevent Alice from choosing scissors, and so on. For our purposes, it isn’t going to matter whether this would be real causality, or even whether the question makes sense. Could we still speak of both Alice and Bob as making free choices, for example, if the choices are linked in this way? We take the following lesson from the example above: if natural causes constrained a collider, we should expect to find a new kind of dependence between the normally independent causes that feed into that collider. We call this new kind of relation connection across a constrained collider (CCC). As we said, we invented the term ‘constrained collider’. As far as we know, the idea hasn’t been explicitly discussed before, in physics or in causal modelling. But it is already on the shelf, in the sense that there’s at least one place in physics where what we’re calling CCC has actually been proposed: it has been suggested as a key for solving the so-called black hole information paradox by the physicists Juan Maldacena and Gary Horowitz. The background here is that Stephen Hawking discovered a process now called Hawking radiation, by which all black holes eventually evaporate away to nothing. He thought initially that this process would be random, preventing the escape of information that had fallen into the black hole in the first place. Some physicists disagreed, and in 1997, with Kip Thorne and John Preskill, Hawking made a public bet on the matter. Hawking and Thorne took one side (against the escape of information), and Preskill the other. (Hawking eventually conceded that Preskill had won.) In 2004, Maldacena and Horowitz proposed a new way for information to escape from a black hole. In our new terminology, they suggested that a collider inside the black hole is constrained by a special ‘final state boundary condition’ at that point. They suggest that this creates a zig-zag causal path through time, along which information can escape from a black hole. In our terms, that would be a connection across the constrained collider. Discussing the Maldacena-Horowitz hypothesis in 2021, the Cambridge physicist Malcolm Perry said: The interior of the black hole is therefore a strange place where one’s classical notions of causality … are violated. This does not matter as long as outside the black hole such pathologies do not bother us. Our proposal is that ‘such pathologies’ are exactly what’s been bothering us in QM, ever since 1935. We think that entanglement itself is connection across a constrained collider. To explain how that can be the case, and to introduce our two remaining ingredients, we need to get closer to the physics of the quantum world. As noted, many Bell experiments have now confirmed the strange correlations predicted by QM, showing that the quantum world is unavoidably nonlocal.
Given that these so-called Bell correlations were important enough to win Nobel Prizes, readers may be surprised to learn that they can easily be reproduced in a version of our Rock, Paper, Scissors game. The only change we need is to have Alice and Bob each flip a coin before they make their choice. In this variant – let’s call it quantum Rock, Paper, Scissors – Alice and Bob each send two pieces of information to Charlie: their choice of rock, paper or scissors, and the result of their coin flip. So Charlie gets four values, two choices and two coin outcomes. This is precisely the same amount of information generated in each run of a Bell experiment. In quantum Rock, Paper, Scissors, it is very easy for Charlie to set up a filter, keeping some results and throwing away others, to make sure that the set of results he keeps satisfies the Bell correlations. By using the right filter, Charlie can ensure that the selected results look exactly like the data generated in real Bell experiments. To match one kind of Bell experiment, for example, Charlie’s filter specifies that, when the settings are the same, the two outcomes must be different; and that, when the settings are different, the outcomes are the same 75 per cent of the time. This doesn’t mean that there is any sort of strange nonlocal magic in quantum Rock, Paper, Scissors, of course. As in the earlier version, the correlations are simply a selection artefact, a result of collider bias. We could reintroduce God or an ASI at this point, to add a constrained collider to quantum Rock, Paper, Scissors. There would be one interesting difference from the original game. In that case, the effect of the constraint was to give Alice and Bob control over each other’s choices, making it hard to maintain that they both had freedom to choose. In quantum Rock, Paper, Scissors, as in the analogous real Bell experiments, that problem goes away: Alice and Bob each get some influence over the result of the other’s coin toss, but we can still treat both of their own choices as completely free. There is one big difference between quantum Rock, Paper, Scissors and real Bell experiments, however, that we haven’t yet mentioned. In quantum Rock, Paper, Scissors, Alice and Bob send their choices to Charlie after they are made. In a spacetime diagram with time running up the vertical axis, the structure looks like an upside-down letter V – see the left-hand side of Figure 2 below. We’ll say that cases like this are ‘∧-shaped’. In real Bell experiments, Alice and Bob receive their particles from the source, which emits them earlier in time. So the structure looks like ∨, as in the right-hand side of Figure 2 – we’ll say that they are ‘∨-shaped’.
[Figure 2: the difference between ∧-shaped and ∨-shaped experiments]
Can we flip quantum Rock, Paper, Scissors to make it ∨-shaped as well? It might look easy. We can have Charlie toss the two coins and send them to Alice and Bob, so that the results (heads or tails) become Alice and Bob’s measurement outcomes. But if that’s all we do, Charlie won’t know what choices Alice and Bob are going to make when he sends out the coins. That means there’s no way for him to put bias into the results, in the way that he could in the ∧-shaped case. There’s no way that Charlie can produce the Bell correlations, in other words.
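Before giving Charlie any extra powers, it may help to see the ∧-shaped filter written out explicitly. The following sketch uses invented mechanics (fair coins and a simple keep-or-discard rule), purely for illustration; it is not a model of any real apparatus:

```python
# Minimal sketch of the '∧-shaped' filter. Alice and Bob each pick a setting and
# flip a fair coin; Charlie keeps or discards rounds so that, in the surviving
# records, identical settings give opposite outcomes, and different settings
# give matching outcomes roughly 75 per cent of the time.
import random

SETTINGS = ["rock", "paper", "scissors"]

def one_round():
    return (random.choice(SETTINGS), random.choice(SETTINGS),
            random.randint(0, 1), random.randint(0, 1))

def charlie_keeps(a_set, b_set, a_out, b_out):
    if a_set == b_set:
        return a_out != b_out          # same settings: outcomes must differ
    if a_out == b_out:
        return True                    # different settings: keep every match...
    return random.random() < 1 / 3     # ...and one mismatch in three, so matches
                                       # make up about 75% of what survives

kept = [r for r in (one_round() for _ in range(200_000)) if charlie_keeps(*r)]
same = [r for r in kept if r[0] == r[1]]
diff = [r for r in kept if r[0] != r[1]]
print("identical settings, outcomes always differ:",
      all(a != b for _, _, a, b in same))
print("different settings, fraction of matching outcomes:",
      round(sum(a == b for _, _, a, b in diff) / len(diff), 3))   # about 0.75
```

With that machinery in view, back to the ∨-shaped case, where Charlie has to send the coins out before the choices are made.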
But suppose we let Charlie know in advance what choices Alice and Bob are going to make – we give him a crystal ball, say. Then it is very easy for him to manage the coins so that the net results, gathered over many plays of the game, satisfy the Bell correlations. The trick is for Charlie to toss one coin, and then choose the result for the other coin based on a rule that takes into account Alice and Bob’s future choices. The rule he needs is the same as in the ∧-shaped version of the game. When Alice and Bob’s settings are the same, he sends them different coin results; when the settings are different, he sends the same coin results 75 per cent of the time. Let’s ask the same question we did about the ∧-shaped version. Does the new ∨-shaped case involve some kind of nonlocal magic from Alice to Bob, and vice versa? We hope that readers will be inclined to say ‘No’ to this question. After all, the basic causal structure of the new ∨-shaped version is something like Figure 3 below. Thanks to Charlie’s crystal ball and the preset rules, Alice’s and Bob’s choices both influence Charlie’s outcomes, in every case. This means that Charlie’s selection procedure is a collider, and we have to be on our guard for collider bias.
[Figure 3: a past collider]
For this reason, attentive readers might suspect that collider bias plays the same role in explaining the results of the new ∨-shaped quantum Rock, Paper, Scissors as it did in the ∧-shaped case. But there’s one very big difference between these two cases – which brings us to our third ingredient – something we call ‘initial control’. In the ∧-shaped version of quantum Rock, Paper, Scissors, Charlie had to throw away results he didn’t want. But in the ∨-shaped case, he gets to choose the results in light of what he learns from the crystal ball. He’s arranging the coins in exactly the pattern he wants, not achieving the pattern by discarding a lot of cases that don’t fit. In this case, then, Charlie himself can constrain the collider, no gods or ASI needed. What Charlie needs to do this is an ordinary ability we take for granted, to control the so-called ‘initial conditions’ – the way things are set up at the beginning of the experiment. This familiar ability is our third ingredient. Let’s call it initial control. Perhaps we shouldn’t take initial control for granted. It is actually a remarkable ability, one that depends on the fact that we live in a place where abundant energy can be harnessed by creatures like us to do work. Living on a cool planet next to a hot star is much like living at the base of a giant waterfall. It’s easy to harness the passing flow of energy, just as life on Earth has been doing for billions of years. Like all the complicated ways in which terrestrial creatures control their environment, the ability of human scientists to control experiments depends on harnessing this energy flow. But, like the natural flow of heat between different-temperature objects, it works only one way. We have much more control over the initial conditions of experiments than over their final conditions. It’s easy to arrange the balls on a pool table into precise positions before the initial break, for example, but virtually impossible to play the game so that they all end up in those positions. The combination of the collider structure in Figure 3 above and the constraint provided by initial control gives us CCC – connection across the collider.
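Under the same illustrative assumptions, the ∨-shaped rule can be sketched as well. The difference from the earlier sketch is that nothing is discarded: Charlie, credited with the imagined crystal ball, simply prepares the two coin results in light of the future settings.

```python
# Minimal sketch of the '∨-shaped' version. Charlie is assumed to know both
# future settings in advance (the crystal ball), so he prepares the coin results
# outright instead of filtering records: opposite results for identical settings,
# matching results 75 per cent of the time otherwise.
import random

SETTINGS = ["rock", "paper", "scissors"]

def charlie_prepares(a_set, b_set):
    a_out = random.randint(0, 1)       # toss one coin freely
    if a_set == b_set:
        b_out = 1 - a_out              # identical settings: opposite outcomes
    else:
        b_out = a_out if random.random() < 0.75 else 1 - a_out
    return a_out, b_out

rounds = []
for _ in range(200_000):
    a_set, b_set = random.choice(SETTINGS), random.choice(SETTINGS)
    rounds.append((a_set, b_set, *charlie_prepares(a_set, b_set)))

diff = [r for r in rounds if r[0] != r[1]]
print("identical settings, outcomes always differ:",
      all(a != b for s, t, a, b in rounds if s == t))
print("different settings, fraction of matching outcomes:",
      round(sum(a == b for _, _, a, b in diff) / len(diff), 3))   # about 0.75
# Nothing is thrown away here: the constraint lives entirely in how each pair
# is prepared, given the (retrocausally available) settings.
```

In this toy setup, a different choice of setting on Alice’s side would have fed into a different preparation, and so, sometimes, a different outcome on Bob’s side.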
If we are happy to use causal language, we can say that it gives us the kind of zig-zag causal connection shown in Figure 4 below. There’s also a zig-zag path from Bob’s choice to Alice’s outcome, of course.
[Figure 4: the Parisian Zig Zag]
But does ∨-shaped quantum Rock, Paper, Scissors involve some kind of nonlocal magic from Alice to Bob, and vice versa? At this point, we need to be careful about what we mean by nonlocality. As we have just seen, there is indeed some influence, or connection, from Alice to Bob, and vice versa – it is CCC. Since they are at a distance from each other, and a direct connection might need to be faster than light, we might still want to call it nonlocality. (One of last year’s Nobel laureates told us he thought such a zig zag should still count as a nonlocal effect.) However, the connection between Alice and Bob is indirect, and depends entirely on processes that don’t themselves require anything faster than light. So, whatever we call it, it doesn’t have the relativity-challenging character normally associated with Schrödinger’s magic. And it is not very mysterious: we know exactly what it is, namely, connection across a constrained collider. The crystal balls were magic, of course, but, once we gave ourselves those, the explanation of the connection between Alice and Bob is straightforward. Imagine if something like this could explain the results of real Bell experiments – that would be a nail in the coffin of the quantum spooks. To make this work, we need our final ingredient. It is retrocausality, the idea that causality might work backwards in time, from future to past. In ∨-shaped quantum Rock, Paper, Scissors, we gave Charlie a crystal ball, to allow causation to work backwards – in other words, to allow Alice and Bob’s choices to feed into the rule Charlie uses to select the measurement outcomes. In the real world, of course, we don’t find magical crystal balls on any actual shelf. In the quantum world, however, retrocausality is an old and familiar idea. In that sense, it is certainly available off the shelf. It was first proposed in the late 1940s by the Parisian physicist Olivier Costa de Beauregard. He was a graduate student of the French physicist Louis de Broglie, another of the 1920s pioneers. In his own PhD thesis in 1924, de Broglie had proposed that all particles can behave like waves. Just five years later, after experiments had confirmed it, this won him the Nobel Prize. Costa de Beauregard spotted a loophole in the Einstein-Schrödinger argument from 1935. Schrödinger had said that ‘measurements on separated systems cannot affect one another directly, that would be magic’. Costa de Beauregard pointed out that they might affect each other indirectly, via the kind of zig-zag path shown in Figure 4 above. (That’s why we called it the Parisian Zig Zag.) This zig-zag path would avoid the kind of faster-than-light magic that Einstein and Schrödinger objected to. But it would still undermine the Einstein-Schrödinger argument against Bohr. If the reality on Bob’s side of the experiment can depend on Alice’s choice of measurement, we’re not entitled to assume that it would have been there anyway, even if Alice had done something else. Later, after Bell’s work in the 1960s, Costa de Beauregard proposed that the zig zag could explain the strange Bell correlations, without relativity-threatening nonlocality.
Retrocausality remained a niche idea in QM for many years, though it has long had some distinguished proponents. In the 1950s, one of them, at least briefly, was the British physicist Dennis Sciama, who taught an astonishing generation of physicists, including Hawking. Sir Roger Penrose, himself a recent Nobel laureate, has long been sympathetic to the idea, as he argued in his chapter for the collection Consciousness and Quantum Mechanics (2022), edited by Shan Gao. There’s a story from the 1990s of Penrose drawing a zig zag at a quantum workshop at the Royal Society in London, and joking: ‘I can get away with proposing this kind of thing, because I’m already a Fellow here.’ (Now that he has a Nobel Prize, it is even easier, presumably!) More recently, we ourselves have written about the advantages of retrocausal approaches to QM, both in avoiding action at a distance, and in respecting ‘time-symmetry’, the principle that the microworld doesn’t care about the distinction between past and future. But an additional striking advantage of retrocausality seems to have been missed. It suggests a simple mechanism for ‘the characteristic trait of quantum mechanics’ (Schrödinger), ‘its weirdest feature’ (Weinberg) – in other words, for the strange connections between separated systems called quantum entanglement. Starting with retrocausality, our recipe goes like this, in four easy steps:
1. Retrocausality automatically introduces colliders into Bell experiments, at the point where the two particles are produced. Alice and Bob’s choices of measurement both feed back into the past, to influence the particles at this point.
2. That’s interesting because colliders produce collider bias and causal artefacts – correlations that look like they involve causation, but really don’t.
3. But constraining a collider can turn a causal artefact into a real connection across the collider, as shown in Figure 4. Because of the constraint, a different choice on Alice’s side sometimes requires a different outcome on Bob’s side, and vice versa.
4. In the case of colliders in the past, as in Figure 3, constraint is easy. It just follows from normal initial control of experiments.
Taken together, these steps suggest a simple explanation for the Parisian Zig Zag, and the strange connections in the quantum world required by entanglement: it is connection across constrained colliders, where the colliders result from retrocausality and the constraints from ordinary initial control of experimental setups. We don’t mean that it is a trivial step from ∨-shaped quantum Rock, Paper, Scissors to real Bell experiments. But this toy example demonstrates that the combination of retrocausality and initial control can give rise to a connection between separated systems that looks very similar to entanglement. In our view, this is such a striking fact – and entanglement is otherwise such a strange and mysterious beast – that we propose the following hypothesis:
Hypothesis: quantum entanglement is connection across constrained colliders (CCC), where the colliders result from retrocausal influence on the source of pairs of entangled particles, and the constraint results from normal initial control of the experiments that produce such particles.
If this hypothesis turns out to be true, then in place of Schrödinger’s magic we’ll get something that works like Costa de Beauregard’s zig zag. That’s just what connection across a constrained collider does: it makes a zig zag from two converging arrows.
It will still be true that QM gives us a new kind of connection between the properties of distant systems. Bell experiments provide very convincing evidence that quantum entanglement is a real phenomenon. But it would no longer look mysterious – any world that combines retrocausality and initial control would be expected to look like this. Finally, a note for readers who are worried that the cure is worse than the disease – that retrocausality opens the door to a menagerie of paradoxes and problems. Well spotted! For one thing, the crystal balls give Charlie options much like those of the famous time-traveller, meeting his own grandfather long before his parents met. What’s to stop him from interfering with the course of history, say by bribing Bob to make a different choice than the one shown in the crystal ball? (In the causal loop literature, this is called ‘bilking’.) Also – less dramatic, maybe, but especially interesting in comparison to QM – the crystal balls allow Alice and Bob to send messages to Charlie, and hence potentially, with his help, to signal to each other. This isn’t possible in real Bell experiments, where Alice and Bob can’t signal to each other, despite having some influence on each other’s measurement outcomes. So isn’t this bad news for retrocausality? These are good objections, but it is easy to modify the ∨-shaped quantum Rock, Paper, Scissors game to avoid them. We just need to split Charlie’s functions into two parts. Most of what he does gets replaced by a simple algorithm, inside a black box, that takes in information about the two future measurement settings, and spits out the two measurement outcomes. Charlie himself can’t see inside the black box, and doesn’t have access to the future settings. But he still has a vital job to do. The box has a knob on the front, with a small number of options. Charlie controls that knob, and if he wants the device to produce the Bell correlations, he needs to choose the right option. In the terminology of QM, that’s called ‘preparing the initial state’. If that’s all that Charlie does, and the quantum black box takes care of the rest, the door to the menagerie is closed. Alice and Bob can no longer signal to Charlie, or to each other. Everything works as in orthodox QM, except that we now have the prospect of an explanation for entanglement. This means that if nature wants retrocausality without signalling into the past, and the paradoxes it would lead to, it needs black boxes – places in nature where observers like Charlie can’t see the whole story. In normal circumstances, such black boxes would seem like another kind of magic. Charlie is a clever guy, after all. What’s to stop him from taking a peek inside? The answer, in the quantum case, is Heisenberg’s uncertainty principle, from 1927. Ever since then, QM has been built on the idea that there are limits to what it is possible to know about physical reality. This is just the veil of ignorance we need, to allow retrocausality in QM without threatening anybody’s grandparents. As Adam Becker put it in the New Scientist in 2018: Heisenberg’s uncertainty principle states that it is impossible to know both the position and momentum of a particle at the same time.
So there are features of the quantum world that are persistently hidden from us, and this is ultimately what allows for retrocausation without letting us send signals to the past. It may seem just too convenient, that one curious feature of quantum theory allows a paradox-free version of another curious feature. But in the real world, every piece of stage magic has a coherent explanation underneath. Often that explanation combines various components in surprising ways: stage magic wouldn’t be magic if it were obvious how it worked. We’ve seen that quantum entanglement looked like magic, by the standards of some of the pioneers who discovered it. It still looks very strange, even to the physicists who have just won Nobel Prizes for proving that it is real. Any coherent explanation of it seems likely to combine some unexpected elements, and to require a careful analysis of how causes interact with each other, down at the level where we can’t see all the effects. The biggest surprise, in our view, is how few ingredients the explanation seems to need – and how simple the recipe is for putting them together.
Huw Price & Ken Wharton
https://aeon.co//essays/our-simple-magic-free-recipe-for-quantum-entanglement
https://images.aeonmedia…y=75&format=auto
Personality
What do the lives of twins tell us about heritability, selfhood and the age-old debate between nature and nurture?
Thirteen days before the start of the Second World War, a 35-year-old unmarried immigrant woman gave birth slightly prematurely to identical twins at the Memorial Hospital in Piqua, Ohio, and immediately put them up for adoption. The boys spent their first month together in a children’s home before Ernest and Sarah Springer adopted one – and would have adopted both had they not been told, incorrectly, that the other twin had died. Two weeks later, Jess and Lucille Lewis adopted the other baby and, when they signed the papers at the local courthouse, calling their boy James, the clerk remarked: ‘That’s what [the Springers] named their son.’ Until then they hadn’t known he was a twin. The boys grew up 40 miles apart in middle-class Ohioan families. Although James Lewis was six when he learnt he’d been adopted, it was only in his late 30s that he began searching for his birth family at the Ohio courthouse. In 1979, the adoption agency wrote to James Springer, who was astonished by the news, because as a teenager he’d been told his twin had died at birth. He phoned Lewis and four days later they met – a nervous handshake and then beaming smiles. Reports on their case prompted a Minneapolis-based psychologist, Thomas Bouchard, to contact them, and a series of interviews and tests began. The Jim Twins, as they were known, became Bouchard’s star turn.
[Thomas Bouchard conducting personality tests on James Lewis and James Springer, identical twins adopted by separate families, Minnesota, USA, 1979. Photo by Thomas S England/Science Photo Library]
Both Jims, it transpired, had worked as deputy sheriffs, and had done stints at McDonald’s and at petrol stations; they’d both taken holidays at Pass-a-Grille beach in Florida, driving there in their light-blue Chevrolets. Each had dogs called Toy and brothers called Larry, and they’d married and divorced women called Linda, then married Bettys. They’d called their first sons James Alan/Allan. Both were good at maths and bad at spelling, loved carpentry, chewed their nails, chain-smoked Salem and drank Miller Lite beer. Both had haemorrhoids, started experiencing migraines at 18, gained 10 lb in their early 30s, and had similar heart problems and sleep patterns. Of the 1,894 twins raised apart who had been tested by psychologists internationally between 1922 and 2018, the ‘Jim Twins’ were, by far, the pair cited most often, mainly because their story seemed so strongly to suggest that nature trumped nurture, aptly illustrating Bouchard’s preconceptions. Their tale spread around the globe, finding its way from national newspapers to The Tonight Show Starring Johnny Carson, to school and university textbooks. Later, it was all over the web; 44 years on, it pops up whenever twins are discussed in the media, with the significant differences between these two men invariably ignored. Some reports feature the story with two sidebar cases, also drawn from Bouchard’s twins’ larder. Oskar Stöhr and Jack Yufe were identical twins born in Trinidad in 1933, to a German mother and a Jewish-Romanian father, but they were separated six months later when their parents’ relationship broke down. Oskar was raised Catholic by his mother in Germany and joined the Hitler Youth. Jack was raised as a Jew in Trinidad by his father. They met briefly at 21 and were reunited at 47.
Although they had very different world views, their speech patterns and food tastes were similar, and they shared idiosyncrasies, such as flushing the toilet before using it, and sneezing loudly to gain attention. The other sidebar is devoted to the ‘Giggle twins’, Daphne Goodship and Barbara Herbert, identical twins adopted into separate British families after their Finnish mother reportedly killed herself. They reunited, aged 40, in 1979. Unlike their adoptive families, they were both incessant gigglers, had a fear of heights, dyed their hair auburn, and met their husbands at town hall Christmas dances. Cases such as these have been used to revive the notion that distinct upbringings make no difference in how we turn out: it’s all down to biology, specifically the clockwork mechanisms of Mendelian genetics – an idea with a long historical tail. But much has changed in our understanding of genetics since the human genome was sequenced in 2003. It was discovered that we have far fewer genes than anticipated (around 20,000, rather than 100,000), and that there are very few genes ‘for’ anything. A complex property such as intelligence, for example, involves a network of more than 1,000 genes, interacting with the environment. Other discoveries chipped away at genetic determinism: environmental pressures prompt changes in cell function and gene expression that don’t involve changes in DNA (sometimes lingering over several generations), a phenomenon known as epigenetics, while advances in neuroscience have revealed how our plastic human brains are moulded by experience. Yet many of those involved in twin studies have been resistant to these findings, betraying the influence of a deeply rooted magical thinking around twins that has cast its long shadow over our understanding of the line between selfhood and otherness. Thirty years ago, when I began writing about twins, I approached several professionals who specialised in counselling women who’d experienced multiple births, asking each of them why behavioural coincidences in identical twins occurred. Rather to my surprise, they all plumped for telepathy – like the separated conjoined twins of Alexandre Dumas’s novel The Corsican Brothers (1844), who read each other’s thoughts when apart. Joan Woodward, a twin herself and a psychotherapist, suggested that identical twins offered evidence of extrasensory perception ‘which seems to exist for some twins – a bit like these stories of Bushmen in the Kalahari walking miles to visit an uncle because they sense he’s in trouble.’ I was well aware that claims about telepathy, between twins or otherwise, failed when tested under clinical conditions, and that such mystical examples of premonitions were a good illustration of why anecdotes are not evidence. But such assertions interested me nonetheless, because idiosyncratic stories are so much part of what drives our fascination with twins. Their mystique is woven into our cultural history – perhaps because the idea of having a doppelgänger is so compelling, a mirror version of ourselves who echoes our thoughts and fears, or a companion who understands our every impulse and ensures we are never lonely. Our enchantment may also come from the perception that we have a different person inside us, the internal twin, of the Jekyll-and-Hyde variety. Then there’s our fascination with identical twins seeming so ‘other’ – a pair so attuned to each other’s way of being that one can pass for the other, hoodwinking the rest of us.
Or, equally pervasive, the idea that, in order for one twin to truly thrive, he or she must destroy the other. Think of Romulus and Remus, twin sons of the vestal virgin Rhea Silvia and the god Mars, suckled by a she-wolf and united against their enemies – until they fall out over which of the seven hills to build on; Romulus kills Remus and goes on to found Rome. Or Jacob and Esau, one usurping the other’s birthright to win their father Isaac’s favour. This kind of mythologically laden thinking about identical twins in particular, combined with a steadfast belief in old-style genetic fundamentalism, has tainted the science of twin studies, with some of its leading lights faking or manipulating evidence. Scientists first spotted the potential of twin studies in 1875 when Charles Darwin’s polymath cousin Francis Galton wrote to 35 pairs of apparently identical twins and 20 pairs of apparently fraternal twins. He used their anecdotes to conclude that the twins who said they looked alike had similar characters and interests, whereas those who said they looked different became more so as they got older. With both sets the ‘external influences have been identical; they have never been separated,’ he said. Galton claimed his results proved that ‘nature prevails enormously over nurture’. Galton’s work with twins reinforced his dubious belief in purifying the population, a version of ethnic cleansing that became the engine of Nazi eugenics, underpinning Josef Mengele’s notorious research in Auschwitz, involving 1,500 twin pairs. The physician known as the ‘Angel of Death’ would order any twins he spotted among incoming prisoners to step out for experiments. In one case, his assistant injected chloroform into the hearts of 14 pairs of Roma twins, after which Mengele dissected their bodies. In another, he sewed together a pair of Roma twins to create conjoined twins. They died of gangrene. In a third, he connected a girl’s urinary tract to her colon. Sometimes, he’d simply shoot them and then dissect them. The revelations of Mengele’s crimes gave twin studies a nasty name, but the research continued because, until recently, those wanting to uncover the genetic contribution to particular traits had little alternative. Over the past few decades, twin studies have been used to test everything, from whether Vitamin C can prevent colds (it can’t) to whether homosexuality has a genetic origin (minor with gay men, and even smaller with lesbian women). The main method of twins-based research is to compare dizygotic (DZ), or two-egg ‘fraternal’ twins, with monozygotic (MZ), or one-egg ‘identical’ twins, who are more unusual – one birth in 250 (half the frequency of fraternal twins). The basis of this approach is the assumption that both groups share their environments to the same extent, but that, because fraternal twins share only half their sibling’s genes, if they show greater variation, the cause must be genetic, so it becomes possible to attach a heritability figure to it. An example of this kind of study, involving a national sample of 11,117 twins, prompted The Guardian headline in 2013: ‘Genetics Accounts For More Than Half Of Variation In Exam Results’. Towards the end of their paper, the study’s authors noted a potential methodological drawback: to wit, ‘the equal-environments assumption – that environmentally caused similarity is equal for MZ and DZ twins’.
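The arithmetic behind headline figures of this kind is, at bottom, a comparison of the two correlations, often boiled down to the textbook shortcut known as Falconer’s formula (individual studies typically fit more elaborate statistical models, but the underlying comparison is the same). A minimal worked example, with invented numbers, shows how heavily the figure leans on that equal-environments assumption:

```python
# A worked example (numbers invented for illustration) of the classic twin-study
# shortcut, Falconer's formula: h2 = 2 * (r_mz - r_dz), twice the gap between the
# identical-twin (MZ) and fraternal-twin (DZ) correlations for a trait.
# The estimate stands or falls with the equal-environments assumption: if MZ pairs
# in fact share environments more closely than DZ pairs do, part of the gap is
# environmental, and the apparent heritability is inflated.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Heritability estimate from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

print(falconer_h2(0.75, 0.45))   # 0.60: 'exam results look roughly 60% heritable'
print(falconer_h2(0.75, 0.55))   # 0.40: a modestly smaller MZ-DZ gap cuts the
                                 # estimate sharply
```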
Acknowledging the problem didn’t stop them making bold claims about the genetic contribution to exam performance. But the problem is profound, undermining hereditary claims when it comes to social studies. The experiential gap starts in the womb because, unlike identical twins, one fraternal twin might be bigger than the other and therefore take up more space. Also, they each have their own placenta (unlike most identical twins). One meta-analytical study on the impact of the foetal environment on IQ concluded that it accounted for 20 per cent of IQ differences between fraternal twins. What’s more, the gap in experience widens as fraternal twins grow older. When I was at school, I had several twins in my classes. One pair – the Thompsons – were identical. None of us could tell them apart, and they navigated the world as a unit. Another pair – I’ll call them the Wellingtons – looked and acted differently: Amy was blonde, sporty, good looking and popular; Mary was less striking, red-haired and had just a few close friends. They seldom hung out together, were treated differently, and pursued distinct paths. These differences may have been prompted by genes, but they were widened by experience. Amy was a partygoer who spent less time studying, drank more and smoked cannabis, all of which could affect exam performance. With the identical Thompsons, we can be sure their similar exam results were genetically prompted because their shared environment was the same and they modelled their behaviour on each other. But with the Wellingtons, their different life paths (and exam results) could be a result of genes, or of environments that became more distinct as they got older, or both. We can’t be sure. The other, less common but more newsworthy line of research involves comparing twins who’d been separated at birth. The pioneer of this method was the British psychologist Sir Cyril Burt, a eugenics enthusiast, who claimed that IQ and other differences between races and social classes were hereditary. Burt advised government bodies on the introduction of the 11-plus exam into British schools (to sift the top 20 per cent of pupils into grammar schools, leaving the rest to fill trade-based secondary moderns), and pushed them to include an IQ test, insisting that IQ was innate. He based his conclusions on studies of separated identical twins that he claimed to have conducted with three assistants in the 1950s and ’60s. Shortly after he died in 1971, Burt’s records and notes were all burnt, after which his reputation imploded. Two of his researchers, whose names appeared as co-authors on his papers, could not be traced (when asked about them, Burt had said they’d both ‘emigrated’ – but he didn’t know where) and a third he clearly invented. In The Science and Politics of IQ (1974), the American psychologist Leon Kamin noted that in 1955, when Burt claimed to have tested 21 separated identical twins, he put the correlation between their IQs at 0.771, yet in the 1960s, when his twins cohort numbered 53, he gave the identical three-decimal figure, which Kamin said had a statistically minuscule chance of occurring. Some circumstantial details that Burt claimed to have found among his twins also raised eyebrows: of a pair born to a wealthy mother and then adopted, he claimed one was raised in splendour on a Scottish country estate, and the other was left to a shepherd (like Perdita in The Winter’s Tale). 
The killer blow was delivered by his approved biographer, Leslie Hearnshaw, a one-time Burt enthusiast who in 1979 concluded that all of Burt’s twin studies were invented. The next big wave of studies of separated twins came from the stable of Bouchard, the hereditarian behind the ‘Jim Twins’ revelations. Bouchard was attracted to race science and, in 1994, he publicly endorsed a document drawn up by the race science promoter Linda Gottfredson: ‘Mainstream Science on Intelligence’. Its purpose was to back Richard Herrnstein and Charles Murray’s book The Bell Curve (1994), which argued that poverty was caused by low IQ and that this was the reason why there were more poor Black people. Bouchard also wrote an enthusiastic endorsement for an overtly racist book called Race, Evolution, and Behavior (1995) by the Canadian psychologist J Philippe Rushton. Bouchard received financial backing for his twin studies from the Pioneer Fund, set up in 1937 by Nazi supporters. The Fund maintained its policy of promoting research in eugenics and ‘race betterment’. Race science promoters were drawn to twin studies because they thought that, if it could be shown that IQ was highly heritable, then different IQ averages between population groups could be portrayed as innate. But this assumption misunderstands heritability, which speaks to the degree of variation in a trait directly caused by genes within a population, never between populations. This can be illustrated using something far more heritable than IQ: height. Two populations with the same gene profile might have different height averages for environmental reasons. For instance, South Koreans are up to 8 cm taller than North Koreans because of better nutrition over several generations. In the same way, two populations might have different IQ averages owing entirely to environmental factors – something that Bouchard’s backers failed to appreciate. Using Pioneer Fund money, Bouchard’s Minnesota Center for Twin and Family Research built up its larder of twins raised apart, inviting them in for a battery of interviews and tests. His research team ended up grilling 81 pairs of identical twins and 56 pairs of fraternals. Bouchard’s results must have delighted his sponsors because he said adult IQ was 70 per cent heritable (later he opted for an overall figure of 50 per cent). But his methods and conclusions did not impress other researchers. One problem was self-selection. His identical twins had known each other for an average of nearly two years before contacting him; some had known each other as young children; and it seems likely that those who were most alike were most likely to contact him. Kamin, the professor who rumbled Burt’s fraudulent studies, and his colleague said there was pressure on the twins to come up with cute stories, and that Bouchard’s studies had ‘a number of serious problems in the design, reporting, and analyses’. Another issue is that Bouchard’s heritability score was based on ‘the assumption of no environmental similarity’ even though almost all the twins in his studies were raised in aspirant, white, middle-class environments, often living near one another, with relatives. Richard Nisbett, a psychology professor at the University of Michigan who specialises in IQ, argued that this false baseline insistence that all adopting families were different led to an overestimation of heritability. 
‘Adoptive families, like Tolstoy’s happy families, are all alike,’ he said in an interview with The Times in 2009. Incidentally, Bouchard acknowledged that his percentages applied only to the ‘broad middle class in industrialised societies’, which seemed to contradict his ‘no environmental similarity’ assumption. Bouchard’s claim that the heritability of IQ increases as twins get older – when true IQ potential kicks in – is equally problematic. Jim Flynn, perhaps the leading IQ theorist of the past half-century, disagreed. In his book What Is Intelligence? (2012), he used the example of separated identical twins born with sharper-than-average brains – prompting them to go to the library, get into top-stream classes, and attend university – to disavow the notion that ‘identical genes alone’ will account for their similar adult IQ scores, and suggesting instead that ‘the ability of those identical genes to co-opt environments of similar quality will be the missing piece of the puzzle.’ Flynn places this ‘multiplier effect’ squarely in the environmental column. Numerous scientists have questioned the value of heritability estimates for social phenomena – both because it is impossible to separate genetic and environmental prompts, and because it depends on how a population is defined: the wider the definition, the lower the heritability percentage. Bouchard acknowledged that non-middle-class environments might reduce heritability estimates, and that his percentages should therefore ‘not be extrapolated to the extremes of environmental disadvantage’. The British neuroscientist Steven Rose put it more bluntly: ‘Heritability estimates become a way of applying a useless quantity to a socially constructed phenotype and thus apparently scientising it – a clear cut case of Garbage In, Garbage Out.’ It would seem that genes are less predictive of human behaviour than once thought – at least this is what is emerging from genome-wide association studies (GWAS) that find genetic markers for alleles that influence a trait within a particular population, producing a ‘polygenic score’. Thus, a study of 54,888 Icelanders published in Nature Genetics in 2018 found the heritability of educational attainment was 17 per cent (compared with 50 per cent-plus from twin studies). The academic psychologist Kathryn Paige Harden, who uses twin studies and GWAS methods, acknowledges that both these forms of social research may over-egg heritability. ‘There are questions with the twin studies about whether they are attributing to genes what should really be claimed by the environment,’ she told The Observer in 2021. ‘And for polygenic score studies, people may just happen to differ genetically in ways that match environmental factors, and it is really those that are driving the effect.’ If people with identical genes are raised in similar environments, it is likely their IQs will also be similar. But what would happen if they were raised in diverse environments like the twins in The Corsican Brothers – one raised by a servant in the mountains; the other as a gentleman in Paris? Despite his own hereditarian bias, Bouchard’s research suggests the answer. In one paper he referred to separated identical twins with an IQ gap of 29 points; in another, 24 points. 
More recently, two pairs of Colombian identical twins were raised as fraternal twins after being mixed up in a hospital error: one pair was raised rurally in a poor family near La Paz, the other pair grew up in a lower-middle-class family in cosmopolitan Bogotá. When they met in 2014, initial reports focused on their similarities. But when Yesika Montoya, a Colombian psychologist, and Nancy Segal, an American academic psychologist who’d once been Bouchard’s lead researcher, persuaded all four men to sign up for a batch of interviews, IQ tests and questionnaires, they discovered that the twins were even less alike than anticipated. ‘The Colombian twins really made me think hard about the environment,’ Segal told The New York Times in 2015. Later, she told The Atlantic: ‘I came away with a real respect for the effect of an extremely different environment.’ A similar experience with a pair of Korean identical twins confirmed Segal’s appreciation of ‘cultural influences’. ‘They really do have a strong effect,’ she told The Telegraph in 2022. ‘But they don’t blot out the basic similarities.’ Yet it is all too easy to pounce on those similarities and overstate their significance, and then brush aside differences. The Jim Twins are an example. There are clear genetic links to heart problems, migraines, weight gain, sleep patterns, nail-biting and probably to maths preference too. Other parallels can be explained at least in part by the Jims’ similar home environments, including shared holiday destinations, job overlaps and car choices. But what about both smoking Salem cigarettes, owning dogs called Toy, and having wives called Linda and Betty? Pure chance. In nearly 2,000 studies of twins raised apart, coincidences inevitably emerge, but no studies uncovered anything like the level of overlap found with the two Jims. Put it all together, and it would seem that using twins to discover heritability percentages for human behaviour is inherently unreliable. The usual method, of comparing identical and fraternal twins falters because it cannot calculate the impact of the diverging environments experienced by most fraternal twins. The esoteric method of comparing twins raised apart may produce tasty anecdotes, but it has even more profound problems, starting with the small, self-selecting sample and the false assumption that their home environments differ substantially. Twin studies are still widely used and may remain useful in trying to find out the heritability of illnesses and other physical outcomes where the environmental component is unlikely to differ between identical and fraternal twins. But there is a huge gap between attaching a heritability percentage for, say, macular degeneration, and for something like IQ or academic performance, where it’s impossible to untangle the interlocking influences of biology and culture. Even the Jim Twins, raised by similar families, in the same part of the same state, have their own stories to tell because of their unique upbringings. Focus on these, and a different picture emerges. When they first met, they had distinct hairstyles and facial hair (one a bit Elvis, the other more Beatles) and different kinds of jobs. Their children were of different ages and most had different names. Springer stayed with his second wife, Betty, while Lewis married a third time. 
More significantly, they displayed marked character differences, noticeable to anyone who met them: Springer, the more loquacious of the brothers, called himself ‘more easy-going’ and said Lewis was ‘more uptight’. Lewis was reticent in public and, in private, he preferred to write down his thoughts. Much of the magic evaporates when we lift the lid on the sensational tales of parallel lives. What emerges in place of this seductive mirror myth of the hidden double are more mundane tales of everyday difference, revealing the unique selfhood that is part of the inheritance of all people – including those with genetic doppelgängers.
Gavin Evans
https://aeon.co//essays/what-do-twin-studies-really-say-about-identity-and-genetics
https://images.aeonmedia…y=75&format=auto
Virtues and vices
In the face of global challenges, Augustine offers a way between the despair of pessimism and the presumption of optimism
Russia’s war on Ukraine has left many thousands dead and millions displaced, and the threat of nuclear war has only exacerbated fear. Climate change is already wreaking havoc on ecosystems and communities, and it threatens to inflict more substantial devastation if urgent action is not taken. Racial, ethnic and gender injustices continue to afflict marginalised populations, while gun-related deaths in the United States are reaching record levels. If this were not enough, these challenges are unfolding against a backdrop of deep social division, political polarisation and economic uncertainty after a global pandemic caused almost 7 million deaths and created major mental and physical challenges for millions more. When these global challenges are compounded by personal ones, temptations toward despair can be real and significant. In the US, polls show that pessimism is widespread. A Gallup survey reports that Americans were ‘largely pessimistic’ entering 2023, while Newsweek magazine suggests that the country has ‘lost its optimism’, noting that a higher percentage of Americans report feeling more pessimistic (42 per cent) than optimistic (27 per cent) about the future of the US than they did in 2019, before the COVID-19 pandemic. A study by Gregory Mitchell and Philip Tetlock published in Clinical Psychological Science in 2022 shows this pessimism is shared across racial, economic and ideological divides. Summarising the findings in The Wall Street Journal this April, Alison Gopnik suggests that ‘pessimism is the one thing Americans can agree on.’ Yet the same study also shows that much of this pessimism seems unjustified. Measuring perceptions of changes on a range of indicators, from poverty and incarceration rates to educational attainment and unauthorised immigration, Mitchell and Tetlock argue that these pessimistic appraisals are skewed. Many Americans wrongly believe that ‘things are getting worse than they really are.’ To reduce the influence of COVID-19, the study focuses on changes from 2000-18 but, as it was conducted in 2021, the experience of the pandemic might still affect its results. Nonetheless, the authors consider a range of other explanations, from memory processing biases to a political rhetoric of decline. They also add another potential factor from the lens of error-management: ‘People may arguably see it as more prudent to make the error of overestimating societal problems than the error of underestimating them.’ These psychological and political explanations may be bolstered by a conceptual one: contemporary discourse tends to assume a binary between optimism and pessimism and force a false choice between them. If these are our only options, pessimism can seem more ‘realistic’ than optimism, which is often blind to difficulties and, in its most extreme forms, demands a form of ‘toxic positivity’ that presumes good things will happen if we simply wish them. When such hopes are dashed, as they often are, pessimism becomes even more attractive. Yet when pessimism becomes the lens through which we see the world, it can generate a despair that is debilitating and dangerous. Despair can cause us to give up on efforts to address pressing problems and can feed into a narrative of decline, which makes things only worse and obscures the actual progress being made. The result is even more despair. We need some way to break this cycle, to avoid the despair of pessimism without embracing the presumption of optimism. Here is where Augustine of Hippo can help. 
His virtue of hope offers a way between – and beyond – optimism and pessimism. Enlisting Augustine as a teacher of hope might seem surprising. An influential African bishop, theologian and philosopher who lived in the Roman Empire at the turn of the 5th century, Augustine is often described as one of the West’s great ‘pessimists’. John Rawls called him one of ‘two dark minds in Western thought’, and countless others – from Hannah Arendt to Martha Nussbaum – have deemed his thought too pessimistic for contemporary politics. One reason for Augustine’s reputation reflects his vigorous critique of evil and domination. Throughout his writings, Augustine is alert to the ways that pride and excessive self-love can motivate a ‘lust for glory’, which in turn fuels a ‘lust for domination’, a desire to dominate others to prove one’s superiority and sustain one’s power. Ultimately, the lust for domination can itself become dominating, consuming a person’s character, and motivating malicious acts of violence and vice. In his book The City of God, Augustine traces the effects of pride and domination in human life, noting that, even among close family and friends, dishonesty, betrayal and cruelty are all too common. The situation is worse in politics. Augustine asks: ‘If … there is no security even in the home from the common evils which befall the human race, what of the city? The larger the city, the more is its forum filled with civil lawsuits and criminal trials. Even when the city is at peace and free from actual sedition and civil war, it is never free from the danger of such disturbance or, more often, bloodshed.’ Those of us watching contemporary politics – from invasions to insurrections to indictments – can recognise these dangers. Given Augustine’s awareness of how the lust for glory and domination can thwart the pursuit and protection of important goods, it is difficult to describe him as an ‘optimist’. This is one reason, however, why his thought is particularly relevant today: his acute analysis of pride, self-interest and injustice punctures the presumption of positive thinking and awakens us to the dangers of domination in its various forms. Augustine’s thought confounds any optimism that denies the realities of human fallibility and the persistence of pride, evil and injustice in human life. Yet (as I argue in a series of articles and a book on Augustine), this does not mean that we should describe him as a ‘pessimist’. The binary between optimism and pessimism does not capture the complexity of Augustine’s thought. As concepts, ‘optimism’ and ‘pessimism’ came to be employed only in the 18th century. Moreover, the binary overlooks Augustine’s more nuanced account of hope as a virtue that finds a middle way between the vices of presumption and despair. The difference that it makes when we understand hope as a virtue is often missed in contemporary discourse, which tends to characterise hope as an attitude or emotion and to neglect the possibility that hope might also be a virtue that regulates our desires for future goods. Like many contemporary thinkers, Augustine also recognises that hope is a natural affection or emotion: it is a love or desire for objects that we perceive to be good, future and possible, but not yet seen or possessed. 
However, unlike those who identify hope with ‘optimism’ and see it as unqualifiedly good, Augustine recognises that the emotion can sometimes go wrong: we can hope for the wrong objects, in the wrong people, or in the wrong ways. Our hopes can become misplaced or disordered. This is why we need a virtue of hope, a more stable and enduring quality of character that helps to direct the emotion of hope toward the right objects in the right ways. Augustine’s theology shapes how he understands the content of hope. The Christian bishop identifies God and ‘eternal goods’ as the ultimate objects of hope, but he recognises that human beings must also hope for ‘temporal goods’ such as health, peace and friendship. He believes these goods are legitimate objects of hope as long as they are properly ‘ordered’ to eternal goods. His idea of right order is complex. Here it is worth highlighting one feature often overlooked in interpretations of Augustine’s ‘pessimism’: the virtue of hope helps human beings to resist two vices of disorder: presumption and despair. Presumption characterises those whose feelings of hope are perverse, excessive or false. Those with the vice of presumption hope for the wrong objects, or in the wrong people, or too much for, or in, the right ones. In some cases, optimism can reflect the vice of presumption more than the virtue of hope. By contrast, pessimism can often express hope’s corresponding vice of deficiency – despair. While despair, like hope, is a natural emotion that can be justified in some situations, it becomes a vice when it reflects a more habitual failure to hope sufficiently for goods that are actually possible to attain. This vice causes us to give up all hope, which can lead us to withdraw from the pursuit of difficult goods or, out of desperation, cause harm to ourselves or others. Augustine compares those in despair to Roman gladiators destined to die in the arena. Because they have ‘no hope of being spared’, they are either ‘looking for a way to die’ or ‘do not hesitate to commit a foul’, using violent force without constraint. For Augustine, both vices can cause complacency or complicity. If we presume that attaining an object is certain, or despair that it is impossible, we will not work to attain what we hope for. We need the virtue of hope to act in the face of the difficulties, dangers and delays that accompany our objects of hope. While Augustine discusses the virtue of hope primarily within a theological context, its conceptual structure can provide a valuable resource for contemporary life, whether or not we embrace Augustine’s theology. When we consider the challenges we face, many of us are often tempted toward presumption or despair. We either presume that particular problems are not as bad as we think, or despair that they are so bad that nothing can be done to address them. Sometimes, these temptations affect people in specific roles and social locations differently. Those with power and privilege, for example, may be more tempted toward presumption, falsely assuming that some future goods are likely or certain, that they don’t depend on others to achieve them, or that they can use their power or privilege to pursue their aims without limit or constraint. Their presumption may fuel a lust for glory and domination. By contrast, a lack of power or privilege can create temptations toward despair. 
When those in such positions experience the effects of powerlessness, injustice and domination, they can often feel – rightly – that those who deny them power or voice make achieving their objects of hope harder. These people must resist the vice of despair. If they care about justice, equality or peace but despair about achieving them, then they might give up, and the problems they face will become only more entrenched. As one Augustinian prophet of hope, Martin Luther King, Jr, said in 1967: ‘Today’s despair is a poor chisel to carve out tomorrow’s justice.’ Augustine provides the conceptual vocabulary to identify a virtue that can help us sustain a realistic hope for justice and other important goods. By distinguishing the emotion of hope from the virtue, showing how the virtue helps to regulate the emotion toward the right objects in the right ways, and identifying the vices that oppose the virtue, an Augustinian account supplies valuable resources to help us cultivate virtuous hope in our time and register and resist temptations toward presumption and despair. Augustine also supplies a model of rhetoric that can support and sustain this virtue. A renowned professor of rhetoric before he became a Christian bishop, Augustine recognises the pedagogical power of persuasion. Like his Greek and Roman predecessors, he practises philosophy as ‘a way of life’ aimed not only at analysing abstract ideas but also helping others cultivate the virtues needed to live well. Contemporary scholars who read Augustine simply as an analytic philosopher miss how he employs rhetoric to shape the hopes of his audiences. Consider a passage from The City of God often taken as a primary expression of Augustine’s ‘pessimism’. In Book 22.22-23, Augustine offers a scathing analysis of the ‘many and grave evils’ that affect earthly life, from ‘diseases’ and ‘disturbances’ to ‘deceptions’ and ‘wars’. Ultimately, he concludes: ‘This is a state of life so miserable that it is like a hell on earth.’ Many interpreters take this verdict as confirmation of Augustine’s pessimism, but they ignore the next chapter of The City of God, where he offers a lengthy list of earthly goods. ‘Who could give a complete account of all these [good] things?’ Augustine asks. ‘If I had chosen to deal with each one of them in turn … what a time it would take!’ In this passage, Augustine employs the ancient rhetorical devices of ‘vivid description’ and ‘antitheses’, oppositions that set good and evil ‘side by side’ to make a contrast more vivid and enhance audiences’ awareness. Here, Augustine performs what the rhetoric scholar Kenneth Burke characterises in The Philosophy of Literary Form (1974) as a ‘structure of encouragement’, a form of social critique that takes readers ‘into hell, but also out again’. Augustine takes his readers into a ‘hell on earth’ to highlight the realities of evil and thereby challenge their presumptions about the world. But he also recognises that describing evils so vividly might leave readers in despair. So he highlights the world’s goodness in the next chapter to take readers out of hell and supply grounds for hope. 
In this way, Augustine enacts what Jeffrey Stout in Blessed Are the Organized (2010) describes as the ‘delicate task of the social critic’: ‘to adopt a perspective that makes the dangers of our situation visible without simultaneously disabling the hope of reforming it.’ Today, much social criticism rightly offers rigorous analyses of political systems and structures to diagnose, deconstruct and disrupt domination. The virtue of hope depends on such criticism to register and resist presumption. Yet this criticism can breed cynicism and despair if it does not also empower audiences to address the problems it diagnoses. The political scientist Jennifer L Hochschild suggests that this may be why many contemporary scholars and social critics are ‘relatively pessimistic’ about politics. Social science that focuses only on persistent structural problems without attending to positive examples of human agency can leave citizens feeling disempowered and unmotivated to address the problems that social science has helpfully identified. In Teaching Community: A Pedagogy of Hope (2003), bell hooks affirms the need to pair ‘rigorous critique’ with the recognition of how to resist domination: ‘When we only name the problem, when we state complaint without a constructive focus on resolution, we take away hope. In this way critique can become merely an expression of profound cynicism, which then works to sustain dominator culture.’ hooks’s call for empowering critique aligns with an Augustinian structure of encouragement that sustains a realistic hope while resisting the presumption and despair that often fuel domination. This model of social criticism might be especially useful for responding to issues that generate widespread despair. For example, despite an almost universal scientific consensus that human-induced climate change is occurring, many people are not motivated to address it, presuming either that its future effects will not be that bad or that new technology will emerge to mitigate it. To challenge such presumption, environmental activists have emphasised the dangers of ecological destruction to spur audiences to action. ‘Fear appeals’ can raise awareness of the threat and increase attention to it, but research shows they can sometimes have disabling effects, causing audiences to feel despair in the face of potential catastrophe. This effect has led some to use ‘hope appeals’ instead, yet hope appeals can encourage presumption and complacency if they are unrealistic, and downplay the dangers that climate change poses to people and the planet. Augustine’s rhetoric of hope could offer a valuable framework for communicating this threat. Rather than seeing hope simply as an attitude or emotion, as is common in climate communication, he recognises that hope is also a virtue that must avoid both presumption and despair, and he structures his rhetoric to help audiences resist both temptations, modelling a structure of encouragement that emphasises real dangers to avoid presumption while concluding with legitimate reasons for hope to prevent despair. Empirical research on climate communication affirms that such ‘emotional sequencing’ is effective, particularly if one begins with reasons for fear but concludes with grounds for hope, as Augustine often does. 
As a Christian bishop living in the Roman Empire during the 4th and 5th centuries, Augustine might have had difficulty envisioning many of our contemporary challenges. And although he is one of the West’s most vigorous critics of evil and empire, he also supported forms of domination common in his Roman context, from slavery to patriarchy, even as he sought to limit their worst abuses. Yet, at a time of deep division, when many of our challenges seem intractable and politics is plagued by both presumption and despair, Augustine’s thought can supply valuable insights on the nature and value of hope. His rigorous critique of pride and domination can alert us to the risks and realities of political life and chasten presumptions about what we can expect from it, while his virtue of hope and structure of encouragement can motivate action and help us resist the vice of despair. Augustine offers conceptual and rhetorical resources to cultivate and communicate a reasonable and realistic hope. Ultimately, the former professor knows that the most powerful form of persuasion is how we live: ‘Bad times, hard times – this is what people keep saying; but let us live well, and times shall be good. We are the times: such as we are, such are the times.’ Or as he says elsewhere: ‘You are hoping for the good; be what you hope for.’ This essay draws on ideas from A Commonwealth of Hope: Augustine’s Political Thought (2022) by Michael Lamb, published by Princeton University Press.
Michael Lamb
https://aeon.co//essays/what-can-augustine-of-hippos-philosophy-teach-us-about-hope
https://images.aeonmedia…y=75&format=auto
History
The daggers that knights carried to the crusades help us understand why they thought of holy war as an act of love
From 1096 until 1271, Christians from western Europe waged eight major wars and many smaller military operations in the Near East and North Africa that scholars now call the crusades. The spiritual leaders of the Catholic population, from the popes on down, regarded these expeditions as just and holy wars, in part because they considered the earlier Muslim conquests as unjust incursions into and occupation of Christian lands. Furthermore, they resented the fact that more recently Islamicised Turkic peoples continued to press militarily on the remaining independent realms in the eastern Mediterranean, in particular, the Byzantine Empire, a situation that reached crisis proportions in the late 11th century. In fact, this situation was what prompted the First Crusade, whose recovery of Jerusalem and establishment of new Christian polities, the Crusader States, made it the most successful of all these wars. Christians took up arms with similar justifications elsewhere as well, such as Iberia and the eastern Baltic region. In time, military campaigns against dissenting Christians and enemies of the political aspirations of the Church in Europe also received papal validation as ‘crusades’, which is to say, as just and holy wars. The Christian soldiers who fought in these expeditions were both pilgrims and warriors. As warriors, the crusaders prepared themselves in carefully orchestrated and choreographed ways before departing Europe and going into battle. This essay addresses the roles that they played in making themselves and their weapons worthy of engaging in holy war, and explores the curious relationship between mercy and their weapons, in particular the dagger. Crusaders, a term derived from crux, the Latin word for cross, were men who ‘took the cross’ or, rather, received the sign of the cross. In public view, and drawing on far older precedents, they voluntarily accepted a cloth cross, which they wore to publicise their vow to fight the armed enemies of the Christian faithful. Their acceptance of the cross also testified to their recognition of its spiritual power and their own sanctification. The symbol evoked the 4th-century Roman ruler and first Christian emperor, Constantine the Great. His biographer, Eusebius of Caesarea, claimed that Constantine had looked into the heavens before his decisive victory in the Battle of the Milvian Bridge (28 October 312), which guaranteed the commander’s political ascendancy, and saw a shining cross accompanied by words, which Latin sources, incorporating versions of the story, rendered as ‘In hoc signo vinces’, ‘In this sign you will conquer.’ Crusaders, men signed with the cross, crucesignati, drew courage from their trust in God’s aid. Women could take the crusader vow, but they typically fulfilled it, in lieu of physical participation, by redemption – monetary contributions in support of the expeditions – though scholars know precious little about the rituals accompanying their gifts. Far better documented are the rituals accompanying the men’s vows. They were to receive the sacrament of confession and to put themselves under the safeguard of a saint or saints to protect them. Potential crusaders also made sure that their weapons received appropriate blessings. Lastly, they proclaimed aloud, ‘having called together the[ir] neighbours’, the wills they had drafted, which specified, in the event of death, arrangements for their heirs and an array of bequests, particularly charity for the poor, sick and other deserving categories of people. 
These gestures confirmed the potential crusaders’ willingness to go to war, leave their comforts behind and forsake their kinfolk and friends, as well as their hope of returning alive and well. These gestures also served as elements in the warriors’ purification. Unlike some aged or ill pilgrims, who chose to travel to Jerusalem to die, few crusaders deliberately sought out death, but they did wish to be prepared in the eyes of God if it came. None of this is to say that observers and participants were without doubts about the confessional and therefore transformative culture of ‘crusading ethics’, as the historian Jonathan Riley-Smith dubbed the moral code they so carefully articulated. Marisa Galvez, looking at the contemporary romance literature, has written perceptively about how unrepentance was ‘an idiom in dialogue with the immediate cultural climate of penance and confession’, implying that crusaders had a heightened sense of the corruption that threatened them if their sins went unatoned. Taking the cross could come years before setting out to fight, and circumstances might conspire to prevent individual crusaders or groups from ever fulfilling their vows. Nonetheless, the taking of the cross and the reception of a series of blessings in preparation for eventual departure emerged in the period 1100-1300. They soon coalesced into a formal liturgy, while other ceremonies preceding departure were achieving standard form that borrowed elements from rituals associated with becoming a simple pilgrim. The occasions for taking the cross varied. Notorious sinners, rebels and criminals might do so after having publicly acknowledged their misdeeds. Other people took the cross following rousing sermons, open-air or in church, intended to inspire such acts, or during and after other religious or semi-religious occasions, like Christmas festivities or dubbings, the initiation rites of young men as knights. Each man who publicly vowed to go to war under the aegis of the cross received immediately or soon after the cruciform cloth for sewing on an over-garment. In rare instances, a crusader opted to have himself branded or incised with a small emblem of the cross on his shoulder, much as a soldier nowadays might obtain a tattoo affirming her comradeship with companions and commitment to their cause. Regardless of how much time elapsed between taking the cross and setting out on the journey, the actions in the immediate prologue to war began with confession. Confession has two meanings here. One is that of sacramental confession – seeking priestly absolution upon the admission of one’s sins. Ordinarily, this form of confession took place at Eastertime, but it was also appropriate during periods of grave illness, which called for extreme unction. Because war always carried with it the risk of death by disease, accident or battle, clergy and laity alike regarded sacramental confession as the proper initial rite in preparing for departure. Confession as a crusader, however, implied a second undertaking that paralleled sacramental confession, but was typically more elaborate. The faithful hoped that all penitent sinners would supplement their traditional Easter confession by undoing the wrongs they had committed and making restitution for money sinfully obtained. Observers and commentators not only hoped for but also expected the men who promised to go on crusade to enact public redress of grievances. 
This applied especially to lords and rulers. Conforming to this expectation, Lord Jean de Joinville, the friend and biographer of the French king Louis IX tried to right all the injustices that he or his agents had committed on his estates before he set out on crusade in 1248. In doing so, he also followed the king’s own example. Accompanying every act were prayers to God and the saints for the success of their mission. The Virgin Mary was pre-eminent, but there was no dearth of alternative or supplementary guardians and battle saints. Pre-Christian Jewish heroes, like David, the slayer and beheader of Goliath, and Judith, the slayer and beheader of Holofernes, could serve. The archangel Michael, who cast out Satan from heaven, was another appropriate figure, as were, Saints Denis, George, Martin, Maurice and Sebastian for various reasons. Sebastian, for example, was appropriate because he had been a Roman imperial soldier, who protected his fellow Christians from persecution and suffered attempted execution for it, in one popular version of his legend, by bow and arrow. In the same way, the bearers of swords, lances, shields, battle standards, and the like, invoked saints whose lives seemed to have common threads with theirs. Many of the rites associated with their invocations, though they drew on longstanding precedents, were new in the age of the crusades. Medieval warriors, one should add, attempted to empower their weapons spiritually in other ways besides blessings. They hired expert engravers to inscribe them with talismanic formulas mirroring the sentiments expressed in the benedictions, a practice that also drew on older habits of naming favourite weapons and wearing spiritual weapons – amulets with divine names – in battle, a motif enthusiastically introduced into contemporary romances. King Louis IX himself kept a relic – a piece of the True Cross – on his person, a fact that seems to have been widely known. He probably did so for apotropaic purposes, or so one might infer from the report of his contemporary, the English chronicler Matthew Paris, who described how the king’s mother, Queen Blanche, applied Passion relics to him when he was sick in order to restore him to health. Like mother, like son. While rituals and ceremonies designed to protect and, if necessary, to heal crusaders multiplied in this period, a weapon that has received somewhat less attention in these developments than it should have is the dagger. One can probably explain this relative neglect by the fact that it fell into the catchall category of smaller weapons or the general category of knives, the details of whose sanctification often go unrecorded. The term ‘knives’ as a category covers a multitude of weapons and, indeed, there were many words for these artefacts. The English word ‘dagger’ seems to be a genuinely native word, although a few etymologists have attempted to assign temporal priority to its usage in other languages. When scholars encounter the Medieval Latin daggerum or Middle English dagger, they translate it unreflectively (and presumably correctly) as dagger, a term that, by convention, describes a cruciform weapon, with a hand-length blade lightly cambered on both sides and neither broad nor narrow, but of intermediate breadth. The problem is that, when scholars come upon Medieval Latin or vernacular texts with other words for knife, which may or may not conform to this or similar definitions, their default choice in translation is also dagger. 
Thus, writers have rendered annelacum, bidew, boydekin, cultellus, durk, misericordium, poignard, sica, skayne and stiletta as dagger at one time or another. In fact, these terms refer to several very different sorts of knives, not necessarily daggers. However, the misericordium (in Modern English, ‘misericord’), the blade of interest here, appears to have been a true dagger, and no weapon is more revealing of the interplay of or move between violence and compassion in the Middle Ages. Most scholars believe that the making of daggers took off in the West in the 12th century, after a thousand-year hiatus in their production, and that the use of the name misericord for them was an innovation of the same period and therefore responds to the advent of the crusades. Their renewed production expanded in the 12th and later centuries along with references to them in written sources. The great early modern lexicographer, Charles du Fresne, sieur du Cange, cited instances in various sorts of Latin sources, in romances such as the Roman de la Rose, and in vernacular administrative records. Insular lexicographers have documented a similar plenitude of references to misericord weapons in Middle English. It was an expensive knife, as knives went. If a possessor happened to lose his (men constituted the great majority of owners), the person who found it ran the risk, in trying to sell it, of breaking the law of treasure trove, which in such circumstances vested the pre-eminent right to valuable property in overlords. A stiff fine was a notable consequence. A misericord was more expensive if it was enamelled, possessed a silver-gilt haft or had valuable decorations embedded in its handle, like the one belonging to Raoul de Presles, the constable of France, with its ‘image in crystal’, inventoried in 1302. Pricey or not, aristocratic soldiers of the time and combat soldiers more generally regarded the misericord as a genuine war-knife and employed it as such. The military historian Charles Stanton, drawing on ‘contemporary French sources’, describes a battle during one of the Italian crusades against Manfred of Hohenstaufen, the last king of Sicily and the bastard son of the Holy Roman Emperor Frederick II, this way. ‘Swinging heavy German long swords in … cramped quarters, Manfred’s mercenaries were an easy mark for the shorter, more tapered blades of the French (like the miséricorde dagger). Worse still,’ Stanton continues, the chronicler ‘Villani asserted that the French foot soldiers used their daggers to strike at the steeds of the German knights as well.’ The German knights were ‘overwhelmed’. This behaviour was quite unlike that of the French soldiers’ late Roman counterparts. The latter carried their daggers (pugiones, rather larger than medieval types) as a sign of status, and, although they could use them in hand-to-hand combat, they most often did so to settle private squabbles (matters of honour) and in assassinations, such as Julius Caesar’s. King Louis IX wore his dagger hanging at his side during his first crusade (1248-54), and his family continued to treasure it like a relic long after he died. It hung at his side, but which side? Evidence on the placing of the weapon in a medieval warrior’s attire comes in part from various depictions in the effigies that adorn knights’ tombs. In a typical effigy, the misericord will hang by a strap on the right side. 
The fact that it does so has enormous symbolic significance. Right-sidedness, in all its manifestations in medieval representations, generally transmitted positive messages, including majesty, power, wisdom, honour, justice, moral rectitude, prestige and ceremonial precedence. The meaning of these representations mirrored the privileging of right-sidedness in scriptural texts. Jesus, for example, sits at the right hand of the Father, a metaphor repeatedly employed by the authors of the New Testament and by its late antique and medieval interpreters. Many antiquaries and modern scholars, since the rise of Romantic medievalism in the 18th century, have also regarded the misericord as a special dagger – or, rather, the special dagger, with which a knight administered a death-stroke or mercy-stroke to a badly wounded enemy who would otherwise suffer needlessly prolonged agony. This hypothesis owes its superficially persuasive character to the obvious fact that the word for pity (heartfelt mercy) in Latin is misericordia. However, while there is no doubt that contemporary commentators drew significance from this etymological fact, it is not clear that they regarded the stroke or stabbing that a combatant inflicted on his enemy with the knife as an act of mercy. One modern opinion holds that the relatively thinly bladed dagger was designed to slip through the spaces where sheets of plate armour came together, which would explain the German name, Panzerbrecher. Yet, if this is so, the presumption that such a stroke would bring about a quick death seems dubious at best. Perhaps a dagger could achieve this goal if the victim wore chainmail. A stab wound inflicted by slipping a knife under an armour plate and thrusting upward would have only a fortuitous chance to hasten death significantly, unlike, say, a slash across an enemy’s wrist or neck. The alternative, ironically, was to prolong the agony by removing part of the victim’s armour in order to inflict a stab wound, say, to the heart, that would cause instant death. However, a variation on this opinion, valorised by its incorporation in André Vauchez’s influential reference compendium on the Middle Ages, continues to have adherents. They too assume that a knight administering the stroke wanted to save his victim from a lingering decline, but not merely from one occasioned by the plethora of wounds that put him in the victor’s power in the first place. The knight’s concern came rather from his awareness of his victim’s susceptibility to lockjaw, ‘the unspeakable agonies of death by tetanus’. Supposedly, he wanted to dispatch the wounded man before the disease’s onset. After verifying the nature and extent of his victim’s injuries by removing a part of his armour, and the likelihood or not of the onset of lockjaw, the knight could then determine whether to administer a death-stroke. Dying from tetanus was, of course, no rarity in a society where rusty nails, especially uncoupled (or thrown) horseshoe nails, were everywhere, and where, as a result, there was a high incidence of puncture wounds. The lead up to death from the disease was terrifying. One clinical description of severe tetanus, the type that has historically resulted in a mortality rate as high as 70 per cent, goes this way: [R]eflex spasms … may be of appalling intensity … [T]he intense muscle spasm may fracture vertebrae. 
Spasm of the laryngeal muscles, the diaphragm, and the intercostals [muscles of the chest wall] prevents ventilation [breathing], and cyanosis [turning blue] occurs. In the face of such a fate befalling a defeated knight in the age of chivalry, in other words, scholars have imagined that the death-stroke was a gift of pure and, indeed, unmerited love of an enemy. In this scenario, the recipient’s suffering contributed to but did not, in and of itself, earn the sympathetic death-stroke. Its administration was rather an act of unmerited love on the part of the victor for the vanquished knight. It was the human counterpart of divine grace, the unmerited love of God for sinful humanity. Certain modern synonyms for the death-stroke at first glance seem to support this interpretation: Gnadenstoss in German ‘the jab of grace’, for example, but it does not appear to be medieval. Coup de grâce, also not medieval, means a gracious blow, understood as a death-stroke, as does mercy-stroke in English, a usage not attested before the 18th century. Since modern usages cannot establish the truth of this theory, an opening exists for a competing theory – and one, I think, far more persuasive than the preceding. It is this. There is no evidence that knights called their daggers misericords before the crusades. If this silence in the texts mirrors reality, then it is likely that, when knights began seeking blessings for the weapons they wielded in the holy wars, they did so because the ostensible purpose of the expeditions was to bring relief for – mercy to – fellow Christians. Their mission was to protect and succour those reportedly suffering under or resisting the oppression of non-believers, especially Muslims in the eastern Mediterranean. This would also explain why so many of these weapons exhibit engraved crosses on their handles and blades. For the ‘cross’, as a monastic writer noted centuries before this strange co-mixture of the lethal and the compassionate had emerged, ‘was made the Sun of Justice for us that we might be illuminated by its mercy.’ As with swords, prized knives received names from their warrior owners, who had engravers inscribe them with various devotional formulas. One such warrior’s knife dating from the late 13th century bears the extraordinary inscription ‘AMOR VINCIT OMNIA’ – ‘love,’ perhaps the name of the knife, ‘conquers all.’ At one level, the inscription alludes to the Constantinian vision, In hoc signo vinces. At another, in my opinion, it suggests that the warrior who authorised the engraving was construing righteous warfare, and therefore the use of this sanctified weapon, as an act of love. If I am correct in this inference, then, one has here an instantiation of Riley-Smith’s recovery of the crusaders’ ideology of their warfare as an act of love, that is, the love than which none is greater, the willingness to lay down one’s life for one’s friends, meaning one’s fellow Christians. A warrior wielded the dagger also in line with the notion prevalent among crusade enthusiasts of the 13th century that their type of warfare was ‘a means of doing honour to the fatherland of the bride’, the Virgin, another but related kind of love. 
A Christian, one commentator wrote, should imagine the crucifixion as a kind of sanctified cultellus (little knife, dagger) that cut through sin, ‘the knife by which the hardness of your heart can be rent.’ A suggestive parallel in a different culture may be the richly decorated ritual daggers of the Tibetan Buddhist Phurba cult, which received blessings for their intended use in taming evil spirits in part by stabbing the earth and restoring stability to the world. Another is the dedication of weapons to the Goddess (Devi) by nobles in pre-colonial India, in thanksgiving for the victories of beings in the pantheon (like Durga and Ram) over the demons. Scholars are well aware from textual references that European artisans produced similarly decorative misericord daggers for their patrons as well as ornamented sheaths for the knives. Unfortunately, the latter consisted primarily of cloth and leather, which means that they have not survived in abundance, except as fragments. Nonetheless, it is reasonable to suppose that they incorporated aspects of the iconography and inscriptions of the other sanctified weapons of European medieval warfare. The representations and formulas on the knives were a constant reminder, then, to the wielders of the weapons of their moral responsibilities. For a knight to use such an instrument of righteous violence in the wrong manner, as defined by the prevailing ethos of the period, would defile him and put his soul at risk. Killing innocent, defenceless and vulnerable people, or already defeated or overcome – and therefore harmless – foes during wars, especially with a sanctified weapon like the misericord, was an affront to its consecration and therefore an affront to God. This is what commentators meant when they lamented attributing the name of mercy to murder weapons. In the 12th century, Geoffroy de Vigeois in the Limousin in southwestern France wrote of a knight ‘cruelly’ (crudeliter) administering the stroke of a misericord, despite the appellation mercy. The continuation of the Chronicle of Sigebert, written in the diocese of Arras in northern France, refers under the year 1192 to mercy-daggers as being improperly (improprie) given the appellation in such circumstances. Even the charter of 1194 issued by King Philip II (Philip Augustus) for the town of Arras refers to such weapons as arma multritoria, ‘murder weapons’, when wielded breaking the peace in his cities. The critics were addressing the weapons’ misuse. Crusading against an armed enemy of the faith was one thing. The use of sanctified weapons in other contexts (crime, revenge, etc) was quite another. It disassociated them from their holiness. The witnesses to Renaudins de Hangest’s murder of a Spaniard, who was travelling in northern France in 1303, were appalled. They vividly and repeatedly described at length, in a record of only about 200 words, the desecration of the bloodstained misericord he used in perpetrating the crime. Its employment in this way seems clearly and deeply to have upset them. One final observation. It pertains not to Christian warriors but to the clergy who, among other actions, blessed them and their weapons. Churchmen also took vows and went on crusade, if not as combatants, at least as exhorters of the troops and conduits for sacramental grace. 
Further research may be able to establish whether they periodically renewed their blessings of the weapons of war while on crusade, and whether in doing so they reminded lay crusaders when and if they could regard their weapons as genuinely sanctified. One wonders whether churchmen conscientiously observed and called the warriors to account by threatening to annul their blessings if the latter perpetrated unjustified – immoral – acts, what we might now call war crimes. It seems to me that this subject, like several others explored in this essay, is worthy of further exploration.
William Chester Jordan
https://aeon.co//essays/what-crusaders-daggers-reveal-about-medieval-love-and-violence
https://images.aeonmedia…y=75&format=auto
Family life
In the eyes of the Runa people, Western kids grow up indulged, over-mothered and incapable of facing outward to the world
Imata raun paiga? (‘What is she doing?’) – my husband’s grandmother, Digna, asks him. The ‘she’ Digna is referring to is me. What I am doing is rather simple: I am wrapping my four-month-old son in a baby sling, his face toward my chest, in a calm, reassuring embrace. But my husband’s grandmother, who has raised 12 children in a small village in the Ecuadorian Amazon, does not think of this mundane gesture as being anything normal. ‘Why is she wrapping the baby like that?’ she insists, with genuine surprise. ‘This way the baby is trapped! How is he even able to see around?’ Squished inside the wrap, my son immediately starts crying, as if confirming his great-grandmother’s opinion. I bounce him up and down, in the hope of soothing his cries. I turn to Digna and say: ‘This way he is not overstimulated, he sleeps better.’ Digna, who has since passed away, is a wise, dignified woman. She simply smiles and nods, saying: ‘I see.’ I keep bouncing up and down, walking back and forth across the thatched house, until my son eventually snoozes and I can breathe again. The relief of being able to breathe again: that’s perhaps a feeling familiar to most new parents. Like many other people I know, I also almost lost my mind after the birth of my first child. It’s hard to tell how the madness began: whether it started with the kind and persistent breastfeeding advice of the midwives at the baby-friendly hospital where I gave birth, or with a torn copy of Penelope Leach’s parenting bestseller, Your Baby and Child: From Birth to Age Five, first published in 1977, confidently handed to me by a friend who assured me it contained all I needed to know about childcare. Or maybe it was just in the air, everywhere around me, around us: the daunting feeling that the way I behaved – even my smallest, most mundane gestures – would have far-reaching consequences for my child’s future psychological wellbeing. I was certainly not the only parent to feel this way. Contemporary parenting in postindustrial societies is characterised by the idea that early childhood experiences are key to successful cognitive and emotional development. The idea of parental influence is nothing new and, at a first glance, it seems rather banal: who wouldn’t agree, after all, that parents have some sort of influence over their children’s development? However, contemporary parenting (call it what you like: responsive parenting, natural parenting, attachment parenting) goes beyond this simple claim: it suggests that caretakers’ actions have an enormous, long-lasting influence on a child’s emotional and cognitive development. Everything you do – how much you talk to your children, how you feed them, the way you discipline them, even how you put them to bed – is said to have ramifications for their future wellbeing. This sense of determinism feeds the idea of providing the child with a very specific type of care. As a document on childcare from the World Health Organization (WHO) puts it, parents are supposed to be attentive, proactive, positive and empathetic. Another WHO document lists specific behaviours to adopt: early physical contact between the baby and the mother, repeated eye contact, constant physical closeness, immediate responsiveness to infant’s crying, and more. 
As the child grows older, the practices change (think of parent-child play, stimulating language skills), yet the core idea remains the same: your child’s physical and emotional needs must be promptly and appropriately responded to, if she is to have an optimal development and a happy, successful life. Like other such parents, in the first few postpartum months I also engaged, rather unreflectively, in this craze. However, when my son was four months old, during a period ridden with chaos, parental anxiety, sleep deprivation and mental fogginess, my husband and I made the decision to leave Europe. We packed our clothes and a few other things and hopped on a flight to Ecuador. Our final destination: a small Runa Indigenous village of about 500 people in the Ecuadorian Amazon. Our decision wasn’t as mad as it sounds. The Ecuadorian Amazon is where my husband grew up and where his family currently lives. It is also the place where I have been doing research for more than a decade. We wanted to introduce our newborn to our family and friends in the village, and we didn’t think twice before going. I could not yet imagine the repercussions this decision would have on me, both as a mother and as a scholar. In the first weeks of our stay in my husband’s village, family and neighbours quietly observed how I took care of my son. He was never out of my sight; I was always there for him, promptly responding to (and anticipating) any of his needs. If he wanted to be held or breastfed, I would interrupt any activity to care for him. If he cried in the hammock, I quickly ran to soothe his cries. Our closeness soon became the subject of humour, and then, as the months passed, of growing concern. Nobody ever said anything explicitly to me or my husband. Most Runa Indigenous people – the community to which my husband belongs – are deeply humble and profoundly dislike telling others how to behave. Yet it became clear that my family and neighbours found my behaviour bizarre, if not at times utterly disconcerting. I did not really understand their surprise nor did I, in the beginning, give it too much thought. People, however, started rebelling. They did so quietly, without making a fuss, but consistently enough for me to realise that something was going on. For instance, I would leave my baby with his dad to take a short bath in the river and, upon my return, my son would no longer be there. ‘Oh, the neighbour took him for a walk,’ my husband would nonchalantly say, lying in the hammock. Trying desperately not to immediately rush to the neighbours’ house, I would spend the following hours frenetically walking up and down in our yard, pacing and turning at any sudden noise in the hope that the neighbours had finally returned with my son. I was never able to wait patiently for their return, so I often ended up engaging in frantic searches across the village to find my baby, under the perplexed stares of other neighbours. I usually came back home empty-handed, depressed and exhausted. ‘Stop chasing people! He will be fine,’ my husband would tell me affectionately, giving me the perfect pretext to transform my anxiety into anger at his fastidiously serene and irresponsible attitude. In the end, my son always came back perfectly healthy and cheerful. He was definitely OK. I was not. 
On another occasion, a close friend of ours who was about to return to her house in the provincial capital (a good seven hours from our village) came to say goodbye. She took my son in her arms. She then told me: ‘Give him to me. I will bring him to my house, and you can have a bit of rest.’ Unsure whether she was serious or not, I simply giggled in response. She smiled and left the house with my son. I watched her walking away with him and I hesitated a few minutes. I did not want to look crazy: surely she was not taking away my five-month-old son? I begged my husband to go and fetch our baby just in case she really wanted to take him away. When we finally found them, she was already sitting in the canoe, holding my son in her lap. ‘Oh, you want him back?’ she asked me with a mischievous laugh. To this day I am not sure whether she would have really taken him or whether she was just teasing me. As an anthropologist, I admit, I should have known better. Scholars who work on parenting and childrearing have consistently shown that, outside populations defined as WEIRD (Western, educated, industrialised, rich and democratic), children are taken care of by multiple people, not solely their mothers. The dyad of the mother-child relationship upon which so much of psychological theory rests reflects a standard Western view of the family as a nuclear unit – where parents (and, more specifically, mothers) are in charge of most childcare. In most places in the world, relationships with grandparents, siblings and peers are as important as the ones with the parents. As a new mother, however, it was difficult to appreciate this reality, especially when people were not merely claiming my son as their own but also clearly showing me that what they thought was important for a child’s proper development differed quite dramatically from my own beliefs. This became clear one day when Leticia, my husband’s aunt, came to visit us. Leticia had in the past affectionately joked about how caring and loving I was toward my son, and how amazed she was at the time and attention I devoted to him. As we were sitting together in our thatched house, Leticia took my son in her arms and started playfully talking with him. She tenderly touched his nose and laughed. ‘Oh poor little baby,’ she exclaimed suddenly. ‘Poor little baby, what will you do if your mother dies?’ She kissed him on the cheek. ‘You will be an orphan! Alone and sad!’ she laughed cheerfully. She then turned around so that I was no longer in my son’s sight. ‘Look! There is no more mama! She is gone, dead! What will you do, my dear?’ She kissed him again and laughed softly. In her landmark book on Inuit child socialisation, Inuit Morality Play (1998), the anthropologist Jean Briggs describes how Inuit adults ask children very similar questions. ‘Want to come to live with me?’ asks an unrelated woman to a toddler whose parents she is briefly visiting. Briggs argues that this kind of difficult teasing – which might sound inappropriate, even offensive to a Euro-American – helps young children think about matters of extreme emotional complexity, such as death, jealousy and loneliness. She describes at great length how, for the Inuit she worked with, this kind of teasing ‘cause[s] thought’. Likewise, I had also often heard my family engaging in this kind of teasing with older children: this was, however, the first time I had become the target of it. 
For if Leticia’s teasing was intended to ‘cause thought’, my son was certainly not the only person she was encouraging to think. Hers was not just an admonishment on the perils of a too-exclusive attachment, a reminder of the eternal fluctuations of life and death. It was also an invitation for me, as a mother, to take a step back and let my son encounter and be held by others, lest he be ‘alone and sad’. In a place like a Runa village, where cooperation, work and mutual help are so important for living a good life, Leticia seemed to be telling me, my son truly needed to be with other people beyond his mother. Leticia’s episode made me think about Digna’s puzzlement at the way I carried my baby. Despite the calm, respectful response Digna gave me at the time I was wrapping my son, she must have thought I was crazy. What could the concept of sensorial overstimulation have meant to her? Runa children are carried around in a sling with their faces toward the outside, all the time, everywhere, from dawn to night, under the rain and the sun, in the garden and in the forest, at parties that go on for hours where they fall asleep to the sounds of drums, cumbia music, and the excited yells of dancers. When Digna carried my son, she did so the way all Runa women do: either on her back, or on her hip. Digna made sure he could turn his face to the outside world. ‘This way he can see everything,’ she said to me. I started from the assumption that my child needed to be protected from the world, his face safely turned toward his mother; she thought that a child needs to be turned toward other people, toward the world, because he belongs to it. Overstimulation, for Digna, was just the necessary work a baby has to do to become a participant in a thriving, exciting social life. To let children face the world re-orients their attention towards sociality, toward others. In one of their papers, the psychologists Barbara Rogoff, Rebeca Mejía-Arauz and Maricela Correa-Chávez beautifully describe how Mexican Mayan children pay more attention to their surroundings and to other people’s actions compared with Euro-American children. They attribute the difference to the fact that Mayan children, unlike their Euro-American counterparts, are expected to actively take part in community life from early on. The practice of paying attention to social interactions, this encouragement to turn toward the community, seems to start, at least among the Runa, well before babies can speak or help at home. It starts, as Digna put it, by literally turning their faces toward the world. If the idea of an exclusive, preponderant relationship between mother and son might have seemed alien to our Runa family, equally strange, if not plain wrong, was the idea that a child’s needs should be always and promptly met by her caretakers. This is another central idea of current parenting philosophies: children’s emotions, needs and desires should be not merely accommodated, but also promptly, consistently and appropriately responded to. This translates into a form of care that is highly child-centred, whereby children are treated as equal conversational partners, praised for their achievements, encouraged to express their desires and emotions, stimulated through pedagogical play and talk, often with considerable investment of time and resources. 
These practices encourage the gentle cultivation of what the anthropologist Adrie Kusserow has defined as ‘soft individualism’, in which self-expression, psychological individualism and creativity are core values. It is not a coincidence that these are also qualities promoted in a neoliberal society where entrepreneurship, self-realisation and individual uniqueness are deemed paramount for success and happiness. Taking this worldview up a notch, some people claim that findings from neuroscience support the goal of ‘optimal’ brain development as foundational to a child’s future success and happiness. The ideology is presented as if based on indisputable scientific evidence, but let us not be fooled. The approach fits perfectly with neoliberalism and has its origin in the culture of the US upper-middle class. Proponents describe the intensive care that results from this pursuit as ‘natural’, drawing on idyllic and stereotyped accounts of childrearing in ‘traditional’ non-Western societies. There is a popular book I am often given as a gift by other parents whenever I mention that I work in the Amazon and am interested in children. It is The Continuum Concept: In Search of Happiness Lost (1975) by Jean Liedloff. The back cover of the German edition shows the author in the jungle: she stands, tall and blonde, in a shirt and a leopard-print bikini next to a bare-breasted Ye’kuana woman and her sleeping baby. The book – a bestseller in the so-called natural parenting movement – tells the story of Liedloff who, after living for two years with the Carib-speaking Ye’kuana of Venezuela, discovers the recipe for raising well-balanced, independent, happy children. This amazing result is accomplished, we are told, through practices such as co-sleeping, responsive care and natural birth. Liedloff’s book, like the natural parenting movement, is based on the idea that people in industrialised Western countries have lost touch with the childrearing ways of our ancestors. Bringing together attachment theory, as well as a simplified theory of human evolution and cherry-picked information about childcare in non-Western societies, this approach is premised on the fantasy that there is a ‘natural’ way to raise humans. While responsive parenting and ‘natural’ parenting are not exactly the same, they can be thought of as two dots on a continuum: they both assume there is an optimal way to raise children that, if not followed, has negative consequences. The type of childrearing that both models encourage is also equally intensive and child-centred. What these accounts, which claim roots in anthropology, fail to reflect is that, outside of postindustrial affluent societies, no matter how cherished, children are very rarely the centre of adults’ lives. For instance, Runa children, while affectionately cared for, are not the main focus of their parents’ attention. In fact, nothing is adjusted to suit a child’s needs. No canoe trip under a merciless sun is modified to meet the needs of a baby, let alone of an older child. No meal is organised around the needs of a young child. Parents do not play with their children and do not engage in dialogical, turn-taking conversations with them from an early age. They do not praise their children’s efforts, nor are they concerned with the expression of their most intimate needs. Adults certainly do not consider them as equal conversational partners. 
The world, in other words, does not revolve around children. This is because children are not relegated to a child-only world nor deemed too fragile to engage in difficult tasks. From an early age, Runa children participate fully in adults’ lives, overhearing complex conversations between adults on difficult topics, helping with domestic tasks, taking care of their younger siblings. Participating in the adult world means that sometimes children can get frustrated, be denied what they want, or feel deeply dependent on others. At the same time, there is so much that they gain: they learn to pay close attention to interactions around them, to develop independence and self-reliance, and to forge relationships with their peers. Most importantly, in this adult world, they are constantly reminded that other people – their parents, their family members, their neighbours, their siblings and peers – also have desires and intentions. The psychologist Heidi Keller and colleagues wrote that good parenting in many societies is primarily about encouraging children to consider the needs and wants of others. The Runa are no exception. They enormously value qualities such as social responsiveness and generosity – capacities deemed indispensable for living a good life in a closely knit community. These presuppose the ability to acknowledge and respond to other people’s desires and needs. Runa childrearing practices reflect these priorities. The very idea that children’s needs and desires should be always and promptly met by caretakers is completely foreign to the Runa. Instead, not responding to some of these needs and desires might be a valuable practice. This is evident in an episode that occurred shortly after we arrived in Ecuador. I was then zealously following the breastfeeding instructions I received from the midwives (exclusive and on demand! In a quiet place and without interruptions! As recommended by the WHO! And the baby-friendly hospital initiative!). I was baffled when one day, right in the middle of breastfeeding, our neighbour Luisa, who was sitting next to me, placed her hand on my breast and took the nipple away from my son. He looked at me, surprised. He grunted loudly. Luisa laughed. ‘Do you want your milk, little baby? Do you really want it?’ She kept my breast away from him. I watched her teasing him, trying to escape from her without looking rude or excessively defensive. ‘Your poor mama!’ she continued without paying attention to me: ‘Just leave her alone! This is not yours!’ My son became purple with rage and twisted in my arms. Luisa laughed again, removed her hand and kissed his little hand. I did not know how to react: my feelings ranged between confusion and anger. I asked my husband why she would do such a thing. He stared at me blankly. ‘To tease the baby! To let him know that the breast is not really his,’ he answered matter-of-factly. Why did Luisa make my son purposefully uncomfortable? What was her goal? The more I reflected on this, the more I began to see the teasing as crystallising a central moral lesson: in stating ‘this breast doesn’t belong to you, it is your mother’s’, Luisa redirected my son’s attention to the presence and desires of others. The intentional, playful refusal to attend to a baby’s desire for milk invites him (and anyone else present) to acknowledge that he is not the only one who has a will and desires in an interaction. 
It is exactly by these acts of playful refusal, by not promptly responding to their children’s will, by not making them the centre of their world, that the Runa cultivate in their children an awareness of other people’s needs and of their own place within a dense web of relationships. The childrearing goal here is to transform a child into someone who recognises and acknowledges that her own will is just one among many. Unlike what parenting books might tell us, there is simply no single recipe for good parenting. This is because each act of parenting is always and inescapably an ethnotheory of parenting: a set of practices that aim to shape a good person in a given society. Of course, one doesn’t need to travel all the way to the Amazon to realise that. Step out of the privileged space of what Barbara Ehrenreich and John Ehrenreich in 1979 called ‘the professional-managerial class’, and the kind of debates surrounding childcare are likely to be very different. However, because this is a parenting ideology produced by a cultural and political elite that has tremendous power in the world, it has quickly become normalised. What is most worrying is to see this ideology being increasingly exported, under the guise of evidence-based early childhood interventions, to the Global South. Promoted by organisations such as the WHO, the World Bank and UNICEF, such interventions aim to teach low-income families in the Global South to become responsive carers and optimise their children’s cognitive and emotional development through the adoption of ‘appropriate’ behaviour. These programmes assume optimal childcare to be a universal, objective, neutral fact that can be easily translated into a plethora of handy practices. This model of childrearing (and its more extreme neuroscientific version, where every act is seen as enhancing or hurting the brain) is anything but apolitical and acultural. Instead, it finds its origin in a specific culture and socioeconomic context where everything (including children’s abilities) can be measured and optimised in terms of future life success. To assume one cultural model of childcare is universally applicable to children everywhere, as WHO and others do, is dangerous. Not only do such programmes encourage culturally specific childrearing with little scientific basis, but they also depict any type of care that deviates from the norm as in need of correction. Like early missionaries who travelled around the world teaching the natives how to be ‘good’, such interventions assume that parents in the Global South need to be taught how to raise their children properly. Following current orthodoxy, Runa childrearing – with its casual breastfeeding, abrupt weaning, no extensive parent-child play, no lengthy adult-child talk – would be described as ‘lacking’ in so many respects. And yet, my Runa friends and family thought my own childcare practices were conspicuously inadequate to raise a child in the context of their community life. Their observations, their puzzlement and their quiet defiance of my own childcare practices remind us that, whenever we talk about childrearing, we are not talking about achieving some objective child development based on irrefutable scientific evidence, but rather about a moral project: a moral project about what kind of people we would like our children to become, what society we would like to live in, and what kind of economy we would like to serve. 
As my Runa friends and family have subtly but relentlessly demonstrated, there is more than one way to flourish as humans in this world.
Francesca Mezzenzana
https://aeon.co//essays/why-runa-indigenous-people-find-natural-parenting-so-strange
https://images.aeonmedia…y=75&format=auto
Oceans and water
With care for the social and ecological consequences, foods from the ocean should provide sustainable protein to billions
Having lived nowhere other than the western coast of India for the first 21 years of my life, I grew up with seafood as an indispensable part of my diet. When the family business was prospering, we’d feast on plump pomfrets and juicy tiger prawns. When it wasn’t, there’d be smaller, bonier fish like anchovies and sardines. Or the less popular bycatch at least. If nothing else, my mum would bring out wares she’d stashed away for the greyer days: a jar of spicy pickled shrimp or salted, sun-dried mackerel perhaps. But fruits of the Arabian Sea always featured prominently in most meals. In fact, the act of procuring seafood was almost as delightful as consuming it. My Saturday mornings were often spent at the fish market with my mum, watching her negotiate with Hira – our family’s favourite fishmonger. ‘I saved these for you, I know your kids enjoy them,’ I remember Hira saying, trying to sell us her most formidable pair of mud crabs. She wasn’t wrong; I do love a good mud crab curry. These days, my Saturday mornings are spent shopping for the week’s groceries at the supermarket in my neighbourhood in Rotterdam in the Netherlands. Every week, I spend several minutes eyeing squeaky-clean salmon steaks and delicate basa fillets packed in the most sterile-looking plastic boxes I’ve ever seen. The stickers on the box tell me so much about the fish – freshness, origin, environmental impact, recyclability of the packaging. Yet I long to run my fingers through its non-existent scales and inspect its long-discarded gills for tactile cues about quality. Without the sights, sounds and serendipitous communal life of a coastal fish market, buying seafood has lost its allure for me. I guiltily move to the meat section to check for other protein options for the week. Like me, many have ‘upgraded’ to consuming more meat than previous generations did. By factory farming livestock, we are now able to produce meat at unbelievably low costs. We also have more money to spend than we ever did. Data show a strong positive correlation between a country’s GDP per capita and the amount of meat the average citizen consumes in a year. Collectively, we eat three times the meat we did just 50 years ago. In rapidly industrialising countries like China and Brazil, meat consumption has doubled in a span of two to three decades. Meanwhile, developed countries continue to consume meat in even more copious amounts than they did before. For many, eating more meat means improved food security and nutritional status. But it also pushes against our planet’s boundaries like few other anthropogenic activities do. With cow flatulence enveloping Earth in temperature-raising gases and the Amazon losing its cover to cattle feed, the current ways of producing and consuming meat have been pronounced detrimental to the planet’s health. In fact, it isn’t particularly good for human health, either. Consuming meat excessively, especially the red and processed kinds, exposes us to higher risks for various lifestyle-related diseases. We are currently at a point in time where the evidence of the ills of factory-farmed meat is simply too jarring to ignore. Results from scientific studies are clear – we cannot keep eating this way without inducing a climate apocalypse. There’s a strong push to find new ways to feed billions of protein-hungry mouths without destroying the planet. 
With the area of arable land available to us remaining limited, scientists have urged policymakers and decision-takers to turn their attention towards ‘blue foods’ – animals, plants and algae harvested from natural and artificial aquatic environments. The logic of blue foods, particularly aquatic animals, being less burdensome to the environment is fairly simple. Being cold-blooded, they do not use energy gained from their feed to keep their bodies warm. This means more meat per unit of feed compared with warm-blooded terrestrial livestock. Although incomparable with the rise of meat consumption, global interest in blue foods has been inching upward as well. In 2018, the average person consumed 15.1 kg of blue foods per year, compared with 11.5 kg in 1998. The distinction between ‘seafood’ and ‘blue food’ is critical here because close to half of the aquatic plants and animals we consume today do not come from the sea at all. They are farmed under controlled, semi-natural conditions in tanks, ponds, raceways and enclosed sections of the ocean. Even consumers from traditionally seafaring parts of the world have begun to prefer farmed aquatic foods over those from a nearby sea. This would explain the popularity of salmon and basa – neither harvested from the North Sea – in my neighbourhood’s supermarket. After all, there are few things the Dutch like better than economical supermarket offerings and the convenience of semi-prepared foods. But aquaculture is unlikely to ever completely replace wild-capture fisheries in the foreseeable future. Next to providing protein and micronutrient-rich sustenance, fisheries are a source of livelihood for millions across the globe. The United Nations estimates that around 120 million people are directly and indirectly engaged in wild-capture fisheries, compared with the 15 million in aquaculture. This is unsurprising considering that the act of procuring food from the sea is as old as humanity itself. However, what was once a cornucopia of diverse and delicious foods is increasingly reluctant to share its bounty with us. Fishes that were once captured with ease are becoming elusive, endangered and, in some cases, even extinct. This scarcity pushes fishers to go looking farther out to sea and come into conflict with others doing the same. The conservation zoologist Tim McClanahan and colleagues mention the UK-Iceland Cod Wars of the 1950s and ’70s, the Yellow Croaker dispute between China and Japan in the 1920s and ’30s, and the Canada-Spain Turbot War of 1995 as examples of such conflicts. They explain that these clashes over marine resources have the potential to ‘lead to wider instability, particularly where food insecurity is high, people are vulnerable, and governance is weak or autocratic.’ Up until the 1970s, wild-capture fisheries provided the world with almost the entirety of its blue food supply. It was in the 1980s, when wild-fish harvesting plateaued, that the world started thinking of other ways to procure aquatic foods. Overfishing led to the severe depletion of fish stocks and, consequently, serious disruptions in marine ecosystems. Large-scale commercial aquaculture was born of the necessity to continue providing dietary staples to seafood-dependent communities around the world, without endangering marine ecosystems. By making use of the rapidly advancing technology in this sector, we were able to master the art of farming aquatic life efficiently within a relatively short span of time. 
In fact, we got so good at it that, by 2014, produce from aquaculture had bested wild-caught seafood as a source of food. Like industrialised livestock rearing, aquaculture has become popular for the several commercial benefits it offers. Selecting only the most robust species, eliminating risks from predators, and engineering the perfect environmental conditions allows aquaculturists to produce high-quality blue foods at a lower cost than deep-sea fishing. More control over the production process and clearer rights over the harvested produce also ensure higher profits and fewer geopolitical disputes. The success of aquaculture has not only flooded traditionally seafood-consuming markets with a year-round supply of affordable aquatic foods but also created new markets in regions where these foods weren’t always popular. Next to finfish such as carp, catfish, salmon, tilapia, trout and tuna, other aquatic flora and fauna are farmed as well. Specialised systems cultivating molluscs such as oysters, clams, mussels and abalone, and various species of shrimp, are proliferating. There is also a growing interest in farming crabs, lobsters and other invertebrate animals, like sea urchins and sea cucumbers. Although a minority, some aquatic farms focus on aquatic plants and algae such as water chestnut and seaweed. One would think that, with the grand success of blue-food production from aquaculture, wild-caught seafood would eventually become a thing of the past, like hunting wild animals for sustenance has in most parts of the world. This, however, is far from the truth. Like their counterparts from the natural environment, farmed aquatic creatures thrive only when their diet is rich in all essential nutrients. Often omnivorous, these animals subsist on plants and smaller animals from their natural ecosystems. Prospering aquaculture farms around the world are supported by wild-capture fisheries that harvest forage-fish species, such as anchovies, herring, mackerel and sardines, and turn them into fishmeal and fish oil. Accounting for a third of all wild-capture landings, a sizeable portion of these fish are caught in the waters of developing countries, where they are an important source of sustenance for local populations. Final aquaculture products, especially the premium varieties, are often exported to wealthier countries. This, in sum, results in the removal of proteins and micronutrients from many food-insecure regions. Thankfully, most aquatic creatures aren’t picky eaters. This means that, with some ingenuity, it is possible to reduce their dependence on fishmeal and oil. Like other omnivores such as pigs and chickens, many fish species can be raised on leftovers from the human food chain. Nutrient-rich marine microalgae and insects are great options, too. Sustainably grown terrestrial plants like soybean, engineered to reduce antinutritional components, can also successfully replace at least a part of fishmeal and fish oil in aquafeeds. Innovation in aquafeed could potentially decouple aquaculture from wild fisheries and provide pathways to expand blue-food farming in a sustainable way. So, on the feed front at least, the sustainability problems of aquaculture are less complex and more solvable than those of livestock. However, there’s another aspect of the industry that is much harder to fix: its chronic dependence on exploitative labour practices. 
With close to 92 per cent of the total production coming from Asia, the prosperity aquaculture has brought to the continent is often used as a metric to measure its economic potential. But if one were to investigate how Asian aquaculturists are able to sell at low prices while making substantial profits, poor working conditions would be a part of the answer. Of course, technology and knowledge make the system effective too, but it is on the backs of underpaid and overworked primary production workers that the industry has scaled the heights of commercial success. Pioneers of the blue revolution have been so busy overcoming technical and biological challenges that the social impact of producing food this way has remained largely unaddressed. In Asia and beyond, precariously employed persons belonging to marginalised communities make up a large share of aquaculture workers. This includes women, children, Indigenous people and migrant workers. In an industry that has borrowed from the worst practices of wild fisheries, aquaculture workers are routinely coerced into debt bondage, discriminated against, denied rights of association, and employed in facilities that lack adequate occupational safety and health standards. Reporting, and therefore statistics, on injuries and diseases among workers is a rarity in the sector but, from whatever little is available through journalistic and investigative records, we know that musculoskeletal disorders, skin infections and respiratory diseases are rampant. As in many other areas of the food system, the only way to create better conditions for aquaculture workers is through stricter regulation, both public and private. Governments of countries like China, Indonesia, India and Vietnam, where the blue revolution is thriving, need to do more to protect workers’ rights. Buyers with big market muscle must demand social sustainability audits and certification from producers. The industry at present is well enough rooted for its custodians to move beyond biotechnical hurdles and invest in setting up ethically sound supply chains. In addition to the industry’s dependence on wild-capture fisheries and its questionable labour practices, it is important to acknowledge and address the ecological issues inherent in aquaculture production systems. Producing high-quality aquatic foods at low costs requires the use of genetic-engineering techniques that create aquatic species with special physiological characteristics. Often more resilient than their wild counterparts, aquaculture species that manage to escape or are released from their enclosures end up taking over the natural habitats of wild fish. This disrupts entire ecosystems and threatens the existence of wild populations that are already vulnerable. Escapees also spread diseases that wild aquatic populations have no immunity against. In aquaculture systems, these diseases are prevented through the use of antibiotic medication, residues of which may end up on our plates. Aquatic production systems are no panacea for all our food security and sustainability concerns. They’re fraught with ethical and practical problems and need considerable work to be sustainable in the long run. Yet they present much promise with regard to improving food security in the face of climate change. 
A study in 2020 exploring the future of food from the sea concludes that, because aquatic foods are nutritionally diverse and avoid many of the environmental burdens of land-based food production, they are uniquely positioned to contribute to future global food and nutrition security. In particular, it emphasises the role in this endeavour of mariculture – farming aquatic foods in a cordoned-off section of the sea. It also recommends that we produce more low-impact bivalves, such as mussels, clams and oysters, to sustainably meet the growing protein demand. But the big question is, are we, as consumers, ready for our plates to be bluer in the near future? Many studies and policies on blue foods are so focused on production capacity that they forget to account for the biggest incentive for expansion – consumer demand. Unlike poultry, beef and pork, blue foods have thus far been limited by geographic constraints. However culturally and nutritionally critical they are for communities living in close proximity to water bodies, the idea of eating creatures that grow underwater may feel outlandish to natives of other terrains. While there are studies that confirm this, I personally found this out not too long ago. At a restaurant in Marseille in France, my friend and travel companion – who doesn’t eat seafood and comes from a land-locked country – asked one of the most baffling questions I’ve ever been asked. ‘Doesn’t this feel like you’re eating little aliens?’ they enquired, while watching me demolish a large bowl of luscious bouillabaisse dotted with clam shells and chunks of beautiful white fish. ‘Aliens?’ I asked, perplexed. ‘Seafood is so different from all other meats, you see,’ they explained. ‘With their patterned shells, long, wriggly tentacles and shiny scales, I think they look a lot like little alien creatures.’ The writer H P Lovecraft, creator of Cthulhu – a monster with an octopus head, scaly body, and claws at the end of its limbs – would probably agree. So, next to technical, biological, economic and social concerns, those in charge of expanding blue-food production are tasked with an additional mission – convincing the unacquainted that blue foods are not little alien creatures from a distant aqueous planet. Will they succeed? Perhaps not with the entirety of our population. But there’s a good chance that those among us with an even slightly adventurous palate and an appetite for sustainable consumption could be brought into the fold. After all, so many of the popular blue foods we eat today were once considered unappealing. The lobster, now a luxury item, was once thought of as the poor man’s food by European settlers in North America. Milkfish, once rejected because of its numerous intermuscular bones, is among the most popular fish in Southeast Asia today. Crayfish – once disdained by many as a swamp-dwelling, paddy field-infesting crustacean – is now a favourite in many countries. And, as various Asian cuisines gain popularity, seaweed products have been popping up in kitchens around the world. Like any other strategy seeking to assuage the effects of climate change and ensure the future habitability of our planet, increasing consumers’ acceptance of blue foods is a long and arduous process that demands concerted efforts from several parties. The cornerstone of this undertaking must be the availability of affordable blue foods. 
This is a bit of a chicken-or-egg dilemma because, to achieve economies of scale, demand is a critical factor. But unless consumers are able to purchase these foods, especially the novel varieties, demand cannot increase. Other than being affordable, blue foods also need to be amiable. For consumers to be willing to buy them, they first need to like them. And by ‘like’ I don’t mean only the taste, texture, aroma and such. Those are important too but, in order to make the purchase at all, consumers must feel a sense of connectedness with blue foods. Given that opening wet markets and finding our favourite fishmongers is (unfortunately) impractical in many parts of the world where the fisheries industry is not traditional, stakeholders in the food system must find other ways to help consumers get better acquainted with blue foods. This could be done by encouraging restaurants to incorporate blue foods into local gastronomy, educating children and adults about aquaculture and its role in sustainable food production, and publishing accessible recipes. Lastly, putting an assortment of blue foods on the market is essential. To avoid replicating the damage inflicted by monocropping on our terrestrial ecosystems, aquaculture must strive to maintain the diversity of aquatic systems. This means that we cannot all be eating salmon fillets and tuna steaks. For blue foods to be able to truly make a difference, we must be willing to expand our gastronomic horizons considerably and give new foods a chance. However, in the quest to ensure that blue foods are affordable, amiable and assorted, they must not be taken away from the people who truly depend on them. In 1997, the political scientist George Kent wrote: ‘Fish used to be known as poor people’s food. However, when fish supplies deteriorate, fish tends to disappear first from the plates of the poor.’ He explains that, ‘for people with abundant alternatives’, having less or lower-quality fish ‘may be little more than an annoyance’. But for those who live on the margins and heavily depend on fish, insecurity surrounding aquatic foods can be incredibly detrimental to livelihoods and wellbeing. More than 25 years later, his observations remain true. While creating new markets for blue foods is important to improve macro-level food security, it must not be done at the expense of communities who have consumed these foods through the ages, be it Arctic-dwelling Indigenous peoples, artisanal fishers from coasts all around the world, or my family back in India, relying on seafood through thick and thin.
Madhura Rao
https://aeon.co//essays/will-the-sustainable-food-of-the-future-come-from-the-blue
https://images.aeonmedia…y=75&format=auto