Death
Death rites have become private and tepid affairs. The Burning Man Temple brings a fiery edge to modern mourning
Just after sunrise, I cycled towards the Temple of Juno, a pagoda-like structure obscured in the distance by the desert haze of dust. It was a long ride from my camp at the corner of Lilac and 3:30 streets in Black Rock City. I rode by a steel dragon the size of a school bus which blasted electronica as dancers swayed to the beat and looked towards the sun rising over the Nevada desert. Part-Viking vessel, part-steampunk museum piece, the dragon was a dance club roaming the streets of the Burning Man festival. Its motorised companions — among them, a giant, roving octopus; vans made into insects; a shiny aluminium duck; a space shuttle — were ‘mutant vehicles’. These cars, vans, bicycles, and buses transformed into moving sculptures are the only ones allowed by the event organisers to drive on the ‘playa’, a prehistoric lakebed full of nothing but alkaline dust and art pieces.

In the early hours before its coming demise, the temple was quiet, except for some Hare Krishnas chanting in one corner of the courtyard and a guitarist playing through a small amplifier in the other. Inside, a woman dressed in red lace meditated in the lotus position in front of the central altar while the man next to her, coated entirely in silver paint, took photos of the altar and its offerings. Some ‘Burners’ (as the festivalgoers are called) wore furs against the morning chill. Others were sunburned and naked, oblivious to the dust and the cold. They were busy writing or reading messages, praying softly to themselves, weeping. The air was thick with grief and loss.

Every surface of the walls, arches, stairs, and altars was covered with messages, letters, photos, and offerings. The face of a beloved German Shepherd, its collar hanging from a picture frame; the smile of a grandmother in a yellowed photo; the military uniform of a father — they all sat side by side on an altar, strangers until now.
Some messages spoke of the living: ‘May the cancer in the cells of all who are living with that disease melt away by the graceful fire of this temple and the true and clear prayers of us all that we offer this evening.’ But most were dedicated to the beloved dead: ‘This burn was for my brother Bill who died in June of this year… he was in the temple now I can put his ashes away.’ And ‘For my beloved — in the light you still guide me … the universe can feel just how much I miss you.’ And: ‘For my little sis… died age five… never had a chance.’ No boundaries or borders distinguished one person’s grief from another’s. Mourning was contagious, afflicting us all as we gazed on thousands of photos of dead sons, mothers, lovers and pets, a three-dimensional collage of collective loss.

An atmosphere thick with grief: mementos of the dead adorn the Temple of Juno

The Temple is built in the Black Rock Desert of Nevada at the end of every summer, as part of the Burning Man festival. The festival itself attracts artists, computer programmers, dance music DJs, Buddhists, rabbis, fire dancers, carpenters, massage therapists and others of all ages and from all kinds of backgrounds. Started in 1986, when the artist Larry Harvey burned an effigy on a San Francisco beach and drew a small crowd, the festival is now bursting at the seams, with more than 50,000 attendees from around the world. The organisers want to increase attendance to 75,000, if they can get permission from the Bureau of Land Management, a government agency that manages the desert. Burning Man has also spawned regional ‘Burns’ from Australia to New Jersey, as well as ‘Decompressions’ and ‘Afterburn’ parties. Anyone who can pony up several hundred dollars for a ticket and get together all the food and water needed for a week in the desert can attend.
The semi-circular encampment of Black Rock City runs on a gift economy: nothing is for sale within festival boundaries, except ice and coffee drinks sold by the Burning Man organisation. Wandering its temporary streets, I was offered pancakes and a cup of tea, flashy costumes and a manicure, as Burners tried to outdo each other in their generosity. No one goes wanting in this gifting city full of food and fashion. Each year, the place comes alive as festivalgoers arrive with camping gear, pavilions, art installations, costumes, masks, body paint, and a range of temporary desert homes. Musical and theatrical performances, tea parties, dance marathons, meditation tents, yoga sessions, and many other events are scheduled all day and night. But most Burners spend their time wandering. They visit old and new friends, take part in croquet and poker games at neighbours’ camps, frequent hundreds of themed bars, and look at lots of art. Every year, Black Rock City is created anew and then totally dismantled when the week is over. At the end of the festival, Burners shed their feathers and glitter and go back to work in what they like to call ‘the default world’. But volunteer crews remain for weeks until they can leave the desert just as it was when they arrived.

Events such as Burning Man are flourishing in the US, especially among young adults. According to Burning Man’s census, close to half of attendees are under 30 years old. Polls show that the number of Americans of all ages exploring spirituality outside institutional religion is on the rise too. Jeet-Kei Leung, musician and organiser of the Mystic Garden festival, hails such events as the spiritual wave of the future. In a TED talk in 2010, he praised ‘transformational festivals’ for rejoining ‘sacred ritual with secular festival’.
Festivals such as Burning Man offer participants opportunities for pilgrimage, transformation and spiritual experience without requiring any commitment to a specific creed. For many Burners, this is their church or religious holiday: a sacred time and place set apart from ‘the default world’. Like other festivals, it offers an escape from the society they leave outside its gates. As the naked greeters say to those arriving at the festival: ‘Welcome home’. Burning Man is a sacred space that embraces opposites, a place of inversion and dissent, of intense seriousness and lighthearted play. After travelling many hours along a two-lane highway across a barren desert landscape, Burners feel like pilgrims shedding their old ways, open to new possibilities. But as a sacred destination, Burning Man is rife with contradictions, some of them disturbing. Heavy drug use and drinking coexist with ascetic New Age regimes. A camp down one street is hosted by an evangelical preacher urging passers-by to get saved while, down another, Bianca’s Smut Shack offers pornography. A Buddhist meditation retreat is next door to a rollerskating rink. Trampolines and children’s swimming pools beckon Burners of all ages throughout the city. And the slogan ‘Leave No Trace’ is belied by the trash bags littering the highway leading out from the Burning Man site. This place, which is supposed to be liberated from all ordinariness, nonetheless requires the presence of hundreds of police, fire fighters, emergency medics and drug enforcement officers. The Man itself and the Temple of grief together embody the dual nature of the festival experience. The Man, an abstract sculpture that looms over the festival space, is a universally recognised symbol of the event. This genderless figure is traced in dust on cars when Burners travel home, painted and tattooed on their bodies, stencilled on banners to decorate camps, and crafted into necklaces to give away to other Burners. 
It marks the centre of Black Rock City from which everything else emanates. Every year the figure looks the same, but different artists design its base. This year’s arched base was colourfully lit by night, but by day its walls were simple and functional. On them, Burners wrote lighthearted graffiti: ‘Make tea, not war’, ‘You are beautiful, welcome home’, ‘Peace, love, light’. There were no offerings or altars. Burners wandered through, watched each other, chatted with friends, shared cans of beer, and gazed out at the city spread before them. Every year, in an extravaganza of fireworks, rave music, art cars, costumed stilt-walkers, fire-spinning, and wild parties, the Man is burned down on the Saturday of the festival, and revellers cheer and dance around its ashes all night long.

By contrast, down a lantern-lined avenue away from the Man and the city stands the temple. Within the ritualistic space of the whole festival, Burners create the same kind of opposition that exists between Burning Man and the outside society. If the Man is exuberance and excess, the temple offers an experience that is deeper and more intimate. Both are sites of collective rituals, but one is for festivity and play, while the other speaks to sorrow and loss. Each allows for the expression of feelings that are repressed in the outside world. Yet it is the Temple that has become the heart and soul of Burning Man, stealing that honour from the festival’s namesake. The temple too is burned to the ground, but in an altogether different atmosphere.

A dawn gathering at the Temple of Juno on the last day of its ephemeral existence

What makes the Temple feel sacred, a space to approach with reverence, while the Man is not treated in quite the same way, despite being new every year? Burners might know what they will find at the temple, but what makes it so very powerful are the images of the dead and the intentions of the living invested in the messages and offerings at the altar.
The temple speaks to a society looking for new ways to mourn its dead.

I have been coming to Burning Man since 1997. This year, I arrived before the event officially opened and watched the temple being finished by the artist David Best and his ‘Temple Crew’. Each year, a temple is constructed over several months and assembled in the desert a few weeks before the event. Earlier temples included the ‘Temple of Tears’ and the ‘Temple of Joy’. In a spectacular, cathartic bonfire, the temple is burned down on the last night of the festival, with tens of thousands of Burners gathered solemnly around it. This year’s Temple of Juno might have been transported from a mountain monastery in Thailand, with its delicately curved and pointed corners, intricately carved walls and graceful arches. Many Burners had shown up early at the temple’s gates to find them closed. They looked on from a distance, imagining where they would place altars and collages for the dead, ashes of loved ones, and other mementos brought from home. When the temple opened two days later, they sent out a collective cheer, then hastened inside to make it their own. For the rest of that week, until the ‘Temple Burn’, they came to leave offerings, meditate, play bagpipes or guitars, write messages to those they had lost, and read messages strangers had left for their parents, grandparents, children, lovers, dogs, and cats. They wept and prayed and embraced one another.

There are no spiritual authorities here. No one orchestrates movement through the Temple or decides what happens there. It is a space for the spontaneous rituals of grief. ‘Did you ever have someone close to you die and you can’t get the pain out of your heart?’ asked one Burner as the Temple was destroyed. ‘That’s what the Temple is all about’.
The Temple is radically inclusive, making space for those who are often excluded, such as suicides and pets. David Best, the San Francisco Bay Area artist who designed the first large-scale memorial Temple in 2001, as well as the Temple of Juno, says that Burning Man is about ‘manifesting things not otherwise available’. That first one, the Temple of Tears, was dedicated to suicides, who had an elaborate memorial in the temple’s central altar, so that ‘they wouldn’t have to go to the shadows of the Temple’.

No other festival provides anything like the Temple and its collective mourning rite, but new, experimental ways of mourning the dead are spreading everywhere across the US and other Western countries. Burners might have seen the spontaneous shrines to victims of the September 11, 2001 terrorist attacks. They might have walked along the Vietnam War memorial and noticed the intimacy that can be found in mourning with strangers. Creativity and self-expression help to fill the place once occupied by socially mandated religious rites. New kinds of grief require new kinds of mourning: the AIDS quilt; new rites for pregnancy loss; organ donor memorials; and roadside shrines. Old rites can be co-opted to meet new needs, as in the popularity across America of the Latino celebration of the Day of the Dead. Like all of these communal rituals, the Temple transforms private grief into a public rite. Modern death rites have become privatised, shifting the emphasis from the community to the private world of the individual and the family. The Temple at Burning Man reverses this process, making grief and mourning public and shared between intimate strangers. Studies by Philippe Ariès, Peter Homans and other scholars suggest that death and loss become more bearable when the burden of mourning is taken on by a community rather than left to the individual.
Burners’ testimonies suggest that unmourned grief is overwhelming for those who cannot turn to traditional ways of collectively and publicly mourning the dead. Collective rites of passage around death and grief are designed to allow for the dead to move on and for the living to let the dead go. Most Burners identify themselves as ‘spiritual’ but not religious, and for them the dearth of rituals for grief and mourning is especially difficult. Bill, one of the carpenters who worked on the Temple of Juno, suggested that ‘a lot of these people at Burning Man don’t have a structured religion to help them’. The Temple, he pointed out, provides a place in which they can ‘express their grief and joy’.

Although the Temple at Burning Man is a site for mourning, in a characteristic twist, it is also a popular location for weddings. I visited the Temple of Juno daily during the week I was in Black Rock City, and on one of those visits I happened upon a small gathering. Two couples — two men, and a man and woman — were getting married by Reverend Bill Talen of the Brooklyn, New York-based Church of Stop Shopping. Dressed in a white suit and clerical collar, Rev Billy performed the ceremony while the Stop Shopping Gospel Choir sang irreverent hymns about anti-consumerism. The wedding reminded me of the kinds of serious play so common at Burning Man, even at the Temple. On BMIR, Burning Man’s radio station, David Best told a story about what he hoped the Temple could do for festivalgoers. He had been talking to a woman who had recently lost her son to a drug overdose. To get her away from the noise and chaos of Black Rock City, he had found a quiet place for the two of them to sit and talk. Just as they had got settled, a giant chicken came driving by. They had stopped talking and burst into laughter. For Best, this showed that an ‘equal sharing of pain and joy’ is what the Temple is for.
But this fluidity of meaning and the premium placed on self-expression can also lead to conflict. At night, the contrasts can be keenly felt, always threatening to tip the balance of civility and warmth that Burners expect from the festival into something edgy and darker. One night, while I was watching mourners at the Temple, a group of men approached, yelling and kidding each other, obviously drunk. They were shunned and shushed by other Burners. ‘But this is how we mourn!’ one of them shouted back as they moved away into the darkness of the desert. Then an art car came up to the Temple gates with speakers blaring and mourners again complained. Opposing interests coexist everywhere at Burning Man, but at the Temple this co-existence can become especially uncomfortable.

The Temple of Juno, with its messages and photos of the dead, burns while the crowd watches

When tens of thousands of Burners gathered solemnly for the Temple Burn, someone flickered a laser pointer at its walls and a couple of other lasers joined in. They were immediately shouted down. An invisible boundary had been crossed: one person’s self-expression was another’s bad behaviour, and the reverence that this final ritual required had been undermined. The lasers left, then one of them returned. The crowd started chanting in unison: ‘Turn your laser off.’ Soon, the fire was lit and the interruptions forgotten as flames roared and the air grew unbearably hot. Dust devils spun out from the fire as the spirits of the Temple carried messages to the dead and released mourners’ grief into the dark landscape all around. As the glowing embers of the Temple faded into dust and ashes, we returned to our camps, packed up our belongings and headed home. The desert had served its purpose and our mourning was done.
Sarah Pike
https://aeon.co//essays/the-burning-mans-temple-goes-up-in-a-blaze-of-grief
Biology
Dogs rescue their friends and elephants care for injured kin – humans have no monopoly on moral behaviour
When I became a father for the first time, at the ripe old age of 44, various historical contingencies saw to it that my nascent son would be sharing his home with two senescent canines. There was Nina, an endearing though occasionally ferocious German shepherd/Malamute cross. And there was Tess, a wolf-dog mix who, though gentle, had some rather highly developed predatory instincts. So, I was a little concerned about how the co-sharing arrangements were going to work. As things turned out, I needn’t have worried. During the year or so that their old lives overlapped with that of my son, I was alternately touched, shocked, amazed, and dumbfounded by the kindness and patience they exhibited towards him. They would follow him from room to room, everywhere he went in the house, and lie down next to him while he slept. Crawled on, dribbled on, kicked, elbowed and kneed: these occurrences were all treated with a resigned fatalism. The fingers in the eye they received on a daily basis would be shrugged off with an almost Zen-like calm. In many respects, they were better parents than me. If my son so much as squeaked during the night, I would instantly feel two cold noses pressed in my face: get up, you negligent father — your son needs you. Kindness and patience seem to have a clear moral dimension. They are forms of what we might call ‘concern’ — emotional states that have as their focus the wellbeing of another — and concern for the welfare of others lies at the heart of morality. If Nina and Tess were concerned for the welfare of my son then, perhaps, they were acting morally: their behaviour had, at least in part, a moral motivation. And so, in those foggy, sleepless nights of early fatherhood, a puzzle was born inside of me, one that has been gnawing away at me ever since. 
If there is one thing on which most philosophers and scientists have always been in agreement it is the subject of human moral exceptionalism: humans, and humans alone, are capable of acting morally. Yet, this didn’t seem to tally with the way I came to think of Nina and Tess.

The first question is whether I was correct to describe the behaviour of Nina and Tess in this way, as moral behaviour. ‘Anthropomorphism’ is the misguided attribution of human-like qualities to animals. Perhaps describing Nina and Tess’s behaviour in moral terms was simply an anthropomorphic delusion. Of course, if I’m guilty of anthropomorphism, then so too are myriad other animal owners. Such an owner might describe their dog as ‘friendly’, ‘playful’, ‘gentle’, ‘trustworthy’, or ‘loyal’ — a ‘good’ dog. On the other hand, the ‘bad’ dog — the one they try to avoid at the park — is bad because he is ‘mean’, ‘aggressive’, ‘vicious’, ‘unpredictable’, a ‘bully’, and so on. Nor are these seemingly moral descriptions entirely useless. On the contrary, it is a valuable skill to be able to assess these descriptions when an unfamiliar dog is bearing down on you in the street. If I’m guilty of anthropomorphism, so too, it seems, are many others. Many scientists (and more than a few philosophers) would have no hesitation in accusing perhaps several billion people of such delusional anthropomorphism.

A growing number of animal scientists, however, are going over to the dark side, and at least flirting with the idea that animals can act morally. In his book Primates and Philosophers (2006), the Dutch primatologist Frans de Waal has argued that animals are at least capable of proto-moral behaviour: they possess the rudiments of morality even if they are not moral beings in precisely the way that we are.
This was, in fact, Charles Darwin’s view, as developed in The Descent of Man. In a similar vein, the American biologist Marc Bekoff has been arguing for years that animals can act morally, and his book Wild Justice (2009) provides a useful summary of the evidence for this claim. Perhaps scientists such as Darwin, de Waal and Bekoff are also guilty of anthropomorphism? The evidence, however, would suggest otherwise.

Eleanor, the matriarch of her family, is dying. She is unable to stand, so Grace attempts to help her, lifting and pushing her back to her feet. She tries to get Eleanor to walk, nudging her along gently. But Eleanor stumbles, and falls again. Grace appears very distressed, and shrieks loudly. She persists in trying to get Eleanor back to her feet, to no avail. Grace stays by the fallen figure of Eleanor for another hour, while night falls.

If the figures that played out this grim tableau were human, we might have little hesitation in explaining what was going on in moral terms. Grace, we might say, was motivated by her sympathy for Eleanor’s plight. However, neither Grace nor Eleanor is human. Eleanor is the matriarch of a family of elephants, one that the British zoologist Iain Douglas-Hamilton and his colleagues have come to call the ‘First Ladies’ family. Grace is a younger, unrelated member of another family, the ‘Virtues Family’. Grace is not unusual among elephants. Take another series of events: a young female elephant suffered from a withered leg, and could put little weight upon it. A young male from another herd charged the crippled female. A large female elephant chased him away and then, revealingly, returned to the young female and gently touched her withered leg with her trunk. Joyce Poole, the ethologist and elephant conservationist who described this event, concluded that the adult female was showing empathy.
Binti Jua, a gorilla residing at Brookfield Zoo in Illinois, had her 15 minutes of fame in 1996 when she came to the aid of a three-year-old boy who had climbed on to the wall of the gorilla enclosure and fallen five metres onto the concrete floor below. Binti Jua lifted the unconscious boy, gently cradled him in her arms, and growled warnings at other gorillas that tried to get close. Then, while her own infant clung to her back, she carried the boy to the zoo staff waiting at an access gate.

Apes in particular have been known to care for other species.

De Waal relates a similar story of Kuni, a female bonobo at Twycross Zoo in England. One day, Kuni encountered a starling that had been stunned during some misadventure. Fearing that she might injure the bird, Kuni’s keeper urged her to let it go. Kuni, however, picked up the starling with one hand, and climbed to the top of the highest tree in her enclosure, wrapping her legs around the trunk so that she had both hands free to hold the bird. She then carefully unfolded its wings and spread them wide open. She threw the bird as hard as she could towards the barrier of the enclosure. Unfortunately, it didn’t wake up, and landed on the bank of the enclosure’s moat. While her rescue attempt didn’t succeed, Kuni certainly seemed to act with good intentions, and tried to make amends by guarding the vulnerable, unconscious bird from a curious juvenile for quite some time.

These examples merely scratch the surface of the evidence for apparently moral behaviour in animals. Much of it has been around for a long time but it has languished unrecognised. As long ago as 1959, the experimental psychologist Russell Church, now professor at Brown University, Rhode Island, demonstrated that rats wouldn’t push a lever that delivered food if doing so caused other rats to receive an electric shock.
Likewise, in 1964, Stanley Wechkin and colleagues at Northwestern University in Chicago demonstrated that hungry rhesus monkeys refused to pull a chain that delivered them food if doing so gave a painful shock to another monkey. One monkey persisted in this refusal for 12 days. This, however, is my favourite (delusional dog owner that I am, perhaps): a dog had been hit by a car and lay unconscious on a busy motorway in Chile. The dog’s canine companion, at enormous risk to its own life, weaved in and out of traffic, and eventually managed to drag the unconscious dog to the side of the road. I cringed my way through the video on YouTube, a site which is rapidly becoming the biggest single repository of evidence for apparently moral behaviour in animals.

While the evidence of apparently moral behaviour in animals is no longer in dispute — and cannot be restricted to mere anthropomorphic outpourings — how to interpret this evidence still is. Most scientists and philosophers are still sceptical of the idea that there is ‘real’ or ‘genuine’ morality at work here. This scepticism comes in two forms, one associated with scientists, the other with philosophers. Underlying scientific opposition is what has become known as Lloyd Morgan’s Canon, after the 19th-century British ethologist Conwy Lloyd Morgan. The basic idea is reasonable: when we explain animal behaviour, we should not postulate any more than we absolutely have to. In other words, we should not explain the behaviour of animals in complex, moral terms when another — non-moral — explanation is available. But are there other, non-moral, explanations for the sorts of cases described above? In some cases, the alternative, non-moral explanations can be almost endearingly desperate.
In the case of Binti Jua, who rescued the boy, some argued that since she had been hand-raised by zoo staff, who had taught her mothering skills by using a stuffed toy as a pretend baby, she was simply doing what she had been trained to do, believing that the unconscious boy was another stuffed toy. Yet this explanation, resting as it does on the assumption that a gorilla is incapable of distinguishing a boy from a stuffed toy (something a dog can do with a 100 per cent success rate), is astonishingly, and one suspects wilfully, naïve. In other cases, alternative, non-moral explanations appear more plausible. In the case of Russell Church’s rat experiment, for example, a rat’s failure to push the food bar might be explained not in terms of moral concern for its fellow rat but as an aversion to the noise made by a rat when it receives an electric shock. Indeed, this ‘aversive stimulus’ explanation is supported by the fact that white noise will have a similar effect on rats — they will refuse to push the food bar if doing so results in a loud blast of white noise.

It might seem as if this is a purely scientific issue. Either an animal is motivated by a moral emotion — sympathy, kindness, malice, etc — or it is motivated by something else. However, philosophical assumptions, and confusions, can also intrude. First, the ‘aversive stimulus’ explanation does not necessarily rule out a moral explanation. Sometimes, the basis of aversion will be a feeling of sympathy. I find the cries of my children unpleasant — I have an aversion to those cries. But this is precisely an expression of my concern for them and not something separate. Consider, for example, a (probably apocryphal) tale concerning Abraham Lincoln. Seeing some young birds that had fallen from their nest and were in distress, Lincoln stopped to help them back into the nest and reunite them with their mother.
On being praised for his charity, Lincoln replied: ‘I wouldn’t have been able to sleep tonight if I had been thinking of those poor birds.’ Lincoln was certainly ‘averse’ to the distress of the birds, but this aversion cannot be separated from his sympathy for them. If he didn’t care about the plight of the birds, then their distress would not have troubled his sleep. Lincoln’s aversion to their distress and his sympathy are, in this case, inextricably bound together: sympathy is the basis of his ‘aversion’.

Secondly, the ‘aversive stimulus’ explanation can often seem curiously misdirected. After all, what explains an animal’s behaviour is not simply whether it finds a situation aversive: it’s how it responds to this aversion that is crucial. The apparently heroic Chilean dog in the YouTube video might well have found the sight of his companion lying prone on the road unpleasant or ‘aversive’. But there are various ways of escaping an aversive stimulus — walking away is the simplest. The fact that the dog didn’t walk away, but instead risked its life to save the other is, surely, significant.

Perhaps Lloyd Morgan’s Canon itself is wrong. We might think of the Canon as akin to a game with a set of arbitrary rules: don’t give animals anything more than you absolutely have to. Assume only the bare minimum of cognitive abilities required to explain their behaviour. Ditto emotional sensibilities. Moral emotions — kindness, sympathy? Certainly don’t give them those unless there is no other choice. We know that we have cognitive and emotional capacities aplenty, and we know that we can, and often do, act for moral reasons. But don’t assume other animals are like us unless there is no other option. Here, courtesy of de Waal, is another possible game.
We know that animals are like us in many ways — in terms of their evolution, their genetic structure, the structure of their brains, and their behaviour. Given these known similarities, when we see animals behaving in ways that seem to be similar to the ways we behave, then do not assume a difference in motivation unless there is some evidence that supports this difference. When a chimpanzee gives what appears to be a consoling hug to its fellow who has just received a savage beating from the alpha male then, in the absence of evidence to the contrary, the working hypothesis should be that the chimpanzee is motivated by the same sorts of emotions as a human would be in the same sort of situation. If, in the human case, we take this to be an expression of sympathy, then we should assume the same for the ape unless there is positive evidence to suppose otherwise. Many scientists assume that the Lloyd Morgan Canon is the only one in town, and few express any fondness for de Waal’s alternative. But it’s not clear that Lloyd Morgan’s game has any more legitimacy than de Waal’s. On the contrary, the Lloyd Morgan position seems to make sense only if we assume there is a drastic discontinuity between humans and other animals — an assumption that is becoming increasingly difficult to defend. The scepticism of philosophers towards the idea that animals can behave morally is subtly different from that of scientists. Scientists question whether there is enough evidence to support the claim that animals can be motivated by emotions such as kindness or compassion, or by negative counterparts such as malice or cruelty. Philosophers argue that, even if animals were to be motivated by these sorts of states, this is still not moral motivation. When they occur in animals, these states are not moral ones. For example, compassion, when it occurs in an animal, is not the same sort of thing as compassion when it occurs in a human. 
When it occurs in an animal, compassion has no moral status, and so even if the animal acts through compassion, it is still not acting morally. In a nutshell, this is the philosopher’s worry: moral action seems to imply moral responsibility. If I act morally, then I am, it seems, morally responsible for what I do. But do we really want to hold animals responsible for what they do? During the medieval era, it was not uncommon for courts of law to try (and often execute) animals for perceived indiscretions. I assume that no one wants to go back to those days, and underlying this reluctance is the thought that, whatever else is true of animals, they are not really responsible for what they do. But this seems to imply that they cannot act morally. Consider a principle associated with the philosopher Immanuel Kant: ought implies can. It doesn’t make sense to suppose that I ought to do something if I am incapable of doing it. Nor does it make sense to say I shouldn’t do something if I can’t help doing it. To say that you ought (or ought not) to do something is to imply that you have a say in the matter — that you are capable of choosing what it is you are going to do (or capable of refraining from whatever it is you are tempted to do). Moral motivations seem to imply that you have this ability. A morally good motivation is one that you ought to endorse or act upon. A morally bad motivation is one that you ought to resist. So animals can’t act morally, it seems, unless they are capable of deciding how they are going to act, and so are responsible for what they do — and then, it seems, we are back to medieval animal trials. Most philosophers have been united in their reasons for thinking that animals cannot be responsible for what they do. To be responsible requires an ability that animals do not have — the ability to scrutinise their motivations critically. 
To be responsible, animals must be able to think the following sorts of thought: I am inclined to do this; is this an inclination I should embrace or reject? Did the apparently heroic dog think to itself: ‘I am inclined to drag my companion to safety. Is this an inclination I should act upon or one that I should resist?’ According to philosophers, it is not simply that the dog didn’t engage in this sort of scrutiny of its motivation. What is crucial is that it cannot do this — it does not have the ability to scrutinise its motivations. Of course, human beings often act unreflectively, too — dashing into burning buildings to save babies, and so on, without a thought to the consequences. But the difference, philosophers say, is that we can scrutinise our motivations even if, in particular cases, we don’t. This is why philosophers have almost universally rejected the idea that animals can act morally: they assume that animals cannot perform this same self-scrutiny. Despite its widespread acceptance, I think this is incorrect. In the first place it is not clear that the requirement to critically scrutinise our actions is at all crucial to our own moral behaviour. Simply put, say I am inclined to help a dog I see lying unconscious in the middle of a busy road. Do I have control over this inclination? According to the standard philosophical view, I have control over it as long as I am capable of critically scrutinising it — of asking myself whether I should act upon this inclination or resist it. But recent work in psychology suggests that my responses can be skewed by environmental influences of which I am unaware and over which I have no control. We have a problem of regress here: the ability to engage in critical scrutiny of my motivations will give me control over them only if I have control over the critical scrutiny. Where does this end? 
We began with the problem of explaining my control over my motivations, but have merely substituted for this another problem: the problem of explaining my control over my critical scrutiny. We haven’t explained control at all, merely pushed the problem back a step. The traditional philosophers’ way of understanding the ‘ought’ of moral motivation in terms of rational control is questionable. There is another way of understanding morality that does not rest on this assumption. It is, for example, possible to do things that we ‘ought’ to do, even in the absence of critical scrutiny or rationalisation about alternative courses of action – acting prudently to ensure a long and healthy life, say, or caring for another being. This opens up a new way of thinking about the moral capacities of animals. Animals can, in fact, act morally even if they are not responsible for what they do. They can be motivated by the desire to do good (and also bad) things even if they are not responsible for their actions. A dog can be motivated by the desire to rescue his companion, and rescuing his companion is a good thing. But this does not imply that the dog is responsible for what it does. This allows us to make sense of the growing body of evidence that supports the idea that animals can act morally without returning us to the horrors of animals on trial. The crux of this issue has as much to do with humans as it does with animals. When humans act morally, what is it we are doing? Traditionally, the philosopher’s answer has been an intellectualist one: acting morally requires the ability to think about what we are doing, to evaluate our reasons in the light of moral principles. But there is another tradition, associated with the philosopher David Hume and developed later by Charles Darwin, that understands morality as a far more basic part of our nature — a part of us that is as much animal as it is intellectual. 
On this ‘sentimentalist’ account of morality, our natural sentiments — the empathy and sympathy we have for those around us — are basic components of our biological nature. Our morality is rooted in our biology rather than our intellect. If this is true, then the reasons for thinking that animals cannot act morally dissolve before our eyes. What is left is a new understanding of what we are doing when we act morally and, to that extent, the sorts of beings we are. Those beings are, perhaps, just a little more biological and a little less intellectual, a little more animal and a little less spiritual, than we once thought.
Mark Rowlands
https://aeon.co//essays/if-a-lion-did-a-good-deed-would-we-understand-it
https://images.aeonmedia…y=75&format=auto
Architecture
Nature conservation is still obsessed with the pristine. It needs to learn to love this mongrel world
The scene could have been repeated in a thousand protected areas in Africa: a small line of visitors walking carefully in the savannah, accompanied by a local game guard with a rifle. We were approaching an old female elephant on foot, in an area set aside for wildlife in a remote corner of the Zambezi Valley. I had seen plenty of elephants in the wild before, but always from the safety of a vehicle. I felt intensely aware of the noise of my movements and highly conscious of the direction of the wind. It struck me that the tree I stood behind was about the same size as the one the elephant had just gently pushed over. The elephant population in Zimbabwe was buoyant at that time, and the thorn bush around us crackled as the rest of the group moved around the old female we were watching. The country was empty of people, with only visitors and managers allowed to enter. As a result, the landscape looked wild, but in fact it had once been grazed and farmed, and was now carefully monitored and managed for wildlife conservation. Conservationists love charismatic species such as elephants. They appear on brochures, websites, and logos. The catastrophic decline in elephant numbers due to illegal hunting in the 1970s (and again now) provides one of the longest-running and most clear-cut stories about the plight of wildlife in the modern world. Who could forget the images of elephant carcasses, with their tusks removed, rotting in the bush? Or the huge pile of confiscated ivory set on fire by Daniel Arap Moi, Kenya’s President, in 1989? Tourists also love elephants, and wildlife holidays in game reserves and parks offer a deeply romantic experience of wild creatures and people in apparent harmony in a remote, unspoiled land. In establishing protected areas for species such as elephants, conservation creates special places where the normal destructive rules of engagement between people and nature do not seem to apply. 
However, nature reserves and national parks — or, in broad terms, ‘protected areas’ — are much more than a romantic idea. In the Anthropocene era, humankind is an increasingly dominant ecological force across the planet, from the tropics to the poles. Biodiversity is in decline everywhere, and the human impact on nature includes over-harvesting and overfishing, agricultural intensification and the growth of cities, toxic chemicals, ocean acidification, climate change, and many others. There is a real possibility of reaching ‘tipping points’, or changes that cause permanent shifts in the state of global ecological systems. The loss of global biodiversity is the focus of huge efforts by charitable foundations, non-governmental organisations, and governments. The nature of the challenge is widely researched and, broadly, well-understood, yet international biodiversity targets are not being met. Recognising this, parties to the Convention on Biological Diversity pledged in 2010 to create more and better protected areas (at sea as well as on land). This is the familiar strategy of setting aside spaces for nature, which has dominated modern conservation since the late 19th century. Proponents of this strategy argue that efforts should focus on those areas that are still relatively unchanged by human action — most of all, the tropical frontier, where the rate of biodiversity loss is greatest. In the words of a 2011 article in Conservation Biology, the chief priority is to ‘identify remaining intact ecosystems at local extents, to protect them, and to remind the public of them’. In his poem ‘The Explorer’, Rudyard Kipling wrote of ‘Something lost behind the ranges. Lost and waiting for you. Go!’ Back in 1898, this refrain epitomised the Boy’s Own spirit of colonial adventure. Now it could be the marching song of global nature conservation. But there are worrying signs that this emphasis on preserving the wildest and most pristine places is mistaken. 
Parks and protected areas cannot save the world’s biodiversity. If that is why we fence, patrol, fund and manage them, then we need to find out what’s gone wrong. Part of the problem is biological. Protected areas such as national parks do help preserve the animals and plants inside them, if the areas are large enough. Yet, despite the fact that there has been a huge increase in both the number and extent of protected areas through the 20th century, biodiversity loss has continued apace, accelerating in many regions. What is going wrong? The problem is that protected areas become ecological islands. In the 1960s, a famous series of experiments on patterns of extinction and immigration was conducted in the islets of the Florida Keys by EO Wilson and his student Daniel Simberloff. Their findings became the basis of the ‘theory of island biogeography’. Simply put, islands lose species: the smaller the island, the faster they are lost. Since then, ecologists have recognised that these islands of habitat need not be surrounded by a sea of water. In Amazonia, ecologists conducted experiments on land that had been converted from forest to farms: islands of trees in a sea of dirt. They preserved square blocks of forest of different dimensions and studied the effect on diversity. Edge effects — the increase of sun, wind and weeds at the boundary between forest and cleared land — changed the microclimate of the forest, and species were lost. The smaller the remnant forest patch, the faster the species disappeared. Landscape ecology, the science of animal populations, and studies of ecological networks all point the same way. Small protected areas surrounded by land without suitable habitat will not be sufficient to protect global biodiversity. And for large mammals, a park that is ‘too small’ might in fact be very large indeed. 
One of the greatest conservation challenges in Africa is to manage elephants, whose enormous ranges cannot be contained even in the greatest of parks. One response is to seek more and bigger reserves, or to build corridors between them (‘more, bigger, better and joined’ was the slogan of a UK Government report Making Space for Nature in 2010). Yet, at most, a protected area strategy will create biodiverse islands on a fraction of the Earth’s surface (perhaps 17 per cent) leaving the rest of the Earth (to which humanity is restricted) radically transformed, and perhaps permanently impoverished in diversity. Science is not the only critic of protected areas. They are often resisted and subverted by the people who have to live with them as neighbours. To understand why so many people around the world feel a burning resentment of protected areas from which they are excluded, we need to know more about their history, which starts in the 19th century — the heyday of empire and expansion of the Western world. In 1999, J Michael Fay, an ecologist working for the Wildlife Conservation Society, walked over 3,000 km through the forest of the Congo Basin. This ‘megatransect’ was to complete an ecological and vegetation survey, but also to highlight threats to the Congo Basin forests from logging, mining and hunting. He succeeded brilliantly. In 2002, Gabon’s president Omar Bongo created 13 new national parks in one go, a coup de théâtre that made him the toast of the 2003 World Parks Congress. In his subsequent book on the megatransect, The Long Follow (2006), the science writer David Quammen compared Fay in his obsessive trek through the jungle to Kurtz, the mad, disillusioned trader in Conrad’s Heart of Darkness. Meeting Fay in the jungle, Quammen said he felt ‘like Stanley addressing Dr Livingstone’. It was an apt comparison — Stanley was an American journalist who found, sensationally, the empire’s most famous and charismatic missionary, half-lost in the depths of Africa. 
The imperial comparisons were no accident. The tropics, and Africa in particular, had exerted a powerful attraction on naturalists since the dawn of European empires in the 17th century. By the 19th century, museum and zoo collectors and big-game hunters were making expeditions to bring back all kinds of plants and animals as specimens and trophies. This tradition was the rootstock for 20th century conservation — founded in a mix of curiosity, control, and exploitation of the far reaches of empire. Many of the world’s biggest conservation organisations were founded at this time, some as zoos — themselves a direct expression of imperial collecting. The Frankfurt Zoological Society was founded in 1858, and the New York Zoological Society (now the Wildlife Conservation Society) in 1895. The Society for the Preservation of the Wild Fauna of the Empire, established in London in 1903 (and still going as Fauna and Flora International), drew its founding membership from colonial administrators and big-game hunters as well as prominent naturalists. High-profile American conservationists such as Theodore Roosevelt moved between the internal frontier of the American West, and the frontiers of European empire, in Africa and Asia. From imperial hunting and natural history the modern conservation movement was born.
President Theodore Roosevelt with an elephant kill, 1909. Photo by Bettmann/Corbis
After the Second World War, conservation became truly internationalised through the International Union for the Conservation of Nature (IUCN), the United Nations, and an increasing number of non-governmental organisations, such as what is now the World Wide Fund for Nature (WWF) and the Nature Conservancy in the US. In the 1950s, conservationists began to focus systematically on species diversity, and threats to it, through international instruments such as the Red List of the IUCN Species Survival Commission. 
WWF was shaped by a founding manifesto that declared that conservationist organisations would ‘need above all money, to carry out missions and to meet conservation emergencies by buying land where wildlife treasures are threatened, money to pay guardians of wildlife refuges… For sending experts to danger spots and training’. Conservation had become both a global battlefield and a mission, as it continues to be. Conservation International’s film Hotspots (2008), which showed scientists toiling heroically across the globe, surveying and protecting nature from the advancing tide of brash humanity, declared itself part of a ‘war to save tens of thousands of species’. Organisations such as WWF grew dramatically in the post-war period, from small clubs to enormous corporations, with millions of members and multi-country international operations. A 2009 study by Dan Brockington and Katherine Scholfield of the University of Manchester identified 278 conservation organisations working in sub-Saharan Africa, almost half of which were based outside the continent (and almost a quarter in the US alone). By the end of the 20th century, the global conservation movement had a massive funding and membership base in developed countries, while much of its effort was focused on stopping the transformation of landscapes, and loss of species, in developing countries in the tropics. At the same time, the number of protected areas grew. As former colonial territories gained independence, they also gained national parks. Roderick Neumann, professor of geography at Florida International University, suggests that national parks were ‘products of the creation of the modern nation state’ and ‘as much an expression of modernism as skyscrapers’. By the 1960s, the UN was keeping count of a rapidly expanding global list of parks and protected areas. 
The land area officially protected as nature reserves of one sort or another doubled in successive decades, expanding like a rash across the tropics. By 2005, there were protected areas in every country, rich and poor alike. In total, they covered almost 18 million sq km, 12 per cent of the Earth’s land surface. Numerous and large though they were, these protected areas were increasingly islands surrounded by land being radically transformed by human action. Some even began to be fenced, like vast zoos, to keep people out and nature in. Conservationists had stepped beyond the frontier of the known lands, beyond the edge of cultivation, to regions where maps still had blank spaces to be filled in. At least that’s how it seemed to them. But they had forgotten something. The places we might think of as intact wilderness are, almost invariably, someone else’s home. I once interviewed farmers who had lost land when the Mgahinga Gorilla National Park was created in Uganda, in 1991. Their land was steep and meticulously terraced. Fields were small and intensively manured, thick with beans, barley or cabbages. Previously, farmers had been allowed to make new farms, even on the slopes of the Virunga volcanoes. When the poorly protected forest reserve and wildlife sanctuary were amalgamated into a new national park, they lost houses, village buildings, and farmland. They understood perfectly well why mountain gorillas were extraordinary and rare animals, and why tourists came to see them, but their sense of injustice was very strong. As Mark Dowie chronicles in his book Conservation Refugees (2009), there is a long history of displacements in the name of nature protection. The Ahwahneechee people were removed from the Yosemite valley by the US military. The great East African parks – including the Maasai Mara and the Serengeti — banned traditional livestock herders who had lived alongside wildlife for generations. 
And the Mkomazi National Park in Tanzania was the scene of mass evictions in the 1980s. Protecting nature from people is a highly political act. Protected areas are deeply unpopular in many countries — particularly those in the tropics — partly due to the draconian nature of their creation. Many of the places conservation planners see as natural have been transformed by human occupation, and many have people living in them, even if at low density. The creation of parks to protect nature often displaces those people. Some lose access to land for hunting or grazing. Some lose homes and farms. Some own their land, and are compensated according to the law. But in many parts of the developing world, there are no good land titles and, no matter how long that land has ‘belonged’ to a community, it has no right of redress when the land is taken away. Conservationists (particularly in the US) often speak about their concern for ‘wilderness’, and their determination to protect what Steven Sanderson, president of the Wildlife Conservation Society in the US, calls ‘the last of the wild’. But that word ‘wild’ — with its connotation of pure nature, untrammelled by culture — is highly problematic. ‘Wilderness’ is a word with powerful meanings in Western culture, and a dangerous one to apply idly to inhabited land where people have a long history of occupation and rights. There is something remarkably short-sighted about the relentless pursuit of the pristine, in a world increasingly transformed by human consumption. There is a certain false comfort in the idea that biodiversity is something far away: to be protected from ever-expanding human populations in distant parts of the world. But the greatest driver of biodiversity loss is economic activity, or rather the growth in natural resource and energy consumption that accompanies it, and for that we need to take responsibility. Global meets local, human meets natural: such hybrid landscapes are conservation’s greatest challenge. 
A woman passes the Agip Oboma flow station in Okoroma, Nigeria, 2006. Photo by Ed Kashi/Corbis
The reach of the global economy, and its commodity chains, is not limited by border or distance: there are no blank spaces on the map, no territories beyond the reach of human production or consumption. This year, a paper in Nature quantified the ways in which threats to 25,000 endangered species on IUCN Red Lists were linked to the production of 15,000 commodities in 187 countries via more than 5 billion supply chains. The authors argued that global trade in quite ordinary products such as tea, coffee, palm oil, sugar, and textiles was responsible for around a third of the threats to endangered species. Biodiversity loss, they concluded, is ‘a global systemic phenomenon’. The economic machine that consumes biodiverse habitat has its foundation in the world economy. As that economy grows, demands made on the biosphere increase. Particularly in the rapidly industrialising countries of Asia, the standard economic growth model is having some success in helping people to escape poverty, and others to become rich. This is admirable but also, for a conservationist, very disturbing. Global consumption of raw material and energy (and production of wastes) has risen inexorably. Poor countries pursue the model of the rich, and poor people, understandably, dream of becoming wealthy. The problem is that biodiversity shrinks before the combined onslaught of people and wealth. The Western model of consumption is unsustainable for any but a few, and the model has to change in rich and poor countries. Focusing conservation efforts on residual pristine landscapes is a way to treat symptoms not causes. It is displacement behaviour: the real issues are elsewhere. But how do we go about changing the global conservation model with its focus on the exotic and pristine? How do we find new approaches to the very real problems we face in managing our relationship with nature? 
One answer might lie in an older conservation tradition, which is quieter and more local. Once, nature conservation began at home. That is the root of the word ecology, from the Greek oikos, or home. At the same time that exotic wildlife and grand landscapes inspired one branch of the conservation movement, another was growing up ‘back home’, where the impact of industrialisation and urbanisation was beginning to bite. This was part of a wider environmental concern, not just for the sake of non-human nature, but also for the quality of human health and wellbeing in issues such as air and water pollution, and urban design. It was a concern about the destruction of the everyday ‘haunts of nature’ that led to the foundation of the Audubon Society in the US and the Royal Society for the Protection of Birds (RSPB) in the UK, as well as the rise of organisations such as the National Trust and the Open Spaces Society in the UK. Nature was important for its beauty or rarity but also for its significance to human society at a time of rapid change. National parks embody deeply rooted ideas about national identity, all the way from Yellowstone in the US in the 1870s to the Cairngorms in Scotland in 2003. Of course, local nature is still important. In the UK and the US, many organisations focus on local wildlife and wild places, and their importance to ordinary people. Nature reserves are promoted as ‘green gyms’, for their health-giving potential as much as their ecology. Projects abound to get children out of the house to inoculate them against ‘nature deficit disorder’, inspired by books such as Richard Louv’s Last Child in the Woods (2005). 
But despite the efforts put into camps and trails, minibeast safaris, fungus forays and bat walks, local nature has undoubtedly lost some of its power to motivate conservationists, and its appeal to the public. Nature is no longer here, where we live, but something that exists far away: polar bears on teetering ice floes, orangutans dispossessed by palm oil plantations, or albatrosses in the southern ocean. The RSPB no longer confines its work to the Suffolk coast, or Speyside in Scotland. It has acquired big chunks of rainforest in Sierra Leone and Indonesia, and has projects in British overseas territories. Its magazine Birds offers birding holidays in exotic destinations from Morocco to the Maldives. Conservation might begin at home, but increasingly it leads abroad. Even local environments must conform. Wicken Fen near Cambridge, one of the very few surviving fragments of the once vast fen wetlands of East Anglia, has been a nature reserve since the National Trust acquired it as their first piece of land in 1899. Now it covers more than nine sq km, including both undrained fen and a large area of former farmland, part of a 100-year open-ended restoration project. The National Trust does everything it can to encourage people to visit the fen, and explore its wildlife, history and landscape. The area has been subject to intensive study by naturalists since the Victorian era, and the list of species recorded there is huge (1,082 moths, 1,893 flies, and 1,527 beetles, for example). So there is something rather defeatist in the National Trust’s explanation of the place’s significance: Wicken Fen, they say, is ‘Britain’s version of a tropical rain forest’. This comparison underestimates the power of a more local, engaged, and human-centred form of conservation. Conservation must once again learn how to engage directly with our need for nature — whether we live in Cambridge or in Kenya. 
This is not to say that the only values of nature are utilitarian, nor easily measured in terms of something like ‘ecosystem services’, and priced by economists. However, given the scale of human demand and our capacity to transform ecosystems, human and natural interests must become more aligned. If conservation is confined to protecting intact ecosystems, it will always be pitched against human material interests, be they those of the powerless poor or the acquisitive rich. In future, how will the Earth’s citizens encounter nature, except virtually, through videos or webcams or gaming simulations? Without contact with nature, where will conservation look for its sense of value, for its democratic support? As part of recognising that the fate of humans and the natural world are bound up with one another, we must reintegrate conservation and justice, taking seriously the welfare and aspirations of people as well as the biodiversity that they live with daily. Industrial and urban landscapes are impoverished in living diversity, and less ecologically resilient in the face of human-generated change. With more than half of the world’s population already living in cities, we need to address the effect of urban environments on health — human and ecological. This new ‘local’ conservation has a global reach. For the sake of both people and nature, the conservation movement must work to develop parks and other spaces where wild species can thrive, clean watercourses where children can play and that absorb floods, novel environments such as green roofs or linear parks, and a culture of celebration of wild nature, from migrant birds overflying skyscrapers to butterflies on window boxes. In rural areas, the need for integration is just as great. 
As David Kaimowitz and Douglas Sheil of the Centre for International Forestry Research in Jakarta note in a 2007 paper entitled ‘Conserving What and for Whom?’, ‘the future of many, if not most species, depends on what happens outside strictly protected areas’. The 2004 World Parks Congress put forward a new vision for protected areas as ‘the building blocks of biodiversity in an ocean of sustainable human development, with their benefits extending far beyond their physical boundaries’. This intermingling of people and biodiversity points to a subtle, but even more foundational, shift in how conservation works. It is a shift in our understanding of nature itself. Conservation now needs to be designed for a mongrel world, one made up of ecological hybrids, not ‘natural’ ecosystems in some constant state. Scientific understanding of ecosystems has changed. The idea of a balance of nature, of ecosystems in equilibrium, has given way to an understanding of ecosystems as highly variable, subject to changes in state at a variety of interlocking scales. Climate change is part of this, affecting our understanding of what it means to describe a habitat or landscape as ‘natural’. Peter Kareiva, chief scientist for the Nature Conservancy, captures this when he speaks of the extent of human transformation in terms of the ‘domestication of nature’. It is not a matter of finding unchanged nature anymore, but responding to the impacts of that transformation. If nature is constantly in flux, it is a less reliable source of cultural references: how can nature frame fixed human identities when it is itself constantly changing? In her book Rambunctious Garden (2011), Emma Marris talks about the challenge of saving nature in a ‘post-wild world’, urging conservation to celebrate nature wherever it finds a place in and around (and in spite of) human lives and aspirations. 
This means that conservation will need to move beyond the doomed attempt to protect ecological purity. It will need to restore and sustain biodiverse ecosystems in a world transformed by intense human activity. It will need to care about the weeds on the brownfield site, the remnants of wetlands in the city, and the polluted river that runs through a new town. Conservation must focus on these survivals of nature, persisting in impoverished and fragmented landscapes. This requires ecological restoration, not just protection. Restoration might involve undoing specific instances of harm (pollution, deforestation, species extinction), but it is not enough just to look back, to try to rebuild nature in some fixed ideal state. Restoration must also look forwards, with an open-ended and creative spirit. Conservation must preserve dynamic processes, not something timeless. You cannot fence off nature and expect it to survive: nature works, it doesn’t just exist. There are signs that conservation is starting to change. The need to move beyond the tunnel vision of ‘wilderness preservation’ is now widely acknowledged. Despite their size and corporatist tendencies, many global conservation organisations are struggling to engage with human needs and livelihoods, and are trying to situate conservation regimes in mixed-use landscapes. As grounds for hope, I see that the giant Conservation International, famous for its corporate focus on the protection of biodiversity hotspots, has adopted ‘people need nature to thrive’ as its new tagline. And behind its old-school panda, WWF UK explicitly links the aim of safeguarding the natural world with the challenges of climate change and sustainable resource use. But how fast and how profound are these transformations? In most of the conservation ‘industry’, it is business as usual. 
The approaches to conservation that served well in the 20th century still dominate, and are not right for the 21st. Pursuing the shrinking frontier of relatively intact ecosystems is important, but not enough. What will happen after this endgame is played out, the final frontier mapped and protected, the unknown known? Above all, conservation has to come back home: to link people with local nature, in the places where they live and work; to express the concerns they have for non-human life, and beauty, and the rhythm of the seasons; to express their need for the things nature provides, from carbon sequestration to joy. Conservation must learn to link these things to our consumption habits, to the coffee we buy or the iPhones we make, or the clean water we drink. And it needs to remember that one culture’s ‘wilderness’ is another’s ‘home’. This will not be easy, but it is not impossible. The world’s very connectivity is a great asset. It is now possible to log on to a website and track the movements of an elephant across the African landscape from hour to hour, as its radio collar sends locations through the mobile phone network. That gives a very different picture of the daily life of elephants from what the average tourist gets: you start to see it from the point of view of the elephants, and the farmers who live with them. In places such as the Laikipia Plateau in Kenya, elephants and people compete for space. Farmers’ fields provide perfect feeding stations, and the costs, in lost livelihoods and sometimes lost lives, are huge. Here, the conservation challenge faced by charities such as Space for Giants is not about creating areas that are protected like fortresses against people, but about building hybrid, peopled landscapes where people and elephants can co-exist. It is time for conservation to stop dreaming of life behind the ranges. 
TS Eliot, once an anthologist of Kipling, observed in ‘Little Gidding’ that ‘the end of all our exploring/Will be to arrive where we started/And know the place for the first time’. Nature is not a consumer good or a rare resource, to be chased down in some remote tourist destination, but home. How we live, in nature, with nature, and as part of nature, matters, perhaps more than anything else. For more detail on Bill Adams’s academic work (published under the name WM Adams) see his university webpage, or his new blog. The work of Space for Giants in enabling humans and elephants to co-exist is detailed on their website.
Bill Adams
https://aeon.co//essays/the-wilderness-fetish-is-bad-for-people-and-for-the-planet
History
We take Pericles’ funeral oration out of context at our peril. For a true picture of war, read Thucydides to the end
When a memorial to the Royal Air Force Bomber Command opened in Green Park in London in June this year, it was greeted with a cacophony of debate. Discussions of its message and architectural merit became entangled almost immediately with a renewed dispute over the morality of Britain’s bombing campaign in Dresden during the Second World War. The fact that the memorial includes a quotation from an ancient Greek politician received, quite understandably, little attention. The architecture critic Rowan Moore was among the few who spotted it, writing in the Observer that the inscription was ‘defiant and triumphant, using quotations from Churchill and Pericles to justify the bombings’. Otherwise, the quote passed unnoticed. Or perhaps it just seemed appropriate for such a memorial, whether or not the memorial was itself appropriate. ‘Freedom is the sure possession of those alone who have the courage to defend it.’ The words on the plinth come from another memorial to the war dead: the Funeral Oration of Pericles, delivered at the end of the first year of the Peloponnesian War in 431 BCE. The year before, Sparta had provoked the Athenians, demanding that they ‘give the Greeks their freedom’ — in other words, dissolve their empire — as the price of peace. Pericles, the best speaker and the most influential man in the democracy, persuaded the Athenians that they had the resources to win any conflict, and so they refused all concessions. The first year of war was inconclusive. Following Pericles’ strategy, the Athenians withdrew behind their city walls while the Spartans ravaged their territory. Elsewhere, there were a few skirmishes. That winter, as the contemporary historian Thucydides recounts, the Athenians gave a public funeral for those who had died. As was their custom, ‘a man chosen by the city for his intellectual gifts and general reputation’ was to make a speech in praise of the dead. It fell to Pericles, the man who had persuaded them to start the war. 
Pericles’ oration was a masterpiece of rhetoric, and has been quoted and imitated ever since. In praising those who gave their lives for the city and justifying their sacrifice, it has supplied posterity with appropriate words for all such occasions of public commemoration, especially in the 20th century. The line from the Bomber Command memorial, a rather idiosyncratic translation of the original Greek, first appeared in 1924 on the Soldiers’ Tower at the University of Toronto. Over the next decade it was adopted for memorials and remembrance ceremonies across the ANZAC nations. A line from the previous section of Pericles’ speech, ‘the whole earth is the tomb of famous men’, has proved even more popular, appearing on monuments to known and unknown soldiers from Athens to Auckland. A more accurate version of ‘Freedom is the sure possession…’ appears on the websites of numerous US veterans’ organisations: ‘Be convinced that to be happy means to be free and that to be free means to be brave.’ Sometimes it comes with the mistranslated coda ‘therefore do not take lightly the perils of war’ (Pericles actually told his audience not to worry too much about danger). Whichever version is chosen, one reason for the popularity of Pericles’ words in this context must surely be the way they serve multiple purposes. They both honour the dead and insist that such sacrifices are necessary for the defence of freedom. They even suggest that it is only those who are willing to fight for freedom who truly deserve to enjoy it, a flattering thought for the many veterans who feel undervalued by civil society. 
The line serves a similar function to another popular quote wrongly attributed to Thucydides (not least by the House Armed Services Committee): ‘The society that separates its scholars from its warriors will have its thinking done by cowards and its fighting done by fools.’ More insidiously, quoting Pericles presents any given conflict as a struggle for freedom from tyranny and legitimises war as the proper response. That might have been fair in the Second World War, but it is not so obviously true of the First World War. Yet that was precisely when Thucydides’ words were given this role in Britain: extracts from the Funeral Oration were printed up as pamphlets, and quotes appeared on posters on London buses. And if it was doubtful then, the support of Pericles now is even more questionable as a justification for our own overseas adventures. One reason why the speech is so effective at selling war is that it is not solely (or even primarily) concerned with military matters. The main section is a eulogy for Athens and the qualities of its society: the things that, according to Pericles, the Athenians are fighting for, and that justify every sacrifice that might be demanded of them. As a matter of fact, it is striking how far the oration subordinates individuals to the collective good. It isn’t just the dead themselves, whose faults and virtues are rendered equally irrelevant beside their deaths in the service of the city. It is also their grieving parents, who should draw consolation from their sons’ honourable deaths and set about having more children if they are young enough, and their widows, whose greatest glory is simply not to be talked about by men, whether in praise or in blame. These sentiments reflect a thoroughly un-modern conception of the relationship between the citizen and his community. 
One might suppose that, at least within the portion of the ideological spectrum that is suspicious of state power, they would raise doubts about Pericles’ political tendencies. Yet this eulogy for Athens, read as a eulogy for democracy in general, has proved enduringly popular among US politicians of both parties. Lines such as ‘our constitution is called a democracy because power is in the hands of the people’ are quoted time and again in Congress. Pericles celebrates those values — freedom, democracy, and equality — that are worth dying for, and that must be reaffirmed in times of crisis. As Congressman Major Owens put it (or rather, rapped it) in the immediate aftermath of 9/11: ‘Defiant orations of Pericles / Must now rise / Out of the ashes.’ Conveniently for contemporary purposes, the oration is extremely (and deliberately) vague about the actual machinery of the Athenian state. Its grand statements about the power of the people, equality before the law, and emphasis on ability rather than class can be co-opted by any nation that chooses to call itself a democracy. Now, one might feel that quibbles over the original context of Pericles’ words miss the point, or that they can be dismissed as an attempt to ring-fence the ancient world as an academic monopoly. Pericles certainly did want to praise democracy (albeit his own, rather tendentious version of it). His purpose was to get his audience to support the war and accept the inevitable casualties. It is hardly inappropriate that his words should be quoted for similar ends. But the important question is whether Thucydides, our source for the oration, also endorsed its sentiments. Did he, as is often assumed, intend his readers to take Pericles’ speech at face value? Why does this matter? 
Because Pericles was just a politician; Thucydides was more than just a historian. He sought not only to record events, but to interpret and explain them; not as an end in itself, but because this would help his readers understand future events as well. His history, he claimed, would thus be a work of continuing importance and usefulness, a ‘possession for all time’. This is how it has been received, especially over the past two centuries. For many historians, Thucydides has been the model scientific historian; for theorists of international relations, he is the founder of the ‘Realist’ school, if not of the entire discipline. If Thucydides endorsed Pericles’ views, this is powerful support for the idea that war is always justifiable in defence of one’s own interests — and that imperialism, which Pericles sought to defend, is both inevitable and desirable. The inclusion of highly polished speeches such as Pericles’ funeral oration in Thucydides’ history has often puzzled readers, especially since he claimed to disdain writing for entertainment’s sake. Some have treated them as literal transcriptions of what was actually said, although Thucydides himself contradicted this. Others have read them as statements of Thucydides’ own views. Thus, many political theorists have claimed that the famous line from the Melian dialogue, ‘the strong do what they can, and the weak suffer what they must’, is a Thucydidean doctrine, although it is said by the Athenians. Both approaches are wrong. The speeches allowed Thucydides to explore the motives of key actors at critical moments, and to develop one of his central themes: the awkward relationship between words and deeds, ideas and reality. By juxtaposing speeches and action in his narrative, he emphasised the constant mismatch between what people thought and claimed — and what actually happened, whether because of ignorance, miscalculation, deception or chance. So it is with Thucydides’ account of Pericles’ Funeral Oration. 
There is interminable debate among specialists about whether Thucydides actually admired Pericles’ leadership. It is possible to read the speech as an endorsement of Periclean ideals, but one can also see it as a deliberate exposition of the dangers in Pericles’ conception of an all-powerful Athens, with its hidden agenda of imperial expansion and the enslavement of others. In either case, the rest of the narrative demonstrates how unsuccessful the whole project was. Having confidently started a war against Sparta, Pericles succumbs to plague within a year. Athens becomes ever more corrupt in its pursuit of dominance; this is exemplified in the amorality of the Mytilenean debate and the Melian dialogue with its ‘might is right’ argument, and in the selfish ambition of figures such as Cleon and Alcibiades. Yet it is clear that its roots lie deeper. After the disastrous invasion of Sicily, the Athenians readily abandon the glories of democracy in favour of an oligarchy, in the hope of pay from the Persians. In any case, as Thucydides had noted, democracy under Pericles was already in reality the rule of one man. At best, then, the funeral oration expresses ideals that are inevitably undermined by subsequent events. At worst, it represents the very qualities that led to Athenian defeat. Periclean patriotism is questioned and undercut at every turn, and the reader is encouraged to weigh it carefully — or look back at it critically — rather than adopt its powerful but simple-minded slogans. It is this emphasis on the complexity of the world and the unpredictability of events that underpins an alternative, less belligerent tradition of reading Thucydides, one that stands in opposition to the ‘Realist’ tradition of international relations. This approach sees Thucydides as offering an essentially tragic account of the world. 
It is echoed in modern artistic responses: John Barton’s dramatisation of Thucydides’ history as The War That Never Ends, first as a play in the 1960s (in response to Vietnam), then as a film in 1991; or Peter Handke’s agonised engagements with ‘history’ in his collection of short pieces Noch einmal für Thukydides (1995); or WH Auden’s poem ‘September 1, 1939’: ‘Exiled Thucydides knew / All that a speech can say / About Democracy, / And what dictators do…’ For such readers, the key sections of the history are not the idealistic claims about the glories of democracy or the cynical analyses of power politics, but the vivid depictions of human suffering, in the plague at Athens, the civil war at Corcyra and the Athenian retreat from Syracuse. Noble sentiments and rational calculations become shipwrecked on the realities of war and the complexities of real life — or are themselves part of the machinery that leads to disaster. Thucydides depicted the horror and tragedy of war, and showed how society was corrupted and undermined by it. An optimistic interpretation of his intention is that he hoped to teach us to recognise such dangers and strive to avoid them. Alternatively, perhaps the idea is simply that we will learn to see the world as it is, unpredictable and uncontrollable. In either case, a quotation from Thucydides, even on a war memorial, should be the starting point for debate, and certainly not the cue for patriotic solemnities. We can take Pericles’ words at face value only if we ignore the rest of his story; and if we do that, we too risk falling under the spell of a leader who presents his military adventure as a noble defence of freedom against its sworn enemies.
Neville Morley
https://aeon.co//essays/there-is-a-sting-in-the-tale-we-use-to-remember-the-fallen
Wellbeing
Solitude is enlightening but if it does not lead us back to society, it can become a spiritual dead end
‘I have a great deal of company in my house; especially in the morning, when nobody calls.’ Henry David Thoreau’s remark about his experience of solitude expresses many of the common ideas we have about the work — and the apparent privileges — of being alone. As he put it so vividly in Walden (1854), his classic account of the time he spent alone in the Massachusetts woods, he went there to ‘live deep and suck out all the marrow of life’. Similarly, when I retreat into solitude, I hope to reconnect with a wider, more-than-human world and by so doing become more fully alive, recovering what the Gospel of Thomas called ‘he who was, before he came into being’. It has always been a key step on the ‘way’ or ‘path’ in Taoist philosophy (‘way’ being the literal translation of Tao) to go into the wilderness and lay oneself bare to whatever one finds there, whether that be the agonies of St Anthony, or the detachment of the Taoist masters. Alone in the wild, we shed the conventions that keep society ticking over — freedom from the clock, in particular, is a hugely important factor. We are opened up to other, less conventional, customs: in the wild, animals may talk to us, birds will sometimes guide us to water or light, the wind may become a second skin. In the wild, we may even find our true bodies, creaturely and vivid and indivisible from the rest of creation — but this comes only when we break free, not just from the constraints of clock and calendar and social convention, but also from the sometimes-clandestine hopes, expectations and fears with which we arrived. For many of us, solitude is tempting because it is ‘the place of purification’, as the Israeli philosopher Martin Buber called it. Our aspiration for travelling to that place might be the simple pleasure of being away, unburdened by the pettiness and corruption of the day-to-day round. 
For me, being alone is about staying sane in a noisy and cluttered world – I have what the Canadian pianist Glenn Gould called a ‘high solitude quotient’ — but it is also a way of opening out a creative space, to give myself a chance to be quiet enough to see or hear what happens next. There are those who are inclined to be purely temporary dwellers in the wilderness, who don’t stay long. As soon as they are renewed by a spell of lonely contemplation, they are eager to return to the everyday fray. Meanwhile, the committed wilderness dwellers are after something more. Yet, even if contemplative solitude gives them a glimpse of the sublime (or, if they are so disposed, the divine), questions arise immediately afterwards. What now? What is the purpose of this solitude? Whom does it serve? To take oneself out into the wilderness as part of a spiritual quest is one thing, but to remain there in a kind of barren ecstasy is another. The Anglo-American mystic Thomas Merton argues that ‘there is no greater disaster in the spiritual life than to be immersed in unreality, for life is maintained and nourished in us by our vital relation with realities outside and above us. When our life feeds on unreality, it must starve.’ If practised as part of a living spiritual path, he says, and not simply as an escape from corruption or as an expression of misanthropy, ‘your solitude will bear immense fruit in the souls of men you will never see on earth’. It is a point Ralph Waldo Emerson, Thoreau’s friend and teacher, also makes. Solitude is essential to the spiritual path, he argues, but ‘we require such solitude as shall hold us to its revelations when we are in the streets and in palaces … it is not the circumstances of seeing more or fewer people but the readiness of sympathy that imports’. Thoreau, however, felt keenly the corruption of a politically compromised, profit-oriented, slave-keeping society. 
His posthumously published work Cape Cod (1865) is, at least in part, an expression of dismay, even grief, in which he revealed his desire to turn his back on American society. Yet for much of his life, he kept Emerson’s principle close, as he remembers in Walden: ‘There too, as everywhere, I sometimes expected the Visitor who never comes. The Vishnu Purana says, “The house-holder is to remain at eventide in his courtyard as long as it takes to milk a cow, or longer if he pleases, to await the arrival of a guest.” I often performed this duty of hospitality, waited long enough to milk a whole herd of cows, but did not see the man approaching from the town.’ Perhaps the ‘Visitor who never comes’ is the man approaching from town – or perhaps it is some other, more mysterious, and perhaps less benevolent, arrival. As Merton cautioned, the wilderness is a place of becoming lost, as much as found. ‘First, the desert is the country of madness. Second, it is the refuge of the devil, thrown out … to “wander in dry places”. Thirst drives men mad, and the devil himself is mad with a kind of thirst for his own lost excellence — lost because he has immured himself in it and closed out everything else.’ Karl Marx expresses this idea in another way. In his A Contribution to the Critique of Hegel’s Philosophy of Right (1844) he says, ‘what difference is there between the history of our freedom and the history of the boar’s freedom if it can be found only in the forests? … It is common knowledge that the forest echoes back what you shout into it.’ Marx saw religion — and by implication, the spiritual life in general — as ‘the opium of the people’, but the important point is the need to be careful of the dangers of forest thinking. As in every fairy tale and medieval romance, the wilderness is peopled with dragons, but only some of them are native to the place. The rest are introduced by the solitary pilgrim himself, whose quest had seemed so pure and well-intentioned when he set out. 
If solitude does not lead us back to society, it can become a spiritual dead end, an act of self-indulgence or escapism, as Merton, Emerson, Thoreau, and the Taoist masters all knew. We might admire the freedom of the wild boar, we might even envy it, but as long as others are enslaved, or hungry, or held captive by social conventions, it is our duty to return and do what we can for their liberation. For the old cliché is true: no matter what I do, I cannot be free while others are enslaved, I cannot be truly happy while others suffer. And, no matter how sublime or close to the divine my solitary hut in the wilderness might be, it is a sterile paradise of emptiness and rage unless I am prepared to return and participate actively in the social world. Thoreau, that icon of solitary contemplation, did eventually return to support the cause of abolition. In so doing, he laid down the principles of civil disobedience that would later inspire Gandhi, Martin Luther King and the freedom fighters of anti-imperialist movements throughout the world. ‘No man is an island, entire of itself,’ wrote John Donne, in a too-often quoted line, but the full impact comes in the continuation of his meditation, where he writes: ‘every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.’ It is one of the great paradoxes of solitude, that it offers us not an escape, not a paradise, not a dwelling place where we can haughtily maintain our integrity by ignoring a vicious and corrupt social world, but a way back to that world, and a new motive for being there. 
Moreover, it can enliven a new sense of what companionship means — and, with it, a courtesy and hospitality that go beyond anything good manners might decree. Because, no matter who I am, and no matter what I might or might not have achieved, my very life depends on being prepared, always, for the one visitor who never comes, but might arrive at any moment, from the woods or from the town.
John Burnside
https://aeon.co//essays/the-allure-and-danger-of-the-solitary-life
Gender and identity
Despite so many threats to their freedom, Arab women continue to stage a thousand small revolutions in their everyday lives
The face, a nod to her Egyptian mother, is pharaonic, curls freefalling all around it. The muscles packing her body betray countless hours of toil in the pool, on the bench press, on her feet — running. Her attire is as blunt in its skimpiness as Jordan, where she lives, is adamant in its conservatism. But while the look screams a fight easily won, her demeanour is tactful, her tenor even-keeled. This is a woman used to negotiating her way through life. I will call her Hadaf, a common enough Arabic name, befitting a woman whose 34 years have been just that: hadaf, meaning singular purpose. At first glance, neither her impressive résumé nor her place in Jordanian society surprises. The milestones are all there: a BA in graphic design from Yarmouk University, up north in Irbid; a stint with a successful advertising agency, followed by a top executive job in a global company; and, just recently, an MBA from the London Business School. In Jordan, among the well-heeled, an education and a career with the tight skirts to match are not unusual. Get to know Hadaf a little better, though, and you uncover a life that encapsulates the hard-going, incessant, yet finally successful struggles of Arab women to transform their unforgiving circumstances into a better set of odds. Hadaf’s Palestinian father and Egyptian mother met in Egypt in the late 1960s. She was studying interior design, he agricultural engineering. Like thousands of Palestinians looking for a living, and indeed a home, he moved to Kuwait to teach. Hadaf’s mother soon followed, becoming a teacher herself so as to tiptoe neatly around her husband’s ego; interior design, they both knew, would have made them better money, but it would have made him feel less of a man. 
When Saddam Hussein invaded Kuwait in 1990 and Yasser Arafat took his side, many Palestinians found themselves no longer welcome in the Gulf emirate, and so they moved to Jordan to try to reconstitute a decent life. Hadaf’s father was among them. Settling on the outskirts of the capital, he started up a spare-parts business, but it failed even before it had built up steam, plunging the entire family into near poverty. There was no equivalent dramatic moment when Hadaf’s own battles began. No epiphanies that made her want to reach for something different. Her father’s tumble from middle-class status was not drastic. The neighbourhood was more ‘a social than an economic slum’ — very conservative and very comfortable with it. Her mother donned the veil when she hit 40, because that’s what women in her milieu do upon reaching the gateway to middle age. Her father stopped drinking alcohol but did not pray his way through the days. And there was love around — plenty of it. It’s just that Hadaf walked to the beat of a different drum. ‘Inch by inch. This is how I escaped marriage at 16 to a car mechanic; how I shortened the skirt and rolled up the sleeves a tiny bit; how I managed to go to Yarmouk; how I worked as a waitress there, and at the Hard Rock Café in Amman during the summers… I lobbied my mother, talked my father to death; had him visit the dorms, had him meet the owner of the restaurant up in Irbid, had him pick me up at 2am from the Hard Rock… But, you see, I was making a living. It was very difficult for them to argue with that.’ There were, of course, many fudges, omissions and fibs. ‘I did not tell my parents that I moved into my own apartment the second year of college [and] I escaped marriage to that mechanic by parading in a T-shirt in front of his shop. He was the one to break off the engagement. 
All I had to do then was persuade my father to let me finish high school.’ When Hadaf started to win the family’s bread, as it were, the parents went quiet. And they’ve been quiet ever since. Yet, even now, for their sake, Hadaf says, ‘I have two wardrobes at home: one for me, for my life; and one for them, when I go visiting.’ The Hadafs of the Arab world outnumber women like me, for whom life has been much gentler. But we don’t strain to reach out for each other across the divides. I did not have to go in search of her for insight. She lives everywhere around me. Hadaf populates the Middle East. That said, hers is a distinctly upbeat story. Other women are fighting tooth and nail for much smaller breaks. They do win in the end — inch by inch — but the toll tends to draw age early on a young face, and the demands on grit and courage are almost always insufferably unkind. A 22-year-old Palestinian – I’ll call her Suad – works with me on a region-wide community development initiative. She has just set aside her veil in conservative surroundings. She describes her own and her female friends’ lives in Beirut as an incessant search for opportunity amid a heap of constraints. ‘You worry about the consequences, and then you take a step, and then another one after that, and another one after that… You make use of every opening.’ This particular step, her unveiling, has proven to be especially charged. Suad took up the veil, a blend of shapes and colours in her case, at the age of 17, the last in her class to do so. Although most of her school friends were veiled, her parents’ leftist inclinations helped check the growing calls for Suad to join the fold. It was her teacher’s warning that finally tipped the scales. ‘She brought me before the entire class and told me that my mother would go to hell if I did not wear it. 
I succumbed.’ Suad took off her hijab two months ago, inspired by the rousing theme of nonconformity threading through her English literature courses. She felt she was being true to her identity. Yet reactions to her move in a milieu that actively discourages feminine unorthodoxies are mixed. Suad’s father is happy to see her let her hair down and so, she thinks, would have been her mother, had she lived to witness this pointed return to visibility. So far, however, the neighbourhood has been worryingly quiet; and the backlash at work, especially among the men, upsetting. In the midst of widespread opprobrium, only one friend has said: ‘If that’s what you want, then do it.’ Suad shrugs. ‘Better this than hypocrisy.’ On their own, such triumphs as Suad’s and Hadaf’s stand as little more than heartening anecdotes on the margins of what is a crushingly discouraging situation for Arab women. One is tempted to ask: only this, after so many years of struggle? But pulled together and read differently, these small victories create a different norm of empowerment — one that, paradoxically, draws strength from the very subtlety and everyday nature of the ‘dissident’ acts themselves. Asef Bayat, an authority on Islamism and professor of sociology and Middle Eastern studies at the University of Illinois at Urbana-Champaign, calls this quiet dissent the ‘art of presence’. His term aptly describes the multitudes of women who, through ‘the mundane [pursuit] of education, sports, arts, music or work’, haggle over more space and better terms with deeply conservative authoritarian regimes and patriarchal societies. If their methods stand in stark contrast to the public activism typically exercised in the West, it is with good reason: they are much more effective in extracting concessions from systems that are lethally unresponsive to organised action. 
The path of resistance is thus a ‘non-movement’ by defenceless constituencies, massaging the possible in pervasively hostile environments. Even during a supposedly more liberal Arab moment, decades back, women walked this same path. For the novelist Hanan al-Shaykh, Hadaf’s journey recalls episodes from her own youth in Lebanon: ‘That’s how we did it back in the 1950s and ’60s. Piecemeal. Small steps hiding big dreams. If a girl from Nabatieh [in the south] had her eyes on Beirut, she would tell her family she just wanted to go to Sidon [a bigger town than her own]. If she wanted to get a university education, she would settle for a teaching diploma or nursing.’ The incrementalism of Hanan al-Shaykh’s days would shadow breakthroughs, not retreats. ‘Layla Ba’albaki’s monumental novel Ana Ahya (I Live), published in 1958, tore through so many taboos,’ she tells me. ‘You could see the modern Arab woman take shape. There were pressures, but we had just started. Life was like a Chinese cupboard: one drawer for politics, one for culture, one for religion… You could be angry at the West and yet long for conversations with it.’ In the event, Hanan al-Shaykh has managed this tricky path with expert footing, for in spite of her daring writings she is still well received in many Arab countries, including those where her novels are banned. When I look back on a century of Arab women’s struggles myself, and regardless of how many stages the experts delineate for us through a history battered by colonialism and harangued by our own aggrieved search for cultural authenticity, my sense is that our activism has always been about that art of presence. 
This is not to suggest that this region has been free of concerted public advocacy for gender rights. But that too, inevitably, has had to be a rather genteel prodding of the state, carefully balancing its very masculine sensibilities with emphatic demands for better legal protections. It is not surprising that, in these potentially revolutionary times, the prospects for Arab women seem grim. We’ve been here before. When men mobilised against colonial powers in the first half of the 20th century, women moved along with them, hoping that some of the resulting freedoms would include them. They didn’t. But then, as dictatorships quickly took root everywhere, liberation would end up offering little to the men themselves. The fear now, as Islamists appear to be reaping the rewards of last year’s Arab uprisings, is that history will repeat itself. Even worse, that Islamism will connive to overturn what little rights have accrued for women. Yet many commentators obsessing about our dire predicament seem to be under the impression that as the winds shift radically for systems at large, so they must for the female gender. Frantically, they look for ruptures between the present and the future, when — should the pessimists win — tomorrow, for us, will be very much like the better part of today: a continuum, a plodding, patient, opportunistic negotiation for rights in a region where patriarchal prerogative is stubbornly entrenched in custom and religion, and, of course, in law. For what does it mean, in the 21st century, that Jordanian and Lebanese women still cannot give their nationality to their children? Or that Article 308 in Jordan’s penal code saves a rapist from prosecution if he agrees to marry his victim and the marriage lasts five years? What does it mean that Article 277 in Egypt’s penal code stipulates that a man is guilty of adultery only if he commits the act at his marital home, whereas a woman is guilty regardless of where the act takes place? 
Or that the personal status code of the Lebanese Maronites declares a girl of 14 eligible for marriage with her guardian’s consent, 15 without?
‘You make use of every opening’: young women in Cairo. Photo by Mark Henley/Panos
Scores of women practically everywhere in the Middle East are the easy victims of honour crimes for which sentences remain the most lenient in the legal books. What does that mean? Let me add here the veil, as a finale, and ask what it means when the hijab has become every other female’s constant companion? With political Islam beating at the door, it is tempting to anticipate the dawn of another 1979 — the year that turned Iran upside down — with its ushering in of ‘subversive’ Shia Islamism and its wardrobe of visual props: the veil, the turban, the beard, the open hand beating against a heaving chest. But it is a bizarre temptation, with such different shades of Islamism already in full sight, from the austere Wahhabism of Saudi Arabia to the obnoxious militarist fundamentalism of Sudan, and to the Brotherhood of Hamas in Gaza. Countries such as Jordan and Egypt might have appeared to stand firm against such headwinds. But, in truth, the wins there have been few, while the concessions to stricter interpretations of Sharia have been many, rendering family law a perversion of the civic rights embedded elsewhere in their constitutions. Only Tunisia resisted fundamentalist encroachments, remaining as conspicuously secular as it was authoritarian under both Habib Bourguiba and Zine al-Abidine Ben Ali. Which explains why there is so much post-revolutionary angst that women’s rights in Tunisia will be subject to drastic reversals. So there is no sense, really, in trying to fashion out of a very complicated narrative a simpler, more straightforward story. 
Even that hijab — often flagged up, since its spirited rush through the late 1970s, as the clearest proof of our break with a presumably progressive past — can’t quite bend to stereotype, camouflaging a battle for agency that has been as hard to fight without it as it has been with it. In this century-long grind, it would have to be the hijab, wouldn’t it, that offers up a chronicle wrapped head to toe in irony. Of course this powerful symbol would galvanise warring camps. Once shedding it was cited — tellingly enough, by both sides — as evidence that (Western) modernity was winning the argument in the (conquered) East, it was bound to be taken up again, as armour, by women in harassed societies, questing for their inner voice by way of a rebuttal. As a matter of fact, for a sea of veiled Arab Muslim women, the veil has never been a choice — pure and simple. Equally, it has always been the happy route of those who believe they are fulfilling God’s wish — no more, no less. But when scholars went in search of explanations, they found women at the forefront of the discontent that has fuelled the veil’s ready adoption as a centerpiece of identity politics. If the relatively better educated — and, yes, more affluent — types were first to unveil in the early years of independence, soon followed by others further down the economic rung, so the daughters of the rising middle classes would be first to put it back on a few decades later, sending it up the same rung. That Islamists, who had long been rallying to spread this star marker of their influence, grabbed a hold of the trend and ran with it was just smart strategy and brilliant timing. 
They plowed fertile minds and made use of every advantage in a propitious regional climate: the collapse of Gamal Abdel Nasser and his charade of secularism and Arabism in Egypt; the Gulf money that sponsored the dimmest visions of Islam almost everywhere in the East; the ripening fight against leftist currents in the region, in which religion would be the sharpest of weapons. All this passed with little comment under the watchful eyes — and often active encouragement — of our so-called Western friends. There are lessons to be learned from Columbia University professor Lila Abu-Lughod’s reminder that ‘some of the most conservative movements that focus on women in these parts of the world have resulted from interactions with the West’. Perhaps the most consequential of these lessons is that the insistence on context is not scapegoatism. It seems embarrassingly elementary, but easily the most unfortunate failure of thought in the debate on Arab women’s subjugation has been the race between those who conveniently locate the problem strictly within the Arab-Muslim self, and those who furiously look everywhere else for culprits. The problem, of course, is a miserable brew of both. Nor is the damage limited to the quality of the debate. For by ignoring the impact of policy on the direction of culture and politics, we risk the most wrongheaded of remedies. With plights as fraught with pain as that of Arab women, however, context is a tricky thing. It often trespasses on comforting stereotypes and requires studied deliberation when the wounded heart thirsts for indictments of the sweeping kind. If you listen to the arguments of the hijab’s early champions, for example, you find yourself weaving a paradox of defiance in retreat, of empowerment through conformity. The more women went about their business freely in public, the more the indignities seemed to pile up in the hustle and bustle of life in, say, 1970s Cairo. 
On buses, in the workplace, in the neighbourhood, at university, there boomed the loud discomforts of a society that had never quite accepted women as full partners in the public arena. Hence the merciless harassments that have turned a supposed act of emancipation into a sellout to Western corruptions.
Rebellion in the guise of surrender: the hijab is swathed in ironies. Photo by Mark Henley/Panos
The motivations that crowned the veil on many a woman’s head are various: there’s the need for a shield, for an expression of faith in one’s embattled heritage, or simply the compulsion to follow the example of others — easier to fade into the pervading scene, and to belong. In veiling herself, a woman might at once seek reprieve as well as retain some form of freedom of movement in stubbornly inhospitable climates. By consciously taking her visage out of contention, she might be allowed to carve a respectable space for herself — akin to closing the windows in order to keep the door open. More than 30 years have passed since the veil made its comeback, but talk to women today and the stories you hear have the same complex weave. As Dina El Sherif, an Egyptian in her mid-30s, and a professional in the field of social entrepreneurship, put it to me in a moment of exasperation: ‘We [women] live in a constant state of struggle — culturally, socially, economically, politically. It’s… exhausting.’ She happened to opt out of the hijab. Others didn’t. Surrender in the guise of rebellion? That’s always been easy to argue about the veil. But so, it needs to be said, is rebellion in the guise of surrender. Ultimately, there is something inescapably unsatisfying about the nuance here — much like that larger one which pervades the larger issue of the repression of Arab women. 
For even the most sensitive reading of the veil and its many meanings cannot avoid the rueful question that Samar Dudin, a 49-year-old Jordanian activist and community organiser, voiced towards the end of one of our many conversations: ‘Tell me, will you: what kind of a free choice is it, when you’re making it out of fear, or need of sutra (protection), or intolerable peer pressure, or just, for the love of God, to be left alone? How do you actually practice free choice in the absence of even a semblance of freedom? I mean, where does this kind of contamination in a supposed act of free will belong in this debate?’ More than exposing the serious tensions between obligations and rights under Islamism, her question brings into sharp relief the more fundamental matter of citizenship and its requirements, specifically those that postcolonial regimes wiped clean from the lexicon of Arab politics. Opinion-makers — male and female, Eastern and Western — who attempt to disentangle the case for Arab women from the wider disquiet that triggered the uprisings do it a most grave injustice. The complaint against Islam and/or patriarchy, however appealing, is incomplete. Implicated though they might be, the more effective attack is against despotism and its local and foreign patrons who have cynically played and reinforced both, leaving targets, such as women, vulnerable to all manner of assaults. How optimistic dare we be in our predictions about a region that is extremely susceptible to political manipulations? It is telling that the very first tide of Arab female activism accompanied the battle against Empire, while this current one of Islamic feminism accompanies the fight against domestic tyranny. I say Islamic feminism because, predictably, as societies and polities have become more decidedly Islamic, so have, in the main, the public endeavors by and on behalf of women. 
That, after all, has been the path of gender rights in the Arab world from the outset: a pragmatic, incremental approach that has almost always moved with wider trends. We have been marching towards a crossroads for a while now. The Arab revolts had the unique honour of announcing it. But the forces at play have long been in the making. We are, in fact, in the midst of heated post-Islamist arguments over a religion that has been brought down to arbitrate even the minutiae of everyday existence; between those who insist that Islam is a strict regime of unalterable duties and rituals meant to control the very rhythm of life, and those who counter that it is but a body of evolving ethics designed to guide but never govern. It is in such arguments that the future of the Arab world lies — and, by extension, our own future as Arab Muslim women. If successful, the drive for a more participatory and expansive politics will necessarily embolden demands for human and civic rights, in which case women’s struggle will become more audacious and insistent. If it fails, then autarchy will regain momentum, and we’ll just keep plodding on. Barely a few pages into this new era in Arab history, you could sketch it every which way you like. For every female friend who sees a magnificent opportunity I have one doubting the odds, while the news keeps churning out headlines about us that lift the spirits only to sink them again. For Dina, the 30-something from Cairo who chose not to wear the veil and who has always been as comfortable with her faith as she is with her ‘liberal’ self, one recent experience has become a measure of how life has already begun to change. Previously, Cairo’s notorious pre-revolutionary streets had subjected her to the usual mixture of disdain and cold-shouldering. But right after the uprising, she started to experience more threatening taunts, urging her to cover up. 
She flinched at first, scurrying ahead to avoid contact — a rather strange reaction for the self-described offspring of ‘a family of very strong women’. But these slurs had much sharper edges than the insults of old. And then, suddenly, she decided to do her own one-woman revolt. She started to shout back.
Amal Ghandour
https://aeon.co//essays/inch-by-inch-what-next-for-the-women-of-the-arab-spring
Food and drink
An exquisite, luxurious meal is an ephemeral pleasure – but perhaps that’s the point. So is the human condition
We have taken our places. This evening’s performance, sold out months in advance, is about to begin. The programme, handwritten in a traditional script on a rolled parchment, tied with string, tells us to expect a prologue, two chapters and an epilogue, without interval. I’m nervous with anticipation but I’m somewhat embarrassed to admit that it’s not because I am waiting for the curtain to rise on a Wagner opera or a Shakespeare play. I’m actually waiting for my dinner. This is no ordinary meal, however. It’s the 19-course tasting menu at one of the world’s best restaurants, Frantzén/Lindeberg in Stockholm. Ranked number 20 in Restaurant magazine’s influential annual survey, it earned two Michelin stars in its first two years and is almost certain to get a third. Food doesn’t get much, if any, better than this. Yet there seems something wrong about the effort and expense that fine dining like this involves. And when the average bill is the stiffest in the Nordic region, around €350 (or £280) per head, that unease can turn to moral outrage. What on earth could justify spending so much money on what is ultimately just fuel for humans, especially in a world where almost one billion people still go hungry every day? Answering these questions was the main reason I was at the table at all. I was writing a book on food and philosophy and felt I needed to experience some of the extremes of food luxury. Of course, I also love eating, so it was a wonderful excuse. But I really don’t think I could have justified the reservation without some rationale other than pure pleasure. After all, this was going to be the most expensive three hours of my life. Earlier, I had interviewed the head chef Björn Frantzén and he had played down expectations. He told me that he likes eating the sausages sold at football matches. Also that ‘I know there’s so many things wrong with it, but I think McDonald’s is nice’. 
And that ‘At the end of the day, let’s not forget, it’s a restaurant, you go to a restaurant because normally you’re hungry and it’s just food.’ As it turned out, he could not have been more wrong. This is not just food, and hunger is not the reason to eat it. Take, for instance, the bone-marrow with caviar and smoked parsley. Delicious but, for all that, you might say it’s just an ephemeral experience. The obvious rejoinder is that of course it’s ephemeral: all experiences are; life itself is. The difference is that unlike, say, opera, when you are eating food you can never forget that fact. Certain aesthetic experiences of high art create a sense of transcendence, a feeling that you are somehow transported beyond the merely mortal realm to taste something of the divine. Indeed, that is precisely why some people believe art is so important. I would argue the other way. The problem with art is that it can fool us into forgetting that we are mortal, flesh-and-blood creatures. The culinary arts, on the other hand, remind us that we are creatures of bone and guts, even as they delight us with creations no other animal could ever produce. Fine food is about the aesthetic of the immanent, not the transcendent. A mouthful of Frantzén’s diver scallops, truffle purée and bouillons transports you to heaven while never letting you forget it is a perishable place on Earth. Through experiences like these you come to know the potential intensity of being alive, what it means, as Thoreau recommended, to suck out all the marrow of life. So yes, eating is ephemeral, but some experiences are so extraordinary they are worth it for their own sakes. Life is not just about such peak moments but it is very much enriched by them. ‘Mere experiences’ can also provide a kind of first-hand knowledge of the heights to which skill, flair and determination can take us. Perhaps it doesn’t matter which of the myriad forms of human excellence we most enjoy. 
Frantzén actually started out as a professional footballer for the top Swedish club, AIK Stockholm, before injury ended his career at the age of 20. I wondered whether he agreed that excellence is the thing that matters, not its particular vessel. He nodded in agreement, saying, ‘It happened to be cooking now, it could have been anything.’
A rare sensitivity to seasonality: one of the Swedish restaurant’s kitchen gardens. Photo by Fredrik Skogkvist
Nor should it be seriously doubted that the very best chefs really do reach extraordinary heights, requiring rare skill, hard work and obsessive perfectionism. Take their dish of peas and beans, cream cheese of goat’s milk and mint. On the plate, it looks extremely simple, and in principle it is no more than the right ingredients put together in the right way. So why does this have a depth of flavour more usually associated with fine wine, along with an extraordinary ‘finish’: the taste that lingers in your mouth after you have swallowed, quite different from the one when you were chewing? Likewise, what explains how Satio tempestas, a small salad of 40 different ingredients, is delicious in a way that, from its description, is literally unimaginable? The answer is that Frantzén has a rare depth of understanding of and sensitivity to ingredients. The restaurant has two gardens of its own and they harvest what Frantzén says is ‘peaking, when they are best in season, and that’s sometimes just a matter of a couple of days’. Hence the menu changes almost every day, as Frantzén adapts to create the perfect combinations of perfect ingredients. And it’s not just about the combinations on the plates, but between them. ‘When you’re having a tasting menu, it’s a lot about the rhythm, and the speed you’re serving things,’ says Frantzén. So the frozen lemon verbena, for instance, is one of the simplest dishes on the menu, but it’s in exactly the right place at the right time. A meal like this is not just about delicious food. 
Frantzén says it sounds pretentious, but ‘it’s like going to the theatre … more than just what’s on the plate, it’s a lot of other things: storytelling, ingredients, where they’re coming from, how you present it, the look and feel of the restaurant’. Our evening was full of theatre. As soon as we took our seats we saw a glass-topped wooden box on our table containing a small baguette-shaped piece of dough, proving. It was then taken away and baked over an open fire and brought back with some buttermilk, churned in front of us. At one point the maître d’ Jon Lacotte brought a piece of raw veal to our table and blow-torched it through a piece of coal. It was then taken away to return later as a ‘tartare’, with tallow from an 11-year-old milk cow, smoked eel and black roe. This is not the cheap theatricality of banging plates or a flamboyant chef tossing pasta. Like a good play, you see only the action that is relevant to the plot, and that moves it forward to a satisfying resolution. So the freshest, most delicious bread and butter I’ve ever eaten, the very definition of simplicity, takes its rightful place alongside the most elaborate creations, because behind both is an incredible amount of care and effort to get it exactly right. Still, there is the nagging question of cost. How could anyone possibly justify the bill? There is at least a financial logic to it. Ingredients such as the top-grade oyster, which came with frozen rhubarb, cream and juniper, cost a fortune. Frantzén’s business partner, the pastry chef Daniel Lindeberg, told me that 40 per cent of the bill is the cost of the ingredients alone. The rest is time. One dish we were served was whole turbot, which was baked for four hours at 55°C (130°F), with white asparagus baked for three hours with pine, lemongrass and mint. It takes longer to prepare a single ingredient of a single dish than it does for us to eat the whole meal. 
And it takes eight people in the kitchen, and about the same again outside it, to serve a total of 25 guests for dinner. Since I visited, that proportion has been ratcheted up again, with more kitchen space and less eating room: 11 chefs to 16 guests. Like an opera that requires an orchestra, a chorus and the world’s best solo voices, it’s this expensive because it costs this much to produce. You have to be obsessive to push your cooking to such limits of originality and ingenuity, always at the edge, looking to create ‘the perfect restaurant, the perfect service, the perfect plate, the perfect menu’. To pursue excellence so doggedly, the restaurant has to come first, certainly above creaming off profit. ‘Let’s say the average spend here is, to make it easy, €350,’ says Frantzén. ‘It costs us €349 to sell for €350. Everything goes back to the customer. Everything. Our margin is almost zero.’ Even if the economics justify the expense there is still, of course, a great deal of conspicuous consumption in high-end dining: showing off, being seen, being served by the servile and paying over the odds to do so. But the same is certainly true of opera. Any expensive art is going to attract a mixture of the true connoisseur and the conning poseur. However, this kind of restaurant is not just for show. You can’t get away with serving inferior food on silver platters as you might once have done. Frantzén and Lindeberg define the mood of the restaurant as ‘casual elegance’: focused on the cooking rather than the prestige the customer can glean from being seen there. This is progressive, not decadent. The meal I had was without doubt the most incredible eating experience of my life. So many dishes were outstanding that there came a point at which the very word was meaningless. How can more things stand out than not? 
Perhaps that’s why, for all the wonder of the evening, it really would be wrong to do this kind of thing too often. The extraordinary should not be allowed to become ordinary, no matter how good it is. There is also an inherent risk in splashing out at a really expensive restaurant, one that worried me when I bit into the very first dish of the evening, beef with lichens and ash. It was pretty tasty, but nothing amazing. Taste is so personal that you could save up for what is supposed to be one of the great gastronomic experiences in the world and be left cold. But again, the same can be true of great art, which you can admire without loving. As Frantzén said, at even the best restaurant ‘I might not like everything. I can leave and say, “It’s not actually my cup of tea, but they’re fucking great.”’ So as the bill arrived and, along with it, a wax-sealed brown envelope containing a typewritten copy of the menu with a list of the wines we have drunk, two questions arose. Could I justify the cost? And was it worth it? Despite the case I’ve made, I’m not sure I can confidently answer the first in the affirmative. But to the second, my heart and head both scream yes. It’s a contradiction some will find easier to digest than others.
Julian Baggini
https://aeon.co//essays/the-most-expensive-evening-of-my-life-was-it-worth-it
Deep time
A chronicle of climates past and a portent of climates to come – the telling rings of the bristlecone pine
No event, however momentous, leaves an everlasting imprint on the world. Take the cosmic background radiation, the faint electromagnetic afterglow of the Big Bang. It hangs, reassuringly, in every corner of our skies, the firmest evidence we have for the giant explosion that created our universe. But it won’t be there forever. In a trillion years’ time it is going to slip beyond what astronomers call the cosmic light horizon, the outer edge of the observable universe. The universe’s expansion will have stretched its wavelength so wide that it will be undetectable to any observer, anywhere. Time will have erased its own beginning. On Earth, the past is even quicker to vanish. To study geology is to be astonished at how hastily time reorders our planet’s surface, filling its craters, smoothing its mountains and covering its continents in seawater. Life is often the fastest to disintegrate in this constant churn of water and rock. The speed of biological decomposition ensures that only the most geologically fortunate of organisms freeze into stone and become fossils. The rest dissolve into sediment, leaving the thinnest of molecular traces behind. Part of what separates humans from nature is our striving to preserve the past, but we too have proved adept at its erasure. It was humans, after all, who set fire to the ancient Library of Alexandria, whose hundreds of thousands of scrolls contained a sizable fraction of classical learning. The loss of knowledge at Alexandria was said to be so profound that it set Western civilisation back 1,000 years. Indeed, some have described the library’s burning as an event horizon, a boundary in time across which information cannot flow. The burning of books and libraries has perhaps fallen out of fashion, but if you look closely, you will find its spirit survives in another distinctly human activity, one as old as civilisation itself: the destruction of forests. 
Trees and forests are repositories of time; to destroy them is to destroy an irreplaceable record of the Earth’s past. Over this past century of unprecedented deforestation, a tiny cadre of scientists has roamed the world’s remaining woodlands, searching for trees with long memories, trees that promise science a new window into antiquity. To find a tree’s memories, you have to look past its leaves and even its bark; you have to go deep into its trunk, where the chronicles of its long life lie, secreted away like a library’s lost scrolls. This spring, I journeyed to the high, dry mountains of California to visit an ancient forest, a place as dense with history as Alexandria. A place where the heat of a dangerous fire is starting to rise. Dendrochronology, the study of tree rings, takes its name from the Greek words dendron, meaning ‘tree’, and khronos, meaning ‘time’. It is a young discipline, an offshoot of astronomy — a science itself obsessed with the measure of deep time. In 1894 the father of dendrochronology, Andrew Ellicott Douglass, was an assistant to Percival Lowell, the astronomer who popularised the notion that Mars is ringed with canals. Douglass had exhibited a remarkable fluidity of mind well before he caught Lowell’s eye. At Trinity College, Connecticut, he had taken degrees in physics, geology and astronomy, earning honors in all three. Douglass was hired to assist Lowell in a series of detailed observations of the red planet, but in his spare time he was nurturing a new idea about the sun. He had begun to wonder if sunspots, magnetic irregularities on the solar surface, could swing the Earth’s climate enough to affect growth in trees, and if some record of these swings might be divined from tree rings. He wondered if trees inscribed celestial events into their rings. 
Douglass was hardly the first to see tree trunks as oracles or sources of knowledge — a myth has it that the idea of a circle first passed into the human mind by way of a tree stump. Nor, remarkably, was he the first to correlate tree rings with climate. In the fourth century BC Theophrastus, a pupil of Aristotle, wrote a nine-volume study of botany, wherein he noted that trees produce new rings annually. In the 16th century, Leonardo da Vinci went a step further, observing in one of his many journals, that ‘rings in the branches of sawed trees show the number of years and, according to their thickness, the years which were more or less dry’. This April, I visited Ed Cook, a dendrochronologist at Columbia University’s Tree-Ring Lab, just north of Manhattan. Cook, who helped to found the lab back in 1975, is a short, serene man with a deep voice and a long wizardly beard. He has done fieldwork in forests all over the world, but his signature achievement is a North American drought atlas. We’re accustomed to thinking of atlases as representations of space, purely geographical. This atlas captures time as well. To construct it, Cook compiled data from 2,000 separate tree-ring chronologies into a map of the continent’s dry spells — a map that spans thousands of years. Each tree-ring chronology is the product of hundreds of hours spent bent over a microscope, counting and measuring rings on tree cores: smooth, pencil-thin tubes of wood that dendrochronologists bore out from the trunks of trees. When I asked Cook what made him want to spend four decades doing this painstaking work, he paused for a moment, before confiding that it was the mystery of tree rings that hooked him. He said that trees were like a great puzzle, that you had to work carefully to decipher what they were telling you. 
Trees owe their rings to an ingenious botanical technology, the vascular cambium, a slimy lining of cells that sits under a tree’s bark like a layer of long underwear. In spring, the cambium’s interior surface produces a new layer of cells that hardens into cream-colored wood. The cambium continues to produce new wood into summer, but as the months go by, its output slowly darkens. This seasonal shading delineates each ring from those around it. From this process we derive one of our most elegant axioms about nature, a piece of folk wisdom we all learn in childhood: to count the rings of a tree is to count the years of its life. By 1901 AE Douglass was tiring of Lowell’s eccentricities, especially his obsession with Martian canals — an obsession that eventually led to a falling out between the two men. Douglass was also keen to test his theory of sunspots and trees, and so he began studying tree rings in old stands of ponderosa pine near his observatory in Flagstaff, Arizona. He was looking for patterns of thin growth that coincided with years known for high sunspot activity. At Columbia, Ed Cook explained to me that evidence of such a pattern typically requires core samples from more than 20 trees. A single tree’s rings are not enough, because they might only reflect changes in that tree’s immediate surroundings. In 1904, Douglass made a startling finding. He discovered thin growth in a number of local ponderosa pines for the years 1899 and 1902, both of which were known for sunspot outbreaks. The pattern was dubbed the ‘Flagstaff Signature’, and Douglass would go on to find it in trees throughout Arizona. It was through this pattern, this dispersed archive of recorded starlight, that the science of dendrochronology was born. It didn’t take Douglass long to realise that he had stumbled onto an extraordinary insight, with applications far beyond astronomy and botany. 
In 1929 he constructed a tree-ring chronology that went back 1,000 years, which he used to help archaeologists date wooden beams from Pueblo ruins perched beneath desert cliffs in America’s south-west. In the years that followed, Douglass would go on to date more than 40 separate Native American ruins, revolutionising the region’s history and making his reputation as a scientist. He detailed his findings in an article for National Geographic in December 1929. Striking a triumphant tone, Douglass compared his tree-ring chronology to the Rosetta Stone, boasting that it was ‘more accurate than if human hands had written down the events as they occurred’. As a science, dendrochronology was off to an auspicious start. But it would have to wait several decades before finding its ultimate object of study: the bristlecone pine.

The world’s oldest trees, bristlecone pines belong to a group of ‘foxtail’ pines that live in small alpine pockets of the western United States. Foxtail pines are hardly newcomers to this Earth. Their oldest fossil ancestor dates back more than 40 million years, to the Eocene, the epoch when modern mammals first emerged. Though today the trees are found at between 2,700 and 3,500 metres, their range fluctuates considerably with climate. Because the trees like things dry and frigid, they extend their reach downward in cool, glacial times and recede to high ground in warm periods. In California, foxtail pine fossils have been found as low as 1,500 metres, no doubt the denizens of a previous ice age.

In March this year, I paid a visit to these extraordinary beings on an arid strip of dolomite atop California’s White Mountains. Located just north of Death Valley, the White Mountains are some of the driest on the planet.
Visiting the trees in March meant trudging several miles through snow at just over 2,700 metres, as road access to the bristlecones is closed through May. It also meant that the forest was empty, as deserted of human beings as it has been for all but a brief flicker of its history. It is hard to resist cliché when conveying the antiquity of the bristlecone pine. The oldest of the living bristlecones were just saplings when the pyramids were raised. The most ancient, called Methuselah, is estimated to be more than 4,800 years old; with luck, it will soon enter its sixth millennium as a living, reproducing organism. Because we conceive of time in terms of experience, a life spanning millennia can seem alien or even eternal to the human mind. It is hard to grasp what it would be like to see hundreds of generations flow out from under you in the stream of time, hard to imagine how rich and varied the mind might become if seasoned by five thousand years of experience and culture. The bristlecones owe their alpine habitat to the cresting of an ancient geological swell, a great wave rippling through the rock of Earth’s crust. The trees sit atop the White Mountains, but it is the neighbouring Sierra Nevada — the highest range in the contiguous United States — that creates the arid conditions they crave. More than 200 million years ago, deep beneath the Earth’s surface, an oceanic plate began sliding under America’s west coast. The friction of that event filled the local crust with great plumes of granite, a volcanic rock known for its glittering quartz. Out of this deep, crustal sliding arose mountains and volcanoes, craggy giants that millions of years would smooth into rounded stubs. But time would resurrect them eventually. Less than five million years ago — in a sense, the geologic present — new mountains of ancient granite began to rise from the grave of that ancestor range. Today they hover more than 4,000 metres above the neighbouring Pacific Ocean. 
Over time, water — both liquid and glacial — has carved their peaks into thin wedges, snow-whitened dorsal fins of exposed granite. Together they make up the Sierra Nevada, the great, gleaming spine of California. As you read this, they are still rising. Near the summits of the White Mountains, in the rain shadow of the Sierras, the bristlecones find the cold, dry solitude that they need to survive.

A view looking from the bristlecone groves across to the Sierra Nevada ridge. The Sierra is important to this story because its high ridge forms the rain shadow that makes possible the dry air beloved by the bristlecones.

When Pacific storms move east from California’s coast, the granite wall of the Sierra ridge deflects their moisture, funnelling it into some of the world’s largest, most spectacular alpine lakes. All that slips through to the White Mountains is a strong, dry wind, and that’s just the way the old trees like it. Indeed, from the right vantage point, the bristlecones on the high, western slopes of the White Mountains almost seem to reach out to the Sierra, their gnarled branches twisted into an arboreal pantomime of worship.

From this elevated perch, these elderly trees have borne witness to the rise of a new geological age, a successor to the Holocene. This new age is called the Anthropocene, or ‘age of humans’, and it refers to the stretch of time during which we have begun to reshape the Earth’s chemical makeup. The idea of this new age arose just over a decade ago, as scientists began to realise that the geological record would likely bear the marks of human activity for aeons to come. There is still a question as to whether the term will enter the official geological lexicon, and there is fierce debate about whether the Anthropocene began with the Industrial Revolution, or with the development of agriculture some 10,000 years ago. But this much is certain: one of its signature features is massive deforestation.
It was just under 400 million years ago that plants first evolved woody stems in order to propel themselves upward into unobstructed sunlight. That is an ocean of time: 400 million years is nearly double the time it takes the sun to make a full orbit around the Milky Way. Human beings, by contrast, have been around for only 2.2 million years, one-hundredth of a galactic revolution. Before trees and forests, the biosphere was just a thin layer of life sheathed around the rock of our planet. By the end of the Carboniferous period, named for the carbon-rich rock produced by fossilised vegetation, forests had circled the globe, pushing the biosphere’s ceiling up by hundreds of feet. We are still living off the sunlight they captured: Carboniferous forests were the raw material of most of the world’s coal fields. Canopy-covered forests created the first leaf-shaded ecosystems, which went on to shelter a profusion of new species, becoming, with time, the most biologically productive environments on Earth. Trees also reshaped the planet’s broad, meandering waterways, which had roamed the continents aimlessly for aeons. Their deep roots stabilised riverbanks, helping to corral rivers into predictable, deep-channelled streams. Trees became the tall tent poles of the biosphere, the infrastructure of an entirely new Earth. And they have proved quite durable since, rallying from supervolcanoes, sun-blotting asteroids and the continent-mowing glaciers of ice ages. In modern human beings, however, they face an unprecedented foe.

Deforestation began in prehistoric times, but it was not always as brutal or efficient as it is today. Our primate ancestors practised a kind of deforestation by migration, trading the treetops for terra firma and the forests for open plains. Humans are a different story.
Anthropologists suspect we have been cutting down trees for as long as we have been around, mostly to harvest raw material for shelter and fire, but also to construct crude bridges to cross rivers into new landscapes. For a time, our tree-felling was no match for the regenerative power of forests. Indeed, today’s indigenous forest peoples demonstrate the human capacity to live within a forest’s natural limits. But over the past 5,000-10,000 years, our fast-growing civilisations have developed the technology to clear trees faster than they can grow back. In that short time — a slim fraction of the forests’ tenure on Earth — we have managed to destroy more than half of them. And we are getting better at it. During the 20th century, the human population grew 100 times faster than any large animal species has ever grown before. To support the caloric needs of that explosion in biomass, we have systematically torched forests to make room for crops, so that today the landmass devoted to agriculture is the largest land-based ecosystem on the planet. Even where forests do remain, they are surrounded on all sides. The logic of globalisation requires that markets be connected, and that means crisscrossing the Earth’s forests with roads and railways. Only in the Congo Basin and some of the more isolated areas of the Amazon do truly large stretches of forest remain intact, and even those might not last the century. The world’s trees are, in every sense, under siege.

By 1932, AE Douglass had become so busy with dendrochronology that he hired an assistant to help him with fieldwork, a 24-year-old New Yorker named Edmund Schulman. Until that point in his life, Schulman had spent little time in forests. After a childhood spent in Brooklyn, he bounced around a bit academically, taking classes at New York University and Brooklyn College. Eventually he completed his BSc at the University of Arizona, and, after a stint with Douglass, a PhD in climatology from Harvard.
Douglass and Schulman were an odd couple. Douglass had an intuitive style, reminiscent of the boldness of Victorian scientists who thrived on singular moments of great insight. Schulman, whose black glasses and necktie gave him the look of an accountant, was the straight man, the data hound who kept careful records. In our conversations at Columbia, Ed Cook had been quick to praise Douglass’s genius, but he seemed to reserve his deepest admiration for Schulman, whom he credited with bringing a quantitative approach to dendrochronology. It was Schulman, Cook said, who made it into a rigorous science.

In 1939, Schulman began a search that would span the whole of his remaining life, a search for the world’s oldest trees. He knew a truly ancient tree would yield a more expansive vision of the Earth’s climate. A tree with thousands of rings would allow climatologists to hover in space for centuries, watching glaciers crawl up and down the continents; it would let them wormhole into springs and summers past; it would let them see the real seasons of Earth.

Schulman began his hunt in Bryce Canyon, a dry, desert wilderness in southern Utah known for its spectacular geology. His first summer there he found an 860-year-old ponderosa pine at 2,000 metres. Later, moving up into central Utah, he found an even older tree, a 975-year-old pinyon pine. Both of these trees were old, but neither compared with a legendary 3,000-year-old sequoia tree that John Muir claimed to have found in Yosemite. In 1953, new research convinced Schulman to shift the course of his search to high-altitude trees. He speculated, in an article for Science, that there might be alpine trees that were 2,000 years old.
Schulman was especially keen on sequoia trees, on account of the Muir legend, but he also mentioned bristlecones in the article, noting that those found in the White Mountains had been ‘recognised by forest personnel as likely to reach unusual ages’. It is not surprising that it took science so long to find the bristlecone pines. These trees rank among the most isolated organisms on Earth. They have spent tens of millions of years crawling away from the planet’s fertile havens, the mild climates and nutrient-rich environments that encourage biodiversity. Not content with the solitude of thin mountain air, these ascetic trees anchor down in nutrient-bereft dolomite, a grey rock that most plants cannot abide. Their muscular roots octopus around underground boulders, forming a base that can keep them rigid and standing for thousands of years after death. Thomas Harlan, a longtime researcher at the Arizona tree-ring lab founded by Douglass, told me he once found a snag of bristlecone wood that had remained rigid for more than 8,000 years. I won’t forget the first time I saw one of these erect tree-corpses, leering at me like a scarecrow beside the snow-covered road. The tree’s exposed wood had aged into rich tones of gold and copper, and it seemed to leap out of the rock like a petrified flame. To see the living bristlecones is to be struck by their beauty, but also by their strangeness. Of the first 30 that I saw in the White Mountains, no two were alike. Some were stooped, some tall; others had multiple trunks and intricately spiralling limbs. The trees owe their unique morphologies to the special way their roots and trunks are connected. A normal tree pools its resources: it slurps up water with its roots and then sends it around the base of its trunk, nourishing each sector of its bark equally. Bristlecone roots do not share water in this way. Instead, they connect directly with a particular section of bark. 
If a root is damaged or otherwise unproductive, its attached strip of bark dies. Each sector fends for itself in a kind of intra-organism longevity contest. This trees-within-a-tree style of growth is what gives ancient members of this species their singular appearance. As saplings, they look like symmetrical cones, Christmas trees in miniature. But given sufficient time, their sectors begin to diverge, as tiny environmental advantages and disadvantages accumulate over hundreds, even thousands, of years. The tree might spiral and thicken on one side and die on the other, transforming itself into a swirling, wooden Janus. Among the truly ancient bristlecones I saw trunks ringed entirely in exposed wood but for a single holdout: a strip of bark aglow with green needles and the singular fragrance of pine. These Dali-painted trees, called single-strip survivors, can carry on for thousands of years after the death of their other sectors. The dense grain of the dead wood stays rigid over all that time, giving form to the surviving sliver of tree and pushing its needles, its solar panels, high into the sunlight. Those needles, long-lived in their own right, can stay productive and green for more than 40 years. The trees themselves abhor neighbours, preferring the sun of sparse stands to the shade of dense forests. This spacing protects the trees from quick-spreading fires, and it makes for a singular forest aesthetic.

Hiking through the wide hallways of the bristlecone forest, I noticed each tree was pedestalled on its own root-encrusted island of earth, backlit by blue sky, giving off the dignified air of solitude. Even the thickest patches had their vistas, their long horizons. Pausing for a moment to catch my breath in the thin air, I turned my back to the mountain. In the distance I could see oceans of cold sunlight roaring off the Sierra Nevada.
The dead continue to stand among the living: the wide spaces between these solitary beings help protect them from contagions of pests and fire.

This experience of openness and sublimity among the bristlecones is at odds with fundamental Western ideas about forests, ideas that might have something to do with our peculiar animosity towards them. Indeed, a suspicion of forests as dark, shadowy places is written into the basic texts of Western culture. In Greek mythology, Dionysus, the ivy-wreathed god of the ‘wooded glens’, threatens civilisation with a return to animalistic primitivism. In the Old Testament, Yahweh commands Hebrews to burn down sacred groves wherever they find them. Christian culture has traditionally identified the forest as a pagan stronghold, a gloomy haven for witches and outlaws. In Dante the forest is demon-haunted and evil, the underworld out of which the hero must ascend. For Descartes the forest is the precursor to enlightenment, the physical embodiment of confusion, the maze that the light beams of reason must penetrate.

The Stanford literary critic Robert Pogue Harrison has an interesting theory about our wariness of forests. In his 1992 book Forests: The Shadow of Civilisation, he suggests that, at a subconscious level, we resent them for their antiquity, their antecedence to humans. Drawing from the work of the 18th-century Italian philosopher and historian Giambattista Vico, Harrison traces the human dread of forests to the origin myths of sky gods such as Jove, who ruled over Rome as Jupiter and over Greece as Zeus. According to legend, Jove first announced his existence to aboriginal humans by sending lightning into primeval forests. He did this to clear a hole in the tree canopy, to open up the ‘mute closure of foliage’, as Harrison calls it, so that humans could see the sky for what it was: a divine entity, a sender of signs, a source of revelation about our origins and our destiny.
As Harrison explains: ‘The abomination of forests in Western history derives above all from the fact that, since Greek and Roman times at least, we have been a civilisation of sky-worshippers, children of a celestial father. Where divinity has been identified with the sky, or with the eternal geometry of the stars, or with cosmic infinity, or with “heaven”, forests become monstrous, for they hide the prospect of god.’

Embedded in these myths and legends is an uncomfortable truth for a culture that, until quite recently, believed itself to occupy the physical and teleological centre of the universe. The myths betray an ancient, long-stifled awareness that forests preceded human consciousness; that they were a precondition of it.

In 1953, Edmund Schulman spent his summer doing fieldwork in the Idaho countryside where, the year before, he had found another old tree, a 1,650-year-old pine. On his long drive back to the tree-ring lab in Arizona, he made a stop in the White Mountains to visit the bristlecone pines, to see if there was any substance to the persistent rumours of their antiquity. As it happens, he nearly missed them. Schulman had spent a fair amount of time in Yosemite Valley sampling John Muir’s giant sequoias. In sequoias, height is a decent proxy for age, a correlation that Schulman assumed extended to all alpine conifers. When he went to see the bristlecones, he spent the bulk of his time poking around the younger trees, which were tall and had the majority of their sectors intact. Schulman cored several of these tall bristlecones and found them to be aged, but not ancient. Most were under 1,000 years old.

One afternoon, on a whim, he decided to venture to the high mountainside south of the tall trees, where a small forest of drooping, sickly looking bristlecones could be seen radiating downward from the peak.
As sunset neared, he scrambled up the mountain finding thick, gnarled trees along the way, some scarcely five metres tall — less than half the height of the young trees across the valley. This was at a time when Schulman still did his own mountaineering (in the years to follow, he would sit from a comfortable vantage point and direct graduate students toward trees he wanted cored, yelling and signalling while they scrambled on the rocky hillside). That afternoon in 1953, as dusk hit, Schulman settled on a tree that looked promising enough to core. He drilled into it, extracted a sample, and then hurried back to camp to beat the darkness starting to pool in the valley between the Sierra and the White mountains. That night, by the fire, Schulman began to count the rings of the bristlecone core, finding them so thin and dense that 100 fit into a single inch. Straining in the light of the campfire, he ticked off century after century, more than 30 in all. Schulman had found a 3,000-year-old tree, the oldest then known to science. A tree older than Christ.

Bristlecone pine rings: they are so fine and dense that over a century of life can be embodied in a single inch of wood.

It is not unusual to feel wonderment, and a small measure of envy, at the longevity of the bristlecone pine. Indeed, humans have poured a great deal of energy and ingenuity into prolonging life, with little success. Though modern medicine has extended many lives, no human has ever lived beyond a century and a quarter. Even in the realm of story and miracle, where the imagination faces fewer restrictions, long-lived humans, usually sages or kings, live a millennium at most. The most famous of these, Methuselah, for whom the oldest bristlecone pine is named, lived just 969 years according to biblical tradition. To hear of the extreme ages of these trees is to wonder how they outlast us so decisively. How do they fend off the entropic crush of the universe for so long?
As far as botanists can tell, bristlecones don’t seem to age in the way we do. A close look at their rings reveals that the wood they produce thousands of years into their lives is as fresh as sapling wood: it bears no sign of age-related mutation. Bristlecones have also learned to mirror the vicissitudes of their harsh environment by limiting their growth in times of drought. In the driest years, the bristlecones all but shut down, adding only the slenderest of rings — sometimes just a cell’s width — to their surviving sectors. But this downshift isn’t permanent. When rain returns, the tree roars right back to life: its roots gulp up water, its silver-green needles suck down carbon and sunlight, and before long, the tree’s photosynthetic factories are humming along again, feeding the cambium’s growth spasms. Left to its own devices, a bristlecone pine could live forever, perpetually regenerating itself, like an ever-expanding series of flawless Russian dolls. But the bristlecone doesn’t live in a vacuum; like any tree, it is vulnerable to pests and predators. Though the tree is armed with an unusually thick resin, a kind of sticky, wound-clogging blood that flows through its limbs and trunk, its best protection is its choice of environment. It finds itself alone on this tough, dry mountaintop, far removed from the bark-gnawing beetles and wood-rotting fungi that might otherwise threaten it. In the end, the tree endures extreme environmental hardship so that it doesn’t have to endure company: its solitude is its salvation. The bristlecone pine is so isolated and so well suited to its environment that, until quite recently, its worst enemy was time. If you live 4,000 years, you live long enough to be unlucky, long enough to fall victim to the kind of random events that the Earth’s short-term tenants are spared. 
John Muir once wrote, of ancient sequoias, that ‘of all living things, the sequoia is perhaps the only one able to wait long enough to make sure of being struck by lightning’. In the bristlecone forest, I saw several ancient trees that bore the scars of lightning strikes. And even when they manage to dodge Jove’s wrath, the trees can’t escape the rise and fall of geology’s tides. Indeed, the oldest often sit atop a tangle of exposed roots, their twisting bulk laid bare by thousands of years of erosion. Though the bare roots are as thick as anacondas, they can’t withstand alpine conditions forever. Eventually they will die, and the trees they support will crash to the forest floor, victims of their own longevity.

In 1958 Schulman wrote an article for National Geographic, describing the extraordinary trees he had found in the White Mountains. After 1953 he had gone on to discover twenty different 4,000-year-old trees, including Methuselah, the oldest living tree today, and Prometheus, a 4,900-year-old tree cut down by mistake in 1964. The impact on the science of dendrochronology was enormous, and like Douglass’s discovery of the Flagstaff Signature, the benefits spilled over into other sciences, especially those attempting to get a better grip on the past. The ancient trees were instrumental in calibrating the radiocarbon curve, the lodestar that scientists use to date carbon-bearing material less than 58,000 years old. They also helped ecologists to better understand the structure of forests, the most complex ecosystems on Earth.

Schulman never saw his article published in National Geographic – or the extraordinary scientific breakthroughs that followed. He died of a heart attack at the age of 49, just two months before it hit newsstands in March 1958.
But the bristlecones would pay Schulman a lasting tribute, by reserving their most valuable secrets for his discipline: climatology. The most useful bristlecones are those that sit high in the White Mountains, near the edge of the tree line. Like all bristlecones, they are meticulous recorders of climate, because they tune their growth to it so precisely. But the high trees have another advantage. While the low-altitude bristlecones grow in response to rainfall, those near the tree line tailor their growth to temperature. Their rings are like stilled thermometers retrieved from a time machine. Dendrochronologists at Arizona’s Laboratory of Tree-Ring Research have used the high living trees, and their dead ancestors, to piece together a tree-ring chronology stretching back 8,840 years, nearly the entirety of the Anthropocene. The chronology tells a familiar tale about what is happening to the Earth’s climate.

In 2005, a researcher from Arizona’s tree-ring lab named Matthew Salzer noticed an unusual trend in the most recent stretch of bristlecone tree rings. Over the past half century, bristlecones near the tree line have grown faster than in any 50-year period of the past 3,700 years, a shift that portends ‘an environmental change unprecedented in millennia’, according to Salzer. As temperatures along the peaks warm, the bristlecones are fattening up, adding thick rings in every spring season. Initially there was hope that the trend was local to the White Mountains, but Salzer and his colleagues have found the same string of fat rings — the same warming — in three separate bristlecone habitats in the western US.

This might sound like good news for the trees, but it most assuredly is not. Indeed, the thick new rings might be a prophecy of sorts, a foretelling of the trees’ extinction. Remember that bristlecone pines thrive in isolation. They owe their longevity to their ability to withstand the rigours of an environment that is inhospitable to their predators.
If the White Mountains continue to warm, the next generation of trees will be forced to move higher in the range to find the cool temperatures and isolation they crave. Bristlecones do come equipped with an escape plan of sorts. Their seeds have long, translucent wings attached to them, and if the wind is right, they can fly long distances, pushed aloft by alpine breezes. Under ordinary circumstances the seeds could help the trees to climb up these mountains in a generation or two. But there isn’t much room to move here, because the bristlecones are already flirting with the peaks. A prolonged period of warming will leave them trapped on the summits, with nowhere to go. Meanwhile, bark beetles and fungi, pests that can make quick work of entire forests, will see their habitats expand. They will charge up the White Mountains, headed straight for the bristlecone pines.

The oldest bristlecones, with their odd puzzles of living and dead limbs, are dwarfed by their younger kin. These vigorous young trees may not survive if the mountains continue to warm and there is nowhere higher for their offspring to colonise.

If global warming drives these trees to extinction it will signal an evolution in the technology of deforestation. In the past we have menaced trees with axes and torches, but now it will be the hot, aggregated exhaust of our civilisations. Deforestation once arose out of our animosity towards particular forests, those that stood in the way of our future homes and crops. But deforestation is becoming delocalised; it is becoming an unavoidable byproduct of our existence, a diffuse, Earth-spanning emanation no tree can escape – even those that take root at the far reaches of the bio-inhabitable world.

I asked Thomas Harlan whether there were signs that the trees were starting to migrate. He told me about a new sapling he had found well above the tree line in the White Mountains.
‘I can take you up to a point that’s higher than any living tree, and higher than any of the dead trees we’ve ever found, and I can show you where, two years ago, I found a little foot-high bristlecone pine growing,’ he said. Harlan explained that the tree was entering one of the most vulnerable periods of its life. ‘Whenever a tree gets to the point where it’s higher than the snow level, the winds are going to beat it to pieces in the winter, and so whether it will survive or not I don’t know,’ he said. ‘But,’ he added, ‘it might be a sign that the trees are starting to advance up the hillsides again.’ It is interesting to think of the different fates that might await Harlan’s sapling. It could live to see the wonders of the year 6000, etching 4,000 summers into its thin-ringed diary. But it might also die early, vanishing this winter or next, or in a few hundred years, along with this grove and the rest of the bristlecone pines. These strange, memory-laden trees might turn out to be nothing more than a brief bloom of wildflowers in the scheme of deep time. Harlan was circumspect about the trees’ odds of survival. ‘With bristlecone pines, you have to be patient,’ he told me. ‘Come back in 200 years and I’ll tell you more.’ Sitting under an old bristlecone in the cool April sun, I thought of Edmund Schulman, for whom this grove is named. I thought about the tragic contradictions of the human animal, this strange species that reads trees, but also destroys them. Over the course of 400 million years, trees built up a fertile new layer on the Earth’s surface, a layer that incubated entirely new ecologies, including those that gave rise to our ancestors. But now it is humans who spread out over the planet, coating its surface in cities and farms, clearing away the very trees that enabled our origins. This forest, like so many others, has become an intersection in time, a place where narratives of geologic grandeur are colliding. 
A place to put your ear to the ground, a place to confirm that even here, in the most ancient of groves, if you listen closely, you can hear the roar of the coming Anthropocene.
Ross Andersen
https://aeon.co//essays/trees-of-deep-time-are-a-portal-to-the-past-and-the-future
As a road cuts into their rainforest home, the Mayagna people consider how much modernity they really want
The road through the jungle from the gold-rush town of Bonanza to the tiny Mayagna settlement of Sunawas is dead straight. If you know Nicaragua at all, this would strike you as very odd. Another striking thing about this particular highway is its colour. Here, where it rains 340 days a year and the eye is bombarded by a primal tangle of every kind of green, the Bonanza-Sunawas road is defiantly red.

When I travelled along it eight years ago the first phase of construction was underway. The road promised to tunnel 8.3km through an antique vegetal tumult of lianas and granadilla, ceiba and mahogany trees into one of the last surviving tracts of lowland rainforest in Central America. A second phase was planned to extend the route another 10km or so from Sunawas to Musawas, on the fringes of the Bosawás Biosphere Reserve — an area about the size of El Salvador, home to about 15,000 Mayagna Indians and, excepting the Amazon, the largest expanse of pristine rainforest on the planet.

Phase one was to cost 4.5 million córdobas, just under $200,000 (or around £120,000). It was an unfeasible sum for the Nicaraguan government, so 95 per cent of the funds were provided by the Danish government’s development organisation, DANIDA, with the remaining $10,000 raised by the local municipality. Both the national transport ministry and the local authorities were in favour of the project, which would open up a previously remote area, halving the journey time from Musawas to Bonanza, and making it easier for locals to take their goods to market and ferry the sick, elderly and pregnant to the nearest clinic. Mayagna community leaders were also broadly positive. But there were those who worried that the road might change their lives in unpredictable ways.

Back-breaking work: Mayagna Indians hand-shovel earth to build the road to Sunawas through the rainforest of north-eastern Nicaragua.
Photo by Stina Lindholm. That the money went through and construction began at all is itself a minor miracle. It’s hard to get anything off the ground in Nicaragua. I moved to the country in 2002, seduced by its spectacular landscape and tortured history but, above all, drawn to the place by a love affair with Rigoberto Sampson Davila, a Nicaraguan doctor. It is June 2004 and Rigo and I are making our way along the new road with the aim of exorcising some old ghosts. Rigo came this way in the late ’80s, as a 19-year-old Sandinista conscript. His family had returned to Nicaragua 10 years before, from a comfortable exile in New York City — where Rigo’s father, Rigo Snr, was training to be a cardiac surgeon — to join in the revolution against the dictator, Anastasio Somoza. Rigo Snr soon found himself in charge of medical services for the Sandinista guerrillas. For two years the family hid out in a remote hut in the forest, hoping Somoza’s death squads wouldn’t find them. Rigo Snr would often be away for weeks at a time, the first news of his return coming only when the door creaked open late one night and the familiar, loved face appeared, only to disappear again into the forest before the sun came up. The camp to which the teenaged Rigo had been conscripted lay just north of where we are now, on the border with Honduras. For months, his life consisted of long spells of crazed, anxious boredom interspersed with sudden, terrifying engagements with an ‘enemy’ he could neither see clearly through curtains of mist, nor bring himself to hate. War exacts an inescapable price on all those touched by it. The Revolution and the subsequent Sandinista-Contra war had left Rigo tormented by guilt, and a feeling of powerlessness. As a teenager, he had been forced to use his AK47 or face death. 
He had no idea how many lives he had taken, though he did know that some of them were likely to have been Mayagna Indians, whom the counter-revolutionary Contras employed as jungle guides. After the war, Rigo followed his father into medicine. But, over the years, the spirits of the fallen had come to haunt him all the more for their numberlessness, the blanks of their faces. How to grieve for the lives of those you have taken when you cannot count them? How to atone? This is where the new road comes in. Or, rather, the journey we are making, of which the road is part. Rigo is hoping that the completion of the road in a few years’ time will make further trips here unnecessary. But until then, many of the 15,000 Mayagna Indians who live in and around the Biosphere have no access to the clinic in Bonanza, so we are hoping to bring a rudimentary health service to the four Mayagna settlements of Sunawas, Musawas, Nazareth and Bethlehem. We’ll begin walking on the road, travelling the remainder by dug-out pirogue. We have allowed four days for the trip, but time is elastic here and the only predictable element of the journey is the rain. Even by the standards of Nicaragua, bested only by Haiti in the Latin American poverty rankings, the Mayagna are poor. For the most part they do not speak Spanish, and with burgeoning and vocal Mestizo and Carib populations on the Atlantic and Caribbean coasts to deal with, successive Nicaraguan governments have found it very easy to ignore them. Their infant mortality rates compare with those in the most impoverished parts of sub-Saharan Africa. Their average lifespan is 47. Most live by growing corn, rice and bananas in tiny gardens cleared from the jungle, and supplement their diet with fish, birds and forest animals. 
It’s a subsistence living, though it is now threatened by newcomers to the area who understand that while the region is remote, unmapped, and largely inaccessible, it is also potentially rich: these incomers stand to gain as much from the road as the Mayagna themselves. Yet the Mayagna rightly see themselves as the custodians of the last surviving primary rainforest in Central America, an area more biodiverse than the US, Canada and Mexico combined. The Danish development agency DANIDA daily makes difficult decisions about where to direct resources, but its agents have decided that the road will be a lifeline investment. Until now the only alternative to machete-ing a path through the forest has been to paddle along the crocodile- and snake-infested waters of the Coco, Waspuk and Pis Pis rivers in dug-out pirogues. The road, this bright artery, is a kind of miracle, an antidote to the slow and serpentine progress imposed by nature in these parts: a straight red road to progress. If Mayagna leaders hadn’t been generally in favour of the project, DANIDA would not have begun to build it. From Sakalwas, where the road currently begins, the nearest town, Bonanza — with its terraces of hardscrabble shacks, its skinny, dead-eyed livestock and transplanted, Mestizo population — already feels very distant. At Bonanza, life is dominated by the constant crank of the gold mine and the simmering tension of the nearby pool hall where incoming miners blow their wages on Flor de Caña rum and hard-faced Mestizo women. A little way from the mine and the pool hall, on the track back to Sakalwas, a black-silvery tailings pond stares blankly up at the sky, its cyanide-rich waters surrounded by a cracked concrete retaining wall. Another technical solution, albeit a somewhat elementary one. Cheap labour: Mestizo miners blow their wages here in Bonanza on rum and women. 
Photo by Stina Lindholm. The few Bonanzans employed on the road rather than in the mine will find themselves in a huge tented workers’ encampment not far from Sakalwas. DANIDA has set up six work groups of around 35 people each — four Mayagna, one Mestizo, plus a single group of women (mixed Mayagna and Mestizo). The plan is to rotate the groups so that around 1,000 people will eventually find employment. The work is shamingly hard. In order to maximise the number of jobs, DANIDA has minimised the use of heavy plant. Today, in 36°C heat with 100 per cent humidity, small, wiry men in rubber boots are spading the thick, sodden clay into wheelbarrows, manoeuvring across planks slick with mud, and emptying them by hand onto giant piles of sloppy red spoil, stopping every so often to catch their breath and rub the ache from their backs. It’s no easier for the women, whose job is to stand bent in black water all day sifting gravel from the creekbed to use as ballast, and processing it through a mill into giant piles. Their children, who do not go to school, play on the bank beside them or simply sit, eyes unfocussed, expressions of vacant boredom on their faces. Since the road is incomplete, Rigo and I must travel most of the distance in the traditional way, by river. Two dug-outs are waiting for us on the banks of the Waspuk, along with a handful of thin, raddled men, with the caved-in, desiccated appearance of de-juiced fruits. An elder sits among them, too frail to paddle, with a quiver on his back containing arrows and a spear for fishing. We clamber in, and as the men push off from the bank I turn to the elder to ask him about the road, but he only tips his head and, with a jolly smile, says in broken Spanish, ‘Tengo cien treinta siete años’. ‘I am 137 years old’. Maybe that’s all the Spanish he has, or perhaps he means to say, ‘the road is coming too late for me’. 
I’d like to talk with him in the Mayagna language, of which I know just a single word, ‘was’, meaning water. ‘Was’ is one of the Mayagna realities. The Mayagna world is ropes of rain twirling from the vines. It is a reticulation of rivers and creeks and tiny tributaries, many still unnamed on official maps: la red, they say in Spanish, the network. Now that I think about it, the old man’s estimation of his age might have something to do with another Mayagna reality. The worlds of the dead, the living and the ethereal lead a blended existence here. Ancestors long dead mingle daily with the living and with the spirits of those not yet born. The past, the present and the future exist within the same fluid frame of reference. They bleed into one another, like dyes cast on water. The old man’s 137 years might be an expression of the span of time he feels part of, or in contact with. Maybe what he means to say is that his age has removed him from what you and I would think of as a normal human span and woven him into a fabric in which the past, present and future are the weft. He might be trying to communicate that, for him, time is less akin to the arrows he carries on his back, and more like the water into which I trail my hand. This river is brown with floating islands of water hyacinth slowly churning in its current, but were it transparent, I might be able to look deep into it and see what the old man knows is there — the below-water analogue of the above-water world, a spacious, drowned jungle, alive with water jaguar and underwater monkeys, aquatic toucans and tapirs. There are no roads to or from this world and there probably never will be. Liwa, the underwater spirit who watches over this domain, likes things to remain as they are. And yet, even in this liquid place, there are hard, dry realities. For one thing, change has come in the shape of outsiders, of whom we and the Danes are simply the latest. 
The Mayagna have endured a succession of foreign intrusions beginning with the Spanish in the 16th century, who, unsuccessful in converting the Mayagna to Catholicism and finding the area too difficult to settle, finally abandoned their forts to the jungle. These were followed first by Miskito Indians, newly armed with Spanish guns, then, in the 17th century, by British buccaneers from the coast, whose legacy remains rooted in the Mayagnan words for cat — puss; matrimony — marriage; and disease — sickness. ‘De cual pais, tu?’ the 137-year-old asks me, as we process slowly downstream. ‘De Inglaterra.’ The old man shrugs. ‘England.’ There’s a raised eyebrow of recognition. ‘Guerreros,’ he replies, simply. Fighters. The Mayagna have not been wholly passive in the face of these incursions, or of later ones by Sandinista and Contra forces; nor have they failed to take advantage of some of the goods and technologies provided by the incomers — hunting rifles, torches, rubber boots. But their culture has shown itself to be remarkably resistant (or resilient, depending on how you look at it) to the forces of colonisation. Unlike the Miskito, their more aggressive and entrepreneurial neighbours to the east, the Mayagna have remained culturally more conservative and inward-looking, less willing to exploit whatever opportunities the foreigners might bring. Perhaps it’s for this reason that they are wary of Miskito, whom they think of as coastal people, or perhaps they simply share the customary human habit of wariness towards neighbours. It begins to rain harder now and there is suddenly a great deal of bailing to be done. The Mayagna carve their canoes from lightweight, buoyant ceiba wood. 
The Mayans, who lived further to the north, considered ceiba trees, which can reach 70 metres, to be the corridors between the visible and the spirit worlds. Perhaps it was a remnant of belief that made the Mayagna, who are obscurely related to the Mayans, so amenable to Christianity. Moravian missionaries arrived here in the ’40s and built churches along the Waspuk river, named the places Nazareth and Bethlehem, and encouraged the Mayagna to settle. They preached the gospel: a man nailed to a tree, a washing away of sins in baptismal waters, the heavens patrolled by a holy ghost. You can see the appeal to people whose realities are watery and multidimensional, and whose belief systems are adaptable enough to fit in Jesus right beside Liwa. At sundown we reach Sunawas. Having heard of our arrival, the sick emerge from the darkening forest like spectres. Rigo examines them by candlelight in pouring rain while I stand by in ham-fisted attendance. These are routine matters, mostly – parasites, burns from kerosene stoves, infected wounds – Rigo reassures hollow-eyed parents, loving sons, anxious granddaughters, while I fumble in the darkness for worm tablets, iodine, calamine lotion. Inside the stores box, the bandages have taken on a mildewed look. Cockroaches have invaded the ibuprofen syrup. Rigo cannot treat everyone, which frustrates him. There’s nothing we can do for the old woman with a dislocated shoulder, or the baby with a cleft palate, or the man incapacitated with what might be leukemia. The road will make it easier for these people to reach the clinic in Bonanza. ‘But I don’t have any money for medicine,’ says the man, Carlos. The road won’t make any difference to him. ‘Maybe you could find employment working on it?’ says Rigo. Carlos shrugs, a look of resignation on his face. 
‘I’m sick.’ For a while, during the troubled ’90s, when the region was still mapless and unbounded, its land titles still unclear, there was a black market in medicines in Bosawas, as remnants of Contra forces, demobbed Sandinistas and Miskito Indians melted back into the jungle and set up competing insurgent groups. Carlos had worked as a jungle guide for a Contra platoon, the very targets Rigo had fired at through the mist. Rigo did it because he was conscripted. Carlos did it for the uniform and because the Contras promised to feed his family. The uniform rotted and the Contras failed to keep their promise. When the insurgency movements fell apart, a few of the fighters on both sides turned to gun running and indiscriminate kidnap. The road might facilitate the criminals but it will make it easier to catch them too. It might, Carlos says darkly, if there were any law enforcement. The next morning we are up early and paddling along the brown serpentine river to Musawas, on the edge of the Biosphere’s core zone. Here there are tapirs and jaguar, green macaws and strawberry poison-dart frogs. There are mahogany trees draped in orchids, two-toed sloths, and an estimated 250,000 species of insect, only one per cent of which have ever been officially identified, though that never gets in the way of any opportunity to suck, bite and sting. Nothing is inanimate. Branches become snakes, leaves praying mantises, pebbles the dumb, blinking eyes of frogs. To anyone not used to such irrepressible abundance, the profusion feels more like an invasion of bodysnatchers. Take your eye off the ball and your hair will seethe with spiders, your legs will boil with leeches, and there will be ants harvesting your sweat. A year or two from now, if all goes according to plan, the second phase of the road will be connecting this mad knot of life to Bonanza, from where illegal loggers are known to operate, often with armed consorts. 
The phrase ‘not seeing the wood for the trees’ has no meaning here, where the trees are green armies with no ammunition and wood looks like dollars. The Danes are planning to install a locked gate to dissuade the loggers. The Danes are technocrats, people with an innate confidence in Western democracy, the kind of people who, by and large, believe in and respect the message given off by locked gates. Their faith in a particular kind of progress makes good sense in Copenhagen or London or New York. To the Mayagna, for whom all boundaries are watery and permeable, a gate looks like nothing so much as a partially fallen tree. To the loggers, it’s a poor joke. At Musawas we eat rice and boiled bananas by firelight beside a group of men playing dice. They were employed for a time in the gold mine at Bonanza, then on the road. The mine has been through several owners, both state and private, and is now in American hands. But the Mayagna claim that working conditions were horrible and management paid them less than the Mestizo workers got, so they stopped going. The road was as bad, if not worse. Shovelling hardcore by hand in the heat and humidity broke their health and their spirits. The women who worked up to their waists sifting ballast from the rivers soon came down with massive fungal infections. They started miscarrying. So the men stopped them working. Now they’ve gone back to tending their subsistence gardens. They hunt and fish. Whenever they need cash, which isn’t often, they pan for gold in the rivers, the old way. They rely on the spirits of their ancestors to watch over them and on the spirits of the forest to provide food. They’re perfectly open to opportunity when there’s an advantage to it, but they’re pragmatists, too. Despite what the Danes might think, unemployment isn’t the worst thing that might befall them. Worse, for them, would be to lose the goodwill of the spirits. 
When the violence ceased in 2000, hundreds of Mestizos found themselves washed up near Bosawas and decided to stay. Suspicious of the outsiders, the Mayagna men playing dice reckon that people with no home and no identity are prepared to put up with more, at least temporarily, which is why most of the workers in the gold mine and on the road are campesinos, landless Mestizos from the population-dense regions of the Pacific coast. Some Mayagna think the campesinos will use the road to encroach further into the rainforest with their cattle. Likewise, the road will make it easy for them to walk livestock to market in Bonanza. For the most part, the Mayagna aren’t livestock keepers. They fish and they hunt. Unlike the Mestizos, they do not ranch and are conservative in their clearing of jungle for their gardens. They are not expansive people, theirs is a culture of sufficiency. Studies show that they manage the forest in sustainable ways. The Mayagna, of course, already know this. If they take too many fish, Liwa makes them sick; too many animals and the jaguar spirit causes them to die. This evening, our rice and bananas are supplemented with protein. My portion consists of a charred toucan beak. Smiling and miming, the cook encourages me to suck out the tongue and considers my half-hearted attempts hilarious. Eating toucan tongue seems a little poignant. It might be that it is a higher-status food than chicken, say, and the Mayagna are doing their guests an honour. Equally, it might just be that someone in the settlement shot off a rifle and got lucky. In Musawas, Rigo sits on an old fruit box, stethoscope on the move, palpating bellies, flipping up eyelids and depressing tongues, as the well-tempered queue snakes back into the black void of the forest. It’s late when we tumble into our hammocks. With each day that passes, he seems a little happier, the past more and more another country. 
But at night, when the boundaries between the past and present fade with the light, he time-travels and is back in the fog of war, a terrified 19-year-old stumbling through forest smoky with incendiaries and artillery fire. It is hard to sleep when the forest all around roars and clicks and whoops with life, and its opposite. In the night, it is as though all the reparations of the day have been for nothing. For who is to say then that the howls of the ghosts of buccaneers and missionaries, and the death agonies of the ‘enemy’ felled by Rigo and his father, and of those who fought with them whose wounds they could not heal, are not among the cacophony? But this night is different. This night we are woken out of our restless half-sleep by a dim-white light, approaching through the trees, flickering on and off to save the owner’s battery. A boy’s face fades up from the deep dark. ‘Venga, venga, la hermana miya.’ The boy mimes an arc over his belly. Rigo gives out a low groan. Obstetrics isn’t his specialty but we pull on our boots and pick up our mobile kit of antibiotics, water sterilisers, analgesics, bandages and suture needles, and follow the boy through the trees all the same. It is raining and the thin, sinuous track through the forest is ice-rink slick with mud. Our destination is a large wooden hut in the trees. At the entrance, a man swinging a kerosene lamp introduces himself in broken Spanish as the young woman’s father, though she is hardly a young woman at all but rather a girl. The girl herself is lying inside, the females of the family ranged around her, keening and weeping. The father uses the old pirate word. Sickness. Bad spirits went into her, he says. They stopped her speaking or interacting with her family. She gave up on her work in the home. No one, it seemed, could rouse her. She developed a kind of longueur, a lassitude, as though infected with a case of spiritual parasites. Now, he fears, those parasites are finally about to take her life. 
Rigo leans over the girl, examining her belly in the light of the lamp, a puzzled expression on his face. ‘Cuando dió a luz?’ When did she give birth? ‘Hace seis meses.’ Six months ago. ‘A la clínica?’ ‘Si, pero murió el bebe.’ The baby died. Rigo is stroking the girl’s head, she must be 14 or 15. He asks the father a few more questions, raising his voice to be heard above the roaring heart-beat of the forest. This was her first pregnancy; she had an emergency Caesarean in the clinic. The physical scar has healed but Rigo thinks that post-natal depression set in, which is why the girl is unresponsive and mute. The wound is real and profound. But the injury is to her soul. Rigo has personal experience of such things. In this girl, whose life experience is so different to his own, he nonetheless recognises himself. He tries to explain, but gets nowhere. Mayagna do not recognise depression in the way that Western medicine defines it, and so the father can only repeat his firm conviction that tonight the spirits will take the girl away forever. There is a moment when it seems that two worlds have no meeting point, then an idea scoots across Rigo’s face. He puts a multivitamin on the girl’s tongue and encourages her to swallow. There’s an awkward pause. He places his stethoscope on the girl’s chest and listens. There’s a peculiar intensity on his face. Eventually, he looks up. ‘Se fueron, los espiritus.’ The spirits are gone. A tense silence is followed by a noticeable shift in the room. The father goes over to the girl and sits her up. He rubs her hands and runs his fingers through her hair, cracks a smile. He says to Rigo, ‘I think it will be alright now.’ ‘Yes,’ Rigo replies, ‘I think it will.’ Much later, back in our own town, Rigo explained it like this. 
Making his way along that path in the middle of the night, he felt himself subject to the old feelings of guilt and grief, even as he sensed that this was about to shift. The moment he heard about the daughter’s predicament from the father, he realised that he and the father shared something vitally important. They had both come to the end of their own, separate roads. The father had run out of stories that did not end in the loss of his daughter, and none of the narratives available to Rigo had won him any peace. Their problems were no longer simply personal: they had bled out into the limitations of their respective cultures. Through the father calling and Rigo answering, the two men had jointly built a bridge into other ways of being. That small encounter in a hut in the forest had given them access to new, unfamiliar stories with new, unfamiliar ends, and they had summoned the courage to enter each other’s worlds and seize on the opportunities they presented. Though we never met the man or his daughter again, Rigo and I did return once more to Musawas, near to the completion of phase one of the road, in September 2005. Not long afterwards, DANIDA decided not to proceed with phase two. An in-house audit of the project reported that although its presence had facilitated access from the more remote Mayagna settlements into Bonanza, and potentially expanded employment opportunities in the area, it had also led to an increase in illegal logging and to further incursions into the forest by Mestizo cattle farmers. Ongoing disputes between the Mayagna and ‘los colonias’, as the Mayagna call the Mestizo farmers, and also with the Miskito, meant that the local authorities could not agree on who should pay for the road’s maintenance. As a consequence, the road had much deteriorated. DANIDA’s report noted that ‘if no systematic road maintenance is taking place, there is a tendency for road conditions to become worse’. 
The road still runs straight and red through the forest, but it’s now a road to nowhere. Eventually, the jungle will engulf it completely. Was it a dream or simply a folly? Decisions about aid are complex, nuanced and fraught. Aid agencies, and many local community leaders, are often over-eager to pour money into capital and infrastructure projects, and perhaps that happened here. No amount of local consultation can ever remove the simple and understandable tendency of people who have very little to say ‘Yes’ when offered more. But perhaps the situation is more subtle. Maybe people who grow up in worlds where they are able to take choice for granted, and where they are encouraged daily not only to make those choices but to express them too, maybe these people simply have more narratives available to them. But we are mistaken if we suppose that any one culture is in possession of all the stories. I wouldn’t mind betting that the road to hell (the one paved with good intentions) is straight and hard and red, too. Perhaps assumptions — which can seem every bit as unproblematic in the technology-driven market economies of the West as straight, hard, red roads — might, in other settings, be riddled with contradictions. In northeast Nicaragua, progress is neither straight nor clean nor brightly lit. It does not necessarily entail employment on construction sites or in factories or down mines. And when it does, the jobs it creates are often degrading or dangerous or exploitative, or they run roughshod over aeons of culture and tradition. Progress might mean building a path with many diversions, or it might mean leaving others to find another way. Even then, the path will often bend back on itself or be stopped altogether for a while by war or custom or corruption, or even good intentions. Other times, it will be diverted through dimensions busy with ghosts and river spirits, whose existence must be respected and whose needs accounted for. 
We in the West, well-intentioned though we might be, may well be spending our time and money trying to build bridges to the developing world without really understanding where or even if they will land on the other side. None of which is to imply that change or even progress is impossible, only that the kind of progress, and its shape and speed, can only ever be determined by the people who are affected by it and that, often, it can seem to us haphazard and wasteful. Consultation is never enough. Until people are institutionally, legally and culturally equal, they cannot make institutionally, legally and culturally equal decisions. Happily, this story does not end at the road. In 2005, assisted by various aid agencies, DANIDA among them, but with the Mayagna driving the process, Mayagna community leaders filed a petition against the government of Nicaragua to grant them inalienable land rights over traditional Mayagna territory. For a group of people who had so often been silent in the face of successive waves of foreign incursion, it was an astonishing move, a kind of critical-mass showing of self-confidence. And it worked. After a good deal of wrangling, the government of Nicaragua finally granted communal land titles to the indigenous peoples living around the Biosphere, principally the Mayagna and their Miskito neighbours. A little less than four years later, in April 2009, 66 Mayagna communities in nine of those territories, representing 15,516 Mayagna people living on 10,000 square km or 7.7 per cent of the Nicaraguan land mass, established the Nacion Indígena Mayagna, with a Territorial Assembly. One of the Assembly’s first actions was to set up six volunteer forest guard corps which, supported by DANIDA and other agencies, have so far managed to slow the rate of incursion into the forest by Mestizo cattle ranchers, as well as to crack down on a number of illegal logging operations, and largely put an end to the kidnapping and extortion. 
Access to and along the rivers Waspuk, Pis Pis and Coco has been improved by the construction of wharves and jetties, some funded by DANIDA, though not for much longer. The agency, perhaps sensing that it is no longer needed, has reprioritised and ceased operations in the Mayagna territory. DANIDA will be moving out of Nicaragua altogether by the end of 2012. The Mayagna, meanwhile, are in the process of creating another story, one which better suits their aspirations, a story with a hopeful ending. This one they intend to tell for themselves.
Melanie McGrath
https://aeon.co//essays/what-does-progress-mean-for-the-mayagna-of-nicaragua
Consciousness and altered states
What is the greatest human gift? It is metaphor, carrying a cargo of meaning across the oceans that divide us
Eros is coursing through the forest. The forest is mewing with its jaguar life. Life is spiralling into poetry. I am in the other world, I thought, at once in the actual forest and in the forests of the mind where the visible world is not denied but augmented. I had gone to the Peruvian Amazon seeking treatment from forest doctors for an episode of depression so long and so severe that I had worked out how I was going to kill myself (length-wise, in the bath). What I experienced was more than the healing of this desolate madness, it was a sense of the raw, green-eyed, lustrous sacredness of life which has never left me, and which came through a sense of identification with other creatures, the knowers of the forest. Shamanism is universally concerned with the well-being of both nature and human nature, and the relationship between them. Here in the Amazon, the shamans’ rituals unleashed in me a force of empathy that was exact, sensitive and enraged. The pull of my imagination was tautened by all the aspects of the occasion: the medicine, the night, and the shamans’ songs until, the torque twisted most surely, I lost my singular self and stepped across the border in a wild, charged, ferocious apprenticeship to a jaguar. At times, I felt a hot sexuality coursing through me as if, in pelt and paw and breath, I could feel from within my body a radical love for the earth as strong as the gravitational force. At others, I felt a prowling rage, as a jaguar might feel, watching the trenchant stupidity of deforestation. My society is destroying this forest. The anger burned me and I could feel the fire of pure fury. How can modernity know so little for knowing so much? The sacral need to protect life in all its forms swept through me like wildfire, with meaning not only intellectual but physical, sensory, what Hermann Hesse called ‘this felt faith’. This was shape-shifting. 
It is part of the repertoire of the human mind, cousin to mimesis, empathy and Keats’s ‘negative capability’, known to poets and healers since the beginning of time. It did not hold literal truth, quite obviously, but had a ‘slanted, metaphoric truth’ — the words I used, when the page was printed, to describe it. Shape-shifting is a transgressive experience, a crossing over: something flickers inside the psyche, a restless flame in a gust of wind, endlessly transformative. The mind moves from its literal pathways to its metaphoric flights. Art is made like this, from a volatile bewitchment, of a self-forgetting and an identification with something beyond. Ted Hughes once said that the secret of writing poetry is to ‘imagine what you are writing about. See it and live it … Just look at it, touch it, smell it, listen to it, turn your self into it’. One writing exercise Hughes suggested for students was titled: ‘I am the Amazon’. We are what we think, and we humans have a way to become other, in a necessary, wild and radical empathy. Dark transformation: a scene from the Handspring Puppet Company’s production of Ted Hughes’ Crow. Photo by Simon Annand. Shape-shifting involves a willingness to make mimes in the mind, copying something else. Art, meanwhile, depends on mimesis furthering our desire to know and to understand. In a recent, Ovidian, dance piece, ‘Swan’, French dancers performed and swam with live swans, imitating the birds in a mime which alluded to the metamorphosis of all art, and to the artists’ ability to lose themselves in order to mirror something beyond. ‘But we, when moved by deep feeling, evaporate; we/breathe ourselves out and away’, wrote Rilke in ‘The Second Elegy’. In making art, the artist expires, breathing herself out to allow the inspiring to happen, the breathing in of glinting universal air, intelligent with many minds, electric and on the loose. 
Artist, shape-shifter, shaman or poet, all are lovers of metamorphosis, all are minded to vision, insight and dream. Imitative shamanism can reek of cultural appropriation, but even in cultures that have temporarily misplaced their shamanic rites, the role survives, donning a deep disguise. The American writer and mythographer Joseph Campbell, among others, believed that artists have taken up the role of the shaman. It seems to me that this is true for a distinct reason: both art and shamanism use the realm of metaphor where feeling is expressed and where healing happens. With metaphoric vision, empathy flows, knowing no borders. Both artist and shaman create harmony within an individual, and between the individual and the wider environment, a way of thinking essential for life. Poetry works ‘to renew life, renew the poet’s own life, and, by implication, renew the life of the people’, wrote Ted Hughes. But ours is an age of lethal literalism that viciously attacks metaphoric insight and all its values, an age that burns the Amazon and mocks those who would protect it, sing it and become it. Shamans may be feared if they are seen to ‘bind’ people with ‘spells’, and a good storyteller is spellbinding. People are entranced by Debussy, mesmerised as by a magician. The shaman may sometimes act the part of a showman and, from Liszt to the Beatles, performers are glamorous, in a history older than they may know, for glamour was an old term for bewitchment. Artists — for which read musicians, dancers, performers, sculptors, painters, directors, writers and poets, the lot — often suffer an overwhelming psychological experience in youth, according to Campbell, as if ‘the whole unconscious has opened up and they’ve fallen into it’. Shamanism, like art, is a calling, and a young person may be ‘doomed to inspiration’ as the anthropologist Waldemar Bogoras wrote of the Siberian shamanic vocation. 
In a painful transformation lasting months or years, the young shaman loses interest in life, eats little, is withdrawn or mute, sleeping most of the time. It reads like a portrait of the young artist in a devastating depression. The young shaman overcomes the illness through the practice of shamanism, just as many artists know that their own best medicine is found in their work. Art creates an emotional catharsis, said Aristotle, which rebalances our emotions. With the shamans in the Amazon I felt a powerfully cathartic effect, when they used a medicine called (among other things) la purga, the purge, which is one translation of the Greek word catharsis. In the shamanic trance, as for the entranced artist, the ordinary laws of time are repealed and the illumination of daylight doesn’t apply. As intense as the flame absorbed by its own burning, as the wine intoxicated by its own alcohol, as the wind swept by its own gust, their paradoxical role of ferocious power is coupled with unshieldable vulnerability. Shamans and artists alike occupy an ambivalent place in society, treated with both savage psychological violence and fear as well as deep reverence. Consider how viciously hated and profoundly honoured was Ted Hughes. If I had met him, I would have wanted to kneel, an ancient fealty due. Shamans have traditionally lived on the edge of their communities, and the quality of ‘edge’ is what marks original artists. The shamanic path and the artist’s way are both associated with the hero’s journey, as Campbell terms it, ‘the dangerous, solitary transit’. Solitary. That’s the word. The path known to be stony and lonely, the unknown destination known only to be beyond.
In what is understood to be a self-portrait, Michelangelo painted the solitary, sad figure of a centurion in The Crucifixion of St Peter, and the young William Blake recast the figure, deepening the loneliness to accord with his own experience of being a visionary in a scornful world. ‘Wisdom is sold in the desolate market where none come to buy,’ Blake wrote in Vala, or The Four Zoas, in a bleak version of a universal understanding: as the Inuit shaman Igjugârjuk commented in the early twentieth century, ‘True wisdom is only to be found far away from people, out in the great solitude’. During one ceremony in the Amazon, I had the sensation that one of the shamans had sent his soul out to find mine. Although I was lost in the dark forest of depression, suddenly he was there, in a bright clear pool, healing and sunlit. Shamans use the term ‘soul-loss’, not an expression I had heard before, but exactly what I felt the moment mine was found. A good healer of any kind can find people who are lost in the forests of the mind. Halfway through his journey in life and lost in a dark forest, Dante began his poem-path. By naming his lostness to his readers, he lets them, if they are lost themselves, feel understood — found — by him. Artists send their soul out into the world in a parabola, thrown from the heart of solitude so that in the arc of its return it can comprehend and speak to the loneliness and separateness of other minds. A book, as Franz Kafka said, must be an ice axe to break the sea frozen inside us. With his raw materials of rough magic, wax, felt, fat and coyote, the artist Joseph Beuys interwove art and the shamanic role. Max Ernst took on a shamanistic familiar, the bird-king Loplop. The German philosopher Novalis wrote of Romantic poetry’s aim to ‘give mysteriousness to the common, give the dignity of the unknown to the obvious, and a trace of infinity to the temporal’. He could have been describing the shaman’s art.
The violinist Yehudi Menuhin came to consider his playing a form of healing. The Sámi shaman-musician Nils-Aslak Valkeapää and the singer Woody Guthrie (both political artists) shared a parallel experience when they were young: people began coming to them for help and for healing of the psyche. They responded instinctively and empathetically, taking the role of healer as they would in later life, healing both the individual and the body politic through their music. Shelley considered poets the unacknowledged legislators of the world. Shamans may work as acknowledged legislators of their worlds, and part of the shaman’s traditional role is to regulate hunting, laying down rules, taboos and off-limit places or times, in order to allow wildlife to thrive. Welsh bards were actual legislators as well as poets and, in fifth-century Athens, theatre and music were tied to governance, so attending performances was considered part of a citizen’s preparation for jury service and legislation. In the fascinating anthology Shamans Through Time, the anthropologist Gerardo Reichel-Dolmatoff describes Amazonian shamans as typically curious, humane and fascinated by myth and tribal tradition. A shaman’s spirit will ‘illuminate’: it should ‘shine with a strong inner light rendering visible all that is in darkness, all that is hidden from ordinary knowledge and reasoning’. Lit by their own sun, like Van Gogh, artists are guided by a vision from the dark side of the mind. Obscurity shines in its own night because their tricky truths triangulate a turquoise paradox, fleet with the peculiar velocity of sudden stillness. Mediating between a world of daylight sight and a world of night insight is the role of both shaman and artist. Rilke termed it ‘divine inseeing’. Offering a particular kind of attention yields a different kind of knowledge: in part it is the wisdom of the dream. Amazonian shamans may be called sueñadores or sueñadoras, the dreamers.
‘I am here to dream dreams,’ said Nils-Aslak Valkeapää. In The Seasons of the Soul, Hermann Hesse wrote: ‘I stand alone in my role as a “dreamer”’. At night, when the day closes its eyes, othersight is possible. With our eyes closed, we see in dreamsight, sharing nightly the paradox of vision known to Tiresias, the blind seer. It is revealing that the first recorded metaphor is of sleep: in the Epic of Gilgamesh, sleep is described as mist, as though metaphor is, from its earliest use, the property of the dreamer. Wallace Stevens calls poetry ‘the necessary angel’, and the root of the word ‘angel’ is the Greek for messenger. To be a messenger, to negotiate between the real and visible world and the true and invisible world is, shamans say, a crucial part of their role; and the artist, too, is a messenger between actuality and imagination. Hesse said the poet is a messenger ‘between the familiar and the unknown as if he were at home in both places’. The kinetic power of artist and shaman resides in their ability to return their insights to their communities, to go down with Dante and find the providential paradox that only in the depths can the high ascent begin. It is a peregrine part, shamanism and art both, each stooping for their prey. Sometimes the journey is juddered, as the Man from Porlock cannons through the door. Sometimes the journey is a seduction and sometimes a refuge, often more real to the artist than anywhere else. ‘Thou must pass for a fool,’ writes Emerson, ‘And this is the reward: that the ideal shall be real to thee’. The writer-character in Hemingway’s The Garden of Eden begins to live more and more in ‘the other country’, the land of his novel, and the real world comes to seem false. Dwellers on metaphor dwell more truly in that other world. If I were asked what is the greatest human gift, I would say it is metaphor.
A little boat of metaphor chugs across the seas, carrying a cargo of meaning across the oceans that divide us. Metaphor is how we relate to each other and how our one species attempts to comprehend others. With this gift, humans listen and speak more intensely and the meanings of all things — ocean or forest, snail or chaffinch — grow outwards in concentric rings of concentrated word-poems. ‘Every word was once a poem,’ said Emerson, and ‘language is fossil poetry.’ So a tulip ultimately derives from the Turkish word for turban and a daisy is the sun: the day’s eye. Metaphor works with the legerdemain of the psyche, the lightest of touches to shift the mindscape, transforming one thing into another, leading to new ways of seeing. Metaphor follows Emily Dickinson’s injunction to ‘tell the truth but tell it slant’. So, slantwise, by Saturn-mind running rings around literalism, metaphor is a canted incantation, it breathes life into fact, it enchants. In Here Is Where We Meet, John Berger describes a moment in his childhood when on a visit to Epping Forest with a small friend, he drew an owl, and hid it in the hollow of an oak. Returning later, the drawing had gone and the hollow was full of feathers. His friend said they could write with them. ‘I thought she meant they were an alphabet. It could be that it is with them that I’m writing at this moment.’ Swinging, like a feather falling, between real feathers and the mind’s metaphorical flight and back to feather. Shamanism and poetry both use indirect, implicit and enigmatic language to know and to heal. Amazonian shamans, talking to the anthropologist Graham Townsley, described their mode of expression as ‘language twisting-twisting’, and explained its elliptical and abstruse power thus: ‘I want to see — singing, I carefully examine things — twisted language brings me close but not too close — with normal words I would crash into things — with twisted ones I circle around them — I can see them clearly’. 
Metaphor, says Townsley, changes the world by changing people’s perceptions of it. Artists know that blunt and obvious references militate against deeper thinking and prefer to work in subtle, oblique ways. At my most depressed, in order to describe what I felt, I said I was drowning. No literal description was good enough. Only a metaphor seemed a craft strong enough for me to cling to, a boat to carry my grief across the sea to someone else’s mind. For some, elliptical language may be the only way to unwrap tight, compressed pain. For others, self-disclosure can only take place in language which conceals, even as it draws attention to itself. A good therapist listens carefully to a client’s metaphors, ellipses and masks, for they are true beyond literalism. In Australia, I met an Indigenous woman who was a ‘story doctor’, choosing which story to tell to help heal someone. Some stories can be spells to heal oneself. I recently wrote a novel partly ‘about’ Frida Kahlo. At first, it was entirely autobiographical, but it was too uncovered and I needed the allusive nature of fiction to tell a story more truly. So my book is a votive story, a prayer that my life might copy my art. Halfway through his life, Jung found himself lost in an underworld of the psyche which he was determined to understand. He noted two influences: one, the ‘spirit of this time’, was concerned with ‘use and value’, while the other ‘rules the depths … the inexplicable and the paradoxical … the melting together of sense and nonsense, which produces the supreme meaning’. The shamans I visited used a metaphor common in the Amazon: you have been struck by arrows, they said, poisonous darts designed to kill the spirit. It was a perfect metaphor for what I — like so many artists — had experienced. And, they said, they could suck them out of my mind.
So, like powerful dramaturges, they dramatised the metaphor, embodied its meaning, staging the powerful sense of cure, sucking the poison out of my head. It made me well. Placebo effect, a cynic might say. Absolutely. The word has its roots in ‘pleasing’, and good medicine like good art should please in order to heal: the placebo’s success is evidence for the power of metaphoric medicine to heal the mind. ‘My project,’ says the magus Prospero in The Tempest, is ‘to please’. In the Amazon the shamans sang songs over me called icaros, half-whistled, half-voiced, half-heard, half-imagined: exquisite Ariel music, in themselves mind-medicine, curative music sung by these curanderos. Ted Hughes wrote of ‘the healing effects of reading and writing poetry’, which recalls the fact that the Ancient Greek Apollo was god of poetry, music and healing. A bundle of eagle feathers, perhaps, or a pelt of wolfskin, a drum painted with ciphers of a particular land, a hare paw, maybe, or a tiger tooth or bone of reindeer — shamans identify themselves with the motley earth, in patchwork cloaks jingling with diverse life. The striking correspondence of shamanic practice the world over, the similarity of costume and the likeness of role, have led some to suggest that shamanism somehow began in one place and spread globally. To me, that is absurd. Rather, it will arise, this ur-religion, wherever earth meets mind. Wordsworth called himself ‘nature’s priest’ and the term suits a shaman’s role from the Amazon to the Arctic. In Tanzania, shamans may be called ‘doctors of the forest’ and they must protect it. Shamans have always known what the discipline of ecology has painstakingly retaught: that everything is interdependent and coursing with a transcendent and sacred life force. Shamanism survived in Britain and much of Europe until the witch-hunts, when the beyonders’ wisdom and nature-knowledge was hunted down.
So knowers and seers had to shift their shape, in a new transformation which radically masked who they truly were. I think I can see it happen. Watch Shakespeare. His fools and jesters shine out of his plays with the illumination so typical of the shaman’s role. They are charismatic, mercurial, they play as reckless tumblers and clowns and yet the Fool is wise. The quality of empathy that shaman and artist both have is portrayed in Lear’s poignant, devoted, suffering fool, and is played out in numerous acts of mime and mimicry, those related skills of fools. Characters of licence, they can speak truth to power, they are called mad when they speak their sanities to a crazed court; they may speak a language twisting, twisting the words of a king to wring out his honesties. Shakespeare’s fools trip logic with inspired nonsense and, acting by their own twilights, speak truths in a shadow language denied to the glaring light of literal speakers. To restore balance, they create a turvy-topsy world, they are grave at comedy and witty at the graveside, they are liminal, living on the edge. Like shamans, they are unsalaried and work by pleasing, by placebo, even when their healing stings. Lear’s Fool or Feste or Touchstone come alive in the between-spaces, the manoeuvrings of love, of power, of psyche. Shakespeare’s fools often speak directly to the audience, playing on the border between the play and the ‘real’ world (how I hate that half-witted term). They perform their motley antics right on the edge where metaphor plays, the edgy dimension of the player. Shakespeare was profoundly shamanic, a magician who knew how magic was related, etymologically and intrinsically, to imagination. Some say that Prospero is modelled on John Dee, alchemist and seer, while others say he represents Shakespeare himself.
He is, to me, a portrait of shape-shifting made just as shape-shifting itself cast off its old costume, just when the shamanic role was far too dangerous and had, like Prospero’s cloak, to be cast away. For the tide had turned and magic was a castaway, the magician shipwrecked on the dry island of Protestant literalism. What was needed was a sea change, and as Prospero abjures his ‘rough magic’ of mere matter, actual materiality, in the literal world, he stands before us a metaphorista, on a different shore, with a poet’s power over the mind’s ocean, the imaginal world. Listen: the actors are changing behind the scenes. At the threshold you can hear their shifts rustling. And just at that moment, on stage, Prospero slips off his cloak, and Shakespeare shifts the shape of shamanism into art, the magician becoming the imaginer. A threshold character at a threshold in history, Shakespeare’s genius gathered the first and most august harvest. Just in the nick of time. For the curs of Puritans were there, snarling at the gods. They closed the playhouses of theatre. Seeing everything as literal as black letters on the white page, they wanted to close the playhouses of the mind, too, where imagination makes a play on words and thoughts, making ludic allusion to an illusory world. Malvolio, ill-wisher to revelry and liveliness, bleaches the motley fool of his colour. No enthusiasm. In its fossil poetry, enthusiasm, en-theos, means the god within, and that unmediated — natural — divinity was cursed. No metaphorical revelations, only literal scripted teachings. And no icons. The alchemy of art, which transmutes the base metal of literalism into the gold of metaphor, was denied. Literalism was in the ascendant. Catholicism’s metaphors were erased by Protestant literalism, which ran its writ over the following centuries through the Industrial Revolution and Utilitarianism.
The psyche, understood so well by artists and shamans, was reduced to a machine to the point where Descartes located the interaction of body and mind in the pineal gland. Such mechanistic thinking continued its brutalisation, as if mind didn’t matter, and in the 1930s depression was treated by leucotomy, a form of psychosurgery where the frontal lobes would be severed from the rest of the brain, as a cure. It was done, says medical historian Roy Porter, ‘using an ordinary cocktail-cabinet ice-pick, inserted, via the eye-socket, with a few taps from a carpenter’s hammer’. Instead of the healing power of metaphors (literature an ice axe to crack the frozen seas within us), a literal ice pick was shoved in through the literal eye into the psyche.

Henri Rousseau (1844–1910), The Dream, c1910. Oil on canvas, 204.5 x 298.5 cm. Museum of Modern Art (MoMA)/Scala

In the Amazon, I met healers who would not call themselves shamans because, I was told, barely 15 years before I was there, Christian missionaries had urged indigenous people to kill their own shamans for their othersight. This was reported directly to me, by Aguaruna people in the Peruvian Amazon. Twentieth-century Huichol shamans in Mexico were murdered at the behest of that same faction of the church. Siberian shamans were interned and executed by Soviet authorities, their costumes and drums burned. Those who said they could fly when their minds were metaphorically winged found that they were persecuted with literalism’s sadistic mockery, and flung out of helicopters. In northern Scandinavia, Sámi shamanism was subject to persecution for hundreds of years and was driven underground as the spiritual and secular authorities sentenced shamans to death. Rationality, reason alone, is insufficient to the wholeness of the psyche, said the Romantics. Blake contrasted the ‘man of imagination’ with the ‘idiot reasoner’.
The role of shaman was more safely described as ‘art’, but not safe enough: the Romantics were scorned by deaf literalists, the idiot reasoners, for their shamanic willingness to translate the voices of nature. Iain McGilchrist’s The Master and His Emissary shows how the Romantics represented the ascendance of the brain’s right hemisphere: an instinctive, metaphorical view which comprehends the whole, honours life, art, humour and metaphor. But, argues McGilchrist, there has been a dangerous coup d’état in western culture, a privileging of the left hemisphere, which is an important servant (good at logic, language and engineering) but a terrible master. Seeing the timber but ignoring the forest, it counts the profit but discounts its own destructiveness. It has no regard for thoughtways other than its own. Short-sighted and narrow-minded, it cannot see the stupidity of its own position. Fundamentalist Christians attempting to translate the Bible into one Amazonian language got stuck on the ‘good shepherd’. There aren’t any sheep in the Amazon, so what were they to do? Unable to comprehend metaphor, they imported a few surprised sheep into the grassless Amazon, to the bewilderment of the sheep, the people and the forest. The sociologist Max Weber characterised western modernity as a ‘progressive disenchantment of the world’. This is an age of fundamentalist literalism in culture, as much as religion or politics. Literalism, a dogma treasureless and lamentless, acknowledges only the coins that can be counted, frankly pounding the dolour of marked money. As Jung remarked, the spirit of this age understands use and value. Nothing else counts. Neither nature nor poetry. Barren postmodernists scorn the fertile romantics. The cursed evangelicals kill the curers of the forests. The hate-filled libertarians mock environmentalists for their shameless willingness to love the earth.
You catch the glare: a blinding sheen where shadow should be seen; the cold touch of steel where the warmth of wood should be felt; one dimension of scale where seven kinds of music should be heard. Where is Basil the Blessed when you need him? One of the great holy fools of Medieval Russia, Basil robbed the rich and gave to the poor and Ivan the Terrible built a cathedral in his honour, with jester’s hat cupolas. In shamanism’s wise and witty fooling, grave laughter mocks injustice. Mexican government troops were ‘bombed’ with hundreds of paper aeroplanes, sent by the Zapatistas, whose spokesperson for the protection of indigenous cultures and the forests is the masked, wise, ludic, elusive serious jester Subcomandante Marcos. The volunteer organisation Clowns Without Borders, which began in Barcelona, has now mushroomed internationally. The Italian satirist Dario Fo, who uses Commedia Dell’Arte and mocks those in power, is now joined by Beppe Grillo, the comedian subverting politics. In Britain, carnivalesque protesters set up the Clandestine Insurgent Rebel Clown Army, and Rupert Murdoch had a custard pie thrown at him by a clown-protester. In Iceland, the comedian Jón Gnarr and the Best Party have taken mayoral power. In the States, the Yes Men mimic public folly, impersonating loathsome entities to tell the truths of corporate lies, posing, for example, as representatives of Dow Chemical to issue an ‘apology’ for Bhopal. All clowns have the immunity of the court jester, the suits cannot hold them still, they are slippery as a bladder on a stick. ‘Who shall bring redemption but the jesters?’ asks the Talmud. Jesus-the-jester subverted traditional authority and St Francis, ‘God’s Jester’, was a holy fool, his happiness simple and, in fact, silly, as that interesting word has it, which derives from saelig: holy. For calling Earth Gaia, a goddess, James Lovelock was called a holy fool — an insult to treasure. 
We are such stuff as dreams are made on, said Prospero, while in a disenchanted modernity a delinquent prosperity is loosed upon the world, where only materialism matters, and that only for the few. Utilitarianism, efficiency and the profit-motive are literalism’s tyrannies, a totalitarian state of mind deplored by poets, including Paul Kingsnorth who, in a furious refusal of ‘the useful’, wrote: ‘I have hid my heart in a butterfly’. Eros tucked inside Psyche, for the butterfly was the Greeks’ image for Psyche. For the Greeks, Eros was a liberating force, unleashing Psyche’s creativity in art. In Greek myth, Eros cannot be separated from Psyche. Dante considered that the interplay of stasis and movement was the result of the gravitational attraction of love which ‘moves the sun and the other stars’. The human psyche cannot be unbound from the force of erotic gravity, a jaguar’s ferocious love for the felt earth, a love which prowls in the human heart no less, a helical love spiralling inward to this erotic earth, a love which transcends downwards, earth-enchanted and grave with fury at the burning of the forests. There are shooting stars in daylight, as well as by night, and the ones I like the best are the ones you never see, the stars which play truant from ordinary sight, and can only be seen by psychelight, as they fall upwards from below, towards the earth, rising in the erotic gravity of love.
Jay Griffiths
https://aeon.co//essays/how-the-jaguar-shamans-took-the-arrows-from-my-mind
https://images.aeonmedia…y=75&format=auto
Demography and migration
Spanish high spirits die hard, but when a generation is moving overseas or in with parents, the party can’t last
Few occupations are as frenetic and time-consuming as job hunting. During a week in Estepa, a small hilltop town known as ‘the heart of Andalusia’, I anticipated boundless horizons of enforced idleness. Unemployment is 33 per cent in southern Spain. It seemed reasonable to assume that there would be limitless time to sit with Antonio and his friends, drinking coffee in the winter sunshine, flicking olive stones into ashtrays for sport, and contemplating the rolling hills. Andalusia has long been characterised as lazy by northern Spaniards. In that sense, it’s analogous to the southern states of the US — the industrial north resents the slower pace of life in the warm south, and dismisses poverty as sloth. There wasn’t much time for olive stone-flicking, competitive or otherwise. Antonio was constantly hustling. I should have guessed. When I met him two years earlier in London, he was using his degree from the University of Granada to throw himself into any work he could find, however overqualified he was. He worked as a delivery driver for a Chinese restaurant in Surrey, spent Saturday nights arranging the Peri-Peri sauce in the Balham Nandos, started a handyman business, and DJed disco records on Thames pleasure boats. On my first afternoon in Estepa, he took me out with his gaggle of American friends. They were bright-eyed 20-somethings with basic TEFL qualifications, spending a year teaching English to children in Andalusian villages. As we roamed from one café to another, Antonio kept falling behind on the narrow, cobbled streets, bumping into someone he needed to talk to. Surely, even at the age of 30, he couldn’t know everybody in a town of 12,000? But he was always setting up a contact for someone, fixing a flat rental for someone else, or roping people into his big project: Heart of Andalusia, a tourism and language teaching company which would soon, he hoped, bring visitors flocking to his remote hometown. 
‘We call him Don Antonio,’ teased a slight young man from Oregon called Riley. ‘He’s Estepa’s mafia boss.’ Out there, even an ersatz mafia boss is swimming against the tide. Andalusia’s economy had become hugely reliant on the coastal construction industry, so its collapse has been devastating. As Emma, a hostel receptionist in Seville, explained, a whole generation had been encouraged to leave school before taking their final exams, in order to rake in the short-term gains on the building sites — often earning €3,000 or €4,000 (more than £2,000-£3,000) a month. The ‘brick crisis’ had ruined her generation, she said. Many of her friends were left with mortgages they couldn’t pay, young children in school, no qualifications, and seemingly no prospect of finding any other work.

Indignation: unemployed youths rally in Madrid, March 29, 2012

The crash has disproportionately punished Spain’s young people. At 30, Antonio is not in the demographic that has borne the brunt: the 16- to 24-year-olds among whom there was 47 per cent unemployment a year ago, rising to 50 per cent, and most recently, 53 per cent. The numbers start to lose meaning when they follow so rapidly after one another: 800,000 empty homes; more than five and a half million unemployed; up to eight million indignados on the streets, calling for ‘real democracy now’; €65 billion of cuts on the way. Antonio, like so many others, had returned to the family home, living with his parents and younger brother. The Ortiz homestead certainly felt cosy as we stretched out by their dining table, under heaters that took the edge off the relative evening chill of January. After a bit of family catching-up in rapid, heavily accented Andalusian Spanish, Antonio passed on the news to me in English. ‘So, my mother just got a call this morning: my brother has just lost his job. It was a big company as well. Just closed completely. He got one month’s notice.’ Antonio and his mum shrugged. She was knitting determinedly.
It’s hard to be shocked when the statistics make stories like this so likely. But the news hung over the table. An interminable TV talk show was rattling away in the background. Antonio’s other brother Manuel returned from work — he drives an hour each way to his job as a primary schoolteacher — and gave me a friendly clip around the ear. He joined us, knees under the tablecloth, for a comforting bowl of beef and potato stew. As the family proceeded through an itemised list of everyone they knew who’d lost their jobs recently, Manuel did what I began to recognise as ‘the crisis sigh’: a unique expression that combines a slight eye-roll and a resigned half-smile. What are people to do, I asked. ‘Leave,’ he said, coughing up a chuckle. Manuel’s friends were going abroad in droves. ‘Especially the graduates, people with lots of skills, you know? The only opportunities left in Spain are to work somewhere like Burger King.’ The brain drain takes people to Germany, to Britain, to North America. One cousin, for example, could only find the well-paid tech research work that she trained for in Canada, so she was heading there, leaving her friends and family behind. Antonio himself was considering prospects in France (where he had a Spanish émigré friend), or Sweden (where he had a few tenuous work contacts), or the UK once more. ‘Don’t tell my mother I’m thinking of leaving again, though,’ he said in rapid, deliberately indecipherable English, smiling at her through gritted teeth. These conversations have become perennial: a whole generation fixated on the question, ‘So where are you going to go?’ Of course, these graduates, from Spain’s broad middle-class, are not at the sharpest end of the country’s deepening crisis.
There is an undoubted privilege in being able to consider emigration as an option, to have an emotionally and spatially accommodating family home that will offer a fallback. These are the much-discussed graduates with no future, a group who were integral to the various global ruptures in 2011, and whose very existence suggests more of the same.
Five years of university, three languages and no reply to 88 job applications
Spain’s young unemployed certainly do not lack the skills to sustain the country’s lunge into post-Fordist modernity. The fruits of three decades of social democracy since Franco are now mouldering in the heat. Forty per cent of Spanish 25- to 34-year-olds have degrees, compared with the EU average of 34 per cent. Yet few of them can find a way to use those qualifications in the country where they earned them. And so they are leaving. There are no official figures but several estimates put the number of graduates who have departed since the crisis began at around 300,000. The great historical brain drains tended to emerge either from a dearth of particular skills in the destination country or from the persecution of a particular group in the country of origin, or both. Scientists left post-war Europe for America because of the skills gap in the latter, while 17th-century Huguenots fled Louis XIV’s regime for their own safety and ended up starting what would become the South African wine industry. What’s happening in Spain looks altogether more ragged and unfocused. Perhaps this is a vision of the West’s medium-term future: itinerant human capital spreading out in a thousand directions at once, like an anthill hit with a spade, as outsourcing, automation, and the inexorable rise of the global south leave an over-educated post-Franco generation with the skills to work everywhere and no opportunity to apply them at home. 
While Antonio pursued various options at home and abroad, his father gave him an offer he couldn’t refuse — though he very much wanted to. A business contact needed a new middle manager for his export business, and had consented to see Antonio on his father’s assurance that he was bright, eager and well qualified. The problem was, this contact didn’t have time to sit down and interview Antonio in Estepa. Instead, he wanted him to come along in the car while they drove to meet a client in Granada. Antonio calculated what that meant for his day: an hour and a half on the road with some old duffer he’d never met, then an hour for the client meeting, then lunch in Granada, then an hour and a half back again. ‘My whole day will be gone!’ he cried. He was frustrated because he had so many other things to do, so many emails to send to contacts in Sweden and England about Heart of Andalusia. The bonds of Spanish patronage are tight, though. He couldn’t turn down something his father had already set up. In the end, his father’s contact liked him a lot, and offered him the job — but it would be a big commitment, and Antonio felt compelled to be honest. ‘I might not be here in six months,’ he had said, slightly sheepishly. ‘It was better to tell him that than to take the job for a bit, then when I am ready to go back to England, or Germany, or France, or …’ He tailed off. Sure enough, within a few months of my visit, he had used his savings to fly to the UK for a job interview in Brighton (no luck), and eventually found his way to Berlin, where he intends to remain indefinitely. In the meantime, there were more meetings to be had, more international Skype calls, more hustling — even some more studying. Andalusia was the only place holding civil service exams in the summer, and they’d had an unprecedented number of applications. Sensing that the end for this kind of job was nigh, everyone seemed to have entered. 
‘It’s like a mass job lottery, for the last few secure positions in the country,’ Antonio explained. ‘And they only cost €20 to enter so, I thought, why not? But I am not really bothering to study hard, I’m too busy with my other projects.’ His sister-in-law, on the other hand, had been working every day, grabbing every spare moment to revise for the exams. It was, Antonio said in an uncharacteristically sombre tone, her one hope. After a few days it was time to return to the big city, Seville. Antonio had had another work-related meeting arranged for him. At times, it seems as if Spain could ride out the crisis just on its recession-proof determination to meet up for coffee. In this case, a friend of Antonio’s from school, now an entrepreneur with his own private security firm, had put Antonio in touch with the daughter of a business associate. Cherezade was an unemployed teacher. She wanted advice about moving to London to find work. Antonio’s friend had also intimated to him that Cherezade was single, which made him grin coyly. But he wouldn’t be drawn into such game-playing. He’d agreed to the meeting out of the kindness of his heart. We walked through Seville’s impeccably preserved city centre, past the Giralda tower, the cathedral, the Alcázar palace, monuments to wealth plundered during globalisation 1.0. The appointed meeting place was down towards the Guadalquivir river, in a mosaic-clad corridor of a bar with just two small tables and a chalkboard menu. Formal introductions were made. The atmosphere was somewhere between a business meeting and a more ancient ritual, perhaps the discussion of a dowry. Six of us were present: besides myself and Antonio’s school friend, Cherezade’s parents were both there. Her mother stood dotingly behind her chair while Cherezade asked Antonio questions, shyly but methodically. 
Her father, a haughty man in his fifties, dashed out every few minutes to talk on his BlackBerry, returning with plates of pork rillette, langoustine drenched in garlic butter, and morcilla or black pudding on toast, drizzled with honey. After probing Antonio’s social, professional and domestic experiences in the UK, Cherezade bowed her head and turned to me. Was there much work in London at the moment, she asked hopefully, what with the Olympics? I told her that it wasn’t as bad as Andalusia. She smiled meekly and said that would have to do. Two months later, she surprised me with an email. She was setting off for London in a week and still hadn’t found any work or anywhere to live. I detected a note of panic in her ‘jejeje’ (‘hehehe’ in Spanish) but, as she said herself, what else was she going to do? As we left the bar, I suggested to Antonio that he should make this his job: a consultant advising the rest of his generation on getting work abroad. ‘There’s a lot of potential there,’ he said with a smile. ‘I would be a bit of a parasite though.’ Walking through Seville’s open-air museum of an old town, Antonio weighed up where to go next — London, Stockholm, Paris, Berlin? Each had their advantages. Each was undermined by not being Seville. So you’d stay if you could, I asked? ‘Of course,’ he snapped. As we swung past the town hall, still in the hands of the social democrat Spanish Socialist Workers’ Party (PSOE), Antonio got a text message from an old friend. It asked if he had heard the rumours that the right-wing People’s Party (PP) were going to scrap the civil service exams — and with them, that last tranche of secure jobs. ‘I’m going to vote PP in the next regional elections anyway,’ Antonio concluded defiantly. I expressed surprise. He hadn’t struck me as a conservative. ‘I’ve just had enough of PSOE,’ he said. ‘They have been in power in Andalusia for so long. 
They are so corrupt, they would do anything for power.’ He told me that he had lost his job in Seville’s civil service because of the PSOE; they checked the lists of civic employees and the non-party members were always the first to go when cuts were made. This sounded like a conspiracy theory to me, an unlikely interpretation from someone so smart: but he was adamant. ‘They have to go. I am so angry!’ he insisted. Patting him on the shoulder, I said, ‘You could almost say that you are … indignant?’ He laughed. ‘¡Sí! ¡Estoy indignado!’ Six weeks later, in Gracia in northern Barcelona, an unpretentious, youthful quarter, I met Tom, an English friend I’d made via online discussions of the indignados, and his Catalan wife, Gemma. Sitting in a sunny square and dabbing forks at patatas bravas, I told Gemma how fascinating it was to an outsider that the Spanish family unit seemed able to support these astonishing levels of unemployment. ‘It’s not just unemployed people, though,’ she said. ‘That’s Spain. I have a job and I’m doing a university course, and I have a flat, and I’m married, and I’m 30. I feel like a grown-up — but my mother still says to me, “Oh, you’re so busy, with your studying and your job, you don’t have time to do washing — you should bring it home, I’ll do it for you”.’ Every time Gemma went home to see her parents, she would return with plastic containers full of freshly cooked food stacked up to her chin. ‘They know I can cook!’ she laughed. The next weekend, at a demonstration against new labour reforms designed to make it easier to sack people, I ran into Tom and Gemma again. ‘Fun, isn’t it?’ Gemma asked as a samba band started up. ‘It’s just a warm-up for the general strike and the summer.’ Flexing those muscles again, airing out the banners, warming up the vocal cords. There was an astonishing number of Second Republic flags: the red, yellow and purple tricolour of the government ousted by Franco’s military coup in 1936. 
A symbol of antifascism raised up once again, but this time, against what? Against austerity? Against capitalism in general? The resurgence of the Second Republic flag felt odd: a kind of ready-to-wear identity for those alienated from the prevailing orthodoxy. It remains to be seen whether ‘the two Spains’ of old will return to their magnetic poles of right and left. The latter are certainly becoming increasingly vocal, and they will only become more so as prime minister Mariano Rajoy’s austerity programme takes effect. When the demo arrived at its destination, Plaça de Catalunya, there were no speeches, no tabletop pamphlet sellers, no sociable lingering. Ten thousand people instantly dispersed, as if a bomb had gone off: it was Sunday, and 2pm. There were family lunches that needed eating. I turned on my heel and walked inland to meet another contact, a film-maker and journalist who had once come to London to interview me about British dance music. Like Antonio, Eduardo had been sucker-punched by the crisis. He was 36 and had spent more than a decade of his adult life building his skills, experience, and independence during the good times. His friends mostly seemed to be several years younger than him, including his girlfriend Julia, who was 28. The couple had met while working on a culture magazine two years previously. It had seemed quite stable by the standards of the industry but, nevertheless, it folded. Eduardo proposed meeting at the same square in Gracia where I had met Tom and Gemma. It was a popular spot among young Catalans, tucked away from the tourists and moving to a gentler rhythm than that dictated by the hawkers and hucksters of Las Ramblas. Inside our café, only two tables were occupied and a faint yellow light glowered over the mahogany bar, but the outdoor seats were in huge demand, slow-cooking in the dazzling March sunshine. Eduardo and Julia were a fashionable couple, of the kind you might see at a hipster rockabilly night in London. 
Eduardo wore Chelsea boots, mod jacket and rich blue jeans. His helmet of black hair was flecked with grey. Julia was strikingly pale. She wore an Audrey Hepburn outfit, a black-and-white polka-dot dress with ruffles and matching hairband. For about five hours, we grazed on tapas and beers, lemonades, coffees and vermouth. Several friends came and went. Others stopped by to chat. Since losing his job, Eduardo had been trying to get freelance work. Even with 15 years of contacts in the media and in Barcelona’s music industry, it was incredibly hard. ‘No one wants to pay for anything anymore,’ he lamented, picking dolefully at a plate of salted potatoes. Nine months earlier, he had left his flat and moved back into his mother’s house in suburban Vall d’Hebron. ‘It’s about 20 minutes further out from here on my scooter,’ he said. ‘I get on with my mother, but there’s nothing to do there. I have to come into Barcelona all the time for fun, and for work.’ I asked if he felt strange living back at home in his 30s. ‘It’s better than the alternative,’ he said, ‘spending my savings living in a small flat with no space, no garden, no patio.’ Your 30s are different to your 20s, he told his younger friends, and better too — you lose your nervous sense of who you are. You slide into a more comfortable gear.
Scarcity of work means time is plentiful for young Madrileños
Eduardo was philosophical about the position the crisis had landed him in. As long as he had his friends, he said, and then he stopped himself mid-sentence. That too was now in doubt: Julia had recently been accepted onto a postgraduate course in New York, he said quietly, and would be leaving in the autumn. This convivial scene couldn’t last. After one recent job application, Eduardo was considering his own move abroad — to the Spanish capital. ‘Madrid is a different country to me, a different atmosphere and people. 
I don’t like it so much — it’s too big.’ He gestured to the gloriously contented afternoon unfolding around us, his friends laughing and telling stories in the background. ‘I don’t want to move to Madrid. They are somehow less relaxed there.’ If he got the job, he would have little choice. ‘From the 1990s to, say, 2004 or 2005, we’ve had the Spanish economic miracle. But you know, the problem with miracles is, they tend not to be true.’ He didn’t believe there would be riots. ‘We are too lazy,’ he said, as if this was an incontestable fact. But your history, I replied, is a litany of coups, rebellions, riots, revolutions, occupations, and wars. ‘I know, I know, but —’ he pressed his hands towards me to emphasise the point ‘— really Dan, we won’t have a revolution this time. We need something different to the politics we have, but the indignados won’t change anything. We are just going to become a poor country again.’ Three of the girls facing into the sinking winter sun had put on large, round sunglasses. ‘Hey,’ Julia interrupted, ‘can you take a picture? We look like movie stars.’ Eduardo tried to take a photograph but he couldn’t steady his shaking hand. Too many drugs in his 20s? I didn’t like to ask. Later, he started rolling a joint, which was a similarly messy affair. As the afternoon wound on, the marijuana and vermouth kicked in, and the combination pushed Eduardo into a much giddier gear, the narcotic muscle memory of a youth well lived flowing through him. He gripped the arm of the plastic chair and laughed loud, lifting his chin up to the sky. Catalonia has long been the breadbasket of Spain, the industrial hub that, under Franco, drew so many people from poor rural areas. The process left numerous villages abandoned altogether. These ghost villages have a modern partner, the massive new suburbs of Madrid built during the construction boom, some of them finished just as the collapse hit. Seseña, 50 miles south of Madrid, was built to house 30,000 people. 
Now it is all but empty, a three-dimensional blueprint of systemic failure. Meanwhile, in Rasquera, a small hilltop town some 100 miles south-west of Barcelona, finances became so dire this year that the town embarked on a serious plan to sell marijuana as a cash-crop. The Asociación Barcelonesa Cannábica de Autoconsumo agreed a price of €1.3 million over two years for the harvest. Only central government interventions to try to prove that the plan is illegal — which it might not be — have slowed it down. ‘We are surrounded by mountains here,’ Eduardo explained to me a week later, mapping Vall d’Hebron’s secluded suburban geography. The area was like a forest clearing, and the wooded hills seemed to comfort him. We were sitting in his mum’s large back garden at 5am, finishing off a Friday night with a patio cigarette, gazing out on the small pool, sets of garden tables, and lemon and orange trees. ‘When we have parties here, I don’t want to leave.’ He meant his mum’s home, but also Barcelona. The closer he got to the job in Madrid, the more unhappy he became, even though Julia’s departure to New York meant they would have to split up anyway. Earlier, as we stood idly outside an electronica gig in a record shop in the Barri Gòtic area, I had tried to tell him that moving could be a useful distraction from missing Julia. All around us, black-haired, well-manicured goth hipsters smoked skinny roll-ups and necked bottles of Estrella, occasionally jumping off into a doorway when a motorbike passed down the narrow street. The motion seemed deft — almost unconscious — and good-natured. Those medieval streets were a shared space, everyone understood that. It had been a memorable evening. It included my first ride on the back of a scooter, which might have been the most thrilling experience of my life. We cruised the empty three-lane highways, unguarded, stoned, and drunk. 
Once you’d accepted the possibility that you might die at any moment, speeding through the Barcelona night was pure exhilaration. What do I have to do, I asked slightly nervously, before we set off? ‘Just hang on,’ said Eduardo. On the 45-minute journey home we climbed past the pseudo-tropical palms of the former Olympic site, up and up to Vall d’Hebron, the city in the mountains. Through long empty tunnels, the cold wind drew tears; a rush of early 1990s computer games, the simulated yellow glow on the tarmac. Previously, we had seen the British DJ Rustie in the foyer of CaixaForum Barcelona, a big art museum, bathed in blue light and framed by incongruous posters advertising a Delacroix exhibition. Then another bar, with more friends and acquaintances, locals, Finns and Englishmen, then eventually a 1960s-themed club run by some of Julia’s friends. The vaga general, or general strike, was approaching, the red and black banners of the anarcho-syndicalist CNT union hung along the Diagonal, the endless avenue that bisects the entire city. The 1960s club was in a largely empty basement bar arranged in an awkward triangle shape. Pixie girls in vintage dresses and small pointy shoes did shimmy steps and everyone drank what felt like quadruple gin and tonics (Spanish measures are serious measures). After an hour of shouting over the speaker stacks, Eduardo whispered in my ear. ‘I’m tired,’ he said. ‘Shall we go?’ We pushed our way up the black painted stairs and out into the still, quiet night. ‘I’m too old for this,’ he said, only half-joking, once we had escaped the din. He was too far gone for a 45-minute scooter ride, so we motored cautiously around a few tight corners to an unpretentious café called Els Tres Tombs — open 7am to 5am — to feed Eduardo back to health. I chose a fortifying selection of padrón peppers, pan con tomate, morro (deep-fried pig’s snouts) and croquetas. Eduardo had a coke. Then, substantially restored, he finished off with coffee and cognac. 
‘Thanks man,’ he said. ‘I don’t know how we would ever get home otherwise.’ In his 2006 travelogue Ghosts of Spain, Giles Tremlett describes the steely-eyed determination with which post-Franco Spain pursued hedonism as ‘the heavy breathing of her freedom’. For 30 years, good times have been an act of self-determination and emancipation in themselves, following decades of cultural repression and denial. That night — calm yet full of life, drunken but never self-destructive — felt emblematic of an easier kind of freedom: a country breathing through the pores, not the mouth. Spain will need a new kind of miracle for it to continue. Some names have been changed in this piece.
Dan Hancox
https://aeon.co//essays/can-ailing-spain-keep-its-young-and-its-spirit
https://images.aeonmedia…y=75&format=auto
Ethics
Can private charity eliminate world poverty – or is it about making the givers feel better about themselves?
Place your hand on your heart, and repeat after me: ‘I recognise that I can use part of my income to do a significant amount of good in the developing world. Since I can live well enough on a smaller income, I pledge that from today until the day I retire, I shall give at least 10 per cent of what I earn to whichever organisations can most effectively use it to fight poverty in the developing world. I make this pledge freely, openly, and without regret.’ How did that feel? Did your heart skip a beat when you got to the ‘10 per cent’ bit? If not, then you could become a member of Giving What We Can, a global organisation set up by the University of Oxford research fellow Toby Ord with the aim of alleviating poverty in the developing world. While members of Ord’s fellowship pledge to give away at least a tenth of their income, Ord himself, a philosopher who works on practical ethics and consequentialism, is giving away about half of his salary — more than £10,000 a year. Membership of Giving What We Can spans 17 countries. Not that the 250 members – many of whom are students – are likely to eradicate world poverty on their own, but it’s a start. When Ord first made his stand earlier this year, he was briefly besieged by journalists, all of whom wanted to know why he had decided to give away such a large proportion of his income to charity. Ord’s answer, in a nutshell, was that it was the right thing to do, and that each of us should do the same; the only caveat being that we should spend our money carefully, choosing those charities that save the most lives for the least money. Giving away a tenth of one’s income no doubt sounded very Old Testament to anyone who knows anything about the Bible — a tithe, in other words. Although, it should be said, not in the ancient Egyptian sense of the word, where, in the days before official taxation, the pharaoh would tour his kingdom collecting tithes from his people. 
Nor, indeed, in the old meaning of the word in America, where Christian pastors began to focus on the importance of tithing only after the First Amendment in 1791 made explicit the split between church and state, effectively cutting the church off from the tax money it enjoyed in early colonial times. Today, Americans — just four per cent of whom say they tithe — donate $298.5 billion each year to charity, of which $95.8 billion, about a third, goes to religious organisations. And yet, in an era that’s seen tented encampments mushroom on Wall Street and on the steps of St Paul’s, maybe the time is ripe for a return to the tithe. Might this ancient ritual pledge, given lip service at best even by many Christians, make a comeback and, in the process, change our world? Inside all of us — however rich or poor — is a small child holding out a homemade card and saying, ‘Mum, I’ve made this for you.’ Ord’s stand was inspired by Princeton philosopher Peter Singer and his 1972 essay ‘Famine, Affluence and Morality’. In brief, Singer argued that if one could afford to give money to reduce others’ suffering it was immoral not to do so; and that, moreover, just because the person in need might be thousands of miles away does not make it any less justifiable to do nothing to help. Singer is now a signed-up member of Giving What We Can. Membership of Singer’s own organisation — The Life You Can Save — also involves a personal pledge to give a recommended percentage of annual income to charity, although it does not call this a ‘tithe’. The results have been spectacular: since February 2009, close to $81 million has been pledged. Clearly, if in 1972 his was a voice in the wilderness, by 2012 his call to action has captured the imagination of a great many people. The practice of tithing — ‘to take the tenth of’ — has been around for millennia. 
Traditionally, Jews have always pledged a tenth of their income, the earliest biblical reference being to Abraham, who gave a tenth of all he had to King Melchizedek after returning victorious from a battle at which he had managed to retrieve not only his nephew Lot but also all his lost possessions. In this guise, giving is an act of thanks. In Numbers 18:21, tithing is included in Mosaic law, under the more utilitarian pretext that it would provide for the Levites, whom God wanted to concentrate on priestly duties. Much of contemporary religious tithing in Britain, whereby the devout simply pledge their 10 per cent directly to the church, does the same, going straight into church coffers, paying for vicars, staff, and whatever outreach projects a particular church might be engaged with. Interestingly, the New Testament is a lot quieter — indeed, a whole lot less religious — on the practice of tithing. The first Christians — not unlike Ord himself, who, however, professes no religion — were social radicals, described in Acts as sharing everything they had, and selling their property and possessions ‘to give to anyone who had need’. A few books later, St Paul, writing to the early church in Corinth, is in flexible mood, leaving it up to individuals to give ‘according to what one has’, and telling them that ‘God loves a cheerful giver.’ Which might be why — credit crunch and double-dip recession notwithstanding — today’s Anglican churchgoers give on average a good deal less than 10 per cent to their Church. Dr John Preston, national stewardship and resources officer for the Church of England, says that, in 2010, the last year for which figures are available, £305 million was given by 605,000 regular givers, who gave on average 3.4 per cent of their income, some way short of the 5 per cent advised by the General Synod. What would Melchizedek say? 
Money and the church have had a long and rocky relationship and, for each Christian radical such as Ron Sider — whose 1977 bestseller Rich Christians in an Age of Hunger advocated a graduated tithe according to income — there are others, not least the proponents of the ‘Prosperity Gospel’, who claim that personal wealth is one of the things the Almighty desires for his flock. ‘Prosperity’, which rose to prominence with 1980s televangelism, and swiftly disappeared from view after the Jim Bakker and Jimmy Swaggart sex scandals, is once again popular in America. Megachurches run by the likes of the American televangelist Joel Osteen place finance centre stage, promising that if believers can establish through word and deed that they are ‘in Jesus Christ’, then God will respond with gifts of health and wealth. And tithing is central, with Malachi 3:10 cited frequently as a favourite inspirational verse: ‘Bring the whole tithe into the storehouse, that there may be food in my house. Test me in this,’ says the Lord Almighty, ‘and see if I will not throw open the floodgates of heaven and pour out so much blessing that you will not have room enough for it.’ In other words — to paraphrase the Lord — give to me and I’ll give you back tenfold. Which sounds compelling, though exactly what these churches use their congregations’ tithes for is another matter. Some, admittedly, will go on outreach programmes to the poor, but a great deal will be spent on church staff, buildings, PA systems and the like. And what would Peter Singer — let alone Melchizedek — say to that? Reading this, it sounds almost as if one needs to be persuaded to give; as if without the karmic promise that we will receive far more than we might give, we’d never get out our wallets in the first place. And yet — despite the message of consumerism that it is more blessed to get than to give — isn’t the wish to give part of our makeup from our very earliest days? Look at a mother feeding her newborn, for instance. 
It might seem as if the mother is giving the baby her breast; but the baby is also giving his mouth. Just as, later in life, he will give his lover his body as she gives hers to him. Anthropologists argue the same: that the ‘gift’ gives rise to reciprocal exchange, without which our societies would crumble. I can see from my work as a psychotherapist, as well as from my own life, that being deprived of the opportunity to give can create as much difficulty as being prevented from getting what one needs. Let’s say I come into some money and decide to take an impoverished friend on holiday. When we get there, I feel so generous that I refuse to take anything from my friend. It’s not hard to imagine that, before too long, my friend will begin to feel deprived; with each expensive dinner, he will feel increasingly frustrated and, most likely, enraged. Giving, in other words, is a powerful weapon, which needs to be handled with care. In this light, looking back at last summer’s riots, I’d say that what happened was not just an explosion of violence driven by feelings of deprivation, but also an eruption of fury among some young people who feel that ‘society’ does not want anything that they might actually have wanted to give. On a different demographic level, I think we’re seeing a similar thing among wealthy philanthropists. Back in May, the Government dropped plans to cap the amount of tax that donors could claim back on their charitable donations, after loud protest from the charity sector. As the Charities Aid Foundation pointed out, half of the £11 billion given to charity last year came from just seven per cent of donors. All well and good, you might say: but why should the Government give back tax to the moneyed classes when it could use this money for people in real need? The rich, I’d hazard, don’t see it like this. Talk of caps made some feel personally offended. 
Quoted on BBC Radio 4, the technology entrepreneur Dame Stephanie Shirley (a former philanthropy tsar under Gordon Brown) said it was ‘extraordinary to cast such a bad light not only on the respectable charities but also on philanthropists. The vast majority are dedicated, well-intentioned and effective citizens, and we want to be respected as such.’ Or, as she said elsewhere: ‘I try to make [giving money] a committed act of love.’ In other words, Dame Shirley, like the rest of us, needs to give in order to feel human. Inside all of us — however rich or poor — is a small child holding out a homemade card and saying, ‘Mum, I’ve made this for you.’ We need to feel that our offerings are wanted. As we grow older, anxiety tends to creep in and we start to tell ourselves that, actually, what we need is to get, and to save, and to keep; that no one will look after us in our old age and so we’d better put whatever excess we have into our pensions. What if, however, we were to put a little less into our pensions and give away a little more? Try filling in the ‘How Rich You Are’ form on Ord’s website and you’re likely to discover that however low you believe your income to be, in global terms you are in the world’s highest income percentile. In this light, religious or not, tithing suddenly doesn’t seem such a leap of faith after all.
Edward Marriott
https://aeon.co//essays/taking-the-pledge-tithing-makes-a-comeback
https://images.aeonmedia…y=75&format=auto
Architecture
Old ideas of balance and harmony need to be put aside if we are to save a natural world in constant flux
One day this summer my younger son Oisín and I strayed from a heavily wooded trail on the Muckross peninsula in Ireland’s Killarney National Park and entered a narrow limestone cave. The entrance was fringed with vegetation, a pelt of moss on the rock, tendrils of tree roots exposed, so that entering that cave felt like climbing into the bearded mouth of a living thing. We paused a moment in the gloam just as I had done 30 years before, on my first and only previous visit. Then we began a shallow descent into a throat of absolute darkness. When I first slipped into this cave all those years ago, I was claustrophobic and the pressing of our small group of youthful conservation volunteers panicked me. Their stops and starts, their mirthful howls as they feigned mild terrors ahead, made me anxious to turn back. But I was sandwiched between my peers and my only option was to stumble on. Fortunately, after no more than 100 lightless metres, an opening became visible, the cave being actually a passageway, and I pushed those ahead of me until we spilt back into the day. Perhaps it was the panic and the feeling of having been spat back into the world that made that woodland appear to me like no other. It seemed to be a forgotten world, like those beloved by our science-fiction writers. For this is indeed a landscape like few others. It is Reenadinna, one of the planet’s few remaining yew woods, a virtual monoculture of coniferous yew trees (Taxus baccata). They grow here on thin mossy soil, though in some places they drape directly over bare limestone bedrock. The light is green-soaked and dim beneath the crowded canopy: it looked no different this summer than it did 30 years before. Of course, three decades is a short time in the life of a yew wood, where an individual tree can live thousands of years. 
But it looked no different this summer than it would have done in September 1776, when Arthur Young, the great English writer on rural affairs, visited the region and exuberated on the wooded Muckross peninsula. And by the calculation of Fraser Mitchell, a botanist in Dublin’s Trinity College, it has looked that way for millennia. The stability of this ancient yew woodland, capable of rebuffing storms, insect outbreaks and other moderate disturbances, embodies an original tenet of ecology going back to the late 19th century. I’m talking about the so-called ‘balance of nature’. Henry David Thoreau coined the word ‘succession’ to describe the predictable and progressive development of vegetation from the weedy colonisation of newly exposed rocks to the stable and balanced state of a mature system. Bare ground is colonised by a few hardy plants. These give way to a new suite of species, which in turn can be superseded by yet more species. At last, mature vegetation (forest trees, for instance) establishes itself and persists, generation after generation. Succession was ecology’s first significant conceptual scaffolding. Upon this, the young scientific discipline was built. Nature in this view of things is stable and balanced until humankind, that agent of derangement, interferes with it, after which it can wither away. The term ‘balance of nature’ raised the hackles of some scientists as early as the first decades of the 20th century. The Oxford ecologist Charles Elton forcefully claimed in his book Animal Ecology (1927) that ‘the balance of nature does not exist, and perhaps has never existed’. Nonetheless, it remained central to the highly influential systems thinking of the late and celebrated American ecologist Eugene Odum. Think of an ecosystem as something like an individual organism: the adult develops in an orderly, predictable way from the child. Similarly, Odum argued, the trajectory of natural systems was preordained too. 
In the mid-20th century, Odum modernised the concept of succession, so central to early ecology. He turned it into a mechanistic account of how organisms and their environments interact to produce orderly and predictable results. He identified 24 trends that might be expected to develop as ecosystems mature, each of which was like a physiological marker of a functioning organism. One of these was ‘overall homeostasis’ — the ability to retain equilibrium in the face of change, just as our bodies keep a stable internal temperature. The ‘development’ of a mature ecosystem led to a stable whole. When severely disturbed, the system would simply rebound to balance. One of Odum’s favourite catchphrases was that ‘the whole is greater than the sum of its parts’. Complex ecosystems contain surprises just as living organisms do — this explains some of the force of the superorganism analogy. Due to its complexity as a system, a living body has qualities that its constituent parts cannot have. If you can’t imagine emergent properties, think of a carcass that has lost the body’s lustre and, with it, the emergent properties of the living. Odum was explicitly committed to the idea of a balance of nature. Stability and equilibrium were at the heart of his ecological thinking. But it hovered in other, less obviously philosophical, ecological thinking too — such as the theory of island biogeography posited in 1967 by Robert MacArthur and EO Wilson. This theory predicts that the number of species on an island will be the outcome of a dynamic balance; an equilibrium, that is, between immigration to an island and extinction rates of those populations. The trouble was, an attachment to ideas of balance and stability didn’t seem to match the messy dynamic reality of nature. And, increasingly, ecologists were troubled by what it all meant. 
Daniel Simberloff, a former graduate student of EO Wilson’s, was the one to pull the old ‘balance of nature’ idea to pieces. His work in community ecology, looking at the distribution and diversity of species, led him to see it as hardly a scientific idea at all. In 1980 Simberloff published a formidable and hugely influential paper called ‘A Succession of Paradigms in Ecology: Essentialism to Materialism and Probabilism’. It was a biting critique of the holistic concept of the ecosystem. Simberloff wondered why ecosystem ecology had been so seductive when, as far as he could tell, it simply did a bad job of explaining the observables of nature. Is it, he wondered, because it legitimated the notion of a self-regulating market in ‘unfettered capitalism’ — an invisible hand predictably driving a system to a preordained end? Noting the impeccable Marxist credentials of many ecologists, he conceded that this might not be the reason. No, he argued, chief among the reasons for the continued appeal of ecosystem ecology was that ‘it accords with Greek metaphysics’. The holistic ecosystem reflects the idealism of Plato and the essentialism of Aristotle. Because Odumite ecosystem ecology was rooted so profoundly in ancient world views, in notions of nature in balance, it would not, warned Simberloff, vanish easily. Nevertheless, it should be grubbed out of the science of ecology wherever possible. Metaphysical assumptions had no place in a respectable science. Simberloff’s acerbic essay helped to turn the tide against the balance of nature idea, at least in scientific ecology (it is alive and kicking in popular ideas of ‘ecology’ and conservation). But Simberloff was aided by a growing suspicion that the notion of a holistic, balanced ecosystem was inconsistent with evolutionary thinking, which emphasised instability and change. 
Professional ecologists now tend to agree that the balance of nature is no more than an unhelpful fable. An ecologist might go about her business believing in true and everlasting love, say, or in the perfectibility of man, or that there awaits an eternal reward after the struggles of this sublunary life, and yet, so strongly has she been conditioned against it that she would shudder at the suggestion that the patterns of nature reflect any sort of balance. ‘I don’t believe in Santa Claus, though once I did,’ writes John Kricher, a professor at Wheaton College in Massachusetts, laying down the new orthodoxy in his book The Balance of Nature: Ecology’s Enduring Myth (2009): ‘When it comes to life support systems, it won’t do to create myths … It will be my task to convince you that life on Earth has neither innate balance, nor purpose.’ If 20th-century ecology was marked by an infatuation with balance, then our era is one of disturbance, disruption, non-equilibrium, stochasticity, chaos, and randomness. This new ecological world view extends not only to semi-arid grasslands such as the shortgrass prairies of western America or the Mongolian steppes — systems that have long been recognised as turbulent — but also to old-growth forests long seen as stable. Viewed from a long-term perspective, the Reenadinna yew woods are evanescent, emerging only a few thousand years ago from a prior woodland dominated by pines, oaks, elms, and hazel. It looks as though the popular idea that ecology is about intricately balanced, harmonious and stable systems is completely out of step with the science. But has something been lost in the process? A few years ago, this time with my friend and colleague Randall Honold, I travelled to a remote ecological reserve at the confluence of the Western and Eastern Ghats in south India. We were there primarily to look at wildlife, for these mountain ranges are exceptionally rich in species, many of which are found nowhere else on earth. 
The Western Ghats are one of the world’s dozen or so global hot spots for biodiversity. They were declared a World Heritage Site earlier this summer, around the time of my visit back to Ireland’s yew woods. The Biligiriranga Hills Reserve is a day’s drive from Bangalore. As a place, Bangalore is all people, buildings, blatant smells, and phonic surprise. The Ghats, by contrast, are calm and subtle. Their demeanour is patient, abiding. It was months after the monsoon. The soils were dry and the air was clear when we travelled with three Soliga tribal guides into the heart of the reserve. The mammals of this reserve include deer — barking deer, sambar, and chital — as well as tigers and, famously, a herd of elephants. We were made aware of the elephant population in an especially terrifying way. An agitated female charged us. She broke from the brush at dust-billowing speed, her ears flaring. Just when impact seemed inevitable, she swerved behind our white jeep. Before us was a lake. Behind us was an irate animal. The forest was hushed. With nowhere to go, we waited. The elephant, viewed in the rear-view mirror, was all but motionless, although a tiny swaying of her body suggested that she was trying to resurrect an anger requisite to finish what she had started. Randall snapped a picture: ‘Objects in the mirror are closer than they appear.’ We sat more or less frozen, for this was the advice of our guides, one of whom later said that this was the closest he’d come to death-by-elephant in 40 years. Perhaps it says unflattering things about me that I felt bored, impatient even, in these minutes waiting for my demise. I cast a minuscule glance around the forest and noticed that the wall of vegetation from which the animal had exploded was made up of Lantana camara. This is an especially aggressive exotic shrub which has become a management nuisance throughout the Western Ghats. 
Spanish Flag, as it is commonly known, is native to the American tropics and is regarded as one of the world’s most invasive plants. Its flowers, as I noticed then, are exceptionally pretty. Having spent half a lifetime combating non-native shrubby vegetation in Ireland and in the American Midwest, I found it fitting, though ultimately a little dispiriting, that a non-native shrub had just disgorged the raging agent of my death. After 30 minutes, the elephant retreated, as did we. Later, how we laughed. A leading hypothesis of our Soliga friends was that my hair, an outmoded snowy-white hank, had enraged the animal. A year later, when we revisited the field station at the Biligiriranga Hills Reserve with a group of students, the ‘elephant and the hair’ story was still going strong. Stories persist. Ecosystems, however, do not. By this time the elephants had left this part of the reserve and were seen neither that second year nor the year after. The walls of Lantana had become even more pronounced. Vigorous before, the invasion had reached that critical point where, in some places at least, the plant was occupying so much space that wildlife was being crowded out. Along with the spread of the invasion, the diversity of plants is changing as well. And perhaps, if we embrace an ecological paradigm of disturbance and change, this is simply the way nature works, and there is nothing to be done about it. Or is there? The environmental historian Donald Worster writes about the fall of the ‘balance of nature’ as an idea, and points out that this disruptive world-view makes nature seem awfully like the human sphere. ‘All history,’ he notes, ‘has become a record of disturbance, and that disturbance comes from both cultural and natural agents.’ Thus he places droughts and pests alongside corporate takeovers and the invasion of the academy by French literary theory. 
If the idea of a balance resurrects Plato and Aristotle, the non-equilibrium, disturbance-inclined view may have its own Greek hero: Heraclitus, pagan saint of flux. ‘Thunderbolt,’ Heraclitus wrote in Fragment 64, ‘steers all things.’ In its brief history, the science of ecology appears to have smuggled in enough ancient metaphysics to make any Greek philosopher nod with approval. However, the question remains. If the handsaw and hurricane are equivalents in their ability to lay a forest low, it is hard to see how we can scientifically criticise the human destruction of ecosystems. Why should we, for instance, concern ourselves with the fate of the Western Ghats if alien introductions are just another disturbance, no different from the more natural-seeming migration of species? The point of conservation in the popular imagination and in many policy directives is that it resists human depredations to preserve important species in ancient, intact, fully functional natural ecosystems. If we have no ‘balance of nature’, this is much harder to defend. If we lose the ideal of balance, then, we lose a powerful motive for environmental conservation. However, there might be some unintended benefits. A dynamic, ‘disturbance’ approach has fostered some of the most promising new approaches to environmental problems such as urban ecology and restoration ecology. That’s because it is much less concerned with keeping humans and nature separate from one another. The thing is, both balance and flux are undoubtedly aspects of nature. A new view of nature that combines them in a way that both scientists and the public find compelling is needed. We should bridge the present disparity between ecology as a science and ecology as a romantic idealism about nature, not only for intellectual reasons but for the sake of robust public policy. 
Ecology, after all, needs to explain both the stability of a yew grove (a woodland that persists for more than 3,000 years commands attention) and the rapid transition of forest in the Western Ghats. One promising middle path that integrates balance and disturbance has emerged in recent years. Referred to as ‘resilience thinking’, it builds on the work of the Canadian-born ecologist CS Holling and has been developed by an international collaboration called the Resilience Alliance. Resilience thinking assumes that change and disturbance are an integral part of every system, but that some systems are more resilient to destructive change than others. This might seem a subtle point, but if we understand the processes that promote or restore resilience, we have a much better chance both of mopping up after ecological catastrophes and of avoiding them altogether. Resilience thinking can be applied to economics (the capacity of financial markets to absorb shock), friendship (the capacity of our loved ones to tolerate our nonsense), and nature (the capacity of ecosystems to endure disruption). One of the striking findings is that diversity is crucial to success. When an ecological system is managed for just one factor (say, a single crop) or when a nation’s wealth is dominated by a single economic sector (say, the housing market before the 2008 global financial crisis), the result is a loss of resilience. Resilience thinking ultimately theorises about the limits of a system’s capacity to endure. Financial markets collapse, crops fail, love blanches, ecosystems unravel, and death, alas, is a part of every life. What of the resilience of the two woodlands we have visited? The Killarney yew wood has been stable for a long time, but the oak woods that surround it have been inundated with invasive shrubs. The next time I pass through the limestone passage on the Muckross peninsula, the yews might have been overwhelmed. 
It seems that excessive deer browsing is preventing them from regenerating. Managers in the National Park are attempting to fence out the deer population. The montane woodlands of Biligiriranga Hills Reserve are in even more rapid transition. That’s partly due to the invasion of the Lantana, which sheltered our charging elephant. A Soliga colleague of mine claims to have seen an elephant walk along the top of huge Lantana hedges — a thought that had made me smile while I waited, motionless, for our elephant to leave. The entire landscape is being changed by this invasive species, affecting vegetation, animals and human livelihoods alike. One experimental solution is for the Soliga to use the plant in their woodcraft. However, once the resilience of a system has been breached, it is very difficult to return it to its original state: it might have crossed a threshold that cannot be recrossed. Ecosystems that have been damaged are often damaged irreparably. The cost of restoration projects, we know, is very high, so if we value the diversity of the Western Ghats, we need to prevent this switch from ecological delight to impoverished catastrophe. The idea of resilience provides an ecologically accurate, powerfully intuitive reason for protecting species and habitats everywhere, from Ireland to India. Good management of these ecosystems will require extensive knowledge of those ecological forces (competition, predation, mutualisms and so on) that create the natural patterns we see. Managers also need knowledge of those disturbances — fire, pests, storms — that have historically rejuvenated the forests. What precisely we do with this knowledge calls for ethical judgments of the most practical kind. This is a conversation that involves all of us, scientist and layperson alike. My son tells me he wants to revisit the yew woods in a few decades. I hope they endure. 
And if he ever visits the Biligiriranga Hills Reserve, I hope the elephants have returned to the heart of the reserve, if only to teach him that some cataclysms are worth avoiding.
Liam Heneghan
https://aeon.co//essays/nature-is-out-of-balance-but-it-s-still-worth-saving
https://images.aeonmedia…y=75&format=auto
Philosophy of science
For decades the sciences and the humanities have fought for knowledge supremacy. Both sides are wrong-headed
Whenever we try to make an inventory of humankind’s store of knowledge, we stumble into an ongoing battle between what CP Snow called ‘the two cultures’. On one side are the humanities, on the other are the sciences (natural and physical), with social science and philosophy caught somewhere in the middle. This is more than a turf dispute among academics. It strikes at the core of what we mean by human knowledge. Snow brought this debate into the open with his essay The Two Cultures and the Scientific Revolution, published in 1959. He started his career as a scientist and then moved to the humanities, where he was dismayed at the attitudes of his new colleagues. ‘A good many times,’ he wrote, ‘I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is the scientific equivalent of: Have you read a work of Shakespeare’s?’ That was more than half a century ago. If anything, the situation has got worse. Throughout the 1990s, postmodernist, deconstructionist and radical feminist authors (the likes of Michel Foucault, Jacques Derrida, Bruno Latour and Sandra Harding) wrote all sorts of nonsense about science, clearly without understanding what scientists actually do. The feminist philosopher Harding once boasted: ‘I doubt that in our wildest dreams we ever imagined we would have to reinvent both science and theorising itself’. That’s a striking claim given the dearth of novel results arising from feminist science. The last time I checked, there were no uniquely feminist energy sources on the horizon. 
In order to satirise this kind of pretentiousness, in 1996 the physicist Alan Sokal submitted a paper to the postmodernist journal Social Text. He called it ‘Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity’. There is no such thing as a hermeneutics of quantum gravity, transformative or not, and the paper consisted entirely of calculated nonsense. Nevertheless, the journal published it. The moral, Sokal concluded, was that postmodern writing on science depended on ‘radical-sounding assertions’ that can be given ‘two alternative readings: one as interesting, radical, and grossly false; the other as boring and trivially true’. Blame for the culture wars doesn’t lie squarely on the shoulders of humanists, however. Scientists have employed their own overblown rhetoric to aggrandise their doings and dismiss what they haven’t read or understood. Their target, interestingly, is often philosophy. Stephen Hawking began his 2010 book The Grand Design by declaring philosophy dead — though he neglected to provide evidence or argument for such a startling conclusion. Earlier this year, the theoretical physicist Lawrence Krauss told The Atlantic magazine that philosophy ‘reminds me of that old Woody Allen joke: those that can’t do, teach, and those that can’t teach, teach gym. And the worst part of philosophy is the philosophy of science; the only people, as far as I can tell, that read work by philosophers of science are other philosophers of science. It has no impact on physics whatsoever’. To begin with, it is fair to point out that the only people who read works in theoretical physics are theoretical physicists, so by Krauss’s own reasoning both fields are irrelevant to everybody else (they aren’t, of course). 
Secondly, Krauss, and Hawking for that matter, seem to miss the fact that the business of philosophy is not to solve scientific problems — we’ve got science for that. Objecting to philosophy on these grounds is like complaining that historians of science haven’t solved a single puzzle in theoretical physics. That’s because historians do history, not science. When was the last time a theoretical physicist solved a problem in history? And as the philosopher Daniel Dennett wrote in Darwin’s Dangerous Idea (1995), a book that has been very popular among scientists: ‘There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination’. Whether or not they realise it, Hawking and Krauss need philosophy as a background condition for what they do. Perhaps the most ambitious contemporary attempt at reconfiguring the relationship between the sciences and the humanities comes from the biologist EO Wilson. In his 1998 book, Consilience: The Unity of Knowledge, he proposed nothing less than to explain the whole of human experience in terms of the natural sciences. Beginning with the premise that we are biological beings, he attempted to make sense of society, the arts, ethics and religion in terms of our evolutionary heritage. ‘I remember very well the time I was captured by the dream of unified learning,’ he wrote. ‘I discovered evolution. Suddenly — that is not too strong a word — I saw the world in a wholly new way’. Wilson claims that we can engage in a process of ‘consilience’ that leads to an intellectually and aesthetically satisfactory unity of knowledge. Here is how he defines two versions of consilience: ‘To dissect a phenomenon into its elements … is consilience by reduction. To reconstitute it, and especially to predict with knowledge gained by reduction how nature assembled it in the first place, is consilience by synthesis’. 
Despite the unfamiliar name, this is actually a standard approach in the natural sciences, and it goes back to Descartes. In order to understand a complex problem, we break it down into smaller chunks, get a grasp on those, and then put the whole thing back together. The strategy is called reductionism and it has been highly successful in fundamental physics, though its success has been more limited in biology and other natural sciences. The overall image that Wilson seems to have in mind is of a downward spiral wherein complex aspects of human culture — literature, for example — are understood first in terms of the social sciences (sociology, psychology), and then more mechanistically by the biological sciences (neurobiology, evolutionary biology), before finally being reduced to physics. After all, everything is made of quarks (or strings), isn’t it? Before we can see where Wilson and his followers go wrong, we need to make a distinction between two meanings of reductionism. There is ontological reduction, which has to do with what exists, and epistemic reduction, which has to do with what we know. The first one is the idea that the bottom level of reality (say, quarks, or strings) is causally sufficient to account for everything else (atoms, cells, you and me, planets, galaxies and so forth). Epistemic reductionism, on the other hand, claims that knowledge of the bottom level is sufficient to reconstruct knowledge of everything else. It holds that we will eventually be able to derive a quantum mechanical theory of planetary motions and of the genius of Shakespeare. The notion of ontological reductionism is widely accepted in physics and in certain philosophical quarters, though there really isn’t any compelling evidence one way or the other. 
Truth be told, we don’t know whether the laws that control the behaviour of quarks scale up to the level of societies and galaxies, or whether large complex systems exhibit novel behaviour that can’t be reduced to lower ontological levels. I am, therefore, agnostic about ontological reductionism. Fortunately for the purposes of this discussion, it doesn’t matter one way or the other. The real game lies in the other direction. Epistemic reductionism is obviously false. We do not have — nor are we ever likely to have — a quantum mechanical theory of planets or of human behaviour. Even if possible in principle, such a theory would be too complicated to compute or to understand. Chemistry might have become a branch of physics via a successful reduction, and neurobiology certainly informs psychology. But not even the most ardent physicist would attempt to produce an explanation of, say, ecosystems in terms of subatomic particles. The impossibility of this sort of epistemic reductionism therefore puts one significant constraint on Wilson-type consilience. The big question, then, is how far we can push the programme. Let’s begin in the obvious place. If culture has to be understood in terms of biology, then genes must have quite a bit to do with it. Wilson, however, is too sophisticated to fall into straightforward genetic determinism. Instead he tells us: ‘Genes prescribe epigenetic rules, which are the regularities of sensory perception and mental development that animate and channel the acquisition of culture’. As it happens, I have worked on epigenetics. The word actually refers to all the molecular processes that mediate the effects of genes during plant and animal development. The problem from Wilson’s point of view is this: biologists don’t know what ‘epigenetic rules’ are. They don’t know how to quantify them or how to study them. For explanatory purposes, they are vacuous. 
Wilson’s next move is to invoke Richard Dawkins’s idea of ‘memes’, or units of cultural evolution. If culture is made of discrete units that can replicate in the environment of human society, perhaps there is a way to bring evolutionary theory to bear directly on culture. Instead of genes (or epigenes), we apply Darwinian principles to memes. Unfortunately for consilience, the research programme of memetics is in big trouble. Scientists and philosophers have cast doubt on the usefulness, even the coherence, of the very concept. As my evolutionary biology colleague Jerry Coyne has said, it is ‘completely tautological, unable to explain why a meme spreads except by asserting, post facto, that it had qualities enabling it to spread’. We don’t know how to define memes in a way that is operationally useful to the practicing scientist, we don’t know why some memes are successful and others not, and we have no clue as to the physical substrate, if any, of which memes are made. Tellingly, the Journal of Memetics closed a few years ago for lack of submissions. None of the above, of course, is to say that biology is irrelevant to human culture. We are indeed biological entities, so lots of what we do is connected with food, sex and social status. But we are also physical entities, and humanity has found cultural ways to exploit or get around physics. We built aeroplanes to fly despite the limitations imposed by gravity, and we invented endless variations on the basic biological themes, from Shakespeare’s sonnets to Picasso’s paintings. In each case, the supposedly fundamental sciences give us only a very partial picture of the whole. If we take the idea of unity of knowledge seriously, there are some broad categories of inquiry that we should try to integrate into our picture. This turns out to be harder than we might think. Take mathematics and logic. Wilson is keen on these disciplines. 
‘The dream of objective truth peaked,’ he writes, ‘with logical positivism’ — that is, with a philosophical movement of the 1920s and ’30s that attempted to capture the essence of scientific statements using logic. Mathematics, too, is central to his scheme. Because of its effectiveness in the natural sciences, it ‘seems to point arrowlike toward the ultimate goal of objective truth’. Let’s leave aside the pretty well-established fact that human beings aren’t in the business of ‘ultimate objective truth’. When we come down to it, is scientific knowledge the same kind of thing as mathematical-logical knowledge? They are, I think, quite different. Look at what counts as a ‘fact’ in science: for instance the statement that there are four natural satellites of Jupiter that can be seen through small telescopes from Earth. These satellites were discovered by Galileo Galilei in the 17th century, and represented the first example of a solar-like system within our own Sun-centred one. Indeed, Galilei used this as a major reason to take seriously the then-highly controversial Copernican theory. By contrast, take a mathematical ‘fact’, such as the demonstration of the Pythagorean theorem. Or a logical fact, such as a truth table that tells you the conditions under which particular combinations of premises yield true or false conclusions according to the rules of deduction. These two latter sorts of knowledge do resemble one another in certain ways; some philosophers regard mathematics as a type of logical system. Yet neither looks anything like a fact as it is understood in the natural sciences. Therefore, ‘unifying knowledge’ in this area looks like an empty aim: all we can say is that we have natural sciences over here and maths over there, and that the latter is often useful (for reasons that are not at all clear, by the way) to the former. Let’s consider yet another type of fact, more germane to the project of reducing the humanities to the sciences. 
I happen to have a strong conviction that the music of Ludwig van Beethoven is better than that of Britney Spears. To me, that’s an aesthetic fact. I hope it’s also clear that this is a ‘fact’ (based on my ‘knowledge’ of music) that has a different structure and content from both logical-mathematical and natural-scientific facts. Indeed, it isn’t a fact at all: it’s an aesthetic judgment, one to which I have a strong emotional attachment. Now, I do not doubt that my ability to make aesthetic judgments in general is influenced by the kind of biological being that I am. I need to have a particular type of auditory system even to hear Beethoven and Spears, and that system presumably accounts for why musicians rarely produce pieces outside a certain range of sound frequencies. Still, it seems hard to deny that my particular judgment about Beethoven versus Spears is primarily the result of my culture and psychology and upbringing. People in different times and cultures, or with different temperaments, have disagreed and will disagree with me — and they might feel just as strongly about their tastes as I do about mine (of course, they would be ‘wrong’). Clearly, there are aspects of human culture in which the very notion of ‘objective and ultimate truth’ is a category mistake. Let’s set aside the goal of unifying all knowledge. How are we doing in the millennia-long quest for absolute and objective truth? Not so well, it seems, and that is largely because of the devastating contributions of a few philosophers and logicians, particularly David Hume, Bertrand Russell and Kurt Gödel. In the 18th century, Hume formulated what is now known as the problem of induction. He noted that both in science and everyday experience we use a type of reasoning that philosophers call induction, which consists in generalising from examples. 
Hume also pointed out that we do not seem to have a logical justification for the inductive process itself. Why then do we believe that inductive reasoning is reliable? The answer is that it has worked so far. Ah, but to say so is to deploy inductive reasoning to justify inductive reasoning, which seems circular. Plenty of philosophers have tried to solve the problem of induction without success: we do not have an independent, rational justification for the most common type of reasoning employed by laypeople and professional scientists. Hume didn’t say that we should therefore all quit and go home in desperation. Indeed, we don’t have an alternative but to keep using induction. But it ought to be a sobering thought that our empirical knowledge is based on no solid foundation other than that ‘it works’. What about maths and logic? At the beginning of the 20th century, a number of logicians, mathematicians and philosophers of mathematics were trying to establish firm logical foundations for mathematics and similar formal systems. The most famous such attempt was made by Bertrand Russell and Alfred North Whitehead, and it resulted in their Principia Mathematica (1910-13), one of the most impenetrable reads of all time. It failed. A few years later the logician Kurt Gödel explained why. His two ‘incompleteness theorems’ proved — logically — that any sufficiently complex mathematical or logical system will contain truths that cannot be proven from within that system. Russell conceded this fatal blow to his enterprise, as well as the larger moral that we have to be content with unprovable truths even in mathematics. 
If we add to Gödel’s results the well-known fact that logical proofs and mathematical theorems have to start from assumptions (or axioms) that are themselves unprovable (or, in the case of some deductive reasoning like syllogisms, are derived from empirical observations and generalisation — ie, from induction), it seems that the quest for true and objective knowledge is revealed as a mirage. At this point one might wonder what exactly is at stake here. Why are Wilson and his followers in search of a unified theory of everything, a single way to understand human knowledge? Wilson gives the answer explicitly in his book, and I think it also applies implicitly to some of his fellow travellers, for instance the physicist Steven Weinberg in his book Dreams of a Final Theory (1992). The motive is philosophical. More specifically, it is aesthetic. Some scientists really value simplicity and elegance of explanations, and use these criteria in evaluating the relative worth of different theories. Wilson calls this ‘the Ionian enchantment’, and names the first chapter of Consilience accordingly. But the irony here is obvious. Neither simplicity nor elegance is an empirical concept: both are philosophical judgments. There is no reason to believe a priori that the universe can be explained by simple and elegant theories, and indeed the historical record of physics includes several instances when the simplest of competing theories turned out to be wrong. Enough with the demolition project. Is it possible to reconstruct something like Wilson’s consilience, but in a more reasonable manner? Think about visual art. Its history includes prehistoric cave paintings, Michelangelo, Picasso, and contemporary abstraction. 
It is reasonable to think that science — perhaps a combination of evolutionary biology and cognitive science — can tell us something about why our ancestors started painting to begin with, as well as why we like certain types of patterns: symmetrical figures, for instance, and repetitions of a certain degree of complexity. Yet these sorts of explanations massively underdetermine the variety of ways of doing visual art, both across centuries and across cultures. Picasso’s cubism is not about symmetry, for instance; indeed, it’s about breaking symmetry. And it is hard to imagine an explanation of the rise of, say, the Impressionist movement that doesn’t invoke the specific cultural circumstances of late 19th century France, and the biographies and psychologies of individual artists. We find a similar situation with maths. It is plausible that our ability to count and do simple arithmetic gave us an evolutionary advantage and was therefore the result of natural selection. (Notice, however, that this is a speculative argument: we don’t have access to the kind of evidence needed to test the hypothesis.) But what on earth is the possible adaptive value of highly abstract mathematics? Why would evolution produce brains such as Andrew Wiles’s, capable of solving Fermat’s last theorem? Biology sets the background conditions for such feats of human ingenuity, since a brain of a particular type is necessary to accomplish them. But biology by itself has little else to say about how some human cultures took a historical path that ended up producing a small group of often socially awkward people who devote their lives to solving abstruse mathematical problems. Or, finally, take morality, perhaps the most important aspect of what it means to be human. Much has been written on the evolutionary origins of morality, and many good and plausible ideas have been proposed. 
Our moral sense might well have originated in the context of social life as intelligent primates: other social primates do show behaviours consistent with the basic building blocks of morality such as fairness toward other members of the group, even when they aren’t kin. But it is a very long way from that to Aristotle’s Nicomachean Ethics, or Jeremy Bentham and John Stuart Mill’s utilitarianism. These works and concepts were possible because we are biological beings of a certain kind. Nevertheless, we need to take cultural history, psychology and philosophy seriously in order to account for them. Here’s a final thought. Wilson’s project depends on the assumption that there is such a thing as human knowledge as a unifiable category. For him, disciplinary boundaries are accidents of history that need to be eliminated. But what if they helped to explain some further fact? An intriguing view has been proposed in different contexts by the linguist Noam Chomsky, in his Reflections on Language (1975), and the philosopher Colin McGinn, in The Problem of Consciousness (1991). The basic idea is to take seriously the fact that human brains evolved to solve the problems of life on the savannah during the Pleistocene, not to discover the ultimate nature of reality. From this perspective, it is delightfully surprising that we learn as much as science lets us and ponder as much as philosophy allows. All the same, we know that there are limits to the power of the human mind: just try to memorise a sequence of a million digits. Perhaps some of the disciplinary boundaries that have evolved over the centuries reflect our epistemic limitations. Seen this way, the differences between philosophy, biology, physics, the social sciences and so on might not be the result of the arbitrary caprice of academic administrators and faculty; they might instead reflect a natural way in which human beings understand the world and their role in it. 
There might be better ways to organise our knowledge in some absolute sense, but perhaps what we have come up with is something that works well for us, as biological-cultural beings with a certain history. This isn’t a suggestion to give up, much less a mystical injunction to go ‘beyond science’. There is nothing beyond science. But there is important stuff before it: there are human emotions, expressed by literature, music and the visual arts; there is culture; there is history. The best understanding of the whole shebang that humanity can hope for will involve a continuous dialogue between all our various disciplines. This is a more humble take on human knowledge than the quest for consilience, but it is one that, ironically, is more in synch with what the natural sciences tell us about being human.
Massimo Pigliucci
https://aeon.co//essays/why-should-science-have-the-last-word-on-culture
https://images.aeonmedia…y=75&format=auto
Economics
Following decades at the heart of government thinking, there are signs that neoliberalism is giving way
Consider the following developments in UK policy. Last year, Britain’s Office for National Statistics published its first ever set of ‘national well-being’ indicators, which were based on surveys of how satisfied people felt with their lives. Next year, it will be illegal to sell a bottle of wine in Scotland for less than £4.69. Meanwhile, in the face of prolonged economic stagnation, welfare claimants and young people are being urged or forced to work for free in order to develop the mindset and motivation to render them employable in the future. None of these examples alone seems especially significant. Taking them together, however, we can begin to trace the outline of a subtly new way of conceiving of economic activity, one that is exerting a growing influence among policymakers in Britain. Crucially, for good and for ill, the status of monetary prices as authoritative indicators of value is diminishing. Formerly, society’s progress was measured in terms of GDP, a bottle of wine was worth whatever the market would allow, and work was remunerated in wages. Now, the rise of psychological perspectives on the economy is providing a new framework. As the sciences of well-being and economic behaviour grow more sophisticated, the potential arises for a new way of understanding value. And as we witness this framework on the rise, so we might be witnessing the slow death of the paradigm known as neoliberalism. In order to understand the significance of these seemingly low-level changes, we need to think back to the 1940s, when the economic power of central governments was growing at unprecedented speed. Centralised economic planning was at its height thanks to a combination of world war, statist ideologies and the new macroeconomic techniques invented by John Maynard Keynes. It was against this backdrop that a marginalised group of economists and philosophers began the neoliberal fightback. 
They were inspired and organised by the émigré Austrian intellectual, Friedrich von Hayek. Hayek argued, most notably in his 1944 classic The Road to Serfdom, that economic planning always led to tyranny — that was the inevitable result of allowing centralised experts to take unilateral decisions on behalf of the population. By contrast, the free market that had operated during the heyday of Victorian liberalism served as a mechanism for coordinating millions of decisions, with no single individual or institution dictating the ultimate outcome. By trusting individuals to pursue their own goals without consensus or debate, markets had an innately liberal quality, quite aside from whatever efficiency claims might also be made for them. Markets, from this perspective, are information-processing machines. They are like vast computers, channelling decisions, desires and preferences. They have no ideas of their own about what is worth producing and having: they just process the choices that individuals happen to make. And their capacity to do this is all thanks to the price mechanism. Increased desire for some good or service will cause the price to rise, resulting in more of that good or service being offered. Hayek, supported by Milton Friedman and the Chicago School of economics, urged policymakers to abandon any concern with desirable economic outcomes or values. Instead, they should focus only on building the structures within which the price mechanism could operate freely. This is why, for example, neoliberals have long viewed cartel-busting as one of the state’s foremost responsibilities. The prolonged economic slowdown of the 1970s created a thirst for new policy ideas, which the neoliberals cleverly satisfied. 
Although the purity of Hayek’s vision was inevitably polluted by the messy reality of politics, the new era that Margaret Thatcher and Ronald Reagan ushered in treated free markets, governed by the magic of price, as the basis for the moral and economic logic of state and society. At the heart of the neoliberal era were two fundamental assumptions. Firstly, individuals, not experts, were the best judges of their own tastes and welfare. Secondly, the price mechanism of the market could be trusted to adjudicate between the various competing ideas, values and preferences that exist in modern societies. The state, by contrast, could not. By this definition, a society in which it is illegal to sell a bottle of wine for £4.50, no matter how profitable it is to do so, nor how much demand there is for it, is no longer a neoliberal society. A different set of assumptions is built into such a policy. Unlike so-called ‘sin taxes’ that governments have long levied on tobacco and alcohol as a morally righteous way to increase their own revenue, a state-enforced price has no fiscal rationale, other than the longer-term reduction of health spending. Evidently, it is no longer assumed that individuals are necessarily the best judges of their own welfare. And although a price still exists, it is no longer set only by the magical forces of supply and demand. Expert decree now has a place. To put this another way, policymakers are recognising that there is a limit to how much consumer freedom we can cope with. Minimum alcohol pricing would have appalled Hayek and his followers, just as maximum rental prices were an early target of Friedman’s criticisms. And yet it does not exactly mark a return to the planning of the 1940s, either. As so often happens during times of economic crisis, an entirely new logic is emerging on the back of neoliberalism. 
At the core of this logic is a new vision of the individual, who is acting partly in a calculating economic fashion, but partly in a social, habitual fashion. When the two come into conflict, as arguably occurred in the financial sector, the results can be disastrous. The implications for work are as significant as they are for consumption. One of the most striking findings from the new science of well-being (admittedly noticed by Sigmund Freud a century earlier) is that work is an indispensable condition of human happiness. This is something that economists believe they can now prove. Yet the discovery takes on an ambiguous cast in light of the return of high unemployment. Thanks to the new statistical data on happiness, ‘jobs’ and economic tasks can now be offered to those on the margins of society on the premise that work is psychologically valuable, regardless of whether it is remunerated. And so the institution of the labour market begins to look quite different. The disciplines of economics and psychology parted ways in the 1890s, the former operating on the methodological assumption that people would act rationally to satisfy themselves, the latter seeking to explore how people actually feel and behave using experiments and surveys. Yet the two have been growing closer since the early 1970s. The new fields of happiness economics and behavioural economics both offer more nuanced views of what happens when people make choices. Sometimes, it appears, they are influenced by other people in ways that traditional economics refuses to contemplate. Sometimes they do things that are repeatedly shown to make them worse-off in the long run. Such claims are scarcely novel in themselves; indeed, one might argue that they have been the fundamental premise of the marketing industry since the 1920s. Yet never before has this branch of economic psychology provided the logic for policy-making. 
From this emerging ‘post-neoliberal’ perspective, individual choice and market prices are no longer altogether to be trusted. Especially where physical and mental health are concerned, there is a growing sense that experts need to teach individuals what is good for them in their day-to-day lives. Research into social networks suggests that ‘good’ and ‘bad’ habits are disseminated informally, meaning that policymakers now look for ways of influencing entire groups of people, rather than just isolated individuals. Neoliberalism applied the economic metaphor of ‘human capital’ to individuals. Now a new metaphor of behavioural ‘epidemics’ is used to understand how tastes and habits move through society. If neoliberalism witnessed a market logic infiltrating ever more domains of state and society, this emerging policy paradigm sees a medical logic doing the same thing. Meanwhile, there is a trend for interpreting social, mental and behavioural problems in medical and physiological terms (a trend that neuroscience only exacerbates). Unless healthy behaviour can be promoted across the population, the strains on hospitals and doctors will become unmanageable. As a sign of things to come, responsibility for public health issues such as obesity and sexual health recently passed from the National Health Service to local government. All policies which are shown to reduce the economic burden on the NHS will get a hearing within this emerging, post-neoliberal policy paradigm. It will take time for the ideology of ‘choice’ to diminish, especially as markets continue to penetrate our social and cultural lives. Neoliberal ideas moved from the intellectual margins of the 1940s into the policy mainstream of the 1970s and ’80s. They have finally arrived as a form of folk common sense for many people. We now assume that we have the right to lead whatever lifestyle we choose, according to our own tastes. 
But it is precisely because of this culture that governments are searching for justifications to set limits, to identify bad choices and promote well-being. The human mind, with all its self-destructive and herdlike tendencies, is now the territory into which economic policymakers are edging.
William Davies
https://aeon.co//essays/where-does-the-state-look-when-it-cant-trust-markets
https://images.aeonmedia…y=75&format=auto
Stories and literature
‘An observer in armed conflict – there’s danger, yes, that’s true – but it’s managed’
BLUE ROAN

All winter, Father’s home-made caravan stands empty among Mother’s hives. Home is The School; the walls are dark rough pink like the ex-army roan gelding in the livery yard next door. Damp, most of the time for this is Carlisle, you don’t often see sun. Father is Headmaster so he’s free – he’s the favourite child did I mention that? – to effervesce into cellars under empty classrooms and Indian-file his brothers and little sister into the dark. As she herself, last of the bunch, remembered as I was driving her north to the Lakes, the wild blue distances, named mountains she thought she’d never see again, where they used to camp in that caravan all summer long. But something changed in him, she said. Something mysterious, dimming the afternoon.

CARE OF THE VOICE

I’m working on Care of the Voice in the kitchen when you come in from packing. ‘An observer in armed conflict – there’s danger, yes, that’s true – but it’s managed.’ You’re off to Colombia to witness for the raped, disappeared, displaced: widows and farmers working ancestral upland in the face of paramilitaries and drug barons who want to sell valleys and mountains to coca factories and multinational investors in palm oil and plantain. I think of holding my breath when you played Sarasate’s Carmen Fantasie in an end-of-term concert, and hitch-hiked to Morocco your first year in college. Of when your dad and I – and the old dog, almost your sister – brought home a puppy, soft silvery Velvet, and you lay down with her laughing on the sofa. When we tracked a perfectly circular iridescent beetle over pebbles on a Cretan shore through twigs of wind-snapped mimosa.

A TRIP TO THE MOON

My mother is moving house. She’s ninety-one and determined: words like sheltered accommodation are coming at us from outer space but it’s not like that, at least not yet. There are spare rooms in the new home, she’ll have a small garden, feed nuthatches, do her own cooking, grow shrubs. 
Still, down the slope will be a sanatorium. That’s the point. A clinic, an Alzheimer’s wing. She doesn’t want to be a burden. In every room is a vermilion string to pull if you fall over. When I clear out her cupboards we find histories woven in every blanket, like this scorch mark made the winter the heating failed. Should she sell the oversize kitchen clock (which she still gets up on a ladder to wind every Sunday, as my dad used to do) to the blind piano tuner who took a shine to it when he came to value the piano? Or should it stay around in case one day some grandchild might give it a home? For the first time in her life she’ll live only with things she has chosen. No husband or children to consider, no furniture from aunts. She can sell, she can give things away. Traumas of today, contracts to exchange, dates of completion, arguments over who’ll let the carpenter in to the new place to measure up, will be forgotten because forgetting is an issue let’s face it. And she is, she is facing it. She’ll be three miles from family but she’s going to an unknown zone.
Ruth Padel
https://aeon.co//essays/poems-blue-roan-care-of-the-voice-a-trip-to-the-moon
https://images.aeonmedia…y=75&format=auto
Computing and artificial intelligence
The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?
It is uncontroversial that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos. It is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances. But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence. Why? Because, as an unknown sage once remarked, ‘it ain’t what we don’t know that causes trouble, it’s what we know for sure that just ain’t so’ (and if you know that sage was Mark Twain, then what you know ain’t so either). I cannot think of any other significant field of knowledge in which the prevailing wisdom, not only in society at large but also among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough. Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation. 
This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory. The first people to guess this and to grapple with its ramifications were the 19th-century mathematician Charles Babbage and his assistant Ada, Countess of Lovelace. It remained a guess until the 1980s, when I proved it using the quantum theory of computation. Babbage came upon universality from an unpromising direction. He had been much exercised by the fact that tables of mathematical functions (such as logarithms and cosines) contained mistakes. At the time they were compiled by armies of clerks, known as ‘computers’, which is the origin of the word. Being human, the computers were fallible. There were elaborate systems of error correction, but even proofreading for typographical errors was a nightmare. Such errors were not merely inconvenient and expensive: they could cost lives. For instance, the tables were extensively used in navigation. So, Babbage designed a mechanical calculator, which he called the Difference Engine. It would be programmed by initialising certain cogs. The mechanism would drive a printer, in order to automate the production of the tables. That would bring the error rate down to negligible levels, to the eternal benefit of humankind. Unfortunately, Babbage’s project-management skills were so poor that despite spending vast amounts of his own and the British government’s money, he never managed to get the machine built. Yet his design was sound, and has since been implemented by a team led by the engineer Doron Swade at the Science Museum in London. [Image: Slow but steady: a detail from Charles Babbage’s Difference Engine, assembled nearly 170 years after it was designed. Courtesy Science Museum] Here was a cognitive task that only humans had been able to perform. 
Nothing else in the known universe even came close to matching them, but the Difference Engine would perform better than the best humans. And therefore, even at that faltering, embryonic stage of the history of automated computation — before Babbage had considered anything like AGI — we can see the seeds of a philosophical puzzle that is controversial to this day: what exactly is the difference between what the human ‘computers’ were doing and what the Difference Engine could do? What type of cognitive task, if any, could either type of entity perform that the other could not in principle perform too? One immediate difference between them was that the sequence of elementary steps (of counting, adding, multiplying by 10, and so on) that the Difference Engine used to compute a given function did not mirror those of the human ‘computers’. That is to say, they used different algorithms. In itself, that is not a fundamental difference: the Difference Engine could have been modified with additional gears and levers to mimic the humans’ algorithm exactly. Yet that would have achieved nothing except an increase in the error rate, due to increased numbers of glitches in the more complex machinery. Similarly, the humans, given different instructions but no hardware changes, would have been capable of emulating every detail of the Difference Engine’s method — and doing so would have been just as perverse. It would not have copied the Engine’s main advantage, its accuracy, which was due to hardware not software. It would only have made an arduous, boring task even more arduous and boring, which would have made errors more likely, not less. For humans, that difference in outcomes — the different error rate — would have been caused by the fact that computing exactly the same table with two different algorithms felt different. 
But it would not have felt different to the Difference Engine. It had no feelings. Experiencing boredom was one of many cognitive tasks at which the Difference Engine would have been hopelessly inferior to humans. Nor was it capable of knowing or proving, as Babbage did, that the two algorithms would give identical results if executed accurately. Still less was it capable of wanting, as he did, to benefit seafarers and humankind in general. In fact, its repertoire was confined to evaluating a tiny class of specialised mathematical functions (basically, power series in a single variable). Thinking about how he could enlarge that repertoire, Babbage first realised that the programming phase of the Engine’s operation could itself be automated: the initial settings of the cogs could be encoded on punched cards. And then he had an epoch-making insight. The Engine could be adapted to punch new cards and store them for its own later use, making what we today call a computer memory. If it could run for long enough — powered, as he envisaged, by a steam engine — and had an unlimited supply of blank cards, its repertoire would jump from that tiny class of mathematical functions to the set of all computations that can possibly be performed by any physical object. That’s universality. Babbage called this improved machine the Analytical Engine. He and Lovelace understood that its universality would give it revolutionary potential to improve almost every scientific endeavour and manufacturing process, as well as everyday life. They showed remarkable foresight about specific applications. They knew that it could be programmed to do algebra, play chess, compose music, process images and so on. Unlike the Difference Engine, it could be programmed to use exactly the same method as humans used to make those tables. And prove that the two methods must give the same answers, and do the same error-checking and proofreading (using, say, optical character recognition) as well. 
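The Difference Engine’s narrow repertoire, tabulating power series (in practice, polynomials) using nothing but repeated addition, is simple enough to sketch. What follows is an illustrative modern reconstruction of the method of finite differences, not Babbage’s own design; the function name and seeding convention are mine:

```python
def tabulate(seed, steps):
    """Tabulate a polynomial by repeated addition alone, as the
    Difference Engine did mechanically. `seed` holds the function's
    value at the starting point followed by its successive finite
    differences there; for a degree-n polynomial the nth difference
    is constant, so the list has n + 1 entries."""
    diffs = list(seed)
    table = []
    for _ in range(steps):
        table.append(diffs[0])
        # Each column is updated by adding the column to its right:
        # the only arithmetic the Engine's cogs had to perform.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return table

# f(x) = x**2 at x = 0, 1, 2, ...: values 0, 1, 4, 9, ...
# First differences are 1, 3, 5, ...; the second difference is a constant 2.
squares = tabulate([0, 1, 2], 6)   # [0, 1, 4, 9, 16, 25]
```

Each extra difference column handles one more polynomial degree, which is why enlarging the repertoire beyond this tiny class of functions required the qualitatively different leap to the Analytical Engine.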
But could the Analytical Engine feel the same boredom? Could it feel anything? Could it want to better the lot of humankind (or of Analytical Enginekind)? Could it disagree with its programmer about its programming? Here is where Babbage and Lovelace’s insight failed them. They thought that some cognitive functions of the human brain were beyond the reach of computational universality. As Lovelace wrote, ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.’ And yet ‘originating things’, ‘following analysis’, and ‘anticipating analytical relations and truths’ are all behaviours of brains and, therefore, of the atoms of which brains are composed. Such behaviours obey the laws of physics. So it follows inexorably from universality that, with the right program, an Analytical Engine would undergo them too, atom by atom and step by step. True, the atoms in the brain would be emulated by metal cogs and levers rather than organic material — but in the present context, inferring anything substantive from that distinction would be rank racism. Despite their best efforts, Babbage and Lovelace failed almost entirely to convey their enthusiasm about the Analytical Engine to others. In one of the great might-have-beens of history, the idea of a universal computer languished on the back burner of human thought. There it remained until the 20th century, when Alan Turing arrived with a spectacular series of intellectual tours de force, laying the foundations of the classical theory of computation, establishing the limits of computability, participating in the building of the first universal classical computer and, by helping to crack the Enigma code, contributing to the Allied victory in the Second World War. Turing fully understood universality. 
In his 1950 paper ‘Computing Machinery and Intelligence’, he used it to sweep away what he called ‘Lady Lovelace’s objection’, and every other objection both reasonable and unreasonable. He concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written. This astounding claim split the intellectual world into two camps, one insisting that AGI was none the less impossible, and the other that it was imminent. Both were mistaken. The first, initially predominant, camp cited a plethora of reasons ranging from the supernatural to the incoherent. All shared the basic mistake that they did not understand what computational universality implies about the physical world, and about human brains in particular. But it is the other camp’s basic mistake that is responsible for the lack of progress. It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program. Nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much. Why? I call the core functionality in question creativity: the ability to produce new explanations. For example, suppose that you want someone to write you a computer program to convert temperature measurements from Centigrade to Fahrenheit. Even the Difference Engine could have been programmed to do that. A universal computer like the Analytical Engine could achieve it in many more ways. 
To specify the functionality to the programmer, you might, for instance, provide a long list of all inputs that you might ever want to give it (say, all numbers from -89.2 to +57.8 in increments of 0.1) with the corresponding correct outputs, so that the program could work by looking up the answer in the list on each occasion. Alternatively, you might state an algorithm, such as ‘divide by five, multiply by nine, add 32 and round to the nearest 10th’. The point is that, however the program worked, you would consider it to meet your specification — to be a bona fide temperature converter — if, and only if, it always correctly converted whatever temperature you gave it, within the stated range. Now imagine that you require a program with a more ambitious functionality: to address some outstanding problem in theoretical physics — say the nature of Dark Matter — with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal. Such a program would presumably be an AGI (and then some). But how would you specify its task to computer programmers? Never mind that it’s more complicated than temperature conversion: there’s a much more fundamental difficulty. Suppose you were somehow to give them a list, as with the temperature-conversion program, of explanations of Dark Matter that would be acceptable outputs of the program. If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations. For none of those explanations would be new: you would already have created them yourself in order to write the specification. So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. But writing that algorithm (without first making new discoveries in physics and hiding them in the program) is exactly what you wanted the programmers to do! 
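The contrast drawn above between the two ways of specifying the converter, an exhaustive input-output list versus a stated algorithm, can be sketched in a few lines of Python. The names here are illustrative, not from the essay:

```python
# A minimal sketch of the two specification styles described above.
# Names (convert_by_formula, TABLE, convert_by_lookup) are illustrative.

def convert_by_formula(celsius: float) -> float:
    """Centigrade to Fahrenheit: divide by five, multiply by nine,
    add 32, and round to the nearest tenth."""
    return round(celsius / 5 * 9 + 32, 1)

# The alternative specification: a lookup table pairing every input in
# the stated range (-89.2 to +57.8, in increments of 0.1) with its
# correct output.
TABLE = {round(c / 10, 1): convert_by_formula(c / 10)
         for c in range(-892, 579)}

def convert_by_lookup(celsius: float) -> float:
    return TABLE[round(celsius, 1)]

# Either program meets the specification over the stated range:
print(convert_by_formula(0.0))                                # 32.0
print(convert_by_lookup(-89.2) == convert_by_formula(-89.2))  # True
```

However the program works internally, it counts as a bona fide converter only because its outputs match the specification over the stated range; neither version involves anything like a ‘new explanation’.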
I’m sorry Dave, I’m afraid I can’t do that: HAL, the computer intelligence from Stanley Kubrick’s 2001: A Space Odyssey. Courtesy MGM

Traditionally, discussions of AGI have evaded that issue by imagining only a test of the program, not its specification — the traditional test having been proposed by Turing himself. It was that (human) judges be unable to detect whether the program is human or not, when interacting with it via some purely textual medium so that only its cognitive abilities would affect the outcome. But that test, being purely behavioural, gives no clue as to how to meet the criterion. Nor can it be met by the technique of ‘evolutionary algorithms’: the Turing test cannot itself be automated without first knowing how to write an AGI program, since the ‘judges’ of a program need to have the target ability themselves. (For how I think biological evolution gave us the ability in the first place, see my book The Beginning of Infinity.) And in any case, AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs. The upshot is that, unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor a test of the outputs. What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not. Such a theory is beyond present-day knowledge. What we do know about epistemology implies that any approach not directed towards that philosophical breakthrough must be futile.
Unfortunately, what we know about epistemology is contained largely in the work of the philosopher Karl Popper and is almost universally underrated and misunderstood (even — or perhaps especially — by philosophers). For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible. I myself remember, for example, observing on thousands of consecutive occasions that on calendars the first two digits of the year were ‘19’. I never observed a single exception until, one day, they started being ‘20’. Not only was I not surprised, I fully expected that there would be an interval of 17,000 years until the next such ‘19’, a period that neither I nor any other human being had previously experienced even once. How could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow? Because it is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways. Of course, given the explanation, those drastic ‘changes’ in the earlier pattern of 19s are straightforwardly understood as being due to an invariant underlying pattern or law. But the explanation always comes first. 
Without that, any continuation of any sequence constitutes ‘the same thing happening again’ under some explanation. So, why is it still conventional wisdom that we get our theories by induction? For some reason, beyond the scope of this article, conventional wisdom adheres to a trope called the ‘problem of induction’, which asks: ‘How and why can induction nevertheless somehow be done, yielding justified true beliefs after all, despite being impossible and invalid respectively?’ Thanks to this trope, every disproof (such as that by Popper and David Miller back in 1988), rather than ending inductivism, simply causes the mainstream to marvel in even greater awe at the depth of the great ‘problem of induction’. In regard to how the AGI problem is perceived, this has the catastrophic effect of simultaneously framing it as the ‘problem of induction’, and making that problem look easy, because it casts thinking as a process of predicting that future patterns of sensory experience will be like past ones. That looks like extrapolation — which computers already do all the time (once they are given a theory of what causes the data). But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences. We think about the world: not just the physical world but also worlds of abstractions such as right and wrong, beauty and ugliness, the infinite and the infinitesimal, causation, fiction, fears, and aspirations — and about thinking itself. Now, the truth is that knowledge consists of conjectured explanations — guesses about what really is (or really should be, or might be) out there in all those worlds. Even in the hard sciences, these guesses have no foundations and don’t need justification. Why? Because genuine knowledge, though by definition it does contain truth, almost always contains error as well. So it is not ‘true’ in the sense studied in mathematics and logic. 
Thinking consists of criticising and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data. And therefore, attempts to work towards creating an AGI that would do the latter are just as doomed as an attempt to bring life to Mars by praying for a Creation event to happen there. Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines. Furthermore, despite the above-mentioned enormous variety of things that we create explanations about, our core method of doing so, namely Popperian conjecture and criticism, has a single, unified logic. Hence the term ‘general’ in AGI.
A computer program either has that yet-to-be-fully-understood logic, in which case it can perform human-type thinking about anything, including its own thinking and how to improve it, or it doesn’t, in which case it is in no sense an AGI. Consequently, another hopeless approach to AGI is to start from existing knowledge of how to program specific tasks — such as playing chess, performing statistical analysis or searching databases — and then to try to improve those programs in the hope that this will somehow generate AGI as a side effect, as happened to Skynet in the Terminator films. Nowadays, an accelerating stream of marvellous and useful functionalities for computers is coming into use, some of them sooner than had been foreseen even quite recently. But what is neither marvellous nor useful is the argument that often greets these developments, that they are reaching the frontiers of AGI. An especially severe outbreak of this occurred recently when a search engine called Watson, developed by IBM, defeated the best human player of a word-association database-searching game called Jeopardy. ‘Smartest machine on Earth’, the PBS documentary series Nova called it, and characterised its function as ‘mimicking the human thought process with software.’ But that is precisely what it does not do. The thing is, playing Jeopardy — like every one of the computational functionalities at which we rightly marvel today — is firmly among the functionalities that can be specified in the standard, behaviourist way that I discussed above. No Jeopardy answer will ever be published in a journal of new discoveries. The fact that humans perform that task less well by using creativity to generate the underlying guesses is not a sign that the program has near-human cognitive abilities. The exact opposite is true, for the two methods are utterly different from the ground up.
Likewise, when a computer program beats a grandmaster at chess, the two are not using even remotely similar algorithms. The grandmaster can explain why it seemed worth sacrificing the knight for strategic advantage and can write an exciting book on the subject. The program can only prove that the sacrifice does not force a checkmate, and cannot write a book because it has no clue even what the objective of a chess game is. Programming AGI is not the same sort of problem as programming Jeopardy or chess. An AGI is qualitatively, not quantitatively, different from all other computer programs. The Skynet misconception likewise informs the hope that AGI is merely an emergent property of complexity, or that increased computer power will bring it forth (as if someone had already written an AGI program but it takes a year to utter each sentence). It is behind the notion that the unique abilities of the brain are due to its ‘massive parallelism’ or to its neuronal architecture, two ideas that violate computational universality. Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough. In 1950, Turing expected that by the year 2000, ‘one will be able to speak of machines thinking without expecting to be contradicted.’ In 1968, Arthur C. Clarke expected it by 2001. Yet today in 2012 no one is any better at programming an AGI than Turing himself would have been. This does not surprise people in the first camp, the dwindling band of opponents of the very possibility of AGI. But for the people in the other camp (the AGI-is-imminent one) such a history of failure cries out to be explained — or, at least, to be rationalised away. And indeed, unfazed by the fact that they could never induce such rationalisations from experience as they expect their AGIs to do, they have thought of many. The very term ‘AGI’ is an example of one. The field used to be called ‘AI’ — artificial intelligence. 
But ‘AI’ was gradually appropriated to describe all sorts of unrelated computer programs such as game players, search engines and chatbots, until the G for ‘general’ was added to make it possible to refer to the real thing again, but now with the implication that an AGI is just a smarter species of chatbot. Another class of rationalisations runs along the general lines of: AGI isn’t that great anyway; existing software is already as smart or smarter, but in a non-human way, and we are too vain or too culturally biased to give it due credit. This gets some traction because it invokes the persistently popular irrationality of cultural relativism, and also the related trope that: ‘We humans pride ourselves on being the paragon of animals, but that pride is misplaced because they, too, have language, tools … and self-awareness.’ Remember the significance attributed to Skynet’s becoming ‘self-aware’? That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense — for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself — if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one. Perhaps the reason that self-awareness has its undeserved reputation for being connected with AGI is that, thanks to Kurt Gödel’s theorem and various controversies in formal logic in the 20th century, self-reference of any kind has acquired a reputation for woo-woo mystery. So has consciousness. And here we have the problem of ambiguous terminology again: the term ‘consciousness’ has a huge range of meanings. At one end of the scale there is the philosophical problem of the nature of subjective sensations (‘qualia’), which is intimately connected with the problem of AGI.
At the other, ‘consciousness’ is simply what we lose when we are put under general anaesthetic. Many animals certainly have that. AGIs will indeed be capable of self-awareness — but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves. This does not mean that apes who pass the mirror test have any hint of the attributes of ‘general intelligence’ of which AGI would be an artificial version. Indeed, Richard Byrne’s wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic. Ironically, that group of rationalisations (AGI has already been done/is trivial/exists in apes/is a cultural conceit) are mirror images of arguments that originated in the AGI-is-impossible camp. For every argument of the form ‘You can’t do AGI because you’ll never be able to program the human soul, because it’s supernatural’, the AGI-is-easy camp has the rationalisation, ‘If you think that human cognition is qualitatively different from that of apes, you must believe in a supernatural soul.’ ‘Anything we don’t yet know how to program is called human intelligence,’ is another such rationalisation. It is the mirror image of the argument advanced by the philosopher John Searle (from the ‘impossible’ camp), who has pointed out that before computers existed, steam engines and later telegraph systems were used as metaphors for how the human mind must work. Searle argues that the hope for AGI rests on a similarly insubstantial metaphor, namely that the mind is ‘essentially’ a computer program. But that’s not a metaphor: the universality of computation follows from the known laws of physics.
Some, such as the mathematician Roger Penrose, have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. To explain why I, and most researchers in the quantum theory of computation, disagree that this is a plausible source of the human brain’s unique functionality is beyond the scope of this essay. (If you want to know more, read Litt et al’s 2006 paper ‘Is the Brain a Quantum Computer?’, published in the journal Cognitive Science.) That AGIs are people has been implicit in the very concept from the outset. If there were a program that lacked even a single cognitive ability that is characteristic of people, then by definition it would not qualify as an AGI. Using non-cognitive attributes (such as percentage carbon content) to define personhood would, again, be racist. But the fact that the ability to create new explanations is the unique, morally and intellectually significant functionality of people (humans and AGIs), and that they achieve this functionality by conjecture and criticism, changes everything. Currently, personhood is often treated symbolically rather than factually — as an honorific, a promise to pretend that an entity (an ape, a foetus, a corporation) is a person in order to achieve some philosophical or practical aim. This isn’t good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs. 
For example, the mere fact that it is not the computer but the running program that is a person raises unsolved philosophical problems that will become practical, political controversies as soon as AGIs exist. Once an AGI program is running in a computer, to deprive it of that computer would be murder (or at least false imprisonment or slavery, as the case may be), just like depriving a human mind of its body. But unlike a human body, an AGI program can be copied into multiple computers at the touch of a button. Are those programs, while they are still executing identical steps (ie before they have become differentiated due to random choices or different experiences), the same person or many different people? Do they get one vote, or many? Is deleting one of them murder, or a minor assault? And if some rogue programmer, perhaps illegally, creates billions of different AGI people, either on one computer or on many, what happens next? They are still people, with rights. Do they all get the vote? Furthermore, in regard to AGIs, like any other entities with creativity, we have to forget almost all existing connotations of the word ‘programming’. To treat AGIs like any other computer programs would constitute brainwashing, slavery, and tyranny. And cruelty to children, too, for ‘programming’ an already-running AGI, unlike all other programming, constitutes education. And it constitutes debate, moral as well as factual. To ignore the rights and personhood of AGIs would not only be the epitome of evil, but also a recipe for disaster: creative beings cannot be enslaved forever. Some people are wondering whether we should welcome our new robot overlords.
Some hope to learn how we can rig their programming to make them constitutionally unable to harm humans (as in Isaac Asimov’s ‘laws of robotics’), or to prevent them from acquiring the theory that the universe should be converted into paper clips (as imagined by Nick Bostrom). None of these are the real problem. It has always been the case that a single exceptionally creative person can be thousands of times as productive — economically, intellectually or whatever — as most people; and that such a person could do enormous harm were he to turn his powers to evil instead of good. These phenomena have nothing to do with AGIs. The battle between good and evil ideas is as old as our species and will continue regardless of the hardware on which it is running. The issue is: we want the intelligences with (morally) good ideas always to defeat the evil intelligences, biological and artificial; but we are fallible, and our own conception of ‘good’ needs continual improvement. How should society be organised so as to promote that improvement? ‘Enslave all intelligence’ would be a catastrophically wrong answer, and ‘enslave all intelligence that doesn’t look like us’ would not be much better. One implication is that we must stop regarding education (of humans or AGIs alike) as instruction — as a means of transmitting existing knowledge unaltered, and causing existing values to be enacted obediently. As Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): ‘there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.’ That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves. 
I do not highlight all these philosophical issues because I fear that AGIs will be invented before we have developed the philosophical sophistication to understand them and to integrate them into civilisation. It is for almost the opposite reason: I am convinced that the whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology, and that the philosophical progress that is essential to their future integration is also a prerequisite for developing them in the first place. The lack of progress in AGI is due to a severe logjam of misconceptions. Without Popperian epistemology, one cannot even begin to guess what detailed functionality must be achieved to make an AGI. And Popperian epistemology is not widely known, let alone understood well enough to be applied. Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas (or worse, just into behaviours) is like trying to cure infectious diseases by balancing bodily humours: futile because it is rooted in an archaic and wildly mistaken world view. Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose ‘thinking’ is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity. Clearing this logjam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. 
So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.
David Deutsch
https://aeon.co//essays/how-close-are-we-to-creating-artificial-intelligence
https://images.aeonmedia…y=75&format=auto
Astronomy
Astro-tourism, air travel and nifty apps gave Venus’s 2012 transit a democratic edge over astronomy’s historic heroism
I awoke on the morning of June 5 in a bottom bunk of the ‘Monastery’ at the Mount Wilson Observatory in California. This small dormitory has housed a pantheon of astronomers over the past hundred years, including Edwin Hubble, whose private library survives intact behind a locked door at one end of the building. I had heard that the Monastery earned its name as a play on the surnames of its early inhabitants, Charles Greeley Abbot and Charles Edward St John, but, more likely, the term described the long-standing all-male dominion over the observatory — as over the whole science of astronomy. At daybreak, with the 2012 transit of Venus still ahead of me, I felt my night’s rest had won a symbolic victory for any woman ever denied telescope time or a bed on the mountain. I had observed the previous transit of Venus eight years ago, from the campus of an Italian university outside Rome, where I travelled in a large company of astro-tourists just to see it. This time, I had joined a congenial group of antique telescope enthusiasts, who I knew were already out at first light, setting up equipment hauled in vans and trailers up the tortuous mountain road. Today promised to grant us the privileged status of second-time Venus-transit viewers. We would not get a third opportunity. No mortal can witness more than two transits of Venus in a lifetime: the motions of the heavenly spheres permit only two per century. Most of the human family has never seen even one. As spectacles go, the transit of Venus — the sight of the small planet passing across the face of the Sun — is not beautiful. Many everyday sunrise and sunset vistas surpass it. Yet Venus’s promise to reveal secrets of the universe during each brief transit has seduced generations of scientists to sacrifice anything for the sake of observing the rare event. In the 18th and 19th centuries, scores of men died in such pursuits. In the 20th century, no transits occurred. 
The next one will come, in its due course, on December 10, 2117. A transit transforms specific locations into destinations if they afford ideal viewing — no matter that they stand behind enemy lines in wartime or at frozen outposts normally deemed inaccessible. Adventurers past who chased planet Venus on her cross-solar trek braved dangers ranging from shipwreck in the North Atlantic to a fatal epidemic in Baja, California. All I had to do was board an aeroplane and hope for good weather. How could I not go? The 3,000 miles I travelled from my home in New York to the summit of Mount Wilson comes to about 1/31,000th the distance from the Earth to the Sun. I mention this because the average Earth-Sun distance of 93 million miles, so familiar a fact today, was originally determined by observing the transit of Venus. Without knowledge of the Earth-Sun distance no one could gauge the gaps between the planets, or grasp the enormity of the gulf separating the planets from the stars. Without knowledge of the Sun’s actual distance, one could not even begin to calculate its mass, diameter or true brightness. ‘Willingly would I burn to death like Phaeton,’ the Greek mathematician Eudoxus swore in the fourth century BCE, ‘were this the price for reaching the Sun and learning its shape, its size, and its distance.’

Star struck: early morning at Mount Wilson Observatory and one Californian star-gazer awaits the onset of Venus transiting the sun. Photo by Patrick Fraser

Precise modern measurements of the solar system have not rendered the transit of Venus obsolete to science. On the contrary, the 2012 event drew the attention of the Hubble Space Telescope, the Nasa Solar Dynamics Observatory, and the European Space Agency’s Venus Express orbiter. Findings from these and other ground- and space-based instruments will leap the bounds of the solar system, as astronomers apply them to the ongoing search for Earth-like exoplanets orbiting stars beyond the Sun.
The 2012 occasion also drew the largest global audience in transit history. Thousands of day-trippers turned up at the Adler Planetarium in Chicago, the Griffith Observatory in Los Angeles, and other such centres of public outreach in astronomy. The more serious, or more affluent, boarded cruise ships to points in the Pacific Ocean, where they could count on clear skies and capture the transit in its entirety of six hours and 40 minutes. I knew that viewing the transit from Mount Wilson precluded my seeing its full duration. Sunset would intervene nearly two hours before the event ended. While diehard friends flew off to Hawaii, I convinced myself I could be content with the beginning and the middle at such a historic site. The Mount Wilson Observatory, high above the city of Pasadena, opened in 1904 for the express purpose of studying the Sun. Its research purview has since widened and many large and small telescope domes now dot the mountain, but the 150-foot solar tower dominates the site. At the tower’s top, movable mirrors (still adjusted by hand) track the Sun all day, every day, throwing down a live image to the control room at the base, where a bust of the observatory’s founder George Ellery Hale oversees the monitoring of sunspot activity. The parking lot assigned to our group as the transit encampment marked the very spot where Hale tested his first solar instrument in 1903. Had he somehow re-materialised on the mountain to watch the transit with us, he would have recognised a few of our 100-year-old instruments. And even the view through them. At the age of fourteen Hale observed the 1882 transit of Venus, inaugurating a career of invention and discovery that made him the acknowledged ‘priest of the Sun’ in his day.
Johannes Kepler, however, the German astronomer who made the first ever transit predictions in 1629, failed to see in life what he foresaw on paper. Kepler died the year before the transit of Venus took place, and in any case, the 1631 transit was not visible from Europe. Kepler’s studies of planetary motions in the early 1600s revealed the relative dimensions of the Solar System. He could say, for example, that Jupiter’s distance from the Sun was 5.2 times that of the Earth’s — whatever the Earth’s might be. Likewise, Venus’s distance from the Sun was a fraction of Earth’s (0.72), and Mercury’s an even smaller fraction. Given these relationships as a foundation, a single true value for any interplanetary distance would put real numbers on all the rest. In 1716, Edmond Halley, England’s second Astronomer Royal, proposed using the transit of Venus as a milepost. Halley got the idea while observing a transit of Mercury — a much more common phenomenon that occurs at least a dozen times per century. Although Mercury was too small and too close to the Sun for his purpose, Halley assumed Venus would serve.

Astro-tourists mingle with astronomers, scientific instrument makers and app-laden digerati to compare notes at the Mount Wilson Observatory in California. Photo by Patrick Fraser

He proposed dispatching observers as far north and south as possible to view the next transit of Venus. Expeditionary groups must go where the entire event would be visible from start to finish. Northern and southern teams would see the same thing from different vantage points: Venus would appear to cross the Sun’s disk at different solar latitudes and therefore traverse the disk in different time spans. From those small differences between the reports, mathematicians could extract the crucial Earth-Sun distance — a quantity dubbed the astronomical unit.
Halley’s birth in 1656 came too late for him to see the transit of 1639, and doomed him to miss the next one, in 1761, unless he could survive to 105. A realist, he implored future generations to live his dream instead: I strongly urge diligent searchers of the Heavens (for whom, when I shall have ended my days, these sights are being kept in store) to bear in mind this injunction of mine and to apply themselves actively and with all their might to making the necessary observations. And I wish them luck and pray above all that they are not robbed of the hoped-for spectacle by the untimely gloom of a cloudy sky; but that at last they may gain undying glory and fame by confining the dimensions of the celestial orbits within the narrower limits. Nearly two decades after Halley’s death, his idea launched a thousand ships. That’s an exaggeration, of course. But 120 hopeful observers followed his injunction, fanning out to 62 locations stretching from Siberia to the Cape of Good Hope. Even before this international effort to observe the 1761 transit got under way, the French astronomer Joseph-Nicolas Delisle had refined Halley’s method to make it fail-safe. Rather than tracking the entire transit from near-polar positions, Delisle advised teams to spread around the globe and try to catch either the beginning phases or the end stages. Later, the results from all points could be pooled, matched to their precise locations, and then analysed to produce the astronomical unit. Disaster dogged many of the 1761 expeditions, given that much of the world was engaged in the Seven Years War. Thus Charles Mason and Jeremiah Dixon, sailing out of Portsmouth Harbor, bound for Sumatra aboard a Royal Navy frigate, were fired on by a French warship and forced back to port with heavy damages and a decimated crew. By the time they refitted and readied themselves for a second foray, they had no choice but to settle for a less favourable site in Cape Town, South Africa.
There, on June 6, and despite intermittent clouds, they succeeded in capturing the only accurate transit observations from the southern hemisphere. In northern Russia, Mikhail Lomonosov of the St Petersburg Academy of Sciences watched the same transit from his home observatory. He reported seeing a slender arc of light illuminate the outer limits of Venus as the planet entered and exited the Sun’s disk. This glow, Lomonosov said, proved that the planet possessed an atmosphere. The discovery fed his visions of living, breathing Venusians. Only later did the awful truth emerge about Venus’s atmosphere — the miles-thick blankets of poisonous gases bearing down on the planet with 90 times the Earth’s air pressure at sea level. Indeed, modern planetary science has revealed Venus to be a perfect theme park of Hell. Its landscape seethes with smouldering volcanoes and surface temperatures hot enough to melt lead. Corrective vision: notwithstanding the majesty of astronomical history at her fingertips, Dava Sobel succumbs to Venus’s allure. Photo by Patrick Fraser. Nevertheless, the planet retains its ancient associations with beauty and femininity. Early skygazers probably conferred these attributes on Venus when noting how her visitations in the morning or evening sky lasted a significant nine months. And it was this long-standing link with romantic love that lured a local couple to ascend Mount Wilson with their friends and relations on transit day 2012 for a formal exchange of wedding vows. Two members of our astro-group also turned to thoughts of love. The pair had met eight years before in Boston, when they were brought together by efforts to ready a historic telescope — a veteran of 1761 — for a public viewing of the 2004 transit. As they knelt together on Mount Wilson to share an eyepiece at the start of the 2012 transit, he proposed marriage, and she accepted.
In my own self-induced, pre-transit mania, I devoured most of Shirley Hazzard’s The Transit of Venus, her 1980 story of thwarted love, before realising that the title was a metaphor. The twentieth-century setting should have clued me right away that the novel could never involve a real transit, but one of the characters worked as an astronomer, so I kept hoping all the same. By page 167, where Hazzard showed her ignorance of transit dynamics by saying, ‘Venus can blot out the Sun,’ I was too enamoured of her prose to stop reading. Astronomers are just as liable to incline toward metaphor, describing the four key stations of Venus’s transit as ‘contacts’, although Venus and the Sun merely appear to touch each other across millions of miles of interplanetary space. At ‘first contact’, Venus’s body just kisses the Sun’s perimeter. ‘Second contact’ follows soon after, when the entire dark silhouette of the planet has fully penetrated the Sun’s disk. Hours then pass in transit before Venus’s leading edge encounters the Sun’s opposite shore for ‘third contact’. Over the final 20 minutes Venus emerges, until its ‘fourth contact’, or kiss goodbye. The trick, back in 1761, was to time the exact moment of second and third contacts, which meant transporting an accurate timepiece the size of a grandfather clock to the observation site, restoring it to working order after a turbulent ocean voyage or overland journey in horse-drawn carriage or sled, and then nailing the clock’s case to a tree or other solid support so that no vibrations would disturb the swings of its pendulum. While one man looked through a telescope (darkened with filters of smoked glass to shield his eyes from the Sun’s blinding light), another watched the clock and waited to hear the shout of ‘Contact!’ But first contact eluded the fellow at the telescope because Venus remained invisible in the Sun’s glare until she began denting the disk.
And the all-important second contact proved unexpectedly difficult to judge, because Venus misbehaved. Instead of maintaining a perfect sphere as predicted, and perching like a pea on the rim of a dinner plate, she appeared to stretch herself into a teardrop shape, hanging on to the Sun’s limb by a thread, or blob, or a bridge of connective tissue. By the time this ‘black drop effect’ dissipated, the planet stood well within the Sun’s bounds, and it was impossible to say exactly when that had happened. Near third contact, Venus again elongated into a black drop, confounding attempts at precise reckoning. Guillaume LeGentil of the French Academy of Sciences suffered added hardship by attempting his observations at sea. He had departed from Paris in 1760, more than a year ahead of the transit, in order to allow time for all necessary preparations at his chosen site — Pondicherry, in southeast India. Storms, skirmishes, dysentery, and other difficulties delayed him en route, however, and as he finally neared Pondicherry he learnt the region had fallen to the British, preventing his landfall. Turning about in late May with hopes of reaching a fallback site on Mauritius, LeGentil found himself still aboard ship on the appointed day of June 6. At least the weather smiled on him: he saw the entire transit. But he did not observe it scientifically, since he could neither time the contacts nor ascertain his position. In 1761, mariners owned no means of establishing their longitude. Even on land, determining longitude demanded lengthy and painstaking astronomical observations. To that end, Mason and Dixon stayed on at Cape Town for a further four months after seeing the transit, to lock in the coordinates of their temporary observatory. A devastated LeGentil locked himself in his cabin after the transit and made no entries in his diary until the ship reached Mauritius more than a fortnight later. 
Rather than return to France in defeat, LeGentil, who had no loves or dependants expecting him at home, chose to wait in the Indian Ocean basin for the eight years until the next transit of Venus. The odd pattern of transit repetition — two within a decade, practically on the same date, followed by a hiatus of more than a century before another pair — was already well understood in LeGentil’s time. It results from the speed and slant of Venus’s orbit, which tilts about three degrees from the plane of Earth’s orbit. If the paths occupied the same plane, we would see a transit of Venus about every 18 months, whenever the speedier Venus overtook the Earth on its circuit of the Sun. But because the planets travel on crisscrossed paths, Venus usually skirts above or below the Sun (from an Earthling’s perspective). For a transit to occur, Venus needs to pass between the Earth and the Sun when both planets are near one of the two nodes — the points where the planes of their orbits intersect. Earth reaches those points of possibility every June and December. Sometimes Venus is there, lying right along the line of sight to the Sun, but more often it is not. If the first transit of a pair occurs in June, the second one does, too, while the following two will fall in December, 105 years later. After a pair of December couplings, 122 years elapse before a June reunion. During June transits, Venus descends from above Earth’s orbital plane to below, and so traces a down-sloping course across the Sun (from 11 to 3 on a clock face, say, or from 8 to 5). At December transits, with Venus on the upswing, the perceived path climbs. In the 1760s, throughout the eight long years of LeGentil’s inter-transit layover, he occupied himself studying the flora and fauna of Mauritius and Madagascar.
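The odd cadence of transit repetition (pairs eight years apart, a June pair followed 105 years later by a December pair, a December pair followed 122 years later by a June pair) can be sketched in a few lines of arithmetic. This is an illustration of the pattern, not part of the essay:

```python
# Generate transit-of-Venus years from the repetition pattern:
# each pair is eight years apart; after a June pair, 105 more years
# pass before a December pair; after a December pair, 122 more years
# pass before a June pair.
def transit_years(start=1631, month="December", count=8):
    years, year = [], start
    for _ in range(count // 2):
        years += [(year, month), (year + 8, month)]
        if month == "June":
            year, month = year + 8 + 105, "December"
        else:
            year, month = year + 8 + 122, "June"
    return years

print(transit_years())
```

Seeded with the December pair of 1631 and 1639, the generator reproduces the June transits of 1761, 1769, 2004 and 2012 and the December transits of 1874 and 1882, exactly the sequence the expeditions chased.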
As he explored, he drew improved maps of both islands. A dedicated natural philosopher in those pre-specialised times could make worthy contributions to several fields of science. Geography and weather conditions convinced LeGentil of Manila’s superiority as a base from which to observe the 1769 transit. However, the Spanish governor proved uncooperative, and LeGentil’s French superiors insisted he return to Pondicherry, which was once again French territory. With characteristic diligence, he arrived more than a year in advance, using the lead-time to pinpoint the coordinates of the old palace pavilion where he set up his instruments. After a full month of clear skies, June 3, 1769 dawned dreary. Pondicherry was socked in for 24 hours, and the transit was obliterated. ‘I had crossed such a great expanse of seas,’ LeGentil moaned (and with good reason), ‘exiling myself from my native land, only to be the spectator of a fatal cloud which came to place itself before the sun at the precise moment of my observation, to carry off from me the fruits of my pains and fatigues.’ Even those blessed with clear skies found cause to lament, when they realised how the black-drop effect undermined their efforts. Captain James Cook and astronomer Charles Green, standing side by side in 1769 at Point Venus in Tahiti, each at his own telescope, called out values for third contact that differed by 12 seconds. Other teams fared worse, all failing Halley’s expectations of split-second concurrence. The 1761 results had narrowed the estimate of the Earth-Sun distance to between 77,100,000 and 98,700,000 miles — a margin of error of more than 20 per cent. The 1769 efforts reduced the distance discrepancy to about 4,000,000 miles, and the margin of error to four per cent, but astronomers craved still greater precision.
Black drops tethering the planet Venus to the Sun’s limb again bedevilled the efforts of 1874 and 1882 transit chasers to establish the exact Earth-Sun distance, despite the availability of bigger, better telescopes and the introduction of photography. It wasn’t until the 2004 transit — long after interplanetary distances had been shaved to millimetre precision via radar signals transmitted from Earth and bounced back by Venus — that astronomers unravelled the twin causes of the dreaded black drop. Half the blame rests on the inevitable blurring of images viewed through a telescope’s lenses. Even the lens of the human eye causes some spreading-out of perceived objects, which constrains an astronomer to peer through two levels of distortion. The Sun plays the other role in causing the black-drop effect. The centre of the Sun’s disk appears much brighter than the peripheral regions — because the Sun is really a round ball, not a flat disk. We see most deeply into its warm depths near the centre, while the light emerging from the perimeter filters through more layers of gas, and darkens accordingly. The extreme edge, or limb, where observers try to descry second and third contacts, looks the darkest of all. So the fault, to paraphrase Shakespeare, lies part in the stars, and part in ourselves. Not only do our eyes deceive us, but each of us has a characteristic response time — a mix of expectation, preparation, anxiety, and attention, not to mention the travel time of nerve impulses through the body. Introspective astronomers of the 19th century studied these effects, and factored the ‘personal equation’ into their observations. Nothing daunted, several of us on Mount Wilson had downloaded the Transit of Venus phone app, designed to engage the global social network in a citizen-science effort to remeasure the solar system. 
The free app put many accoutrements of expeditions past in the palm of the hand: a single screen-tap would log in each user’s atomic-clock time of observation and the local GPS coordinates. The app came with built-in black-drop simulations to give users a few practice sessions in confronting ambiguity. Statistics had convinced the app’s designers that the world population of transit observers would divide more or less equally between the trigger-happy and the slow responders. The final data analysis of several thousand tap-reports could be expected to balance all the personal equations. Unfortunately, Wi-Fi failed us on the mountain, so that is where our estimations stayed. Mostly, however, we just gaped. The leisurely pace of the transit and the large gathering of telescopes afforded an abundance of eyefuls. There was Venus on every hand: her passage projected through a grand old telescope onto a big screen to please the crowd. Her black body almost three-dimensional as seen through astro-binoculars. Her darkness contrasted with brightness, as a moving beauty mark on the Sun’s face. And the Sun, either blanched of all colour by protective filters, or churning crimson in solar telescopes tuned to the ferment of a star. The observatory shop did a brisk business in eclipse glasses — inexpensive paper frames with aluminised ‘lenses’ to block ultraviolet and infrared radiation. These shades afforded protection for an otherwise naked-eye view. Although I had watched a good chunk of the 2004 transit through just such glasses, I found I could hardly make out Venus through them this time. The passage of eight years might be insignificant astronomically, but counted on a human timescale, from 50-something to 60-something, it produces a marked change in visual perception. Venus descending: the planet traces a downward arc over the face of our star, having mapped a similar descent across it in 2004. The dramatic black on red is the product of protective filters.
Photo by Patrick Fraser. A few experienced observers rued another manifestation of ‘poor seeing’, which is astro-speak for atmospheric temperature differences that cause turbulence and rob telescope views of crispness. ‘With seeing like this,’ one visitor from the East complained, ‘I could have stayed home.’ Even the image of the Sun in the 150-foot solar tower was ‘boiling’ around the edges due to seeing effects. But no clouds blocked anyone’s view of the transit and that was cause for joy. Indeed, after the early cries of ‘I see it!’ ‘There it is!’ ‘It’s Venus!’, the general mood settled into a quiet euphoria. How long could we look at variations of an opaque circle on a bright background? Longer than you’d think. Longer than I’d thought. I found it hard to look away, knowing that the sight would soon be gone for good. Pictures of it would abound, I knew, and the myriad pictures I was bound to see later would replace the live view that could be savoured only in these moments, along with the heat and the dust. Toward sunset, the telescopes that had begun the day’s vigil pointing high above Mount Wilson lowered their sights toward the horizon. Seen through added thicknesses of air, Venus jiggled and morphed. The stately pace that had marked her progress since ingress broke into ecstatic dance. In the reddened skies of sunset or sunrise, added layers of air and haze may safely reveal a transit to the naked eye. This happened in 2004 on the east coast of the United States, where folks waited for sun-up to catch the final fraction of that transit. They had feared they would be ‘clouded out’ but, instead, the thin clouds allowed them an ideal view. Where I was in Rome on June 8 that year, the action began well after sunrise and lasted through the peak heat of a cloudless afternoon, and we needed all forms of sun protection in addition to eye protection.
One wonders whether the transit of Venus truly remained hidden through all the ages until the 17th century, when improved astronomical tables coincided with the invention of the telescope. It seems at least likely that some keen-sighted ancient, out to welcome the dawn, might have observed a perfectly round blemish on the rising Sun before full daylight swallowed Venus in flames. Mount Wilson turned chill the instant the Sun went down, and observers pulled extra layers of clothing from their backpacks as they stowed their equipment. Three hundred and fifty miles overhead, the Hubble Space Telescope was still watching the transit through an elaborate safety net. The Hubble’s delicate optics rule out looking anywhere near the Sun, let alone at it, so Hubble viewed the transit as reflected off the surface of the Moon. Three on-board instruments parsed the mirrored light in a range of wavelengths from ultraviolet to near infrared, trying to pick out the faint signature of Venus’s atmosphere. During the hours that Venus crossed the Sun, Hubble orbited the Earth four times, losing sight of the Moon for 40 minutes each time around. In another part of the sky, the Kepler spacecraft kept its back to the Sun as usual. Ignoring the transit going on in this Solar System, Kepler continued monitoring its 150,000 target stars suspected of anchoring other solar systems. Kepler aims to discover exoplanets by their transits. Any dip in a target star’s light may signal a ‘candidate’ planet, to be confirmed by further observations. To date, the Kepler team has revisited 2,321 candidates, and conferred bona fide planet status on 74 of those worlds. Kepler is especially seeking small terrestrial exoplanets, the size of Venus or Earth, with an emphasis on Earth-like characteristics such as liquid water and life. 
One day, thanks to the overall assessments of Venus’s atmosphere gained from this year’s transit, planet hunters will be able to tell a Venus from an Earth in a solar system far from our own. To be able to distinguish, across interstellar distances billions of times the astronomical unit, a Venusian ember ravaged by runaway greenhouse effects from an earthy paradise lush with greenery — this feat would surely prove the long-awaited return on Edmond Halley’s promise.
Dava Sobel
https://aeon.co//essays/burning-love-what-venus-tells-us-when-the-planets-align
Values and beliefs
High priests, holy writ and excommunications – how did Humanism end up acting like a religion?
In February this year, there was a clash of Titans. In one corner, Richard Dawkins, former Oxford professor, Darwinian biologist, brilliant science writer, scourge of the sloppy, and above all the Platonic Form of Atheist. In the other, Dr Rowan Williams, the Archbishop of Canterbury, himself no intellectual slouch, acknowledged as one of the West’s foremost scholars of Russian literature. The issue at stake: are you for a world devoid of ultimate meaning or are you for a world infused with purpose? Are you, as Benjamin Disraeli, the 19th-century prime minister, asked, on the side of Darwin or of the angels? Although Dawkins is fully committed to the exclusive disjunction — science or religion but not both — Williams would have been surprised and appalled to be forced to choose between the two. And here’s the rub: I, like Dawkins, am a non-believer. Yet I, like Williams, refuse to put science and religion at war. This is partly because I do not think they have to be — I see them as asking different questions. But it is also because I think there is something socially and psychologically unhealthy about the course that the debate has taken, especially by those on my side of the fence. I do not think the faults are all on one side, but let me speak to the side to which I might naturally be expected to belong. Holy warriors? In their much-anticipated debate Richard Dawkins and Archbishop Rowan Williams struggled to find much to disagree about, besides cosmology. Photo by London News Pictures/Rex Features. With the Dawkins-Williams confrontation, history was repeating itself. In 1860, a year after the publication of On the Origin of Species, the British Association for the Advancement of Science met at the Oxford University Museum. Darwin himself had long ceased to go to this kind of gathering, which was designed to explain and celebrate the achievements of science both to scientists and to the general public.
He was always sick and had, moreover, grown to dislike the physical aspects of controversy — getting up and confronting opponents in person. No such qualms were felt by his most devoted and closest followers, the botanist Joseph Hooker and the anatomist and paleontologist Thomas Henry Huxley. They knew that Darwin’s theory of evolution through natural selection was going to be the topic of the day and that the critics were spoiling for a fight. In a way, it was a holy mission — the two knights out there to promote and protect the reputation of their sick leader. If only Wagner had been an Englishman: instead of Parsifal, we might have had Darwin. No one was disappointed. The climax was the clash between the Bishop of Oxford, Samuel Wilberforce (who had been primed by the eminent anatomist Richard Owen), and Huxley, who was professor of natural history at the Royal School of Mines in London. Wilberforce, who was known for his oratory (not always favourably — his nickname was ‘Soapy Sam’), supposedly turned to Huxley and asked him if he was descended from monkeys on his grandfather’s side or his grandmother’s side. Huxley supposedly responded that he would rather be descended from an ape than from a man of learning who misused his talents to score a point in a debate. Word got out later that Huxley said he would rather be descended from an ape than from a bishop of the Church of England. In short, everybody had a grand time, keyed up by the fact that Admiral Robert Fitzroy, the man who captained HMS Beagle when Darwin took his trip around the world in the 1830s, had become a fervent Evangelical. He rushed around the museum brandishing a bible and crying: ‘The book, only the book.’ The Wilberforce-Huxley clash has been a defining origin story for evolutionists for well over a century now. Indeed, I first heard of Charles Darwin from my history master, in England in 1955. A terrific teacher, he strode around the front of the classroom acting out the debate.
Unfortunately, as with so many important myths, historians have thrown doubt on the authenticity of the fateful encounter. Perhaps clever things were said, but at the time they did not make the impression that later tellings imply. When the debate was over, the gladiators shook hands and went off together for a well-earned supper. They were, after all, Englishmen and gentlemen, and that is what really counted. Englishmen and gentlemen both: Vanity Fair cartoon of Samuel Wilberforce and Thomas Huxley circa 1860. Courtesy Wikimedia Commons. Nonetheless, there were real differences between the two protagonists, as can be seen in contemporary cartoons of the two men in Vanity Fair. Wilberforce is shown in full bishop’s regalia, including the baggy ‘lawn’ sleeves, taking one right back to the English Reformation. Huxley is in a Victorian business suit. He is the man of the New Age, when Britannia ruled the waves and, increasingly, the dry land also. He is the man of modern, science-based university curricula, of proper sanitation and well-built sewers and drains, of medicine intended to cure not kill, of universal literacy and votes for all (men that is). It isn’t only evolutionists who have enjoyed retelling the Wilberforce-Huxley story. It is a favourite of secular Humanists, who like to define themselves as the champions of reason against the unreason of religion. Today’s Humanists claim a lineage that stretches back into the classical world. They have no exclusive claim on the older humanist tradition of men such as Erasmus of Rotterdam, whose skill with ancient languages led, for instance, to better translations of the Bible. This broadly humanist world view may or may not have been religious but it did emphasise learning, human needs and human freedom: all that is needed, indeed, for a full and satisfying life, with an emphasis on reason and good sense. One would hope that every broad-minded person, believer or not, is a humanist in this regard.
What I am concerned with here is the self-proclaimed world-view of Humanism (which I capitalise to make this distinction). This is the movement that makes claims about science — and evolution in particular — that interest me. And it is this kind of Humanism that makes me uneasy. It doesn’t just define itself against religion; in some respects, it has taken on aspects of religion. Perhaps it is a kind of religion. Is it fair to speak of Thomas Henry Huxley as a Humanist in this sense? It is, at any rate, anachronistic. Indeed, Huxley is famous for coining the term ‘agnostic’ to describe his views. Yet in important ways he does foreshadow many characteristics of today’s Humanists. He was deeply committed to science, not just as a form of inquiry but as the foundation of his world-view. His life’s work in science, education and elsewhere (he was for many years a civil servant responsible for fisheries) shows that he was always thinking about the good that can come from science. He, like every other evolutionist of his day, thought that humans were not just any species. Evolution was progressive, from monad to man as it were, and we were the apotheosis of the evolutionary process. As such, we had a special role and status. (To be fair, late in life Huxley began to have doubts about this.) Huxley was eager to distinguish himself from the certainties of the religious believer: When I … began to ask myself whether I was an atheist, a theist, or a pantheist; a materialist or an idealist; Christian or a freethinker … at last, I came to the conclusion that I had neither art nor part with any of these denominations, except the last. The one thing in which most of these good people were agreed was the one thing in which I differed from them.
They were quite sure they had attained a certain ‘gnosis,’ – had, more or less successfully, solved the problem of existence; while I was quite sure I had not, and had a pretty strong conviction that the problem was insoluble. So I took thought, and invented what I conceived to be the appropriate title of ‘agnostic’. Thus far, this is all very well. I too might describe myself as an agnostic, although I prefer ‘sceptic’. The word ‘agnostic’ suggests someone who is not especially bothered about the relationship between science and religion and who wants to get on with other things. I, on the other hand, am very interested, and sceptics today generally seem more like me. They believe these questions matter. In fact, I have great admiration for Thomas Henry Huxley. Frankly, when I first started on the history of science more than 40 years ago, I found him a bit too Victorian: smug and sanctimonious, always going on about integrity and that sort of thing. Over the years, I have come to appreciate the scale of his achievements and also the way that, unlike some, including Darwin, he was willing to go out and fight for what he thought right. When Governor Eyre of Jamaica hanged a half-caste troublemaker, although Huxley agreed that the man was probably a great nuisance, he argued that ‘English law does not permit good persons, as such, to strangle bad persons, as such.’ Leading the charge against Eyre, Huxley risked breaking long and deep old friendships. Despite suffering the most crushing of depressions, Huxley was never swayed from what he thought was morally proper. Yet even Huxley was looking for something to replace religion as a world-view. In certain important ways, he anticipated the quasi-religious behaviour and attitudes of the Humanist movement today. 
He didn’t think science was indifferent to religion: he thought it could compete with it: Extinguished theologians lie about the cradle of every science as the strangled snakes beside that of Hercules; and history records that whenever science and orthodoxy have been fairly opposed, the latter has been forced to retire from the lists, bleeding and crushed if not annihilated; scotched, if not slain. He published a collection of his essays under the title Lay Sermons. The popular press knew him as ‘Pope Huxley’. And he wouldn’t brook any opposition. The Catholic biologist St George Mivart, a former student of Huxley who wrote against Darwin, found this out in quick order. From being one of the chosen inner group, he was expelled into outer darkness. Before long, charges were floating that Mivart was scientifically and religiously undependable, and that he also exuded a whiff of moral unreliability. Differences about science weren’t just epistemological: they were ethical too. That is what I don’t like: Huxley made science into something that behaved like a religion. Why do I get upset by this? Firstly, because I didn’t give up one faith to take up another. There are many aspects of religion that I find really offensive, celibate old men in skirts telling young women how to run their private lives being one. Not all scientists are keen on authority; plenty would say that the best thing about science is that it is anti-authoritarian. Nonetheless, when scientists start talking about values, they often find it hard to resist the temptations of moralising and authoritarianism. Secondly, I am uneasy that Humanism puts human beings at the centre of things in a way that is reminiscent of religion, especially monotheistic traditions. Huxley’s world vision makes humans as central as Christianity does. This kind of self-importance has contributed to world pollution and appalling behaviour towards plants and animals. 
Thirdly, although science and religion can clash (you can’t believe in modern paleoanthropology and a literal Adam and Eve), I don’t think they are always in opposition. There are some meaningful questions that science simply does not address. ‘Why is there something rather than nothing?’ ‘Does life have a purpose?’ If religion wants to have a crack at answering these, then science cannot object. You might criticise the religious answers on theological or philosophical grounds, as I would, but not on scientific grounds. I don’t see Huxley or his intellectual descendants allowing this. Fourthly, and perhaps most importantly, rival religions tend to say awful things about each other, putting down the doctrines and the practitioners. Think of evangelicals on the subject of Mitt Romney’s Mormonism, or of Northern Irish Protestants on the subject of the Pope. I think my religious friends are mistaken, but I don’t think they are stupid or crazy or ill or evil simply because they are religious. Huxley often preached tolerance, but in practice he could not wait to go after religion and religious people in the most scornful of terms. In the end, I think it all boils down to what the religious call ‘enthusiasm’: being possessed of a divine inspiration or afflatus: literally en-theos. For all Huxley’s adherence to free thought and his rejection of divine authority, there was still the feeling that somehow one has the truth and that those who do not are lesser beings, perhaps even somewhat shifty or immoral. This glow of conviction is directly antithetical to humanism in the more generous sense, but it dogs ‘Humanism’. But I have been talking about the 19th century. We are now in the 21st. Humanism has proven a hardy plant, but how did we get from there to here?
In the middle of the 20th century, the world’s most ardent and prominent Humanist was none other than Thomas Henry Huxley’s oldest grandson, Julian, who among many other public roles was the first president of the British Humanist Association.

[Photo caption: Julian Huxley was the Richard Dawkins of his day: evolutionary biologist, wildly popular science writer and ardent humanist, here addressing the Zoological Society in London, 1942. Photo by Felix Man/Picture Post/Hulton Archive/Getty]

Far more explicitly than his grandfather, Julian Huxley saw Humanism (a word he did use) as an exact substitute for religion: a world-view based on evolutionary biology. ‘This new ideas-system, whose birth we of the mid-twentieth century are witnessing, I shall simply call Humanism, because it can only be based on our understanding of man and his environment. It must be organised around the facts and ideas of evolution, taking account of the discovery that man is part of a comprehensive evolutionary process, and cannot avoid playing a decisive role in it.’ In the spirit of his grandfather, he added: ‘it will have nothing to do with absolutes, including absolute truth, absolute morality, absolute perfection and absolute authority …’

There was a link in Julian Huxley’s mind to a kind of secular religion, ‘a conviction that religion of the highest and fullest character can coexist with a complete absence of belief in any straightforward sense of the word, and of the belief in that kernel of revealed religion, a personal god.’ For Huxley, science was the basis of a ‘religion without revelation’: What the sciences discover about the natural world and about the origins, nature and destiny of man is the truth for religion. There is no other kind of valid knowledge. This natural knowledge, organised and applied to human fulfilment, is the basis of the new and permanent religion. 
Julian Huxley was an idealist and a technocrat, believing that scientific and technical ingenuity would solve the social problems of his day — whether by massive hydroelectric schemes or population control. He was the first director of UNESCO and looked forward to ‘the emergence of a single world culture, with its own philosophy and background of ideas, and with its own broad purpose.’ This ‘single world culture’ was what he called ‘Evolutionary Humanism’: the ‘new and permanent religion’ of science and rational planning. Julian Huxley was a star public intellectual and a great populariser of evolutionary theory. You could say he was the Richard Dawkins of his time, and as with Dawkins, some of his fellow scientists were disturbed by his extension of Darwinism into an encompassing world view. Huxley saw evolution as a visionary, almost spiritual, ideal, a progressive force leading to the pinnacle of human morality. He ignored the warnings of David Hume about illicit shifts from matters of fact to matters of morality. Evolution pointed ever upward, according to Huxley, and so our moral obligation was to see that humans were promoted and their decay prevented. As the Christian implores you to love your neighbour as yourself, the Huxleyan Humanist asks you to facilitate the evolutionary process. Julian Huxley’s vision of an ascending human evolutionary path could be notably indifferent to individual human beings. Like many intellectuals of his generation, he had been an enthusiast for eugenics in his youth. Unusually, though, he did not abandon eugenical thinking in the wake of the Second World War. 
Indeed, his proposed world government would have had a mix of eugenics and population control at the core of its responsibilities: no other institution would have sufficient rational, scientific and moral authority to do so, as he wrote in UNESCO: Its Purpose and Philosophy: ‘Political unification in some sort of world government will be required … Even though … any radical eugenic policy will be for many years politically and psychologically impossible, it will be important for UNESCO to see that the eugenic problem is examined with the greatest care, and that the public mind is informed of the issues at stake so that much that now is unthinkable may at least become thinkable.’ The trouble is, there is no simple line from evolutionary biology to the ethical life, and there is no guarantee that an alternative secular religion will lead us there. Huxley’s vision of a rationalised world united by Evolutionary Humanism makes me uneasy. Apparently UNESCO agreed: the organisation booted him out of his job after only two years, a long way short of achieving a world government based on rational, scientific lines. In the second half of the 20th century, the outstanding Humanist in my sense has been my long-time friend Edward O Wilson, retired now from his post as professor of biology at Harvard but still going strong at 82 and always immersed in controversy. In his Pulitzer Prize-winning book On Human Nature, he declares explicitly that Darwinism is a new mythology replacing the old religious forms. The story is now a familiar one: … make no mistake about the power of scientific materialism. It presents the human mind with an alternative mythology that until now has always, point for point in zones of conflict, defeated traditional religion. 
Its narrative form is the epic: the evolution of the universe from the big bang of 15 billion years ago through the origin of the elements and celestial bodies to the beginnings of life on earth… Every part of existence is considered to be obedient to physical laws requiring no external control. The scientist’s devotion to parsimony in explanation excludes the divine spirit and other extraneous agents. Like Julian Huxley and the older evolutionists, Wilson is an ardent progressionist and believes that values emerge from the evolutionary process. Exactly which values emerge from evolutionary science is another matter. One thing that always strikes me when looking at the history of religion is how the moral imperatives of religious world views mold and change through the ages as culture shifts. A hundred years ago, almost all Christians believed that homosexual relations were a terrible sin, worse even than theft and murder. Now, thanks to a deeper understanding of sexual orientation, many Christians (not all) find that homosexuality barely merits moral attention. The same sorts of shifts can be seen in Humanism, as it reflects the concerns and beliefs of the day. Julian Huxley was mightily impressed by large government works and big science, things that kick-started economies in the 1930s: he wrote a whole book about the Tennessee Valley Authority. Edward O Wilson is concerned with ecology and biodiversity. He argues that in a world of plastic we would perish and that we need nature to survive, physically and psychically. Thus follow the moral imperatives that he derives from an evolutionary and scientific world-view. By temperament, Wilson is a deeply religious man. This goes back to his Baptist childhood in the American South. He describes his discovery of evolutionary biology as a conversion experience. His faith did not fall away: it changed horses. 
Despite a strategic alliance with religious leaders in the environmental cause, he can be scathing about religious beliefs. Nonetheless, he sees religion as fulfilling deep human needs. In that sense it needs to be replaced by something like it. If monotheistic religion is a tribal cultural construct, he argues, then ‘religious faith is better interpreted as an unseen trap unavoidable during the biological history of our species. And if this is correct, surely there are ways to find spiritual fulfilment without surrender and enslavement. Humankind deserves better.’

[Photo caption: His faith changed horses: Edward O Wilson, speaking in New York in 2012, was raised a Baptist and has never shaken the power of spiritual conviction. Photo by Cindy Ord/Getty]

I love and respect Ed Wilson, but I cannot follow him this far. ‘Spiritual fulfilment’! It is not just a matter of disagreement. It is, once again, the enthusiasm that halts me. You may say that I am a lesser being. There is some part of me that is dead or missing. I am sure this would be the response of many Christians, particularly those who live in the American South, as I do. This is as it is. I think there are good reasons to be unsettled by enthusiasm, however. We saw in the case of Thomas Henry Huxley how quickly differences about facts and theories slide from the epistemological to the ethical. If you don’t agree with me, then you are not just intellectually suspect but morally questionable too. ‘It is absolutely safe to say,’ as another prominent Humanist has written (I’ll tell you later who), ‘that if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid or insane (or wicked, but I’d rather not consider that).’ Wilson is an old man now, past his sell-by date. At least he is if you read the rude things now said about him, especially regarding his fondness for group selection over individual selection. 
This is all part of his world picture that sees nature as essentially harmonious, in opposition to the mainstream Darwinian picture of ‘nature red in tooth and claw.’ More than 150 scientists signed letters to Nature criticising an article Wilson had co-authored in that publication in 2010, and his most recent book has been described by Dawkins as ‘erroneous and downright perverse.’ What, then, about the new generation of Humanists? What about the New Atheists, who have had so much to say on science and religion of late? The bible of the movement, Richard Dawkins’s The God Delusion, defines itself not so much in its aggressive statement of non-belief, although that is certainly there, but in putting science in opposition to religion and replacing it as the basis of a world view. This is not new. Dawkins has been arguing for a long time, as have many evolutionists, that reconciling a Darwinian natural world with Christian belief is impossible. In the 1990s, Dawkins was fond of quoting Darwin on the difficulty of squaring the suffering and pain associated with natural selection with the idea of a good God. Take the predator-prey relationship: cheetahs seem wonderfully designed to kill antelopes — ‘what we should expect if God’s purpose in designing cheetahs was to maximise deaths among antelopes’ — yet ‘we find equally impressive evidence of design for precisely the opposite end: the survival of antelopes and starvation among cheetahs.’ What kind of God is this? ‘Is He a sadist who enjoys spectator blood sports? Is He trying to avoid overpopulation in the mammals of Africa? Is He manoeuvring to maximise David Attenborough’s television ratings?’ The whole thing is ludicrous to Dawkins: there are no ultimate purposes to life, no deep religious meanings. 
‘The universe we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil and no good, nothing but blind, pitiless indifference.’ This view that science has an exclusive claim on truth has been taken to its logical conclusion by a recent book, The Atheist’s Guide to Reality: Enjoying Life without Illusions, by the philosopher Alex Rosenberg. Embracing the pejorative term ‘scientism’, which at the least has major overlaps with what I am calling ‘Humanism’, he argues that once science is finished, no questions remain. If the Big Bang or something like that cannot explain the meaning of existence, then there is no genuine question at stake. The same is true of morality, meaning, consciousness and everything else that religion and philosophy have claimed as their own. If, as Julian Huxley once claimed, there is ‘no other valid kind of knowledge’ outside science, it is a short step to argue, as he did, that we should invent a new morality based on science. And sure enough, the neuroscientist Sam Harris has started to argue that morality needs no foundation outside science and can be derived from the natural state of affairs, in particular an evolutionary understanding of human beings. In his book The Moral Landscape: How Science Can Determine Human Values, Harris writes that: Values reduce to facts about the well-being of conscious creatures … If our well-being depends upon the interaction between events in our brains and events in the world, and there are better and worse ways to secure it, then some cultures will tend to produce lives that are more worth living than others; some political persuasions will be more enlightened than others; and some world views will be mistaken in ways that cause needless human misery. Like Julian Huxley before him, Sam Harris isn’t arguing that he is entitled to his private beliefs, but that science dictates what all right-thinking people must now believe. 
The New Atheists believe that science replaces the claims about the world that religion makes — and therefore makes religion redundant. Some of them think that a whole new moral system should be based on science. That’s sounding more and more like religion itself to me. But the other unsettling way in which Humanism imitates religion — and perhaps the most notable one in the case of the New Atheists — is its claim that people who do not share its beliefs are not only mistaken but also deluded and perhaps even evil. The line I quoted above about opposition to evolution being a sign of insanity and possibly wickedness comes, of course, from Richard Dawkins. Is this enough to say definitively that the New Atheists are making a religion from their position, that they are Humanists in the strongest sense? Dawkins has protested vehemently that his position is nothing like a religious one. Accepting the 1996 Humanist of the Year award, he stated: It is fashionable to wax apocalyptic about the threat to humanity posed by the AIDS virus, mad cow disease, and many others, but I think a case can be made that faith is one of the world’s great evils, comparable to the smallpox virus but harder to eradicate … Well, science is not religion and it doesn’t just come down to faith. Although it has many of religion’s virtues, it has none of its vices. Science is based upon verifiable evidence. Is Dawkins right about science? After David Hume, I think most of us would agree that science does involve an element of trust, of belief in something that can never have absolute proof. The very laws of nature cannot be shown to be mathematically or logically necessary. We could be like the turkey, fed every morning and, as the farmer approaches the coop, happily expecting breakfast on the morning of December 24. The sun does not have to rise tomorrow just because it has risen every day before. 
But I think we can say that the pragmatic justification we give to the laws of science — they ain’t broke, so don’t fix them — is one that can be shared by all reasonable people in a way that seems barred to the many different and conflicting claims made in the name of faith. Are the Mormons right in thinking that Joseph Smith was given special revelations or are the Evangelicals right in thinking this deeply heretical? There is no way to settle questions of faith in the pragmatic style we take with scientific matters. Let us agree that science itself is not a religion. But Humanism is a different matter, and in its most virulent form, it does try to make science into a religion. And despite the protestations of Dawkins and his fellows, the behaviour of the Humanists does not exhibit the kind of openness to evidence and adaptability that we’d expect from a rational, non-religious mindset. On the contrary, it is awash with the intolerance of enthusiasm. For a start, there is the nigh-hysterical repudiation of religion. As with religions themselves, the implication is that those who fail to follow the New Atheist line are not just wrong, but morally challenged. Dawkins again: I think there’s something very evil about faith … it justifies essentially anything. If you’re taught in your holy book or by your priest that blasphemers should die or apostates should die — anybody who once believed in the religion and no longer does needs to be killed — that clearly is evil. And people don’t have to justify it because it’s their faith. In the caricaturing of ‘faith’ as murderous fundamentalism, one hears echoes of the bloody and interminable Reformation squabbles between Protestants and Catholics. One also sees contempt for fellow human beings, many of whom are educated, thinking members of society. One may question whether the present Archbishop of Canterbury was the man for the job, torn as his Church is over questions about women and homosexuality. 
One cannot doubt, however, his integrity or intelligence or Christian concern. To belittle a man such as this is to shine a light back on your own intolerance and failure to understand your species mates. It is also, of course, to help the real enemy, those who turn their backs fully on science as they follow their religion. Instead of making allies of those believers who hate intolerance as much as you do, everyone is at war and no proper defence is mounted against the really dangerous, the genuinely fanatical and fundamentalist. There are other aspects of the New Atheist movement that remind me of religion. One is the adulation by supporters and enthusiasts for the leaders of the movement. It is not just a matter of agreement or respect, but of a kind of worship. This certainly surrounds Dawkins, who is admittedly charismatic. Freud describes a phenomenon that he calls ‘the narcissism of small differences’, in which groups feud over distinctions that, to the outsider, seem totally trivial. It is highly characteristic of religions: think of the squabbles about the meaning of the Eucharist, for instance, or the ways in which Presbyterians tear each other apart over the true meaning of predestination. For those not involved in the fights, the issues seem virtually nonsensical, and certainly a waste of energies that should be spent on fighting common foes. But not for those within the combat zone. The New Atheists show this phenomenon more than any group I have ever before encountered. This is a personal matter, so let me stress at once I am not writing this from a sense of exclusion or hurt or whatever. I am happy with my position and I love a good fight. Dawkins has said that on a scale from 0 to 7, from belief to non-belief, he scores about 6.9. I place myself even higher than that. I am a true non-believer. 
I am also a fanatical Darwinian — more so even than Dawkins because I think that, when it comes to culture, genes do much that he hands over to his own special cultural notion of ‘memes’. I have written many books about the implications of Darwinian thinking for epistemology and ethics. What’s more, I think that religion has done and continues to do much harm to society. In the blog I write for the Chronicle of Higher Education I have taken on the Catholics, the Calvinists, the Mormons, and even the Quakers (perhaps a bit Oedipal, because I was raised a Quaker). Some years back, I was the expert witness in philosophy in Arkansas when the American Civil Liberties Union successfully fought against a law requiring the teaching of so-called ‘creation science’ (in other words biblical literalism) in the publicly supported schools of that state. I have been a vocal opponent of Creationism for many years. I have paid my dues. And yet I, and others of my ilk, are reviled in terms far harsher than those reserved for the real opponents like the Creationists. We are labelled ‘accommodationists’ for our willingness to give religion a space not occupied by science. We are put down in terms that denote powerful emotion, way beyond reason. In The God Delusion by Richard Dawkins, I am likened to Neville Chamberlain, the pusillanimous appeaser of Hitler. Jerry Coyne, the author of both the book and the blog Why Evolution is True and an ardent fan of Dawkins and Christopher Hitchens, wrote about one of my books in terms used by George Orwell: ‘There are some ideas so absurd that only an intellectual could believe them.’ The Minnesota biologist PZ Myers, who writes the blog Pharyngula, has referred to me as a ‘clueless gobshite’. And if I had a dollar for everyone who has made a pun out of my last name, I would be a very rich man. 
Because I will not toe the line absolutely or bow down in praise of Dawkins and company, because I laugh at their pretensions and positions, I am anathema maranatha. As I said, I don’t care about the personal attacks. Indeed, I have the kind of personality that welcomes being in the public eye, even if the attention is critical. I have teased Jerry Coyne (something he does not entirely appreciate) and sent him $50 (something he did appreciate) as a retainer to make sure I am not forgotten. But I do think it all tells us something. Call it a secular religion if you will, or call it something else entirely. The Humanism I have been discussing in this piece does bear strong similarities to conventional religion. One finds the enthusiasm of the true believer, and this encourages a set of unnerving attributes: intolerance, hero-worship, moral certainty and the self-righteous condemnation of unbelievers. As an atheist Darwinian evolutionist, as one who is a humanist in the broader sense, this makes me feel really ill.
Michael Ruse
https://aeon.co//essays/how-humanism-lost-its-way-in-a-charismatic-crusade
Consciousness and altered states
Ineffable encounters and moments of ego-transcendence can be quite matter-of-fact. What’s really going on?
There are two kinds of experience, both of which have happened to me several times, and I can’t explain either of them. In fact, I could make up any number of explanations for them on the spot: they may be mysterious but they’re not mystical, and they don’t make me suspect for a moment that anything inexplicable is going on. But I’ve never actually come across an explanation of them, or even an account by someone else of having had them. I’ve described them as best I could as minor incidents in novels, and I’ve hardly ever heard anyone say they knew just what I was talking about. The first kind dates from my late childhood and early teens. It hasn’t happened since. Until the age of 10 I lived on the Isle of Lewis and that’s where it first took place. I may have been eight or nine at the time. On hot, sunny afternoons — which were rare — I would go exploring up a narrow glen near our house. Its sides were rocky and steep: two cliffs, face to face. On its floor a single-track road ran alongside a small river. I’d do daft, dangerous things like walking along a water pipe that crossed the burn beside an ancient stone bridge, or clambering from boulder to boulder. And now and again, I climbed up the side of the glen to sit on the lip of a rock step near the top, commanding the roads with an imaginary machine-gun. On at least one, maybe more, of these adventures I became intensely aware of something that rang from the silence, sunlight, solitude, and rock. I can only describe it as a sense of some enormous presence. It was everywhere, like the shimmer of the heat in the air. Maybe I was frightened at first but that passed, and it became something that was just there, like the light. Not surprisingly for a son of the manse, I had not even the most childish spirituality. 
I believed what I was told, but as far as I was concerned it was all facts about some reality of which I had no personal experience, like Australia. It just didn’t occur to me to attribute this feeling of presence to God, or to any other supernatural agency. Nor, as far as I can recall, did it occur to me when the experience recurred a few years later. One sunny afternoon in Lochcarron I was on my own, exploring the banks of a river that ran along a broad, deep gully. I wasn’t far from human habitation but I don’t remember any sound except the river on the stones, dripping moss and humming insects. The sun was high in the west, brightly lighting one side of the gully. I was on the other side, in shade but nothing like darkness. There was nothing spooky or scary about my surroundings, nothing dangerous about my situation. Out of nowhere, the feeling of presence came back, ringing from the rocks. The second kind of experience was quite different. Again, I remember exactly where I was when it first happened. Around about the age of 16 my adolescent introspective tendencies were made worse by the books of Colin Wilson (or rather, that small fraction of his work which had been written by the early 1970s). The one I’d read most recently was Poetry and Mysticism (1969). Like all his books, whatever their ostensible subject, it contains a complete exposition of his views. Unlike most of them, it is short. The effect it had on me was to make me strive to be intensely aware of objects in my surroundings. So far, so good — I’ll never forget that teapot in the sunlight from the kitchen window. And the Buddhist injunction repeated by the trained crows in Aldous Huxley’s utopian novel Island, also recently read, was seldom far from my mind: ‘Attention! Attention!’ One fine morning, I was walking along a long wide street with a high wall on my left. As usual I was preoccupied with my thoughts. 
The featurelessness of the wall and street, and the long perspective-lines, may have helped to induce what happened. Out of nowhere, from one step to the next, I was overcome by an astonishment at being me. It was like a second iteration of self-awareness, combined with an odd detachment, as if my mind had stepped back from my personality and wondered how it could possibly be that. Atomic Discourse Gale, the narrator of my science fiction novel Learning the World, describes the experience much better than I can:

I became very much aware of being me, and it felt strange. It was as if a wider, cooler mind had found itself in my head, and was surprised to be there behind my eyes. And yet that larger mind was mine. Very odd. It passed in a few moments, leaving me a little shaken, curious, and quite unable to recapture it. I have never found a name for this experience, and though I’ve had it several times since, I can neither induce it at will nor prevent its recurrence.

That’s all true of me. What I’d like to find, by writing this, is whether anyone else knows what I’m talking about. Gale continues:

When I tell people about it they either look blank or say: ‘Oh! You mean you have that too?’ But it isn’t a bond between us, not a secret, just a peculiarity, an anomaly, perhaps as random a feature of our minds as the ability to roll one’s tongue is of our bodies. It solves no problem, conveys no insight, and yet leaves me with an impression of significance. It has an aftertaste, but no taste. That impression, that aftertaste, may be its empty secret: it may be a tiny glitch in the process by which our brains find meaning in sense.

But is she right? I’d like to know.
Ken MacLeod
https://aeon.co//essays/an-enormous-presence-like-the-shimmer-of-heat-in-air
Automation and robotics
The 15-hour working week predicted by Keynes may soon be within our grasp – but are we ready for freedom from toil?
I first became an economist in the early 1970s, at a time when revolutionary change still seemed like an imminent possibility and when utopian ideas were everywhere, exemplified by the Situationist slogan of 1968: ‘Be realistic. Demand the impossible.’ Preferring to think in terms of the possible, I was much influenced by an essay called ‘Economic Possibilities for our Grandchildren,’ written in 1930 by John Maynard Keynes, the great economist whose ideas still dominated economic policymaking at the time. Like the rest of Keynes’s work, the essay ceased to be discussed very much during the decades of free-market liberalism that led up to the global financial crisis of 2007 and the ensuing depression, through which most of the developed world is still struggling. And, also like the rest of Keynes’s work, this essay has enjoyed a revival of interest in recent years, promoted most notably by the Keynes biographer Robert Skidelsky and his son Edward. The Skidelskys have revived Keynes’s case for leisure, in the sense of time free to use as we please, as opposed to idleness. As they point out, their argument draws on a tradition that goes back to the ancients. But Keynes offered something quite new: the idea that leisure could be an option for all, not merely for an aristocratic minority. Writing at a time of deep economic depression, Keynes argued that technological progress offered the path to a bright future. In the long run, he said, humanity could solve the economic problem of scarcity and do away with the need to work in order to live. That in turn implied that we would be free to discard ‘all kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital’. Keynes was drawing on a long tradition but offering a new twist. 
The idea of a utopian golden age in which abundance replaces scarcity and the world is no longer ruled by money has always been with us. What was new in Keynes was the idea that technological progress might make utopia a reality rather than merely a vision. Traditionally, the golden age was located in the past. In the Christian world, it was the Garden of Eden before the Fall, when Adam was cursed to earn his bread with the sweat of his brow, and Eve to bring forth her children in sorrow. The absence of any discussion of the feasibility of an actual golden age was unsurprising. As Keynes observed in his essay, ‘From the earliest times of which we have record — back, say, to 2,000 years before Christ — down to the beginning of the 18th century, there was no very great change in the standard of life of the average man living in the civilised centres of the earth’. The vast majority of people lived lives of hard labour on the edge of subsistence, and had always done so. No feasible political change seemed likely to alter this reality. It was only with the Industrial Revolution, and the Enlightenment that preceded it, that the idea of a future golden age, realised as a result of human action, began to seem possible. By the end of the 18th century incomes had risen to the point where radical thinkers such as William Godwin could propose that, with a just distribution of wealth, everyone could live well. Such dangerous speculation led to the first and still the most notable defence of the inevitability of scarcity, Malthus’s ‘Essay on the Principle of Population’, written specifically to refute Godwin. Malthus argued that, even if a technological innovation or redistribution of wealth could improve the living standards of the masses, the result would simply be to allow more children to survive. 
Inevitably, the exponential growth of population would outstrip linear growth in the means of subsistence. In a short time, the poor would be poor once again. In the initial presentation of his argument, Malthus admitted only two checks on population — misery and vice. Misery meant poverty and hunger. Vice meant contraception, to which Malthus, unlike his neo-Malthusian successors, was resolutely opposed. Although he later admitted the third option of ‘moral restraint’ (that is, sexual abstinence), he was comfortably assured that this would never be sufficient to undermine his argument. Thus he concluded that the maintenance of a small upper class (clergymen, for example), with leisure to preserve, extend and transmit culture, was the best that humanity could hope for.

[Photo caption: Linear growth? Fruit processing in Hawaii, 1960s. Factories drove up both working hours and living standards. Photo by Bates Littlehales/National Geographic/Getty]

The conditions of the early 19th century seemed to support Malthus’s case. The Industrial Revolution had produced an intensification of work that was almost unparalleled in human history. Driven off the land by enclosure acts and population growth, former peasants and agricultural labourers became the first industrial proletariat. The factories in which they worked rapidly drove old trades and cottage industries like that of the handloom weavers into destitution and then into oblivion. Unconstrained by seasons or by the length of the day, working hours reached an all-time peak, with the number of hours worked estimated at over 3,200 per year — a working week of more than 60 hours, with no holidays or time off. There were small increases in material consumption, but not nearly enough to offset the growth in the duration and intensity of work. Most economists of Malthus’s time agreed with him. All the standard models ended in a steady state, with the majority of the population at subsistence. 
The only important exception was Karl Marx, for whom the process of immiseration ended, not with a subsistence-level steady state, but with crisis and revolution. By the late 19th century, things had changed. On the one hand, Malthus’s predictions were being falsified in practice. A growing middle class was enjoying improved living standards as a result of technological progress. And, whether through moral restraint or contraception, they were having smaller families. The relatively novel idea of progress — that the natural tendency of human affairs was to get better rather than worse — rapidly became part of ‘common sense’. The working class had more compelling reasons to hope for better things. Over decades of struggle, workers clawed back the ground they had lost and then some. The Factory Acts outlawed child labour in Britain, and by 1870 all children in England and Wales were entitled to at least an elementary education. The hours of work were limited by legislation and union action. The eight-hour day, a norm that is still under challenge 150 years later, was first achieved by Melbourne stonemasons in 1855, though it was not established more generally, even in Australia, until the early 20th century. The weekend, making Saturday as well as Sunday a day of leisure, came even later, around the middle of the 20th century in most developed countries. The idea that a combination of technological progress and political reform could produce a genuine utopia became an appealing alternative to the ‘pie in the sky’ of an afterlife. Edward Bellamy’s Looking Backward (1888), a critique of 19th century capitalism written from the imagined perspective of the year 2000, was the archetypal example of this literature. Oscar Wilde’s ‘The Soul of Man under Socialism’ (1891) was perhaps the most appealing. Even Marx, sternest critic of the old utopians, had his moments, most notably in The German Ideology (1846). 
There, he and Engels looked forward to a society in which labour did not depend on the lash of monetary incentives: ‘For as soon as the distribution of labour comes into being, each man has a particular, exclusive sphere of activity, which is forced upon him and from which he cannot escape. He is a hunter, a fisherman, a herdsman, or a critical critic, and must remain so if he does not want to lose his means of livelihood; while in communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.’ None of these writers, however, had a theory of economic growth. Neither was one to be found in the literature of classical economics. Keynes’s discussion of economic possibilities was one of the first to spell out the argument that improvements in living standards, based on a combination of technological progress and capital accumulation, might be expected to continue indefinitely. He argued that technological progress at a rate of two per cent per year would be sufficient to multiply our productive capacity nearly eightfold in the space of a century. Allowing for a doubling of output per person, that would be consistent with a reduction of working hours to 15 hours a week or even less. This, Keynes thought, would be sufficient to satisfy the ‘old Adam’ in us who needs work in order to be contented. Keynes himself had no grandchildren, but he was a contemporary of my own grandparents. It seemed to me when I first read his essay that there was a good chance that his vision might be realised in my lifetime.
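Keynes’s arithmetic is easy to verify. A back-of-the-envelope sketch (my own check, not the author’s, assuming steady compound growth and a 60-hour week as the starting point):

```python
# Keynes's compounding claim: 2 per cent annual technological progress,
# compounded over a century, multiplies productive capacity sevenfold or more.
capacity_multiple = 1.02 ** 100
print(round(capacity_multiple, 2))  # ~7.24, i.e. 'nearly eightfold'

# If consumption per person merely doubles while capacity rises roughly
# eightfold, hours of work can fall by a factor of about four:
hours_per_week = 60 * 2 / 8
print(hours_per_week)  # 15.0 -- Keynes's 15-hour week
```

The same logic underlies every figure in the essay: growth rates compound, so modest annual percentages accumulate into large multiples over a lifetime.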
The social democratic welfare state, supported by Keynesian macroeconomic management, had already smoothed many of the sharp edges of economic life. The ever-present threat that we might be reduced to poverty by unemployment, illness or old age had disappeared from the lives of most people in developed countries. It wasn’t even a memory for the young. There was, it seemed, every reason to expect further progress towards Keynes’s vision. Working hours were decreasing. A comfortable retirement at or before 65 had become a normal expectation. The idea of a lengthy and fairly leisurely university education was increasingly accepted, even if access to higher education was far from universal. More generally, in a labour market where the number of vacancies routinely exceeded the number of jobseekers, responding to economic ‘rewards and penalties’ seemed much less urgent. If one job was unsatisfying or boring, it was a simple matter to quit, take some time off and then find another. In these favourable conditions, anti-materialist attitudes that had been confined to a Bloomsbury elite in Keynes’s day became widespread, particularly among the young. The enthusiastic consumerism of the 1950s was repudiated in varying degrees by nearly everyone, a trend exemplified by the adoption of blue jeans, previously the cheap and durable everyday wear of unskilled workers. The idea of ‘the environment’ as a problem of more general concern than specific local issues such as air pollution and the preservation of national parks was also a product of the ’60s, book-ended by Rachel Carson’s Silent Spring (1962) and the first Earth Day in 1970. The idea that we could continue on a path of ever-growing material consumption appeared to be not merely unsatisfying but a recipe for ultimate catastrophe. So on a first reading, ‘Economic Possibilities for our Grandchildren’ seemed prophetic. 
Yet, 40 or so years later, I am a grandparent myself, the year 2030 is rapidly approaching, and Keynes’s vision seems further from reality than ever. At least in the English-speaking world, the seemingly inevitable progress towards shorter working hours has halted. For many workers it has gone into reverse. The situation in Europe was, until recently, very different. Germany’s work hours declined from 2,387 hours annually in 1950 to 1,408 in 2010. France’s declined from 2,241 hours annually in 1950 to 1,552 in 2010. Yet even here, and even before the advent of austerity, there were signs of a turnaround. The loi Aubry, the law which reduced the normal French working week to 35 hours, has been repeatedly weakened. Work-sharing in Germany was highly successful in reducing the impact of the global financial crisis, but that does not seem to have had much effect on German judgments about the desirability of more and harder work for other countries.
[Image caption: Have allowances of free time peaked? A worker at the IRS center in Ogden, USA, 1980s. Photo by Roger Ressmeyer/Corbis]
Moreover, far from fading into irrelevance, the struggle to accumulate capital and maintain or increase consumption is more intense than ever. Instead of contracting, the values of the market have penetrated ever further into every aspect of our lives. During the decades leading up to the global financial crisis, the scope and scale of speculative markets grew beyond any conceivable bound. Avarice and usury, as Keynes called them, are worshipped on an unimaginable scale. Financial instruments with notional values in the trillions were routinely traded, creating immense wealth for some (mostly participants in the trade) while bringing ruin and destitution to others (mostly far removed from the scene of the action). Particularly during the ’90s, it seemed that this wealth was there to be taken by anyone willing to focus their thoughts on financial enrichment at the expense of any broader goals in life.
Now that the bubble has burst, the burden of unsustainable debt left behind for both households and governments has ensured that the gods of the marketplace maintain their pre-eminence, even if their worship is much less enthusiastic than before. How did this reversal come about, and is there any possibility that Keynes’s vision will be realised? The first of these questions is easily answered. The economic turmoil of the ’70s put an end to the utopianism of the ’60s, and resulted in the resurgence of a hard-edged version of capitalism, variously referred to as neoliberalism, Thatcherism and the Washington Consensus. I have used the more neutral term ‘market liberalism’ to describe this set of ideas. The central theoretical tenet of market liberalism is the efficient (financial) markets hypothesis. In the strong form that is most relevant to policy decisions, the hypothesis states that the prices determined in markets for financial assets such as shares, bonds and their various derivatives are the best possible estimates of the value of those assets. In the core ideology of market liberalism, the efficient markets hypothesis is combined with the claim that the best way to achieve prosperity for all is to let the rich get richer. This claim is rarely spelt out explicitly by its advocates, so it is best known by its derisive label, the ‘trickle down’ hypothesis. Taken together, the efficient markets hypothesis and the trickle down hypothesis lead us in the opposite direction to the one envisaged by Keynes. If these hypotheses are true, the mega-fortunes piled up in speculative financial markets are not merely justified: they are essential to achieve and maintain decent living standards for the rest of us.
The investments that generate technological progress will, on this view, only be made if they are guided by financial markets driven by the desire to make unimaginable fortunes. As long as market liberalism rules, there is no reason to expect progress towards a less money-driven society. The global financial crisis and the subsequent long recession have fatally discredited its ideas. Nevertheless, the reflexes and assumptions developed under market liberalism continue to dominate the thinking of politicians and opinion leaders. In my book, Zombie Economics (2010), I describe how these dead, or rather undead, ideas have risen from their graves to do yet more damage. In particular, after a resurgence of interest in Keynes’s macroeconomic theory, the entrenched interests and ideas of the era of market liberalism have regained control, pushing disastrous policies of ‘austerity’ and yet more structural ‘reform’ on free-market lines. Social democratic parties have failed to put up any serious resistance so far. Popular anger at the crisis has been channelled into right-wing tribalist movements such as the Tea Party in the US and Golden Dawn in Greece. This experience makes it clear that, if Keynesian social democracy is to regain the dominant position it held from the end of Keynes’s own lifetime until the ’70s, it must offer more than a technocratic lever to stabilise the economy. We need a vision of a genuinely better society. For this reason, the time is right to re-examine Keynes’s vision of a future where economic scarcity, real or perceived, no longer dominates life as it does today. To begin with, it is important to consider the limitations of Keynes’s thinking. First, Keynes considered only the developed world, implicitly assuming that the colonialist world order could be sustained indefinitely. 
Judging from his other writing, including his early work on the Indian economy, Keynes envisaged a gradual increase in living standards, under colonial tutelage, for the poor countries. The idea that a post-scarcity society in Europe and its settler offshoots could coexist with mass poverty elsewhere seems incongruous now, but in 1930, the European empires seemed destined to endure for a long time. The Indian National Congress had declared its goal of independence only the previous year, and the Statute of Westminster, establishing the legislative independence of the settler dominions, was a year in the future. Once we try to apply Keynes’s reasoning to the world as a whole, it’s clear that the end of scarcity is further away than he supposed. How much further? To be more precise, how much technological progress would be needed for everyone to enjoy the average standard of living of Britain in 1930 (when Keynes was writing) by working only 15 hours a week? By 1990, 60 years after Keynes’s essay, average income for the world as a whole had just reached Britain’s level in 1930. So, it seems we need to add another 60 years, or two generations, to his timescale. On the other hand, because developing countries are mostly adopting existing technology, the average world growth rate of income per person is around three per cent, not the two per cent proposed by Keynes. In that case, an eightfold increase would take only 70 years. So, taking the entire world into account only defers the estimated end of scarcity by 30 years, to 2060 — within the expected lifetime of my children. The problem of distribution, sharp enough in the Britain of the ’30s, is far worse for the world as a whole. A billion or so people live in destitution, and billions more are poor by any reasonable standard. Nevertheless, for the first time in history, our productive capacity is such that no one need be poor.
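The world-scale timetable follows from the same compound-interest logic. A sketch using the figures quoted in the text (my arithmetic, not the author’s code):

```python
import math

# Years for an eightfold increase in income at 3 per cent annual growth
# (faster than Keynes's 2 per cent, because developing countries can
# adopt existing technology rather than invent it):
years_to_eightfold = math.log(8) / math.log(1.03)
print(round(years_to_eightfold))  # 70

# World average income reached Britain's 1930 level around 1990, so the
# estimated end of scarcity shifts from Keynes's 2030 to roughly:
print(1990 + round(years_to_eightfold))  # 2060
```

Note how the faster growth rate matters: at Keynes’s two per cent the same eightfold increase would take about 105 years rather than 70.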
In fact, more people are rich, by any reasonable historical standard, than are poor. Even more strikingly, perhaps, more people are obese than are undernourished. And this is true not merely in terms of basic nutrition. Right now, the world produces enough meat to give everyone a diet comparable to the average Japanese person’s. This amount could be increased by replacing grain-fed beef with chicken and pork, a step that would also reduce carbon emissions. With another 50 years of technological progress and even a modest effort to aid the poorest onto the path of rapid growth already being followed by most of Asia, poverty could be eliminated. The vast majority of the world’s population could enjoy a living standard comparable, in material terms, to that of the global middle class of today. A second problem to which Keynes pays only passing attention is that of housework. As a male academic born into a household staffed with domestic servants, he almost certainly did none himself. His discussion reflects this. Looking forward to the problems that might arise in a society with unaccustomed leisure, Keynes mentions ‘the wives of the well-to-do classes’ who ‘cannot find it sufficiently amusing, when deprived of the spur of economic necessity, to cook and clean and mend, yet are quite unable to find anything more amusing’. These traditional tasks had not, of course, been eliminated by technological progress. Rather, they had been contracted out to others, typified by the charwoman in a song quoted by Keynes, whose hope for paradise was to do nothing for all eternity. Some housework is enjoyable and fulfilling but much of it is drudgery. A central requirement for a post-scarcity society is that no one should have to spend a lot of time on the latter. The household appliances that first came into widespread use in the ’50s (washing machines, vacuum cleaners, dishwashers and so on) eliminated a huge amount of housework, much of it pure drudgery.
By contrast, technological progress for the next 40 years or so was limited. Arguably, the only significant innovation in this period was the microwave oven. As a result, housework alone takes up all of Keynes’s proposed 15 hours a week. Time-use surveys suggest that the average woman in the UK spends around three hours a day on household work (excluding childcare, of which more later) and the average man spends about two hours. Both of these numbers have declined over time, but only slowly. Market alternatives to most kinds of housework are available. Cooking can be replaced by eating out, washing and ironing can be sent out to a laundry, and (low-paid) workers can be hired to clean houses. Obviously, while people are being paid to do the housework of others, we are a long distance from Keynes’s post-scarcity world. A little less obviously, such a situation demands more time spent in paid work from those who want the money to buy market alternatives. Still, the time spent on housework has been falling, and there are good reasons to think that it can fall further, to the point where most housework is done by choice rather than necessity. The rise of the internet and the advent of mobile telephony have drastically simplified a wide range of household chores, from banking and bill-paying to dealing with tradespeople. At the same time, the online world is changing shopping from a necessity to an optional extra, pursued only by those who enjoy it. It allows the requirements for a decent life to be met without any significant interaction with the culture of consumption, exemplified by the shopping mall. An even more important omission in Keynes’s essay is the effort involved in raising children. Childless himself, Keynes came from a social class in which child rearing was contracted out, to an extent unparalleled before or since.
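The time-use figures above already exceed Keynes’s budget on their own. A quick check (my arithmetic, using the survey averages named in the text):

```python
# Weekly housework implied by the UK time-use figures quoted in the text
# (my arithmetic; childcare excluded, as in the surveys cited):
daily_hours = {'women': 3, 'men': 2}
weekly_hours = {who: hours * 7 for who, hours in daily_hours.items()}
print(weekly_hours)  # {'women': 21, 'men': 14}

# Averaged across adults, housework alone exceeds Keynes's 15-hour week:
average = sum(weekly_hours.values()) / len(weekly_hours)
print(average)  # 17.5
```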
Babies were handed to wet-nurses, cared for by nannies and governesses and then, from the age of eight or even younger, packed off to boarding schools. From the perspective of today’s parents, such a world is hard to imagine. Even if the need for market work were to disappear altogether, parents of young children would not have much time to worry about the need to fill their leisure hours. But far from weakening Keynes’s case against a money-driven society, the problems of caring for children illustrate the way in which our current economic order fails to deliver a good life, even for the groups who are doing relatively well in economic terms. The workplace structures that define a successful career today require the most labour from ‘prime-age’ workers aged between 25 and 50, the stage when the demands of caring for children are greatest.
[Image caption: For the first time in history the world produces enough food so that none need go hungry: yet we are far from solving the problem of fair distribution. Hot dogs on Puget Sound, 1960s. Photo by Merle Severy/National Geographic/Getty]
Work is distributed unequally, and perversely, in other dimensions as well. And yet, in the English-speaking countries at least, the long-run decline in working hours has not meant more leisure so much as more time spent in retirement, in unemployment, or otherwise involuntarily excluded from the labour force. The result has been an inequality of leisure, the counterpart to the growing inequality of income. Particularly in the US, families are becoming polarised. On the one hand there is the two-income class of economically successful couple households in which both partners work full-time or more. On the other is the zero-income class, with one or two adults dependent either on welfare benefits or else on intermittent and insecure low-wage employment. If work was distributed more equally, both between households and over time, we could all be better off.
But it seems impossible to achieve this without a substantial reduction in the centrality of market work to the achievement of a good life, and without a substantial reduction in the total hours of work. The first step would be to go back to the social democratic agenda associated with postwar Keynesianism. Although that agenda has largely been on hold during the decades of market-liberal dominance, the key institutions of the welfare state have remained both popular and resilient, as shown by the wave of popular resistance to cuts imposed in the name of austerity. Key elements of the social democratic agenda include a guaranteed minimum income, more generous parental leave, and expanded provision of health, education and other social services. The gradual implementation of this agenda would not bring us to the utopia envisaged by Keynes — among other things, those services would require the labour of teachers, doctors, nurses, and other workers. But it would produce a society in which even those who did not work, whether by choice or incapacity, could enjoy a decent, if modest, lifestyle, and where the benefits of technological progress were devoted to improving the quality of life rather than providing more material goods and services. A society with these priorities would allocate most investment according to judgments of social need rather than market signals of price and profit. That in turn would reduce the need for a large and highly rewarded financial sector, even in relation to private investment. There remains the question of how to move from a revitalised social democracy to the kind of utopia envisaged by Keynes. It would be absurd to spell out a detailed transitional program, but it’s useful to think about one of the central elements of such a society — a guaranteed minimum income. 
In one sense, a guaranteed minimum income involves little more than a re-labelling of the existing benefits provided by all modern welfare states (with the US, as always, a notable exception). In most modern welfare states, everyone is eligible for income support which should be sufficient to prevent them from falling into poverty. Those who cannot work because of age or disability are automatically entitled to such support, while unemployed workers receive either insurance benefits related to their previous wages or some basic allowance conditional on job search. In a post-scarcity society, everyone would be guaranteed an income that yielded a standard of living significantly better than poverty, and this guarantee would be unconditional. The move from a near-poverty benefit subject to eligibility conditions to a liveable, guaranteed minimum income would require both an increase in productivity, such that a smaller number of workers could produce an adequate income for all, and some fairly radical changes in social attitudes. It seems clear enough that technological progress can generate the necessary productivity gains, so what is needed most is a change in attitudes to work that would make a guaranteed minimum income socially sustainable. Two such changes stand out. The first is that the production of market goods and services needs to become pleasant enough that those doing it don’t mind supporting others who choose not to. The second is that the option of receiving a guaranteed minimum income does not become a trap, leading into the kind of idleness that produces despair. We can imagine a few steps towards this goal. One would be to allow recipients of the minimum income to choose voluntary work as an alternative to job search. In many countries, a lot of the required structures are in place under ‘workfare’ or ‘work for the dole’ schemes. All that would be needed is to replace the punitive and coercive aspects of these schemes with positive inducements.
A further step would be to allow a focus on cultural or sporting endeavours, whether or not those endeavours involve achieving the levels of performance that currently attract (sometimes lavish) public and market support. An Australian example might help to illustrate the point. Under our current economic structures, someone who makes and sells surfboards can earn a good income, as can someone good enough to join the professional surfing circuit. But a person who just wants to surf is condemned, rightly enough under our current social relations, as a parasitic drain on society. With less need for anyone to work long hours at unpleasant jobs, we might be more willing to support surfers in return for non-market contributions to society such as membership of a surf life-saving club. Ultimately, people would be free to choose how best to contribute ‘according to their abilities’ and receive from society enough to meet at least their basic needs. We do have the technological capacity to start down that path and to approach the goal within the lives of our grandchildren. That’s a couple of generations behind Keynes’s optimistic projection, but still a hope that could counter the current tides of cynicism and despair. This brings us to the final, really big question. Supposing a Keynesian utopia is feasible, will we want it? Or will we prefer to keep chasing after money to buy more and better things? In 2008, 16 economists contributed to an interesting volume called Revisiting Keynes, edited by Lorenzo Pecchi and Gustavo Piga. Many of those economists argued that Keynes had been proved wrong. Experience, they said, had shown that people will always want to consume more and will be willing to work harder to do it. Implicit in much of their discussion was the idea that the US economy, as of 2008, represented the way of the future. 
With the advantage of a few years’ hindsight, this assumption seems every bit as dubious as the view against which Keynes argued in 1930, that the Depression would continue indefinitely. The steady growth in consumption expenditure in the US in the decades leading up to the financial crisis depended on debt. And of course, the need to service debt necessitated a willingness to work long hours. Now, after millions of foreclosures and bankruptcies, a large proportion of the population has been excluded from credit markets. Households in general have seen the need to build up their savings. More importantly, the culture of conspicuous consumption, which reached unparalleled heights of excess in the 1990s and early 2000s, is on the wane. The most striking emblem of this change is the end of the American love affair with the motor car. Throughout the 20th century the car stood in American culture as a symbol of personal freedom attainable through consumption expenditure. Year after year, pausing only briefly for recessions and slowdowns, more and more cars were driven further and further, burning more and more petrol. But this endless growth has now, apparently, come to an end. The use of petrol in the US peaked in 2005, before the advent of the economic crisis. The distance driven has also peaked and Americans are buying fewer and smaller cars. Economic factors, including higher fuel prices, have a role to play. But anecdotal evidence suggests that there is more to it than this. Increasingly, driving is seen as an unpleasant chore rather than an exercise of freedom. Young people in particular have been less eager than their parents to start driving and acquire cars. Such shifts bring bigger changes in their wake. Without cars and commuting, large houses in the suburbs are much less attractive. After decades of steady growth, the size of new houses seems to be declining. 
Smaller houses mean fewer possessions to fill them, and less appeal for a privatised life based on private consumption. An escape from what Keynes called ‘the tunnel of economic necessity’ is still open to us. Yet it will require radical changes in the economic structures that drive the chase for money and in the attitudes shaped by a culture of consumption. After decades of finance-driven capitalism, it takes an effort to recall that such changes ever seemed possible. Yet it is now clear that market liberalism has failed in its own terms. It promised that if markets were set free, everyone would benefit in the long run. In reality, most households in developed countries experienced less income growth under market liberalism than in the decades of Keynesian social democracy after 1945. Of more immediate importance, except for the top one per cent there has been no recovery from the crisis of 2008, and even worse looms ahead. And despite the initial success of the backlash against Keynesian macroeconomic policies, austerity is now failing in political as well as economic terms. Popular anger has boiled over in a string of electoral defeats for the advocates of austerity. But, unlike the right-wing tribalism that has formed part of that backlash, progressive politics cannot, in the end, rely on anger. It must offer the hope of a better life. That means reclaiming utopian visions such as that of Keynes.
John Quiggin
https://aeon.co//essays/the-time-is-right-to-reclaim-the-utopian-ideas-of-keynes
https://images.aeonmedia…y=75&format=auto
Neuroscience
Neuroscience is changing the meaning of criminal guilt. That might make us more, not less, responsible for our actions
In the summer of 2008, police arrived at a caravan in the seaside town of Aberporth, west Wales, to arrest Brian Thomas for the murder of his wife. The night before, in a vivid nightmare, Thomas believed he was fighting off an intruder in the caravan – perhaps one of the kids who had been disturbing his sleep by revving motorbikes outside. Instead, he was gradually strangling his wife to death. When he awoke, he made a 999 call, telling the operator he was stunned and horrified by what had happened, and unaware of having committed murder. Crimes committed by sleeping individuals are mercifully rare. Yet they provide striking examples of the unnerving potential of the human unconscious. In turn, they illuminate how an emerging science of consciousness is poised to have a deep impact upon concepts of responsibility that are central to today’s legal system. After a short trial, the prosecution withdrew the case against Thomas. Expert witnesses agreed that he suffered from a sleep disorder known as pavor nocturnus, or night terrors, which affects around one per cent of adults and six per cent of children. His nightmares led him to do the unthinkable. We feel a natural sympathy towards Thomas, and jurors at his trial wept at his tragic situation. There is a clear sense in which this action was not the fault of an awake, thinking, sentient individual. But why do we feel this? What is it exactly that makes us think of Thomas not as a murderer but as an innocent man who has lost his wife in terrible circumstances? Our sympathy can be understood with reference to laws that demarcate a separation between mind and body. A central tenet of the Western legal system is the concept of mens rea, or guilty mind. A necessary element to criminal responsibility is the guilty act — the actus reus.
However, it is not enough simply to act: one must also be mentally responsible for acting in a particular way. The common law allows for those who are unable to conform to its requirements due to mental illness: the defence of insanity. It also allows for ‘diminished capacity’ in situations where the individual is deemed unable to form the required intent, or mens rea. Those people are understood to have control of their actions, without intending the criminal outcome. In these cases, the defendant may be found guilty of a lesser crime than murder, such as manslaughter. In the case of Brian Thomas, the court was persuaded that his sleep disorder amounted to ‘automatism’, a comprehensive defence that denies there was even a guilty act. Automatism is the ultimate negation of both mens rea and actus reus. A successful defence of automatism implies that the accused person had neither awareness of what he was doing, nor any control over his actions; that he was so far removed from conscious awareness that he acted like a runaway machine. The problem is how to establish if someone lacks a crucial aspect of consciousness when he commits a crime. In Thomas’s case, sleep experts provided evidence that his nightmares were responsible for his wife’s death. But in other cases, establishing lack of awareness has proved more elusive. It is commonplace to drive a car for long periods without paying much attention to steering or changing gear. According to Jonathan Schooler, professor of psychology at the University of California, Santa Barbara, ‘we are often startled by the discovery that our minds have wandered away from the situation at hand’. But if I am unconscious of my actions when I zone out, to what degree is it really ‘me’ doing the driving? This question takes on a more urgent note when the lives of others are at stake. In April 1990, a heavy-goods driver was steering his lorry towards Liverpool in the early evening.
Having driven all day without mishap, he began to veer on to the hard shoulder of the motorway. He continued along the verge for around half a mile before he crashed into a roadside assistance van and killed two men. The driver appeared in Worcester Crown Court on charges of causing death by reckless driving. For the defence, a psychologist explained to the court that ‘driving without awareness’ might occur following long, monotonous periods at the wheel. The jury was sufficiently convinced of his lack of conscious control to acquit on the basis of automatism. The argument for a lack of consciousness here is much less straightforward than for someone who is asleep. In fact, the Court of Appeal said that the defence of automatism should not have been on the table in the first place, because a driver without ‘awareness’ still retains some control of the car. None the less, the grey area between being in control and aware on the one hand, and in control and unaware on the other, is clearly crucial for a legal notion of voluntary action. If we accept automatism then we reduce the conscious individual to an unconscious machine. However, we should remember that all acts, whether consciously thought-out or reflexive and automatic, are the product of neural mechanisms. For centuries, scientists and inventors have been captivated by this notion of the mind as a machine. In the 18th century, Henri Maillardet, a Swiss clockmaker, built a remarkable apparatus that he christened the Automaton. An intricate array of brass cams connected to a clockwork motor made a doll produce beautiful drawings of ships and pastoral scenes on sheets of paper, as if by magic. This spookily humanoid machine, now on display at the Franklin Institute in Philadelphia, reflects the Enlightenment’s fascination with taming and understanding the mechanisms of life. Modern neuroscience takes up where Maillardet left off.
From the pattern of chemical and electrical signalling between around 85 billion brain cells, each of us experiences the world, makes decisions, daydreams, and forms friendships. The mental and the physical are two sides of the same coin. The unsettling implication is that, by revealing a physical correlate of a conscious state, we begin to treat the individual not as a person but as a machine. Perhaps we are all ‘automata’, and our notions of free choice and taking responsibility for our actions are simply illusions. There is no ghost in the machine. In his book Incognito (2011), David Eagleman argues that society is poised to slide down the following slippery slope. Measurable brain defects already buy leniency for the defendant. As the science improves, more and more criminals will be let off the hook thanks to a fine-grained analysis of their neurobiology. ‘Currently,’ Eagleman writes, ‘we can detect only large brain tumours, but in 100 years we will be able to detect patterns at unimaginably small levels of the microcircuitry that correlate with behavioral problems.’ On this view, responsibility has no place in the courtroom. It is no longer meaningful to lock people up on the basis of their actions, because their actions can always be tied to brain function. It is inevitable that defence teams will look towards neuroscientific evidence to shift the balance in favour of a mechanistic, rather than a personal, interpretation of criminal acts. But we should be wary of such attempts. If every behaviour and mental state has a neural correlate (as surely it must), then everything we do is an artefact of our brains. A link between brain and behaviour is not enough to push responsibility out of the courtroom. Instead we need new ways of thinking about responsibility, and new ways to conceptualise a decision-making self.
Responsibility does not entail a rational, choosing self that floats free from physical processes. That is a fiction. Even so, demonstrating a link between criminal behaviour and conscious (or unconscious) states of the brain changes the legal landscape. Consciousness is, after all, central to the legal definition of intent. In the early ’70s, the psychologist Lawrence Weiskrantz and the neuropsychologist Elizabeth Warrington discovered a remarkable patient at the National Hospital for Neurology and Neurosurgery in London. This patient, known as DB, had sustained damage to the occipital lobes (towards the rear of the brain), resulting in blindness in half of his visual field. Remarkably, DB was able to guess the position and orientation of lines in his ‘blind’ hemifield. Subsequent studies on similar patients with ‘blindsight’ confirmed that these responses relied on a neural pathway quite separate from the one that usually passes through the occipital lobes. So it appears that visual consciousness is selectively deleted in blindsight. At some level, the person can ‘see’ but is not aware of doing so. Awareness and control, then, are curious things, and we cannot understand them without grappling with consciousness itself. What do we know about how normal, waking consciousness works? Hints are emerging. Studies by Stanislas Dehaene, professor of experimental cognitive psychology at the Collège de France in Paris, have revealed that a key difference between conscious and unconscious vision is activity in the prefrontal cortex (the front of the brain, particularly well-developed in humans). Other research implies that consciousness emerges when there is the right balance of connectivity between brain regions, known as the ‘information integration’ theory. It has been suggested that anesthesia can induce unconsciousness by disrupting the communication between brain regions. 
Just as there are different levels of intent in law, there are different levels of awareness that can be identified in the lab. Despite being awake and functioning, one’s mind might be elsewhere, such as when a driver zones out or when a reader becomes engrossed. A series of innovative experiments has begun to systematically investigate mind-wandering. When participants zone out during a repetitive task, activity increases in the ‘default network’, a set of brain regions previously linked to a focus on internal thoughts rather than the external environment. Under the influence of alcohol, people become more likely to daydream and less likely to catch themselves doing so. These studies are beginning to catalogue the influences and mechanisms involved in zoning out from the external world. With their help we can refine the current legal taxonomy of mens rea and put legal ideas such as recklessness, negligence, knowledge and intent on a more scientific footing. An increased scientific understanding of consciousness might one day help us to determine the level of intent behind particular crimes and to navigate the blurred boundary between conscious decisions and unconscious actions. At present, however, we face serious obstacles. Most studies in cognitive neuroscience rely on averaging together many individuals. A group of individuals allows us to understand the average, or typical, brain. But it does not follow that each individual in the group is typical. And even if this problem were to be overcome, it would not help us to adjudicate cases in which normal waking consciousness was generally intact, but happened to be impaired at the time of the crime.
Without consciousness, we are justified in concluding that automatism is in play, not because consciousness itself is not also dependent on the brain, but because consciousness is associated with actions worth holding to a higher moral standard. This perspective helps to arrest the slide down Eagleman’s slippery slope. Instead of negating responsibility, neuroscience has the potential to place conscious awareness on an empirical footing, allowing greater certainty about whether a particular individual had the capacity for rational, conscious action at the time of the crime. Some worry that an increased understanding of consciousness and voluntary action will dissolve our sense of personal responsibility and free will. In fact, neurological self-knowledge could have the opposite effect. Suppose we discover that the brain mechanisms underpinning consciousness are primed to malfunction at a particular time of day, say 7am. Up until this discovery, occasional slips and errors made around this time might have been put down to chance. But now, armed with our greater understanding of the fragility of consciousness, we would be able to put in place counter-measures to make major errors less likely. For Brian Thomas, a greater understanding of his sleep disorder might have allowed him to control it. He had stopped taking his anti-depressant medication when he was on holiday, because he believed it made him impotent. This might have contributed to the night terrors that caused him to strangle his wife. Crucially, increased self-knowledge often percolates through to laws governing responsible behaviour. A diabetic who slips into a coma while driving is held responsible if the coma was the result of poor management of a known diabetic condition. Someone committing crimes while drunk is held to account, so long as they are responsible for becoming drunk in the first place. A science of consciousness illuminates the factors that lead to unconsciousness. 
In reconsidering the boundary between consciousness and automatism we will need to take into account the many levels of conscious and unconscious functioning of the brain. Our legal system is built on a dualist view of the mind-body relationship that has served it well for centuries. Science has done little to disrupt that until now. But neuroscience is different. By directly addressing the mechanisms of the human mind, it has the potential to adjudicate on issues of capacity and intent. With a greater understanding of impairments to consciousness, we might be able to take greater control over our actions, bootstrapping ourselves up from the irrational, haphazard behaviour traditionally associated with automata. Far from eroding a sense of free will, neuroscience may allow us to inject more responsibility than ever before into our waking lives. For references to the scientific research discussed in this essay, see Steve Fleming’s blog The Elusive Self.
Stephen M Fleming
https://aeon.co//essays/will-neuroscience-overturn-the-criminal-law-system
https://images.aeonmedia…y=75&format=auto
Cognition and intelligence
The ways in which jumping spiders see and map the world help to illuminate the mystery of human memory
A few years ago a message from God was found in a tomato in Yorkshire. The Arabic letters were clearly visible, for those who could see them, spelt out in two halves of mesocarp, endocarp and seeds cradled within mandalas of indigestible skin. At least two explanations come to mind. One is that the Supreme Being sees fit to make Himself visible in produce no less than in whirlwind and quasar. Another is that those who saw the message experienced apophenia — the tendency to see meaningful patterns and connections where they are not in fact present. Whatever the truth of that tomato, it is certainly the case that human beings regularly see things which are not there. All of us have seen faces in what are actually inanimate objects, a phenomenon known as pareidolia. Evolutionary psychologists argue that there is a good adaptive reason for this. If an ambiguous shape in long grass turns out to be a rock rather than the face of a lion, the cost of having wrongly identified it as a dangerous animal is likely to be trivial compared to the cost of making the opposite mistake. Furthermore, as hyper-social beings we dedicate copious attention to scrutinising and interpreting each other’s facial expressions and the changes, sometimes extremely subtle, in them. Neuroscientists have found that a substantial part of the visual cortex, the fusiform face area, is largely dedicated to these complex and demanding tasks. What then to make of a creature like Phidippus mystaceus? It certainly has a face, complete with snow-white whiskers around its mouth and pointy black tufts on top. But the two pairs of anterior (front) eyes, known as the anterior median and the anterior lateral, both claim our attention, and our gaze will tend to flicker between one pair and the other as points on which to anchor a sense of its face. There’s something here like the duck-rabbit illusion which never resolves one way or other: an arachnid trompe l’œil. 
(In addition to their four anterior eyes, four posterior ones, one pair of them tiny and one rather larger, are placed further back on mystaceus’s cephalothorax, like the bubbles that housed the turret for the mid upper gunner on a Lancaster bomber.) P. mystaceus, which lives in North America, is a jumping spider. It is one of about 5,000 species in a highly successful family of arachnids (eight-legged, air-breathing, venom-fanged arthropods) that thrive almost everywhere except Greenland and Antarctica. Britain alone has 36 different kinds. Jumping spiders, which are smaller than your little fingernail, have remarkable eyesight, a very particular hunting style, and an appetite for bees, bugs and — quite often — other spiders (the only known vegetarian exception is the delightfully named Bagheera kiplingi from Central America). Some species have better visual acuity than cats, which are more than 100 times their size, and though each of their pairs of anterior eyes has a limited field of view, the full complement of eight allows them to scan large sections of the world around them. (Like most spiders, they also have acute hearing, mediated by tiny hairs on their legs which are sensitive to the smallest vibrations.) They are much more powerful jumpers than cats, able to pounce up to 50 times their body length and land with precision. And they have a safety rope: a silk thread tethered to the launch point in case they misjudge their leap and fall short. A jumping spider is a voracious panopticon, bungee-jumper and traceur in one. Nor are P. mystaceus and other jumping spiders cowering timorous beasties when it comes to love. The males of many species sport outrageous colours for courtship.
The male Phidippus audax, a close cousin of mystaceus, has mouthparts as splendidly hued as the feathers of a bird of paradise: a receptive female will allow him to wrestle her mouthparts with his. Hentzia palmarum, another cousin, makes do with carrot-coloured facial hair around his four anterior eyes with an Arctic fox coloured band like a muff beneath. Each species of jumping spider taps out its own distinctive dances of intimidation and seduction — three or even seven-act shows that combine features of semaphore, flamenco and South African gumboot dancing. Still, the beauty of some jumping spiders is more apparent in their brains than their bodies. Just as we create patterns of the world, searching it for faces and symbols, they are mapping out their own lives in surprising detail. The drabbest genus contains some of the cleverest species known. Among them is Portia labiata, a jumping spider of South and East Asia that lives solely on the flesh of other spiders. P. labiata varies and adapts its behaviour according to the characteristics of the species it is hunting: using trial and error it observes and then mimics rhythms tapped out by species it has not encountered before in order to deceive them, and plots devious lines of attack if a full frontal assault looks too risky. The spider may spend an hour or more scanning the tangles of vegetation and gaps between itself and its intended victim, calculating the best route for a surprise attack. Scientists believe the reason labiata takes so long to do this is because, for all its excellent vision, it has very limited ability to take in and process information. So it systematically scans small sections of the surroundings with its anterior eyes, gradually building up enough information in its memory to build a mental map which it can then use. It’s a little like trying to download a large and fine-grained picture over a very slow internet connection. 
Once the map is complete, however, Portia will usually execute its plan without fail, rapidly retracing its course if it finds it has started down a blind alley, choosing the correct option and finally swooping on its prey like a special forces ninja. The differences between jumping spiders and people (or most of the people I know, at any rate) are obvious enough. Not least, we have much more ‘bandwidth’ and processing power: about 86 billion brain cells compared to their mere 600,000. And, of course, we multiply our capacities through co-operation, creating webs of support and information between us that are vastly more powerful and intricate than anything that one of us can manage alone. But for all our differences we exist in continuity with them. Like them we live in a narrow perceptual zone with respect to the world as it actually is. ‘A human being is capable of taking in very few things at one time’, observes Kris Kelvin in Stanislaw Lem’s novel Solaris: ‘we see only what is happening in front of us, only here and now’. Just as jumping spiders overcome some of their limitations through a laboriously constructed mental map of their surroundings, we too apprehend the world by unconscious integration within the brain of fragments of perception, memory and supposition. It’s a conjuring trick that gives us a rough model of what is actually going on but which we believe to be the real thing. Memory is one of our most treasured capabilities. We build our identities and our cultures with it. Human memory and the things we do with it can be extraordinary, especially when we have not had too much to drink. Yet if we have a broad view of memory as the ability to retain information for later use, it is not exclusive to human beings, but foundational to life itself.
The first living systems, perhaps those hypothesised for an RNA world, would have been distinguished by (among other things) precisely this: an ability to record in their chemical codes, and reproduce later, those properties which enabled them to thrive. And all organisms alive today retain subsystems that were first encoded during the early days of DNA-based life roughly four billion years ago. Every moment your cells are replaying routines that ran in the Archaean aeon. Most of the ‘memory’ in the world continues to be entirely unconscious and does not even require a brain. The immune system is a good example: it ‘remembers’ the viruses, bacteria and other nasties that you’ve encountered during your lifetime: if you encounter the same pathogen again, the ‘memory’ cells will recognise it and your body will be able to mount a faster immune response. Plants do this as well as humans and other animals. Human memory can be rich, varied and subtle in ways that, as far as we can tell, no other earthly beings experience. Our abhorrence at memory’s fraying and dissolution is, perhaps, second only to our abhorrence of death itself. But it is also possible to remember too much. In a story by Jorge Luis Borges, a young farmhand named Ireneo Funes falls from a horse and is severely concussed. When he comes to, his powers of perception and his memory are ‘perfect’. By comparison all of his previous life seems like a dream in which he had looked without seeing, heard without listening and forgotten virtually everything. In his new life, Funes can recall ‘the forms of the clouds in the southern sky on the morning of April 30, 1882, and … compare them in his memory with the veins in the marbled binding of a book he had only seen once, or with the feathers of spray lifted by an oar on the Rio Negro on the eve of the Battle of Quebracho’. 
But so intense is the rush of impressions and memories that Funes is unable to cope, and he never stirs from his bed, ‘his eyes fixed on the fig tree behind the house or on a spiderweb.’ He becomes incapable of generalisations and abstract ideas, which require little acts of forgetting to become possible. He becomes almost incapable of making sense of the world, of thinking. Like the Portia labiata patiently building its picture of the world, we have limited processing powers and must turn away most information if we are not to stall the machinery entirely. To function effectively, then, we have to forget most things. This fact has long been recognised by psychologists and philosophers. William James, writing in 1890, quoted from his colleague Théodule-Armand Ribot: ‘Without totally forgetting a prodigious number of states of consciousness, and momentarily forgetting a large number, we could not remember at all. Oblivion … is thus no malady of memory, but a condition of its health and life’. Friedrich Nietzsche, in 1886, was more terse: ‘Blessed are the forgetful: for they also get over their stupidities’. Perhaps sanity depends on steering a course between remembering too much and remembering too little. But even this middle way is vulnerable to delusions. Neuroscience has recently proven what David Hume recognised nearly three hundred years ago — that remembering is an act of re-creation and therefore subject to distortion and fictionalisation: ‘real’ memories become tales, and tales become ‘memories’. And there is a tension, if not a paradox, at the heart of at least some of the conscious experiences that we value most. On the one hand, we want to be completely present in the moment. As the young Ludwig Wittgenstein put it: only a man who lives not in time but in the present is happy. 
On the other hand, we want to build and retain the fullest possible picture of the world around us and this must, if it is to be durable, include a coherent map of its deep past and foundations. Sometimes it seems to me that some of the most important moments of our existence are spent in attempts to bridge the gap between the two states of (on the one hand) trying to live utterly in the moment, and (on the other hand) trying to live in memory and reflection. We want, somehow, to experience both at once, and we look from the one to the other and from the other back to the one, rather as pairs of eyes on the front of a jumping spider. The ‘face’ is blank: it does not tell us where to look and, like the cat in Kafka’s A Little Fable, the spider would eat us up if it could. From The Book of Barely Imagined Beings: A 21st Century Bestiary, to be published 4 October by Granta Books www.barelyimaginedbeings.com Correction, Oct 1, 2012: the original version of this article stated that the vegetarian spider, Bagheera kiplingi, was found in India, not Central America. Thanks to Matt Lewis for pointing out the error.
Caspar Henderson
https://aeon.co//essays/an-enigmatic-spider-and-the-fragile-threads-of-human-memory
https://images.aeonmedia…y=75&format=auto
The environment
Wind farms are good for the world but hard on the heart. A lover of wilderness reshapes her own instinct for beauty
It is pale dawn and I am climbing steadily up through wet chilly bracken and across patches of sheep-cropped green grass. As I get higher, long views of peat moorland open around me: up and down, green and gold and beige and brown. I can see no houses (there are only 10 in the surrounding 30 square miles — a widely dispersed community). Occasionally, I catch silver glimpses of the little river or of the single-track unfenced road that struggles up from the coast, through the village seven miles down the valley, over the watershed, and across into the next valley. I am walking up to Arecleoch, the fourth largest wind farm in Scotland, but for the moment it is hidden by the rising ground in front of me and, briefly at least, the valley feels restored to the empty beauty that brought me here. This walk is part of a three-year struggle to resolve the issue that wind farms have forced upon me: how do we choose between two virtues, between beauty and justice in this instance? Working on the theory that knowledge breeds love and that writing clarifies thought, I keep coming back — physically and in writing — to the wind farms: to why they have created such a conflict and, more crucially, what I can do about it. I came to live in this upland valley in south-west Scotland because it was so beautiful to me. I recognise that moorland is not always regarded as beautiful and does not fit either of the two culturally favoured aesthetics: the ‘sublime’ (the ferocious wildness of the Highlands, for example) and the ‘lovely’ (the lush green meadows and ancient woodlands of the Cotswolds). You might say the vertical or the horizontal, rather than anything in between. Given the classical association of beauty with symmetry this makes sense, but it is a small palette relative to what is available. Moorland beauty is neither of these; it is too gently curved to be sublime and too austere to be lovely. 
It is the beauty of emptiness in part, of featurelessness. Of course it has ‘features’ really: it has the river, little and playful; there are sea trout, water voles and, they say, otters, although I have never seen one. It has lichen-patched dry stone dykes, many of them now in disrepair, and ruined ghost walls running away up the hillsides. It has greened fields — areas of hard-won, drained and stone-cleared pasture. There is a complicated variety of Neolithic remains — barrows, field systems and standing stones, easily confused with the even older erratic boulders left by the retreating ice of the last glaciation and the newer heaps of stone hauled out of the cultivated fields. There are the farm steadings and the ruined abandoned houses. There is a railway line — a single track that carries the Glasgow trains down to Stranraer where there is no longer a ferry-crossing to Belfast. There is an extensive tangled cat’s cradle of cables. Inevitably, there is also the forestry, both great swathes of it and a series of ‘pepper-pot plantations’ — the little rectilinear patches of conifers that were such a cunning tax-avoidance wheeze in the 1970s. And all these are in addition to the more subtle ‘natural features’: chunks of extrudant granite; the complex patterning of reed beds, rough grass, bracken, heather, sphagnum moss and sheep-shorn lawn; low clumps of sallows, gnarled old hawthorns, a few neglected coppiced hazels. There is a rich flora, including sheets of orchids, bog asphodel and meadowsweet in season, and some good bird life, although nothing particularly impressive or unique. It does not amount to a site of ‘exceptional natural beauty’ in most people’s book. Nonetheless, I find it heart-wrenchingly beautiful. The valley does have one notable and distinctive characteristic: it has an exceptional ‘soundscape’.
It is not silent in the pure sense that an acoustic chamber, say, or a desert on a windless night are silent. Rather, it is ‘hushed’, and such sound as there is comes from a complex relationship between the various elements of the open, broken land. Like a piece of music compared to a painting, a soundscape is more fluid and varying than a landscape. It is more seasonal, because the migratory birds come and go: the haunting, bubbling call of the curlew is heard only between mid-March and high summer; and the distinctive, almost mechanical ‘chip-chip-chip’ of the invisible grasshopper warbler only between May and July. Because of a phenomenon called ‘attenuation’, the way sounds resonate and carry is different in hot and cold, and in wet and dry, weather. But in the soundscape here, the background noise is so minimal — neither waterfalls nor motorways — that each identifiable sound is distinct and laid onto a sort of murmur or breath which is flowing water and teasing wind, rising, singing, pouring through the day: silence made audible, made musical, if indescribable. In The Great Animal Orchestra, Bernie Krause defines soundscape by breaking it down into ‘geophony’ — the sounds made by the physical environment (wind, water, etc); ‘biophony’ — the sounds made by animals, birds and insects; and ‘androphony’ — the sounds made by human activities. A soundscape is the interaction and balance of these factors, based on a pretty much correct assumption that there is never absolute silence. Here on the moorland, we have a delicate soundscape that is unusually ‘geophonic’ because we have so much water and wind, but no trees and very little ‘androphony’, even in the distance. To those who listen and care, this is notably pleasing. The soundscape is important to me, partly because I came here seeking silence in the first place, and is certainly one of the things that makes this place so beautiful. However, I am aware that this is very much a minority concern. 
Currently our response to nature is almost entirely visual. We barely have a language to discuss the aural. So much is this the case that the Environmental Impact Assessment, which planners now require for many kinds of development, does not take seriously any aesthetic criteria except the visual. The EIA guidelines on noise are deeply confused; changes to the quality (as opposed to the quantity) of sound are never mentioned as grounds for rejecting a proposal. And the source of the sounds seems to be regarded as irrelevant. A chaffinch in your garden, or a waterfall, or a beach nearby might be considerably noisier than a motorway a mile off, but still be more pleasing, or less disturbing of the peace.
A rare place of silence: Sara Maitland takes in the beauty of the moors on her doorstep. Photo by Adam Lee
All this seems quite strange to me, because my sense, anecdotally, is that far more people are moved by and engaged with music than with painting. Yet the very word ‘landscape’ is drawn from visual arts, and we speak only of ‘views’ when describing our natural environment. In short, I came here to live alone in the huge hush that is both visual and aural and is, to me, exceptionally beautiful. It gives me daily joy. But now I have to plan carefully to get that dose of beauty. Arecleoch wind farm became operational last year. Because of the isolated location, I am, I think, the only person who can see the turbines from an inhabited house. But you cannot miss them from our little lonely road: the massive turbine fins break the smooth curl of the hill-line and inescapably dominate the whole valley, as well as radically changing the atmosphere and appearance of my home. They are disproportionate in scale; they are out of tone with their surroundings — both in colour and in the precision of their lines. Above the gentle moorland, they look brutal. They have been there for more than a year, and they can still stop me in my tracks with grief. This is just the beginning.
Kilgallioch wind farm, less than two miles from Arecleoch and with 99 turbines, is in the final stages of seeking planning permission. Because of its size and because it will cross two regions (as Scottish counties are called), planning permission will come directly from the Scottish Parliament, rather than from the local council, so individual feelings will have even less leverage than usual. The nearest turbine will be barely half a mile from my front door. Mark Hill wind farm (28 turbines, again very visible from our road) opened last year. Last month, the new extension to Artfield Fell wind farm — a further seven turbines, now visible from my garden — started generating. We have just received notification of ‘scoping’ — that is preplanning permission investigations — for another 50 turbines on the opposite side of the valley. If every turbine now in the system (either operational or formally proposed) is constructed, there will be more than 250 turbines, some of them 145 metres high (almost three times the height of Nelson’s Column) within 10 miles of my house. And although, of course, I won’t be able to see every one of them from my house, there won’t be any direction I can walk in where I will not have to see them. Once the nearer ones are up, I will also have to hear them. This is not necessarily offensive. One of my neighbours likes the sound: she feels that it adds depth and resonance to the soundscape and reminds her of her childhood when she lived beside the sea. But I still feel that, for me, it will be a grotesque change in the context of the hushed atmosphere that I love so much. I personally find the wind farms ugly. And this leaves me with an internal conflict, because I also believe that they are just. Although we tend to treat Justice as an objective and unified category, it is scarcely less subjective than Beauty when it comes down to it. I find this very difficult to hold clearly in mind. 
I grew up with the fixed belief that, for example, there can be no ‘competition’ between forms of oppression, and therefore that justice for some leads to increased justice for all (a sort of ethical version of economic ‘trickle down’). I have become increasingly aware, though, that this is a ridiculous belief — so ridiculous, in fact, that I am amazed how long it took me to notice. In The Universal Declaration of Human Rights, anyone could see that Articles 18 and 19, while both clearly just, are at least potentially in conflict:

Article 18: Everyone has the right to freedom of thought, conscience and religion … either alone or in community with others and in public or private, to manifest his religion or belief in teaching, practice, worship and observance.

Article 19: Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

Forty years later, with the publication of The Satanic Verses by Salman Rushdie in 1988, that rift was plainly exposed. Part of the pain was that no one (including me) seemed to have noticed it beforehand, so absorbed were we in the myth of the indivisible good. Now I know that Justice, too, is fragmented and partial and subjective. I try to bear this in mind. I am conscious that many people genuinely do not see wind farms as just. On the contrary, for various reasons, they see them as spurious or even unjust. Some of them do not believe in either climate change or fossil fuel shortage. If they are correct, then wind farms are indeed both pointless and deeply unjust: they are simply a brutal degradation of landscapes and communities by the power companies for the sake of their own bottom line. Others think that ‘fuel supply security’ — that a country needs to be fuel self-sufficient — is a paltry concern or even a distraction in the pursuit of global peace. 
Some recognise that the increasingly urgent search for more oil has downsides — including toxic pollution, political instability, habitat destruction, and cultural vulnerability — that are at present bearing unfairly heavily on the poorer nations; but, at the same time, they believe that technology will sort this out efficiently, and also discover sufficient further sources of oil. Many feel that the negative effects on the beauty of the countryside (and, in many cases, its biodiversity, too) simply outweigh all other considerations. Some people are convinced that our international commitments to lowering the carbon footprint were so stupid as to be non-binding, or that the obduracy of emerging economies such as China and India means that we are no longer bound by such treaties. There are those who hold that we can have no responsibility to those yet unborn. Even taken all together, in Scotland at least, these people are a minority. The most recent survey, by Scottish Renewables in 2010, found that 78 per cent of people in Scotland agreed that ‘wind farms are necessary to meet current and future energy needs’. (Up from 74 per cent in 2005, which is interesting because there was a vast increase in the number of turbines visible across the country between the two surveys.) Just over half (52 per cent) disagreed with the statement that they were ‘ugly and a blot on the landscape’, and a slightly larger number (59 per cent) felt that they were necessary to the point that how they looked was irrelevant. Nonetheless, being in a minority does not make wind-farm objectors wrong, especially as there is an inherent problem with democracy and majority rule: it cannot measure how strongly people feel about something, nor how informed they are on the topic. 
Presumably some of those questioned might have never even seen a wind farm.

Wrestling with demons: Sara Maitland struggles to see the generators as benevolent giants. Photo by Adam Lee

So I am not trying to claim some absolute or idealist Justice here, even if such a thing were to exist. I am saying that I cannot agree with any of these arguments against wind farms. I believe that they are a just and proper way of addressing our power shortfall, of relieving the ecological and social damage suffered by poorer nations who use less fossil fuel, of meeting our international obligations, of protecting the planet and its future from our avaricious consumption of non-renewable resources, and of limiting the damage that human-driven climate change is causing. I cannot even honestly say that there is some special site-specific argument for my moor: the reason why so many power companies want to construct wind farms here is that the conditions are nearly ideal and there are exceptionally few people who will be inconvenienced by them. I find I am living, at alarmingly close quarters, with a moral conflict. All my instincts tell me that this cannot be so. All my thoughts and feelings tell me that it is. I can sit dithering and hand-wringing (I do a good deal of that anyway). Or I could make a determined choice about which of the two I prioritise; I could decide whether Beauty or Justice is the more important ‘good’, although that would mean fracturing the ideal that all good is ultimately singular. Either way, I have to unpick my ethical inheritance. I was born in 1950 and brought up, by intelligent parents, in a liberal conservative intellectual tradition. This is not a bad framework for the bright child — offering confidence, security, tolerance, and a benign view of community and independence. 
Unfortunately, like any political philosophy that is treated as ‘natural’ and therefore goes unexamined, it accumulates myths that are hard to disentangle and sometimes hard even to identify. Among those myths are that the truth will always triumph; that the family is the central bulwark of civilisation; that gender dimorphism makes men behave better towards women; that history is inevitably progressive; that privilege leads directly to a strong sense of responsibility; and that those who ask don’t get. All these turned out to be untrue, but they are less pervasive, less problematic, than the subtle belief that ‘the good’ is indivisible — that, in the final analysis, there cannot be a conflict between one virtue and another. One problem with such a pure-minded but absurd assumption is that it leaves one with very little equipment, practical or theoretical, for addressing internal conflicts or making choices between any two things that are both apparently good but incompatible. I find myself convinced that the circle not only should but can be squared. Of course, I could just consent to live with, or endure, the conflict, but I think I would find that too tiring. The alternative is to decide that one or other of the two elements was not in fact a significant part of the ‘good’ at all. I find it hard to imagine any society in which Justice was seen as somehow undesirable or was held to exist solely as a psychological rather than an ethical attribute. (I am not talking here about whether or not any particular society is just, but whether its members hold Justice as desirable, regardless of their actual interactions.) One could denigrate Justice as being impossible to deliver, because indeed it seems, as I mentioned above, that justice for person A might well undermine the rights of person B. 
On the whole, however, the idea that Justice is a key element of ‘the good’ is not only deeply and indelibly embedded in my consciousness, it is also something that I could not seriously or honestly want to change. Tampering with such an aspiration would be to open the floodgates to a ‘might is right’ political ethic that could deliver only horror. So Beauty would seem a better candidate for such treatment. Modernism, after all, has already launched a partially successful assault on the Romantic notion that there is any connection between Beauty and Truth. And Beauty has certainly been downgraded as a criterion in the arts and is frequently treated as a digression (at best) in the search for the common good. Some people have gone further and suggested that Beauty is actually negative; that when it is observed by power (by the ‘male gaze’, for example), it is dangerous and demeaning to the beautiful, because it objectifies the ‘owner’ of the beauty, and damaging to the not-beautiful as it diminishes their value. Quite how one could apply this to a waterfall, a mathematical proof, a sonnet, a heath milkwort or the Orion Nebula is not clear. Perhaps a better angle from which to approach the task of amputating Beauty from the whole of the good would be to stress its indefinability, its subjectivity, and its ephemeral nature: as in, ‘not many people think this moor is beautiful’. But this is absurd — ask people whether they want more or less beauty in their lives and the answer is pretty reliably in favour of beauty, whatever it might precisely be to them. When I first realised the depths of my unease, I struggled to ramp up the emotional intensity of one or other of the two qualities. For a while, I pinned to my walls grim photographs of suffering children from the Niger Delta, where the damage inflicted by the increasingly desperate search for oil is particularly painful and vivid. I researched mining and drilling accidents. 
I even attempted for a day or two to live without electrical power and to imagine how that would be for others. I tried to focus on the importance of Justice. Equally, I attempted the reverse: to pursue the human need for Beauty, to increase its ethical importance in my life, to make it intellectually passionate, to give it a wider hold, on my mind as well as my heart. But this ended up feeling both silly and wrong. I find I still want to hold Beauty and Justice, along with other good things, in a whole and healthy balance. At the moment, I am attempting a rather different approach. I am trying to change the terms of the engagement. I am working on reconstructing my aesthetic taste, so as to find the turbines subjectively beautiful. If I found them lovely and pleasing as well as just, I would have reconsolidated the good and increased my daily joyfulness. Of course, I could instead work on finding them unjust, but I am too aware of the self-deluded contortions of a certain type of anti-wind farm campaigner (for example, Roger Scruton) who, in order to justify the belief that Beauty is the supreme arbiter, has to marshal a ragbag of unsound arguments to prove that wind farms are unjust and ineffective. But since some of the wind farms already exist within my beloved terrain, I know it would be much pleasanter to live surrounded by positives rather than negatives: beautiful and just seems immeasurably preferable to ugly and unjust. My new question then is: is it possible, by an act of will, to change one’s aesthetic and emotional response to something? Some people I have discussed this project with think that it cannot be done, or at least not without psychological mutilation. But I cannot really believe this because I know that my personal aesthetics do change. I know that I now find beautiful things that I found ugly in the 1970s (Rothko’s paintings, John Clare’s poetry and fireworks) and vice versa (Art Nouveau, my ex-husband and miniskirts). 
I can even think of an example where my idea of beauty was directly affected by morality: when I was young, I thought that fur coats were very lovely — glamorous, sinuous, desirable; later I came to see them as immoral. But, more than that, when I was sorting out my late mother’s clothes, I was almost surprised to notice that I found her furs ugly — lumpy, bulky, shapeless, dull. I had no guilty longing for them at all. The pleasure of finding new things beautiful, and the sadness of recognising that the erstwhile glory of something has departed, are both quite common experiences, perhaps more so now than before because fashions change so fast. But this is not the same as a deliberate mental campaign to change the way I perceive things. I did not set out to change my mind on these — and many other — judgments. I discovered they had changed for me, sometimes with a sense of startled delight or a feeling of loss. Suspecting that such an activity is both possible and not perilous to my sanity, I have been experimenting with various ways of making the wind farms look, feel, be, beautiful to me. I am surprised to discover how little this possibility seems to have been explored. For example, given the energy that middle-class parents put into ‘improving’ their children’s aesthetic tastes (making them more like our own, so that they prefer Bach to grunge and Raphael’s Madonnas to Barbie), you might think that the ubiquitous child-rearing manuals would be able to help, but the subject is never mentioned. More directly, it would be useful if the pro-wind farm faction offered some helpful suggestions. But they tend simply to sneer, dismiss, or offer dreary stoical platitudes rather than positive usable suggestions. Nonetheless, I have evolved a few strategies. Association is a primary tactic. I am trying to talk to the people who already find them beautiful and avoid all conversations about how ugly they are. 
In particular, I have been speaking to the sheep farmers, who tend to favour wind power. The Single Payment Scheme (the EU’s subsidy for keeping marginal farming viable) looks due to end in 2015; once it is gone, it is not clear that hill sheep farming can continue. My neighbours tend to see the wind farms as a way, perhaps the only way, of drawing money into the area so that they can go on with their hard, much-loved work. Among other things, they are teaching me that, ungrazed and unshepherded, the high moor will change anyway. Sheep made the landscape I love; if the sheep go, much of the beauty will go with them; that land will go back to unwalkable scrub and dank bog. The wind farms might turn out to be the guardians and saviours of the landscape. Several of my neighbours have also lived up here for a long time and speak of the various changes that the moor has undergone in the past without losing its loveliness. These conversations really help me. I am realising that knowledge nourishes beauty. The actual workings of a wind turbine are elegantly economical, seriously clever, and both strong and delicate like an athlete. Inside each nacelle (the housing for the generator at the height of the mast: a lovely dancing word itself) is a very beautiful piece of machinery. This is not the beauty of open high moor, but it has a true beauty of its own which I am trying to learn more about.

A Holy Trinity of engineering, or a St Patrick’s shamrock? These are just two images of beauty that we might begin to attach to wind turbines. Photo by Adam Lee

Language is obviously a key issue: the people who decided to call them ‘wind farms’ were not fools. It is certainly a more attractive name than, say, ‘rural power stations’, particularly when they are located in agricultural countryside. I am rather intrigued by the observation that words drawn from the natural world so often feel more endearing than a technical or contemporary vocabulary. 
Last year I had a running fight in the local press with someone who wanted to construct an energy-from-waste power generation scheme: he insisted belligerently that it was a ‘plant’; I doggedly reiterated the word ‘incinerator’. You could tell from that alone which side each of us was on, yet both terms are perfectly correct. ‘Rooted’, ‘flowering’ and ‘branching’ feel much lovelier than ‘embedded’, ‘achieving’ and ‘criss-crossed’. So I try to say, think and write ‘wind farm’ rather than ‘power plant’ or ‘generators’, and ‘fins’ rather than ‘blades’ or ‘propellers’ whenever there is a choice. As a society, we do believe this works in other areas; one core strand of the argument behind ‘political correctness’ is that what we name something or someone does affect how we feel about it, as well as how the named experience that naming. I have spent a good deal of my life arguing for inclusive and non-sexist language in that belief; so this should not be too big a step. Even better, for me, is positive imagery. The ecologist Will Anderson refers to them as my ‘whispering giants’ on the grounds that I love fairy stories and this reference might connect them more easily to other things I find beautiful. And although I do realise it would not be everyone’s way forward, I have also been working on seeing them as an image of the Holy Trinity, like St Patrick’s shamrock. Theologically, this works rather well. The poise, balance and mighty movement of the three fins gathering in and moving the whole sky while the still centre creates power out on the wide space of the moor; and the steady turning grounds my prayers. Most useful of all, though, has been changing the perspective from which I see them and associating them with things that are already beautiful and joyful for me. I have a nephew, a child still, with true and undisguised enthusiasms. He wanted to see a turbine close up, and so we made them the goal of a long moorland walk. 
We were actively trying to reach them rather than avoid them. Close to, they are awesome, impressively magnificent, and he was genuinely and bubblingly excited (helped of course by the fact we had no business being there). Looking up at that huge swooping power against a fast wild winter sky, hearing close to the deep song of energy and space, I felt infected by his fierce delight. Does it work? Maybe. Not quite yet. But this is why I am walking at dawn up a long boggy slope: I want to see the sunrise turn the turbines into shining silver; I want to see the light glint off them and scatter across the shimmering morning. I want to see it close to and see the long view down the valley which will still be in shadow below me. In the dawn there are curlews crying invisible. Yesterday it was hot and now, even as I walk, strange patches of very white mist are being exhaled from the curves of the moor like dragons’ breaths. I reach the crest of the hill quite abruptly and, surprisingly near, the Arecleoch turbines are turning slowly, slowly but elegantly; their bases invisible in still smoky mist, their fins silver and catching and refracting the first brightness of the day. It took me several moments of delight to recall that I found them ugly — they seemed enchanted and enchanting. Beautiful. Or nearly.

The author is grateful to Elaine Scarry for On Beauty and Being Just (Duckworth, 2006), which addresses some of these issues. Sara Maitland’s new book, Gossip from the Forest: The Tangled Roots of our Forests and Fairytales, is published by Granta in November.
Sara Maitland
https://aeon.co//essays/overblown-and-under-loved-wind-farms-at-the-edge-of-beauty
https://images.aeonmedia…y=75&format=auto
Consciousness and altered states
How I found my way out of depression, thanks to the writings of the English priest who brought Buddhism to the West
Ever since I was a child, I have been acutely sensitive to the idea — in the way that other people seem to feel only after bereavement or some shocking unexpected event — that the human intellect is unable, finally, to make sense of the world: everything is contradiction and paradox, and no one really knows much for sure, however loudly they profess to the contrary. It is an uncomfortable mindset, and as a result I have always felt the need to build a conceptual box in my mind big enough to fit the world into. Most people seem to have a talent for denying or ignoring life’s contradictions, as the demands of work and life take them over. Or they fall for an ideology, perhaps religious or political, that appears to render the world a comprehensible place. I have never been able to support either strategy. A sense of encroaching mental chaos was always skulking at the edges of my life. Which is perhaps why I fell into an acute depression at the age of 27, and didn’t recover for several years. The consequence of this was my first book, a memoir called The Scent of Dried Roses (1996). While I was researching it, I read the work of the psychologist Dorothy Rowe, a quiet, almost secret, follower of Buddhist philosophy. Secret, because Rowe knew what the term ‘Buddhist’ implied to the popular imagination (as it did to me) — magical thinking, Tibetan bell-ringing, and sticking gold flakes on statues of the Buddha.

Truth is not to be found by picking everything to pieces like a spoilt child.

It was through Rowe’s writing that I first came across Alan Watts, and he sounded like an unlikely philosopher. His name evoked the image of a paper goods sales rep on a small regional industrial estate. But through Watts and his writing, I was exposed directly to the ideas of Zen Buddhism. I was suspicious at first, perceiving Zen Buddhism to be a religion rather than a philosophy. 
I wasn’t interested in the Four Noble Truths, or the Eightfold Path, and I certainly didn’t believe in karma or reincarnation. All the same, I read a couple of Watts’s books. They made a significant impact on me. The Meaning of Happiness (1940) and The Wisdom of Insecurity (1951) are striking primers to his work, and they underlined what Rowe was already teaching me: that life had no intrinsic meaning, any more than a piece of music had an intrinsic ‘point’. Life was, in Zen parlance, yugen — a kind of elevated purposelessness. Watts, like Rowe, showed me how we construct our own meanings about life. That nothing is a given and, since everything is uncertain, we must put together a world view that might fit roughly with the facts, but is never anything other than a guess — a working fiction. This, too, is a typical Zen understanding — that life cannot be described, only experienced. Trying to see all of life is like trying to explore a vast cave with a box of matches. Impressed though I was, I more or less forgot about Watts after I finished his books, and pursued my career as a fiction writer. I was weary of introspection. Then, years later, a bad spell in my life propelled me back into a chasm. In 2004, three close friends died in sudden succession. One died in front of my eyes. Another was murdered. A third succumbed to cancer. My depression — and that original sense of meaninglessness — resurfaced. I turned to Watts again. This time, it was as if I was reading for dear life.

No time for received wisdom: a young Alan Watts (L) and friends reading haiku poems written for a contest. Photo by Nat Farbman/Time Life Pictures/Getty

Alan Watts had been prolific in his 58 years. He died in 1973, after producing not only 27 books but also scores of lectures, all of which were available online. They had intriguing titles such as ‘On Being Vague’, ‘Death’, ‘Nothingness’ and ‘Omnipotence’. I stopped writing novels and worked my way through every one of them instead. 
I found a DVD of an animation of Watts by Trey Parker and Matt Stone (of South Park fame). I discovered that Van Morrison had written a song about him, and that Johnny Depp was a follower. But he remained largely unknown in Britain, even though he was English, albeit an expatriate. Watts was born in 1915 in Chislehurst, Kent. His father had been a sales rep for the Michelin tyre company and his mother was a housewife whose father had been a missionary. In later life, Watts wrote of mystical visions he’d had after suffering fever as a child. During school holidays — while he was a scholar at King’s School, Canterbury — he went on trips with the Buddhism enthusiast Francis Croshaw, who first developed his interest in Eastern religion. By the age of 16, Watts was the secretary of the London Buddhist Lodge, which was run by the barrister Christmas Humphreys. But Watts spoiled his chances of a scholarship to Oxford because one examination essay was judged ‘presumptuous and capricious’. And, despite his obviously brilliant mind, Watts never achieved a British university degree. This, perhaps, is another of his qualities that chimes with my own spirit — I too left school with only two A-levels, and am, like Watts, an autodidact. As a young man, Watts worked in a printing house and then a bank. During this time, he hooked up with the Serbian ‘rascal guru’ Dimitrije Mitrinović — a follower of the Armenian spiritual teacher GI Gurdjieff and the Russian esotericist PD Ouspensky — who became a major influence on his thinking. At the age of 21, in 1936, he attended the World Congress of Faiths at the University of London. There, he heard the renowned Zen scholar DT Suzuki speak, and was introduced to him. Later that year, Watts published his first book The Spirit of Zen. 
That same year, he met the American heiress Eleanor Everett, whose mother was involved with a traditional Zen Buddhist circle in New York. He married Eleanor in 1938 and they moved to America, where he trained as an Episcopal priest, before leaving the ministry in 1950, thus separating once and for all from his Christian roots. From then on he concentrated on the study and communication of Eastern philosophical ideas to Western audiences. I felt powerfully attracted to Alan Watts. Not only to his ideas, but to him, personally. Watts was no dry, academic philosopher. With eyes hooded and penetrating like Aleister Crowley’s, he was a jester as well as a thinker, describing himself as a ‘spiritual entertainer’. Aldous Huxley described him as ‘a curious man. Half monk and half racecourse operator.’ Watts wholeheartedly agreed with Huxley’s characterisation. He carried a silver cane ‘for pure swank’, and he hung out with Ken Kesey and Jack Kerouac (he is even parodied in On the Road as Arthur Whale). His English public school-educated voice was rich and deep, like a prophet’s, and his laugh juicy and contagious. But it was his thinking that most excited me. He was, if not the earliest, then certainly the foremost translator of Eastern philosophical ideas to the West. In some ways, his interpretations were radical — for instance, he dismissed the core Zen idea of zazen (which meant spending hours seated in contemplative meditation) as unnecessary. ‘A cat sits until it is tired of sitting, then gets up, stretches, and walks away,’ was his forgiving interpretation of zazen. Slightly less forgiving was his comment on Western Zen enthusiasts, whom he mocked as ‘The uptight school … who seem to believe that Zen is essentially sitting on your ass for interminable hours.’ For someone like me, who found the idea of excessive meditation as unhealthy as the idea of excessive masturbation, it was a great relief to read this. 
Watts also rejected the conventional ideas of reincarnation and the popular understanding of karma as a system of rewards and punishments carried out, lifetime after lifetime. It was this radical approach that made his ideas so fresh — he had no time for received wisdom, even from those who claimed to know Zen inside out. Many Zen ideas have become debased into ‘new age’ philosophy, basely transmuted into wishful thinking, quasi-religious mumbo jumbo and the narcissistic fantasies of the ‘me generation’. But before the beatniks and the hippies got hold of it, Zen philosophy, as described by Watts, was hard-edged, practical, logical and, in some ways, oddly English in tone, as it had deep strands of scepticism and humour. (You’ll never see Christian saints laughing. But most of the great sages of Zen have smiles on their faces, as does Buddha.) Zen and Taoism are more akin to psychotherapy than to religion, as Watts explained in his book Psychotherapy East and West (1961). They are about finding a way to maintain a healthy personality in a culture that tends to tangle you up in a lot of unconscious logical binds. On the one hand, you are told to be ‘free’ and, on the other, that you should follow the demands of the community. Another example is the instruction that you must be spontaneous. These kinds of snags, or double binds, according to Zen writings, produce inner tension, frustration, and neurosis — what Buddhism calls dukkha. Watts saw it as his job, via Zen philosophy, to teach you to think clearly, so that you could see through conventional thinking to a place where your mind could be at peace inside a culture that could have been designed to generate anxiety. 
But, although he was an entertaining writer who presented his ideas with a brilliant clarity, Watts had a difficult job on his hands — mainly because Zen and Taoism are so fundamentally counter-intuitive to the Western mind. Western philosophers and laymen find Eastern thinkers frustrating because Buddhist sages don’t have the same emphasis on the power of language, reason and logic to transform the self or to ‘know’, in the way Westerners think of the word. The riddles, or koans, that Zen thinkers speak in are intended to trip you up and make you realise how inadequate words — either spoken or inner dialogue — are in making sense. Zen emphasises intuition and mushin, that is, an empty mind, over planning and thought. The ideal is that your mind can be unblocked from maya (which means both illusion and play) and thus acquire a kind of resonance or instant reflection, or munen, which translates awkwardly as now/mind/heart. This makes it alien to Western philosophical traditions, which tend to distrust spontaneity, since it supposedly clears the way for the dominance of brute animal instincts and dangerous passions. But the idea of walking around with a metaphorical big stick with which to whack yourself if you make a mistake, or get carried away by your emotions, is foreign to a Zen master. Zen, after all, was used by the Samurai warriors, who had to strike immediately without reflection or die. Intuition, in a healthy soul, is more important than conscious reflection. Millions of years of evolution have made the human unconscious wise, not reckless. You can find similar ideas in modern books such as Blink by Malcolm Gladwell, which emphasises the value of gut reactions. Zen started as a reaction against the highly conventionalised and ritualised Japanese society from which it emerged. 
This must have struck a chord with Watts, who grew up at a time when British society — hidebound, introverted and conventional — was not so different from the self-controlled, ‘uptight’ world of the Japanese. In such a society, spontaneous behaviour becomes impossible. The word Zen is a Japanese way of pronouncing chan, which is the Chinese way of pronouncing the Indian Sanskrit dhyana or sunya, meaning emptiness or void. This is the basis of Zen itself — that all life and existence is based on a kind of dynamic emptiness (a view now supported by modern science, which sees phenomena at a subatomic level popping in and out of existence in a ‘quantum froth’). In this view, there is no ‘stuff’, no difference between matter and energy. Look at anything closely enough — even a rock or a table — and you will see that it is an event, not a thing. Every ‘thing’ is, in truth, happening. This too, accords with modern scientific knowledge. Furthermore, there is not a ‘multiplicity of events’. There is just one event, with multiple aspects, unfolding. We are not just separate egos locked in bags of skin. We come out of the world, not into it. We are each expressions of the world, not strangers in a strange land, flukes of consciousness in a blind, stupid universe, as evolutionary science teaches us. The emphasis on the present moment is perhaps Zen’s most distinctive characteristic. In our Western relationship with time, in which we compulsively pick over the past in order to learn lessons from it and then project into a hypothetical future in which those lessons can be applied, the present moment has been compressed to a tiny sliver on the clock face between a vast past and an infinite future. Zen, more than anything else, is about reclaiming and expanding the present moment. It tries to have you understand, without arguing the point, that there is no purpose in getting anywhere if, when you get there, all you do is think about getting to some other future moment. 
Life exists in the present or nowhere at all, and if you cannot grasp that, you are simply living a fantasy. For all Zen writers life is, as it was for Shakespeare, akin to a dream — transitory and insubstantial. There is no ‘rock of ages cleft for thee’. There is no security. Looking for security, Watts said, is like jumping off a cliff while holding on to a rock for safety — an absurd illusion. Everything passes and you must die. Don’t waste your time thinking otherwise. Neither Buddha nor his Zen followers had time for any notion of an afterlife. The doctrine of reincarnation can be more accurately thought about as a constant rebirth, of death throughout life, and the continual coming and going of universal energy, of which we are all part, before and after death. Another challenge for Western thinkers when struggling with Zen is that, unlike Western religion and philosophy, it has no particular moral code. The Noble Truths are not moral teachings. Zen (unlike Mahayana Buddhism with its ‘Eightfold Path’) makes no judgment about good or bad, except to say that they are both necessary to make the universe dynamic. As the Greek philosopher Heraclitus said, there is no idea of ‘good’ out to destroy ‘evil’, or vice versa. Evil cannot be destroyed, any more than good can, because they are polar opposites of the same thing, like poles of a magnet. Destruction is as necessary as creation. Chaos must exist if we are to know what order is. Both aspects of reality, in tension with one another, are necessary to keep the whole game going: the unity of opposites. This can lead to some fairly shocking moral reasoning. 
When the American composer and Zen follower John Cage was asked, ‘Don’t you think there’s too much suffering in the world?’, he answered, ‘I think there’s just the right amount.’ This encapsulates, and yet somewhat satirises, the Zen world view — that the dark and the light, the negative and the positive, the yin and the yang, are all necessary parts of the overall whole. Behind this thinking is the idea that, for the accomplished follower of Zen, moralists are dangerous because they will destroy everything in pursuit of their vision of ‘the good’. Straightforward greed might result in the destruction of the local village to get its wealth and its women — but that won’t be too bad, because it will preserve the wealth and the women. However, if you are on a moral crusade, you will destroy everything in your wake. And who can deny that the history of the 20th century bears out this view, with Nazi and Communist ideologies causing such havoc? After all, Hitler was an idealist, too. So Confucius — who was not, admittedly, part of the Zen tradition, though he influenced it — puts the greatest value not on absolute good, but on ‘human-heartedness’, or jen. If you are human-hearted, you are unlikely to want to do any great ill, even without a great moral vision to guide you. And, even if you do, the damage you cause will be limited by your own self-interest. This lack of a clear moral code is perhaps why Zen is not a philosophy wholly appropriate for the young or immature mind. In the 1950s, Watts critiqued the Beatnik appropriation of Zen in his book Beat Zen, Square Zen and Zen (1959). The apparent fatalism of Zen seemed to open the door for an individual to do ‘whatever they like’. Watts thought the Beats were childish, although he did suggest that their behaviour also revealed a clever paradox: that absolute fatalism implies absolute freedom. Again, you can look at it both ways. 
In fact, Zen isn’t fatalistic. Rather, it accepts something that Western philosophy finds hard to grasp — that two contradictory truths are possible at the same time. It just depends on which way you look at it. The world is not a logically consistent one, but a profoundly paradoxical one. Again, this is illustrated in science, which shows that one thing can be two things at the same time — light, for instance, acts as both a particle and a wave. The Zen masters say the same thing about human life. Perhaps you are doing ‘it’. Perhaps ‘it’ is doing you. There is no way of knowing which is which. It is like a formal dance so deft that you cannot tell who is leading, and who is following. While it is refreshing that Zen philosophy is supported in many ways by present scientific knowledge, it is also a critique of scientific thought. The scientific tradition requires things to be cut up — both mentally and physically — into smaller and smaller pieces to investigate them. It suggests that the only kind of knowledge is empirical and that the rigid laws of scientific method are the only kind that are valid. Zen implies that this is like throwing the baby out with the bathwater — scientific thinking might be immensely useful, but it also does violence to a meaningful conception of life. It tends to screen out the essential connectedness of things. We live in an imprecise world. Nature is extraordinarily vague. Science promotes the idea of hard, clear ‘brute facts’ — but some facts are soft. A ‘cutting-up’ attitude to life gives us dead knowledge, not live knowledge. The fundamental nature of the world is not something you can get too precise about. The basis of one’s life and thought must always remain undefined. Some ideas — such as the Tao, the ‘way of things’ — come to us; we can’t just go out and get them. They are mysterious and unknown. This kind of thinking is anathema to the modern scientist who thinks that everything can be known and finally will be known. 
But, Watts argued, it is impossible to appreciate the universe unless you know when to stop investigating. Truth is not to be found by picking everything to pieces like a spoilt child. It is impossible, of course, to summarise Zen in a few thousand words. In fact it doesn’t ask to be summarised. The first principle of Zen, voiced by the philosopher Lao Tzu, is ‘Those who know don’t say, and those who say don’t know.’ Zen is not proselytising, quite the reverse. It asks you to come to it, in supplication, and to tease it out. Another Zen saying is, ‘He who seeks to persuade does not convince.’ But it convinced me. After spending nearly two years studying Zen, Taoism and the works of Alan Watts, I think I genuinely achieved a sort of satori — a freedom from the inner weights and contradictions of ordinary life. When a student asked Watts what enlightenment felt like, he said it felt very ordinary — but like walking slightly in the air, an inch above the ground. And that is exactly how I felt — every day. I don’t know how long the experience lasted. Perhaps as long as a year, perhaps even longer. All that time, Watts and the Zen idea were there in my head, informing my thoughts and actions. The background noise, the static of worry and gabble that informed my old life had disappeared. My head was clear. The philosophy entirely permeated me. My life was truly more joyful than it had ever been. Nothing bothered me. I felt full of energy and optimism. Then one day, I lost the vision. I don’t know how it happened. A period of stress and clinical depression took me under and, when I surfaced again, Watts and the Tao had left my thoughts. I was alone again, puzzled and conflicted. I knew the words, but I couldn’t hear the music anymore. The old thoughts and habits I had been conditioned into since birth reasserted themselves. Once more, I worried about things pointlessly, and got lost in the past and the future instead of existing in the dynamic present. 
But then, I shouldn’t have been surprised. What Alan Watts taught, above all else, is that everything is transitory. Everything comes and goes. Watts himself did not exist in a perpetual state of spiritual bliss. He died an alcoholic. He had been a lifelong heavy drinker. His later life was not easy — in the last years, he cut a Dickensian figure, working desperately to support his seven children and, presumably, his two ex-wives (by the time he died he was on his third). But he was by no means an ‘unhappy’ drunk. He never expressed guilt or regret about his drinking and smoking, and never missed a lecture or a writing deadline. If Watts’s own example is to be taken into account, being ‘enlightened’ doesn’t always make you happy. Yet it is still something worth attaining. It brings clarity and peace, even if it doesn’t protect you from all of life’s vicissitudes. My personal ‘enlightenment’ came and went — but I hope it might return. Perhaps this article will be the first step in that direction. It feels like it is. It might be in my hands or it might not. But if I can find the path again, then I will stay on it — until I lose it. And, as the Zen saying instructs, if I see the Buddha, I will kill him. Because the moment you start thinking of yourself as ‘enlightened’, you are not.
Tim Lott
https://aeon.co//essays/alan-watts-the-western-buddhist-who-healed-my-mind
https://images.aeonmedia…y=75&format=auto
Knowledge
Speaking from the heart? Politicians who don’t want us questioning their facts rely on a time-honoured American trope
In response to a leaked video in which he claimed that 47 per cent of Americans are ‘dependents,’ the Republican presidential candidate Mitt Romney did not choose to clarify his facts. He could have said that this number came from the Tax Policy Center, which also found that most households in that 47 per cent do pay payroll taxes. He could have said that of the 18 per cent of Americans who pay neither payroll nor income taxes, more than half are elderly and more than a third have incomes under $20,000. He could have admitted, more audaciously, that a few hallowed members of the 47 per cent are in fact millionaires who pay no taxes. Instead, he chose to stay on message. He chided the president for his ‘government-dominated’ politics and asserted his own vision of a ‘society driven by free people pursuing their dreams.’ Thanks to innumerable incidents like this, a debate has begun to brew on the worth of the social practice of ‘fact-checking’ political speeches. Critics argue that fact-checking reports, though written with the intent of protecting public discourse, are little more than fodder for political mud-slinging, reinforcing to party adherents the view that the other guy is the hypocrite they already thought he was. Furthermore, since elections are won by coded appeals to the research-verified prejudices of certain social groups, political messaging is inherently untruthful. To fact-check a political speech is to judge it by a different standard than that for which it was written. Is politics as we know it therefore utterly devoid of truth? If by ‘truth’ we mean something like correspondence with a shared, objective reality, then yes, perhaps the critics are right. But it would be a mistake to ignore another kind of truth – let’s call it Truth – that is primarily a rhetorical tool, a way of signalling vision, gumption and higher purpose. 
This kind of Truth lords it over myriad smaller truths with the baronial and tipsy generosity of a bucolic squire negligent of the whereabouts of some of his serfs. Supporters of both parties can easily overlook a few factual errors here and there if their candidate’s larger vision for America is one they staunchly believe in. Who cares if politicians fail to tell the truth every once in a while? What matters is that they speak the Truth! Of course, politicians might defend their social visions while owning up to, and apologising for, factual misdemeanours. But this would be to demonstrate fallibility in a culture that tolerates none. Instead, if they care to justify their pronouncements at all, politicians often invoke a species of self-defence. During the Republican primary this spring, Rick Santorum, then a still-hopeful presidential candidate, was called out for claiming (incorrectly) that senior citizens euthanised by hospitals for ‘budget purposes’ account for five per cent of all deaths in the Netherlands. When challenged by a Dutch reporter, Santorum’s spokeswoman Alice Stewart offered a defence as curious as the original claim: ‘It’s a matter of what’s in his heart.’ Yes, she seemed to admit, he may be factually incorrect, but his words echo in their own way his genuine pro-life sentiments. Santorum spoke from his heart and that is what counts. I was reminded of Tony Blair’s insistence that he ‘genuinely believed’ there were weapons of mass destruction in Iraq, despite UN investigations repeatedly coming up empty. Or, to pick from the other end of the political spectrum, Mike Daisey’s claim that his fabricated story about exploitative working conditions at Apple’s supplier plants was ‘true’ in that it made people ‘really care’. 
The satirist Stephen Colbert famously called the non-truth produced by this kind of confused conviction ‘truthiness’. It’s tempting to see these examples as the worst kind of political evasion or psychological delusion (or both). Yet the idea manifest in them – that some higher truth is present whenever genuine feeling is elicited or marshalled – is neither unfamiliar nor so easily refutable. From self-help books to sentimental movies, the mantra ‘follow your heart’, in all its opacity, is a conceptual pillar of the modern culture industry. The question is whether it can do more than hawk absurdly simple advice in a complex world. Does some form of truth, however corrupted, underlie the pop psychology? As Elvis Costello once asked, in an earnest jab at his ironic generation: ‘What’s so funny about peace, love, and understanding?’ We might also inquire: ‘What’s so funny about following one’s heart?’ But where to begin? One might argue that Americans are now living out the latest chapter of Richard Hofstadter’s Anti-Intellectualism in American Life. In this 1963 study, Hofstadter documented how the intellectually oriented Puritanism of New England gave way to a more rugged Evangelicalism as Christianity spread west with the pioneers. Allying itself with an anti-elitist, democratic spirit, this ‘religion of the heart’, with its emphasis on ‘proper’ emotion and inner conviction rather than historically correct and rational understanding, came to dominate American life. Without intending to, Hofstadter helped to propagate the now-entrenched dichotomy between rational intellect and anti-intellectual emotionalism. The phenomenon that I am after here, by contrast, seems wrapped up in a broader romanticism, one that hints at a synergy between head and heart. It is employed by intellectuals and anti-intellectuals alike. 
What is the source of this general belief that Truth, in some higher and murkier sense than a ‘correspondence with reality,’ is a matter of inner feeling and conviction? And why, in the face of irrefutable evidence to the contrary, can this belief be summoned in even a partially convincing manner? In 1799, Friedrich Schleiermacher, one of the founders of modern liberal theology, wrote a series of speeches on religion dedicated to its ‘cultured despisers’. Like others before him, Schleiermacher laments the doctrinal quibbles and ritual idiosyncrasies that separate Christians from one another. In an effort to counteract the malicious divisiveness wrought by the ‘plastic spirit of high contemplation’, he argues that genuine piety requires no great contemplative feats. Rather, the true seat of religion is the ‘inner sanctuary’ of feeling in which one bears an ‘immediate consciousness’ of the divine. Since this lies ‘directly on the bosom of the infinite world’, it is ‘raised above all error and misunderstanding’. At various points in his speeches on religion, Schleiermacher associates this inner feeling with a kind of intuition, or particular way of grasping the world. He speaks of religious intuitions as ‘self-contained’ and ‘set apart’ from abstract thought, if nonetheless a form of apprehension and conception. In the second edition of his speeches, Schleiermacher drops many of these references to intuition. His fear was precisely that it made the ‘inner sanctuary’ seem too much like a place of special access to truth (take note, Mr Santorum). But the damage was already done. Schleiermacher’s inner feeling caught on as a mode of attaining higher forms of transcendent knowledge. Schleiermacher’s theology no longer draws the adherents it once did, given the general retreat from his brand of liberal Protestantism. 
According to the neo-orthodox theologian Karl Barth, who in many ways led this turn, Schleiermacher made the mistake of thinking he had really understood Christianity. In Barth’s Kierkegaardian view, this was the greatest offence a person of faith could commit. Schleiermacher ‘did not speak as a responsible servant of Christianity but … as a free master of it’, and the hubris of his theological style is echoed in the calm self-assurance of his vision of piety. Though much closer in spirit to Schleiermacher, the theologian and philosopher Paul Tillich expressed similar worries about the theology of religious feeling. Too much emphasis on inward conviction, he believed, ran the risk of devaluing the external world to the point of narcissistic isolation. By the time these criticisms were launched, the liberal Protestant theology of the early 19th century had already metamorphosed, transcending the bounds of theology. As the modern concept of religion emerged in the work of comparativist pioneers such as James Freeman Clarke and Cornelius Tiele, the template for determining the ‘great religions’ of the world was undoubtedly their own Protestantism. As such, religions were (and still are) seen as ‘faiths’ that involve ‘beliefs’, centred around the teachings of divinely inspired individuals (Buddha, Mohammed, Zoroaster). This means that in addition to being one of the so-called world religions studied in departments of religion, Protestantism is also the conceptual frame for the entire field. One of the ways this is blatantly evident is in the emphasis placed on religious experience. In William James’s The Varieties of Religious Experience (1902) and Rudolf Otto’s The Idea of the Holy (1917), both founding texts in the academic study of religion, one finds the idea that the essence of all religion lies in a sui generis feeling for an ineffable ‘more’ in the universe. That’s a clear inheritance from Schleiermacher. 
For this academic class, religion was a universal feature of humankind (so-called homo religiosus). All human beings were supposed to bear within them the kind of ineffable feeling that points to a higher truth. And as this thinking seeped into popular culture, so Schleiermacher’s theology transcended its Christian context. I do not mean to suggest that Schleiermacher is to blame for the Rick Santorums of the world. But his ideas did contribute, through the simultaneous secularisation and universalisation of liberal Protestant theology, to a situation where some form of truth can be appealed to from the safety of an ‘inner sanctuary’. Tillich was exactly right when he said that the theology of religious feeling lent itself to a subjectivism that spurned external validation. All the same, it is important to recognise that Schleiermacher invoked religious feeling not to insulate belief from criticism, but to encourage interreligious dialogue in a pluralistic world. Given the bounteous variety of systems and practices, he looked to inner feeling as the essential oneness around which religious people could cohere, celebrating rather than fighting over their differences. In an impassioned defence of pluralism that still reads freshly today, he asserted that truly pious people honour all religious concepts and rituals and flee ‘with repugnance the bald uniformity which would again destroy this divine abundance’. In short, Schleiermacher’s theology of feeling was an attempt to open conversation between people with different doctrinal allegiances. Its use in political discourse tends, by contrast, towards the evasion of dialogue. A proper historical sense is not the only gain here. The problem is that an essentially theological point, divorced from its theological context, has run rampant in the absence of a corresponding shift in justificatory framework. 
Perhaps by returning to that framework we can show the limits of the original point, engaging and critiquing it without rejecting it as nonsense. Schleiermacher’s speeches on religion quickly qualify the idea that feeling is the true site of religion. He argues that this inner sentiment, if it is to exist at all, must find outward, social expression. It is in dialogue with people of differing beliefs that true religious feeling is cultivated. The implication is that an inner feeling that resists outward manifestation (and spurns dialogue) is no feeling at all. It is a simulacrum of feeling, a narcissism that poisons the social realm by masquerading as authenticity. On this reading, politicians who call upon ‘what’s in their heart’ to justify their untruths do not speak out of inner conviction. They speak from the false confidence of men in artificially small ponds of their own making. If they find an audience, it is not because they are catalysts of dialogue, but because their adherents, seeing a reflection of themselves, mistake the familiar for the true.
Benjamin Y Fong
https://aeon.co//essays/the-heart-has-its-reasons-of-which-reason-knows-nothing
https://images.aeonmedia…y=75&format=auto
Fairness and equality
Back from Afghanistan, a former Marine bears witness to the estrangement of the home front
If you’re fortunate enough to visit the Kunsthaus Zürich in Switzerland, odds are you’ll saunter by Max Beckmann’s Strandpromenade Scheveningen with little more than a passing glance. I don’t blame you. The permanent collection is massive and stocked with works of far greater seductive power. But the scene of a young man making his way along a menacing waterfront, with all the elements (human and otherwise) arrayed against him — this struck a chord with me. I learnt later that Beckmann served as a field medic during the First World War, and I figure that’s the key. What many might view as a mundane depiction of a gusty day at the beach, I interpreted as a harrowing expression of the alienated veteran. I even mistook the street lanterns for decapitated heads, morbidly draped. Ever since I stumbled on the Beckmann painting in early June, I’ve been brooding over it. This is curious, as at the time I happened to be hopscotching across Europe in a dynamic fashion, trying to escape the half-decade of military life behind me and the untold years of academic tedium ahead. I spent three weeks on an organic farm in central France, scraping dung from a donkey’s stable and instructing the owner and her two daughters on how to block a jab in martial arts or prepare a mean burger. I walked 250km of the Saint Jacques de Compostelle trail with an oversized pack verging on 30kg (‘piss-poor planning,’ as we say in the ranks). I slipped into an advance viewing of On the Road at the Cannes film festival, thanks to the manoeuvrings of a dear and well-connected friend. I couch-surfed with a gay man in Paris, where he and his pals accompanied this straight guy to All Fours, the most flaming restaurant in town. My final night in Barcelona, after surviving a sweltering and self-imposed itinerary of all things Gaudí, I befriended an Italian girl, a graphic artist born and raised in Rome, who took me on a tour of her favourite late-night haunts. 
And given it was the season of Euro 2012, I passed and smashed beers with dozens of jovial unknowns at bars and bistros from Berlin to Amsterdam to Brussels, celebrating the world’s most beloved sport. Max Beckmann (1884-1950), Strandpromenade in Scheveningen (1928), Kunsthaus Zürich © 2012, DACS, London. One might imagine that these are the memories that linger. And yet, one would be wrong. It’s the ominous Strandpromenade that holds me. Unlike Beckmann, not to mention some of the Marines with whom I have served, I didn’t experience any war-related trauma during my (almost) year-long tour in Afghanistan. I played occasional spectator to vehicle explosions up close, or bodily horrors from afar, but I never endured the bleedings-out of enemies and comrades. Nor did I submit to a daily routine of foot patrols or other missions that could reasonably fall under the rubric of imminent danger. I functioned as an intelligence officer, not a grunt. I might have flown out of dodge with a combat action ribbon, but my deployment, in toto, was conclusively antiseptic. So why the hang-up? All veterans, no matter their history, register some level of estrangement at the sight of their homecoming. Modern war is too grave, and modern peace too frivolous, for this to be otherwise. The return to celebrity gossip and ‘reality’ television can be grating. It has a tendency to awake and beckon. Military service can induce a stubborn and sombre awareness of other people’s suffering. This is a way of being in the world that might grip any individual, community, or vocation, but perhaps is most violently embodied in Beckmann’s sad strolling veteran. This doesn’t mean our soldiers are saints — far from it. And it must be noted that most of this awareness, most of the time, is reserved for buddies or notional buddies, always out there at the front. 
Regrettably, it’s a surprise to come across a fellow vet whose sympathies are extended beyond his band of brothers and sisters, much less to the everyday toils of Afghans or Iraqis, or in its broadest form, all those struggling under oppressive systems and forces, at home and abroad. But it’s a severity of awareness all the same, one that’s less developed among my civilian counterparts, even those with elite credentials, privileged upbringings and liberal educations. For the enlightened rich, their cosmopolitanism may be complete, but it’s also abstract and slight. For those on the ground doing the empire’s bidding, their empathy may be limited in range, but it’s urgent, compelling, and real. It’s conventional American wisdom that men and women in uniform bear the brunt of war so that everyone else can enjoy their freedoms. A more pointed restatement might be that our uniformed services deal in gloomy realities so everyone else can go about their juvenilia as if such realities didn’t exist. But what if this formulation requires a reversal of logic? What if the realities at play — imperial war, global poverty, barbaric inequity and ecological pillage — persist exactly because our comfortable classes increasingly indulge in habits of mind and deed in keeping with eternal adolescence? In other words, the standard mantra of war service appreciation would be a grand old excuse for justifying mass narcissism. When the complacent well-to-do accidentally find themselves sitting next to a soldier on a commercial airliner, they forgive their own apathy by offering: ‘Thank you for your service.’ But in the ears of the most perceptive recipient, the gesture is nothing more than a polite way of saying: ‘Thank you and fuck you.’ For all my righteous indignation, what I say here isn’t quite fair. There’s a wanton element to it that undermines my call for cultural seriousness. For starters, I’m assuming too much. 
I’m also discounting the possibility that maybe there’s something to be said for a society that has effectively marginalised upsetting feelings and thoughts. A more mature protest might not demand that everyone prove as hopelessly serious as the sad strolling vet. It would be satisfied if all shared a little in his seriousness, understanding that, as long as the majority of the world’s inhabitants continue to live and die in Hobbesian style, we all bear social responsibility; that the deepest ideals of citizenship have not been rendered obsolete in the wake of progressive taxation or the iPhone, and that in fact they’ve increased amid the growing noise. Let me leave you with something concrete. This past summer, as I happily joined in the drinking and viewing rituals of some of Europe’s loudest soccer fans, I couldn’t help but notice the half-time and post-game course of events. The television broadcast would flip to something news-related, usually involving the continent’s economic and social implosion (or possibly, the war in Afghanistan), the volume would be rolled down to the point of nothingness and everyone would tend to each other’s jokes or refills. While I understand that a bar is not the place to go in order to brave life’s difficulties, especially on game night, I still find this to be a fitting metaphor. The challenge for myself, for the readers of this magazine and for all able citizens, is to conceive and commit to a more humane balance. It is to remind our neighbours, however gently, that as long as there is blood, we are required to conduct our affairs with a measure of sobriety. How we go about this is arguable. It is my own conviction that the tactics must be multiple and reinforcing, big and small. One possibility, for example, might be an international convention led by disheartened war veterans, a kind of Winter Soldier Investigation for America’s 21st-century wars. 
Another suggestion would be to recruit some of these same socially conscious vets for political office, as well as any other persons who can challenge ruling orthodoxies with hard truths born of experience. Whatever steps are taken, they must venture beyond progressives chiding in the company of progressives or radicals fuming to the cheers of radicals. We can’t afford such sectarian pride. The stakes are too high. We’ve seen too much of the fallout. And it’s our job to bear witness to the wider court, to confront injustice as best we can, together. Lyle Jeremy Rubin blogs at myrivercityblues.com.
Lyle Jeremy Rubin
https://aeon.co//essays/an-afghanistan-veteran-on-the-bubble-of-the-home-front
https://images.aeonmedia…y=75&format=auto
Childhood and adolescence
Schools are in the business of forming character – so what kind of people will thrive in the 21st century?
We live in a morally bashful age. Perish the thought that anyone might try to impose their values on anyone else. Trying to ‘adopt the moral high ground’ sounds, to modern ears, arrogant or hubristic. You risk becoming a figure of fun, like a Speakers’ Corner tub-thumper. Education colludes with this squeamishness by pretending that the only serious questions it faces are technical ones, such as how are we going to raise standards? Or what are the most appropriate methods for testing students, and when, and how much? And should we have an ‘English Baccalaureate’, or a six-term year? But this coyness is both weaselly and pusillanimous. Education is essentially a moral enterprise. Whether overtly or covertly, every aspect of a school system is riddled with value judgements about what is worth knowing, and what kinds of young people we are trying to turn out. Words such as ‘standards’ and ‘appropriate’ merely finesse the underlying moral questions. They have only the appearance of neutrality, for we only need ask ‘standards of what?’ or ‘appropriate to what end?’ for their value-laden nature to be hauled to the surface. Only if we assume that standards refer, self-evidently, to performance on national tests — with a sprinkling of statistics about attendance and exclusions — do the moral questions seem to disappear. Despite occasional bursts of rhetoric about developing that mysterious beast ‘the world class workforce’, the goal of most education ministers turns out to be beating Singapore or Finland in the tables of PISA, the Programme for International Student Assessment: in other words, to keep racking up the test scores, without stopping to think what those scores are meant to indicate. Examination results are proxies for our underlying values and intentions, not ends in themselves. Most of what kids learn in school they forget within weeks of having taken the test. 
As Einstein said, ‘Education is what remains after you have forgotten everything you learnt in school.’ So what are the valuable residues which we want for all our young people after those 12 long years in school? On this question, there is, from many current governments, a deafening silence or, at best, a feeble voice saying ‘a place at your chosen university’, as if this were something to which all students should aspire (despite there being places for only just over half of them in the UK). Politicians are not alone. I give many talks to head teachers, and I often put this situation to them. Imagine you run into a young man who left your secondary school a couple of years ago. He stops you and, out of the blue, thanks you for the wonderful education you gave him. You are puzzled, because you recall that he only scraped two poor GCSEs. So you suggest that he must be referring to the friendships he made, or to his part in the very successful school production of War Horse. True, he says, but that’s not what I meant — I was talking about the core education I got. And now you really are at a loss, and you ask him what he means. What does he say? If we can’t imagine a clear answer to this question, I think we are morally lazy, and probably corrupt — don’t you? If, after 100 years of tinkering and innovation, roughly half of all young people still don’t get a decent secondary qualification, if millions of school-leavers still can’t read well and thousands of students vote with their feet every day (not because they are inherently lazy or stupid, but because they can see no value in what school is offering) then surely it is time for a deeper look at the aims and values of education.

Too much chalk and talk: traditional schooling can deaden any child. 
Photo by Piotr Malecki/Panos Pictures

The idea that schools are more or less as they must be, and it is just a shame that so many youngsters lack the ability to do well at them, is an anachronism, an apology for the status quo, which in any case has been shot dead by the contemporary science of intelligence. Genes establish only a wide range of possible intellectual development; where you end up is determined largely by experience. Lauren Resnick, director of the Institute of Learning at the University of Pittsburgh, defines intelligence merely as ‘the sum total of one’s habits of mind.’ Ability is not fixed; it is elastic, and your environment either stretches it or not. If teachers continue to believe in theories of fixed intelligence, they won’t look for ways to stretch it, and the belief becomes a self-fulfilling prophecy. Actually, it’s worse than that. If youngsters pick up the belief in fixed ability, the self-fulfilling prophecy gets installed in their own minds like a computer virus. Studies by Carol Dweck, professor of psychology at Stanford, have shown that this virus damages students’ own ability to learn. They attribute failure to lack of ability, so they simply stop trying. If you have been taught to think of yourself as ‘low ability’, it’s obvious that your life chances are going to be damaged and dampened. For the many school leavers convinced that they are bad at learning, the social and economic as well as the personal costs are incalculable, and unforgivable. But high-achievers suffer from this virus too. Student counsellors at Oxford and Cambridge are seeing a growing procession of unhappy undergraduates who feel fraudulent — and therefore anxious or depressed — when the work gets harder and they start to struggle. 
They have not learnt how to ‘flounder intelligently’; indeed, they have been systematically deprived of opportunities to learn how to be resilient and resourceful by well-intentioned teachers who have spoon-fed, coaxed and cajoled them into their results. The fact is, education has always been about more than knowledge manipulation and test scores. It is also, inevitably, about the formation of character. Schools are cultures that are saturated with values: who to admire; what to respect; what is worth knowing; who has a right to question what; where is the line between imagination and silliness, or teasing and bullying; and so on. And it is not in the School Rules that these judgements live; it is in the minutiae of daily interactions with teachers and older students, who demonstrate through their behaviour and their expressions what is worth noticing and what is to be treated with silent contempt; what is ‘cool’ and what is ‘babyish’; what is ‘funny’ and what is ‘insolent.’ Inevitably, some habits are valued and encouraged, and others disdained or ignored. To be a school student is to undergo a protracted social apprenticeship. If, by their actions, teachers repeatedly value politeness over creativity, or being correct over trying something new, that is a value choice. As we cannot avoid making value choices, it behoves all of us in education to make these choices consciously and thoughtfully, in the light of a coherent sense of the purpose of education in and for the 21st century. Dropping Dickens in favour of JK Rowling is not the point. We need to decide whether we, by our actions, value neatness over the discerning consumption of internet-based information, or favour resilience over honour. That debate gets sidelined by a focus on tests and standards. 
And in its absence, ministers tinker with the peripherals, trying to make marginally more efficient a system that may not — as many people are now saying — be fit for purpose at all. In the 19th century, they didn’t pussy-foot around. The elite private schools talked happily of developing qualities such as team spirit, fair play, judgment and rationality. They produced young men who could outwit an enemy, conduct a trial, preach a sermon and hold their own at High Table in a discussion of arcane subjects. And it was naturally assumed that, as we only needed so many Leaders and a great many more Followers, so mass education (for the followers) sought to develop a complementary character: obedient, punctual, punctilious, honest, tidy and clean, as well as possessing a degree of basic literacy and numeracy. Nowadays, quite rightly, we no longer want to be associated with a school system that sorted children so obviously and so divisively into potential ‘winners’ and ‘losers’ and trained their characters differentially, so we have become nervous about talking about character formation at all. But the problem is not in talking about character per se. It was only the particular sets of valued characteristics that needed challenging and updating. Since schools can’t avoid being in the character-forming business the only questions now are: which characteristics should we value? And how are we going to cultivate them, not just at the level of rhetoric and fond hopes, but deliberately, systematically and demonstrably? Broadly, contemporary societies seem to care about three things: national prosperity, social cohesion and stability, and personal well-being. But the personal attitudes that will lead towards these three ‘goods’ are not eternal: they depend on the nature of the world. So even if those three aspirations are taken for granted, educational values — the traits that we want to develop in young people — will vary. 
We need to think about the world in which we want our children to flourish before we can say what qualities they are likely to need. Education should not be driven by a dogmatically-held set of eternal verities, but by a clear-sighted look at what the demands, uncertainties, risks and opportunities of the future will be. Merely to assert the value of Latin translation or the Periodic Table, in the face of these challenges, is a cop-out. It is a refusal to do the intellectual, moral and imaginative work that is needed. There are a good many educational organisations around the world where this re-imagining has started to take place. In the past 10 years, some specifications have been produced by individual schools; some by national education systems; some by researchers in the rapidly growing fields of positive psychology and the learning sciences; and some by commercial or not-for-profit organisations. At the British independent school Wellington College, the five ‘core values’ of kindness, courage, integrity, respect and responsibility are complemented by the ‘eight aptitudes of learning’ (derived from Howard Gardner’s ‘multiple intelligences’): linguistic, logical, cultural, physical, spiritual, moral, personal and social. New Zealand wants all young Kiwis to become ‘confident, connected, actively involved lifelong learners.’ Singapore is committed to producing youngsters who are ‘creative and imaginative’ and ‘able to think, reason and deal confidently with the future.’ In 2009 the lower secondary curriculum in England was reorganised to create young people who would be ‘independent enquirers, effective participants, reflective learners, team workers, self-managers and creative thinkers.’ The International Baccalaureate, too, has developed a set of desirable traits it calls the Learner Profile. 
It wants all students to develop the dispositions to be ‘naturally curious, to exercise initiative, to express ideas confidently, to approach unfamiliar situations without anxiety, to show integrity and honesty, to be sensitive towards the needs and feelings of others, to be open-minded, to be well-balanced, and to be reflective.’ An offshoot of Martin Seligman’s positive psychology movement called ‘Values in Action’ names 24 ‘character strengths and virtues’ on which education should be based. Despite the diversity, there is a fair amount of overlap between all these lists. Broadly, there are two sets of widely-agreed virtues, which we might call the prosocial and the epistemic. The prosocial virtues tend to include honesty, trustworthiness, tolerance, conviviality, kindness, lack of hubris and ecological responsibility. They recognise the globalised and multicultural nature of the modern world, and stress virtues of social harmony, as well as those of the responsible employee. The set of such values borrows from, but also differs from, the virtues of the 19th century. Deference and cleanliness tend not to appear these days. It is the other set, the epistemic virtues to do with thinking, learning and knowledge, that would have been truly unrecognisable in both the Eton College and the Bash Street Elementary School of a hundred years ago. These virtues are deeply responsive to the turbulent global and digital world in which children find themselves. They are focused on uncertainty and the need to learn, and are increasingly seen as relevant to all young people.

Teaching in a Paris banlieue: building confidence and character. Photo by Stuart Franklin/Magnum Photos

It is a cliché that we live in times of escalating uncertainty, complexity, ambiguity, choice and individual responsibility. Through the electronic media, children are daily bombarded with conflicting models of what to value and how to live. 
Their communities often fail to offer strong, unanimous guidance about how to choose wisely, or offer little that young people are willing to heed. It is also increasingly obvious that young people (especially in the UK, according to recent reports) are not coping well with this freedom and diversity. Classic symptoms of stress — escapism, recklessness, drug abuse, anxiety, depression, self-doubt — are high across the whole social spectrum. If stress reflects a widening gap between the demands of one’s life and the resources one has to cope, many young people are clearly feeling badly under-resourced. As the core function of education is precisely to develop the mental and emotional resources that young people need to cope well with the real demands of their real lives, it is clearly not doing its job. Those resources are psychological as much as they are material or social. This is surely the heart of the question of what schools are for. Sadly, these vital national and global conversations are still at a vulnerable stage of development. As rapidly as these lists of honourable aims emerge, so they seem to get sidelined. Cynics find it easy to poke fun at them. They assume that because it is hard to put such good intentions into practice they are intrinsically laughable. Teachers are sometimes bewildered as to exactly what is being asked of them. ‘How, exactly, are you asking me to be different?’ is a question that has rarely been given a good answer. The language in which these aspirations are couched has often been vague and highfalutin’. Not many parents immediately understand the need for their children to develop ‘metacognitive awareness’ or ‘autonomous agency.’ Some of these attempts have been derailed by rightward shifts of governments, with their talk of going ‘back to basics’ (though they never do go back to the real basics, to the fundamental purpose of school in the modern world). 
Of course it is more difficult to demonstrate growth in a young person’s kindness, or their ability to concentrate, than it is to give them a score on sums or reading. And of course some of the pioneering attempts to update the character curriculum for the 21st century have been a bit woolly or grandiose. But, as my dad used to say, if a job’s worth doing, it is worth doing badly (at first), then learning from your mistakes and gradually doing it better. That is the stage we are in right now, as we tinker our way towards a genuinely 21st century education. We shouldn’t give up now. One of the things we have learnt is that getting the language right is important. Too often character aspirations are so vague that they are pretty vacuous. Does ‘respecting the environment’ mean lobbying the G8? Demanding James Lovelock come and talk to the school? Insisting that school-meals are organic? Or merely watching An Inconvenient Truth, not dropping litter, and grudging trips to the bottle bank? Is it always a good idea to ‘approach unfamiliar situations without anxiety’? Throwing rocks at an old bomb on a beach is not so smart. Is it always good to ‘persist in the face of difficulty’? I certainly wish I had learnt earlier in my life that it was OK to leave unrewarding books unfinished. The virtues we want for children have to be clearly enough expressed that they can think about them, not just obey them, and can easily relate them to their own experience. If education is to change, it will not be simply by government fiat. It will be because thousands of young people and their families and teachers understand the value of the changes and start to demand them with greater urgency. 
We need to communicate the real practical rewards of cultivating virtues like tolerance and patience: being grateful and kind are strongly correlated with measures of well-being and life satisfaction. Crudely, nicer people are happier people. Everyone needs to know that. More urgently still, we need good ways of talking about the epistemic virtues in particular: the habits and qualities of mind that make someone a confident, powerful learner (and words like ‘prosocial’ and ‘epistemic’ are not the right ones to use on parents’ evenings). It is impossible to ‘improve’ the running of schools unless we have a clear idea of what those virtues are, and we need an agreed vocabulary to do that. Without that clarity, all educational innovation falls back obsessively on ‘raising standards’ as traditionally, and inadequately, defined. In my book What’s the Point of School? (2008) I had a stab at describing the virtues that make people good at coping with uncertainty and complexity. Since then, I’ve been refining my ideas as I’ve worked with hundreds of schools and thousands of teachers around the world through the Building Learning Power program. Some of my virtues are drawn from the research that lies behind positive psychology; some are derived from asking teachers and young people themselves; and some are suggested by the burgeoning literature of the learning sciences. I think it is important that the virtues of uncertainty are broad enough to take beyond the school gates: that, surely, is the point of learning how to learn. Dealing with the real uncertainties of modern life, and developing one’s own passionate interests and avocations, are usually not at all like school. The carefully planned, predigested, sequenced and graded kinds of bite-size learning in which conventional schooling trades are not the kinds of learning for which young people need to be prepared. 
An apprenticeship in passing exams leaves even the most successful with a skill for which there is little call once they have left university. Few job adverts specify that applicants ‘must be able to sit still, copy down notes, and regurgitate disembedded chunks of information under pressure.’ So what are the learning virtues that I think are most important? There are eight:

1. Curiosity is the starting point. If you are not interested in things that are difficult or puzzling, you won’t engage. Curious people have an abiding sense of inquisitiveness. They wonder how things come to be, how they work, whether they might be otherwise. They live in a wonder-full world, not a world of dead certainties and cut-and-dried rules. They know how to ask good, pertinent, penetrating questions. They have a healthy scepticism about what they are told.

2. Young people surely need courage; not necessarily physical valour but the capacity to be up for a challenge, to be willing to take a risk and see what happens, not always playing it safe and sticking to things they know they can do. Courageous learners have the determination to stick with things that are hard (although it is also a virtue to know when to quit, not because you are feeling stupid but because it really isn’t worth it). They bounce back from frustration; they don’t stay floored for long.

3. Exploration is the active counterpart of curiosity. Inquisitive people enjoy the process of finding things out, of researching (whether it be footballers’ lives or particle physics). They like reading, but they also enjoy just looking at things, letting details and patterns emerge. They can let themselves get immersed in a book or a game; absorption in learning is often a pleasure. They can concentrate. They like sifting and evaluating ‘evidence’, not just reading or surfing the net uncritically, and their exploration usually breeds more questions. 
Explorers are also good at finding, making or capitalising on resources (tools, sources of information, people) that will support their investigations.

Spontaneous invention: British teenagers transform an abandoned factory into a playground. Photo by Peter Marlow/Magnum

4. Experimentation is the virtue of the practical inventor, actively trying things out to see if they work. Experimenters like tinkering, tuning and looking for small improvements. They don’t have to have a grand, ostensibly foolproof scheme before they try something out; they are at home with trial and error. They spend a good deal of time just playing with materials — paint, cogs, computer graphics — to see what they will do, uncovering new ‘affordances.’ They are happy practising, they enjoy drafting and redrafting, looking at what they’ve produced — a garden bed, an essay, a melody — and thinking about how they could build on and improve their own products and performances.

5. Imagination is the virtue of fantasy, of using the inner world as a test-bed for ideas and as a theatre of possibilities. Good imaginers have the virtue of dreaminess: they know when and how to make use of reverie, how to let ideas come to them. They have a mixture of healthy respect and sceptical appraisal toward their own hunches and intuitions. They use mental rehearsal to develop their skills and readiness for tricky situations. They like finding links and making connections inside their own minds. They use imagery and metaphor in their thinking.

6. The creativity of imagination needs to be yoked to the virtue of discipline; of being able to think carefully, rigorously and methodically, as well as to take an imaginative leap. Reason isn’t the be-all and end-all of learning by any means, but the ability to follow a rigorous train of thought, and to spot the holes in someone else’s argument, as well as your own, is invaluable. 
Disciplined learners can create plans and forms of structure and organisation which support the painstaking ‘crafting’ of things that usually needs to follow the ‘brainwave.’

7. The virtue of sociability, and of judiciously balancing sociability with solitariness, also seems essential. Effective learners know who to talk to (and who not), and when to talk (and when to keep silent) about their own learning. And they are good members of groups: they know how to listen, how to take turns, what kinds of contribution are helpful. They have the knack of being able to give their views and hold their own in debate, and at the same time stay open-minded to and respectful of others’ views: of giving feedback and suggestions skilfully and receiving them graciously. They are generous in sharing information, ideas and useful ways of thinking and exploring; and they are keen to pick up useful perspectives and strategies from others.

8. Finally there is the virtue of mindfulness, in the sense of being disposed to reflection and contemplation, taking time to mull things over, take stock and consider alternative strategies. Not paralysed by self-consciousness but capable of self-awareness, reflective learners can take a step back every so often and question their own priorities and assumptions. Thinking about your own thinking isn’t always useful (despite the current fad for ‘metacognition’) but it is needed at strategic moments. Mindfulness means giving yourself the time to go deeper, to see what conclusions you may have leapt to, and let a bigger picture emerge.

This list is merely a provocation, an invitation to argue. I’d like to hear suggestions for how it can be improved. 
But I hope it sounds plausible, even fruitful, both to 11-year-olds struggling with French and 55-year-olds struggling with golf or postmodernism; to people who think and intellectualise their learning a lot, and those who don’t; to people who work at Aardman Animations, Manchester City, Goldman Sachs — and at the local hairdresser’s, motor mechanic’s, or school. No doubt the list can be improved, but as Samuel Beckett said, ‘Try again. Fail again. Fail better.’ The big question is: how do we put these kinds of virtues in action? What does it take for schools to become systematic incubators of learning virtues, so that their students graduate, whatever their grades, with deep-seated habits of curiosity, courage and the rest? How do we make schools into a kind of ‘virtue gym’ where students get to practise their mental fitness, not just talk about it? To answer this question, we need first to weed out what doesn’t work. First, those moral exhortations (much beloved of head teachers on what used to be called speech days) have proven ineffective. Merely talking about ‘character’, desirable though that vocabulary is, does not cultivate the sought-after characteristics, any more than sticking labels on a pig’s ears, legs and tail helps it to grow. Being able to discuss, defend and even agree with the importance of a particular virtue is no guarantee that one will manifest it in practice. For example, when a group of young people were given tests of their moral reasoning ability, their results did not correlate at all with their actual level of antisocial behaviour. Troubled teenagers might be perfectly able to ‘tell right from wrong’; they just don’t choose the ‘right’ option in the heat of the moment. Knowledge and belief get trumped by habit and impulse all the time. Just as with moral habits, so with learning itself. While being able to talk about the nature of good thinking is useful, merely being able to do so does not necessarily make you a better thinker. 
I have watched lessons in which, for example, youngsters have been parroting Howard Gardner’s theory of multiple intelligences without any evidence that any of them have become the slightest bit more multiply intelligent. We are all, as one of my students put it so eloquently, ‘knowledgeable about things we are crap at.’ Another thing that doesn’t work, in cultivating these learning habits, is little set-piece workshops or activities that are bolted on to ‘business as usual.’ Research on Thinking Skills programmes, for example, shows that, while such activities are often enjoyed and appreciated by students, their benefits may neither last nor spread to other areas of their learning lives in or out of school. It is no use merely tacking on an interesting-looking course of ‘problem solving’ or ‘learning to learn’ if the other 95 per cent of students’ time continues to be spent learning to be passive and credulous. The relative ineffectiveness of the skills-training approach is exemplified by the disappointing results of the UK Resilience Programme. Based on a much-hyped programme designed by the University of Pennsylvania, the package, launched in 2007, comprised a set of lessons and workshops aimed at helping young teenagers become more able to face challenges in school and in their lives. The final evaluation of the programme in 2011 found that the beneficial effects of the workshops generally lasted only as long as they were continued, and had faded away a year later — except in the case of the most vulnerable and lowest-achieving youngsters. The disappointing impact was put down, by the researchers, to the ‘over-didactic’ and ‘bolt-on’ nature of the interventions. The thing is, virtues are not just skills, they are also habits or dispositions. Possessing the virtue of curiosity does not simply mean that you have the ability to ask good questions when someone prompts you. It means having a questioning frame of mind. 
The goal of character education cannot be merely to train skills. A skill is something you can do; not necessarily something that you are constitutionally disposed to do. A virtuous school has to be more than a ‘training’ institution; it has to be an incubator that develops and strengthens the desired qualities of mind through everything it does. So, how do teachers strengthen youngsters’ curiosity? Asking what puzzles them is a good start. Greet them on a Monday morning by saying, ‘Who found a really good question over the weekend?’ Have a ‘wonder wall’ full of sticky notes that capture the children’s questions. Ask your secondary science class to ‘think like scientists’ and generate new hypotheses, and new questions, based on the experimental results they have just collected. What about courage and determination? Encourage students to think of difficulty as a challenge rather than a threat. Don’t let them think that finding something difficult is a sign of stupidity. (Darwin and Einstein were both notoriously slow learners. When faced with something genuinely tricky, slow can be the most intelligent approach!). Don’t think you are being kind by rescuing pupils from difficulty and frustration: you are merely reinforcing the idea that ‘sticking with difficulty’ is fearful rather than exciting. How do we build the habits and capabilities of the explorer? If we give children more resource-based projects, they have to learn how to do their own research and find their own resources. We can encourage them to question the knowledge claims they meet — in textbooks as much as in TV advertisements — and gradually build the habit of respectful, intelligent scepticism about what they read on Wikipedia or in the newspaper. Experimentation? Give students the opportunity to think about how to evaluate and improve work for themselves, both individually and collaboratively. 
Talk to them about the trials, travails, conflicts and uncertainties that lay behind the discoveries of Galileo and Newton, and the hard work and many drafts that ended in the waste-paper basket on the way to ‘All the world’s a stage’, ‘I wandered lonely as a cloud’ or the scripts for Fawlty Towers or The Office. Science students who are told about these struggles have been shown to remember information better and use it more effectively to solve problems. Imagination too can be taught. Creative people are those who have learnt the knack of toggling between linear, purposeful kinds of thought, and mental modes that are more dreamy and imaginative. Schools have been based on bad psychology, where they have presumed that imagination and visualisation are childish or immature ways of knowing, to be superseded, as rapidly as possible, by those that are deliberate and articulate. Children can be given the chance, as one little girl put it to me, ‘to let our brains cool down so they will bubble up with new ideas.’

No rules allowed: a teacher at the Yomi Yomi Institute in South Korea harnesses the benefits of serious play. Photo by Thomas Hoepker/Magnum

Naturally we need to help students develop the discipline of being able to plan, think things through carefully, anticipate consequences, and apply the painstaking skills of crafting that lead to a satisfying essay, proof, bird-box or painting. The American teacher Ron Berger, in his marvellous book An Ethic of Excellence, has shown how even low-achieving or demoralised students can be helped, through the ethos of the school, to develop a craftsmanlike attitude to their work, and a pride in having produced something to the best of their ability. How do we teach sociability? One teacher I know regularly has her students decide, after being given a task, whether they want to pursue it on their own, with a small group of peers, or in a group with her. 
Afterwards, they reflect in their ‘learning journal’ on whether they thought they had made the right choice or not, and why. Another primary schoolteacher has a class that regularly changes the size and constitution of the groups they are working in because ‘when we are grown up, we will have to get on with all sorts of people, not just our friends, so we want to learn how to do that now.’ Finally, how do we teach mindfulness and reflection? Keeping a journal gives pupils time to ruminate and, as another student put it, ‘to suck the juice out of our experience, so we will learn from our choices and mistakes, and so make quicker progress.’ Through small exercises and gentle reminders, a teacher can get her students into the habit of regularly standing back, taking stock, and thinking about what they are doing: useful life skills in anybody’s book. The beauty is that all teachers could make these small adjustments to their modus operandi. It does not involve chucking out Shakespeare in order to make time for some nebulous new subject called ‘learning to learn.’ Learning to learn, in these classrooms, becomes a kind of underlay to the more explicitly patterned subject-matter. In spite of what the traditionalists think, there isn’t a trade-off between content and learning virtues: the two depend on each other. The fact of the matter is this: when students are helped to become more confident and articulate about the process of learning itself, they do better, not worse, on the tests. Young people who have been helped to know how to think and persevere take these strengths with them into the examination hall, as well as onto the sports field or the concert stage. With a hundred small adjustments to the milieu of schools and classrooms, we can produce young people who are more confident, capable and enthusiastic about engaging intelligently with difficult things. 
When we articulate the virtues of uncertainty in clear and concrete terms, we find we can teach in a way that prepares young people both for a life of tests and the tests of life. For more information please visit www.guyclaxton.com.
Guy Claxton
https://aeon.co//essays/a-life-of-tests-is-no-preparation-for-the-tests-of-life
https://images.aeonmedia…y=75&format=auto
Ethics
A former vegan who now hunts deer is troubled by what it takes to put food on our plates
Once upon a time, I believed in the tidy taxonomy of the grocery store. In the meat coolers, near the back of the store, I could find Animalia: beef steaks, pork chops, chicken legs, and fish fillets. In other coolers, along a side wall, I could find gentler products from that same kingdom: eggs, milk, yogurt, and cheese. In other sections, I could find all things Plantae: vegetables, fruits, legumes, nuts, seeds, and grains. The realms seemed clear and separate, each kind of food carrying a distinct meaning. When I ate meat, that meant animal death. When I ate dairy products, that meant animal confinement. When, inspired by the compassionate teachings of Buddhist teacher Thich Nhat Hanh, I turned to veganism — that meant harm to nothing but plants. My conscience seemed clear. Eight years later, this fairy tale began to unravel. In the garden my wife and I tended, for instance, I began to see that squash and green beans were not just the fruit of plants. They were also the fruit of animals. Like all living things, our garden plants had to eat. As their hungry roots drew sustenance from the ground, nutrients had to be replaced. So each year I drove our pickup truck a few miles down the road and brought home a cubic yard or two of compost: rich, dark, dense material made from the manure of cows and other animals, and from their bodies as well, as farmers sometimes compost carcasses. I could have insisted on supplementing our own kitchen-scrap compost with fertilisers made from nothing but plants. Such products were certainly available. Most, though, were imported from out of state in bright plastic bags. Depending on them to feed our soil would, I reflected, be like subsisting on grocery-store tofu made from soybeans grown a thousand miles away, instead of eating chicken from a neighbour’s backyard or venison from nearby woods. 
These choices would keep animal products away from our garden and plates, but they made no ecological sense. And even if I found a local source of animal-free fertiliser, would it make a difference? Though crops can be grown without manure, such approaches typically require more acreage than do integrated plant-animal systems. Why till more land, and perhaps displace more wildlife habitat, for the sake of excluding domesticated creatures from the agricultural landscape? Though this might help shore up my own conceptual categories, would it serve any other purpose, any greater good? Plant-animal integration is, I realised, the norm in nature. It is how prairies and savannahs and all manner of ecosystems have been sustained for countless millennia. It is the most natural, ancient, and sustainable of systems — flora and fauna feeding one another in endless cycles. But our participation blurred boundaries I had taken for granted. If the squash and beans we grew were fed by local dairy farms, were we really eating just plants? In his book Peace Is Every Step, Thich Nhat Hanh reminds us to attend to interconnections, to look deeply into the origins of the materials of everyday life, including food. The more I looked, the more complex things became. In our own garden, I saw the earthworms we accidentally cut in two with our shovels whenever we turned the soil. I saw the beetles I crushed to protect tender young plants. I saw, too, that the compost we imported linked our garden not only to dairy products but also to meat: to give milk, cows must be impregnated. Pregnant cows give birth to calves. And virtually all male calves end up as veal. In larger-scale crop production, I saw prairie and forest habitats disrupted across North America. I saw birds, mammals, reptiles, amphibians, and insects maimed and killed by machinery and pesticides. 
Even in produce from small-scale organic farms, I saw rodent burrows cleared by deadly smoke bombs and deer populations kept in check by hunters and farmers alike. When I visit the grocery store these days, I realise we have a choice, but it is not simply the choice I once made between the purity of veganism and its alternatives, based on suffering. Walking down the aisles, we can let the orderly bins and shiny packages cultivate our forgetfulness. We can let ourselves believe in all the tidy separations: plants and animals divided into neatly compartmentalised kingdoms, food severed from earth, our shopping disconnected from others’ farming. We can let ourselves be comforted by our own ignorance, by everything we neither see nor want to see. Or we can remind ourselves of just how intertwined everything really is. Uncomfortable though it might be, we can remind ourselves that lettuce is not as innocent as it appears, that squash and green beans owe their existence to the lives and deaths of animals. We can remind ourselves that pastoral landscapes are not just backdrops for recreational hikes or idyllic rides through the countryside. They are not an ‘environment’ that exists around us. They are the places that feed us, the soil in which we are rooted. They are us. We can remind ourselves, too, of all the people who work the land for a living. Day in and day out, they draw sustenance, theirs and ours, directly from the earth. They know the nature of the places where they live and work — the soils and waters and climates and non-human inhabitants — more intimately than most of us do. They know the nature of living and eating more deeply, too. They know it’s a messy business. We can remind ourselves that our lives are not separate from theirs. As a teenage omnivore, I never thought seriously about the connections between my living and eating and the gritty realities of agriculture. 
Nor did I think about those connections as a twentysomething vegan, up on my ethical high horse, wanting nothing to do with the confinement, let alone the deaths, of fellow creatures. I assumed I could remain aloof from all of that. Only later did I begin to see more clearly. Those connections are, in the literal sense of the word, vital. They keep us alive. The teacher and the student, the artist and the office worker, the doctor and the attorney, are all utterly dependent on the farmer. Whatever romantic notions we might have about ourselves and our ethically or environmentally motivated food choices, the boundaries between vegans, vegetarians and veal eaters are somewhat ambiguous. We are all part of the same food systems. We can — and should — advocate changes in those systems, promoting both animal welfare and ecological health. Our efforts, however, will be most effective if people of all dietary persuasions can collaborate, remembering that we, like the foods we eat, inhabit an integrated whole, not isolated kingdoms. My wife and I, for instance, don’t buy beef or veal, yet we applaud the local farmers who produce those meats in humane, ecologically sound ways. And we recognise that the yogurt we eat is linked to the lives and deaths of cows, just as our garden is. It is easy to forget, of course. I know I do. In the bustle of everyday life, the interconnections slip my mind. I eat a bowl of salad and see nothing but greens. Then the phone rings. It’s a neighbour calling. Woodchucks have begun to obliterate her garden, in spite of the electric fence. I’m one of the few hunters she knows. Would I be willing to lend a hand? Ah, yes, I think. Hidden costs. Taking a deep breath, I fetch my .22 rifle.
Tovar Cerulli
https://aeon.co//essays/a-plate-of-salad-is-not-as-innocent-as-it-seems
https://images.aeonmedia…y=75&format=auto
Future of technology
Is there anything left for the green movement to do but assuage its grief in ritual and myth?
In late August, on a hillside in England’s South Downs where the sun beat down through the trees in stunning shafts, I wandered off to find a secluded place. The other members of my group had fanned out. Every now and then I would glimpse them through the woods. I pressed on until they disappeared from view, then picked up a piece of brushwood and began to scratch at the soil, flicking worms aside and breaking roots with my fingers. Perspiration washed suncream into my eyes and made me swear, the puerile, inventive swearing that goes with digging a hole with a stick. After about 10 minutes I had made a hollow a few inches deep. It was the diameter of my head. I lay down carefully and rested my chin on its rim, causing sweat to run off my nose into the soil. I took a deep breath and screamed as loudly as I could. Then I stood up and looked around. Nobody was in sight. With my foot I filled the hole, pressing the earth back down. I thanked it stiffly for receiving my scream, feeling, in the least rooted way possible, very English. ‘Myth,’ the storyteller Martin Shaw had said earlier in the day, ‘is the power of a place speaking.’ But this place was keeping schtum. According to my phone, there were about 15 minutes of wilderness initiation to go. I really didn’t want to stumble on any members of the group performing their own exercises, so I waited in the shadow of a tumbledown wall, listening. Screams crossed the wood from all directions. A few people seemed to be having several goes, one horripilating howl after another, their voices coming from different points of the compass each time. Our instructor had warned us that the first scream might feel forced. After that, perhaps the earth would pull something out of us, would break through to an authentic core of pent-up feeling. Some people were having so much pulled out of them it was hard to believe they would make it back to the muster point. We’d find them as husks, yellow cowls of evacuated skin. 
But the others all drifted back in what looked like a state of poetic contemplation, relaxed and springy. The men wore beards and medicine-man adornments — animal-tooth pendants, feathers behind the ear. The women sported Peter Pan tunics and yogically extended backs. Everyone was, as my mother would say, well-spoken. An older chap in a fishing hat announced that he had run into a pair of mourners. They were looking for a fresh grave in the wood; a boy had been buried there the week before. ‘It was like Hamlet,’ he said sagely. ‘You know. Death.’ Ah, we said. We were learning how to become grown-ups. Some of us looked to be in our 70s. Shouting into the ground hadn’t made us grown-ups; that was just a taster. The real process, if we ever chose to submit to it, would take days. We’d be goaded, provoked, all our hidden inner darkness would bubble up to the surface. Then we’d go off to a lonely place with only water to drink and a tarp to keep the rain off. There we would stay for four days and nights, just our galloping minds for company. Real grown-ups were in short supply, said our instructor, Tom Hirons, an acupuncturist and poet as well as a guide on wilderness rites of passage. We came from a culture that didn’t know how to make them. In our culture, becoming a grown-up was a radical act. ‘And that,’ Hirons said, ‘tells you how bad things have got.’ Real grown-ups would never have let the world get into such a state. I had joined 300 or so campers at the third Uncivilisation festival, held in the grounds of the Hampshire Sustainability Centre, amid the rolling fields outside Petersfield. This year’s version might have been the quietest music festival since the invention of loudspeakers. In the spirit of treading lightly upon the earth, no radios played in the campsite. The headline event was a fireside dance. 
On the Friday evening, Hirons, the instructor, drew a crowd by reciting a Siberian folk-tale in a shamanic mask, to the accompaniment of chimes and rattles. The two evenings of programmed music were candle-lit, entirely acoustic and, in the centre’s outdoor auditorium, close to the threshold of audibility. An intense preoccupation with story and ritual: Tom Hirons tells a Siberian folk tale. Photo by Andy Sansom/Aeon. At times, that created its own spell. Bethia Beadman, a vampish torch singer in the tradition of PJ Harvey, apologised for the slightness of her arrangements. She needn’t have worried: straining to catch them added something to their peculiar glamour. Other acts fared less well. Julian Gaskell’s rasping vocals got lost amid the frenetic gipsy jazz of his Ragged Trousered Philanthropists. It didn’t matter: offstage, folk musicians exchanged songs day and night. On the first night, guitars passed back and forth in the central marquee, all distinction between performer and audience erased. It was, as Uncivilisation’s charismatic architect Paul Kingsnorth told us, not a consumer experience. The festival is an outgrowth of an inscrutable cultural programme begun in 2009 by two journalists, Kingsnorth himself and Dougald Hine. Together they wrote a pamphlet called Uncivilisation: The Dark Mountain Manifesto. ‘We are at a time of social, economic and ecological unravelling,’ they declared. ‘All around us are signs that our whole way of living is already passing into history.’ The environmental movement had failed: the ship of society would never turn around in time. All that was left to do was prepare for the crash, and perhaps learn to look on the bright side. ‘The end of the world as we know it is not the end of the world full stop. 
Together, we will find the hope beyond hope, the paths which lead to the unknown world ahead of us.’ Considering the following it has picked up — Dark Mountain now has 10 regional chapters in the UK alone — the manifesto is a strange document. It uses and advocates an ominously Heideggerian ‘elemental’ language, written ‘with dirt under our fingernails.’ A surprising amount of it is dedicated to a reappraisal of Robinson Jeffers, an apocalyptic poet of the interwar years. He supplies the pamphlet with its epigraph and the project with its name: ‘The beauty of modern/Man is not in the persons but in the/Disastrous rhythms […] the dance of the/Dream-led masses down the dark mountain.’ One might think there would be bigger fish to fry on the eve of civilisational collapse than the reputation of a neglected author — if there are any fish worth frying at all. Yet the Dark Mountaineers treat literature with startling seriousness. Our whole trouble, they insist, results from ‘the stories we have told ourselves.’ By way of addressing the damage, the group has now published three anthologies of ‘uncivilised’ writing and art, produced by its swelling band of supporters. The styles and tones vary but all three collections stick to the keynotes of the manifesto: the inevitability of environmental and economic collapse; the spiritual imperative to return to the land; the search for stories to replace the ‘myth of progress’ that has captivated us all. When I first heard about Dark Mountain, this all seemed silly and perhaps a little menacing. The fatalism, the dreamy retreat into narrative — wasn’t that just a melodramatic pose? The emphasis on story struck me as typical artistic megalomania. To a man with a paintbrush, every problem looks like a matter of perspective, and arts movements are forever saying that what the world needs is more of their kind of art. Most damningly, there already seemed to be more than enough of this kind of art. 
The anti-technology polemics, the witchy nature mysticism and huntsman imagery, brought to mind nothing so much as English ‘neo-folk’ acts such as Sol Invictus and Death In June, mainstays of Britain’s far-right bohemia, with its reveries about masks and antlers and the Brownshirts. I was on my guard. It didn’t help that the festival offered lessons in using the scythe, or that the trees around the camp were hung with animal bones, or that the photographer and I were waved into our turning by a woman dressed as a medieval mummer. I texted my wife: ‘Directed to the car park by someone literally in a Wicker Man mask.’ There were discussions of what it might mean to live as an ‘indigenous’ Briton. Racism and nationalism were firmly denounced, but the sinister undercurrents never really went away. At a talk about ‘how to act in an era of failed leadership’, a member of the arts group Mearcstapa wondered aloud whether he would be prepared to use violence to prevent greater violence. ‘I don’t think we are going to stop mass extinction’: Dark Mountain’s founder Paul Kingsnorth. Photo by Andy Sansom/Aeon. On this evidence, it might be tempting to dismiss Uncivilisation with a shudder. But what I overlooked in my preparatory reading — perhaps because I wasn’t equipped to feel it myself — was the grief that underpins Dark Mountain. Most festival-goers appeared to have spent their working lives as professional green activists. They weren’t, as Kingsnorth observed to me later, ‘floaty poets’: they were doers, founders of eco-villages, picketers of building works. And as one man who used to develop organic recycling systems told me: ‘We failed.’ The value of Dark Mountain was, he said, psychological. It was a way to cope. Kingsnorth himself has had a classic idealist’s career. He attached himself to the environmental movement as a student, getting arrested during the Twyford Down road protests. 
In his own words, the experience politicised him; he joined the antiglobalisation movement, campaigning for the Zapatistas in Mexico and for the independence of West Papua. For a couple of years he was deputy editor of The Ecologist magazine. In the latest Dark Mountain anthology, in an essay that lambasts the techno-utopianism of Stewart Brand and flirts with the ideas of the Unabomber, he finds the green movement at age 40 in a ‘full-on midlife crisis.’ Kingsnorth looks back to the 1992 Earth Summit in Rio, when ‘the future looked bright for the greens’, and comments ruefully that: ‘It often does when you’re 20.’ I met him towards the end of the festival. He looked studious in his frameless specs, and wary, though not particularly of me. He had recently published a book of poetry and was gearing up for the release of his first book of fiction, The Wake, ‘a post-apocalyptic novel set 1,000 years ago’ (it’s about the Norman conquest). The festival, he joked, was a bit of a distraction from writing, and from running up hills near his new home in Cumbria. We sat on a picnic blanket while he tried to get his baby son to go to sleep in a buggy. And he told me about collapse. ‘At this moment in history, I don’t think we’re going to stop the climate change,’ he said. ‘I don’t think we’re going to stop mass extinction, I don’t think we’re going to stop the industrial economy in its tracks.’ The machine will only halt when it runs out of road. Unlike many Jeremiahs, Kingsnorth is circumspect about exactly what to expect and when. Predictions, he said, are always wrong, and collapse may in any case be less dramatic than one imagines. ‘You’re not going to wake up with civilisation gone,’ he said. ‘I don’t foresee in my lifetime a time when there are not houses, some form of law and order, and people buying things from shops and working farms and all that kind of stuff.’ Memorials to vanished places: green activists are haunted by the battles they fought and lost. 
Photo by Andy Sansom/Aeon. Between the activist’s impotence and the imponderability of fate, there might not seem to be much to be done. None the less, Kingsnorth sees it as his task to ‘make it clear what’s wrong.’ And what’s wrong, he believes, is more than just practical short-sightedness. It has a metaphysical character. I asked him what he would think if civilisation didn’t collapse. ‘There’s still a huge hole in the middle,’ he replied. ‘It’s still a society that has to cannibalise nature in order to live, it’s still a society that has to put a price on everything, that has to give a material value to everything, has no spiritual relationship with nature.’ Viewed in this light, Dark Mountain’s intense preoccupation with story and ritual makes more sense. These, after all, are the means by which spiritual dispositions are traditionally cultivated. ‘Mythology is the heart of ecology,’ Martin Shaw told his session at the festival. ‘Everything we are talking about is a kind of love affair.’ The folk musician Andy Letcher told an audience how, after the protesters had been evicted from Twyford Down, they held a party on Old Winchester Hill. There they lit fires, to replace the head of an underground dragon that they said was decapitated by the road cutting. That might be whimsy, but the sense that landscapes have their own, immanent personalities and interests was sincere, and it was everywhere at Uncivilisation. It would not be possible, I heard on all sides, both to love and respect the land and to cut down forests, hydraulically fracture rock formations, blast mountaintops. Hadn’t I had to thank the earth just for shouting at it? At times it seemed as if the whole event was an experiment in willed pantheism. ‘[W]hat we’re talking about here,’ Kingsnorth notes in the third issue of Dark Mountain, ‘is something that is maybe not exactly religious, but it’s obviously spiritual, it’s beyond the rational …’ Gentle and concerned: the civility of Uncivilisation. 
Photo by Andy Sansom/Aeon. Will it work? Can it stick? It was hard to know how seriously to take this stuff, and hard to take it seriously when I was doing it. Even the distinction of ‘storyteller’, invoked so reverently during formal discussions, became a bit of a joke. A guy in aviator glasses hoisted a toddler into the air and shouted: ‘Listen to this story! Isaac is a serious storyteller!’ Martin Shaw has the true storyteller’s knack of making the fantastic sound unforced. ‘I don’t think we’re in a Zeus time,’ he remarked during his session. ‘Not a Goddess time either. We live in a trickster moment.’ Accordingly, our own tenders and shapers of myth must become ‘bricoleurs’, collage artists, laying disparate elements side by side and seeing what takes on a life of its own. I suppose that means a lot of rummaging in history’s dressing-up box, at the risk of looking foolish or worse. But the people I met at Uncivilisation seemed for the most part to be neither: they were gentle and concerned. At the end of the festival we gathered in a circle in the woods. Some of us wore tribal markings, warpaint. Some wore animal masks, hovering far behind us among the trees. Kingsnorth got up to speak. He had just been with our hosts at the Sustainability Centre. ‘Slightly disappointingly,’ he said with a frown, ‘they said this was the most civilised event they’d ever run.’
Ed Lake
https://aeon.co//essays/a-dispatch-from-the-wild-frontiers-of-uncivilisation
https://images.aeonmedia…y=75&format=auto
Architecture
We can build structures that last for centuries, but can we connect with our distant descendants?
Make a model of the world in your mind. Populate it, starting with the people you know. Build it up and furnish it. Draw in the lines that connect it all together, and the ones that divide it. Then roll it into the future. As you go forward, things disappear. Within a century or so, you and all the people around you have gone. As things go that are certain to go, they leave empty spaces. So do the uncertainties: the things that may not be things in the future, or may take different forms — vehicles, homes, ways of communicating, nations — that from here can be no more than a shimmer on the horizon. As one thing after another disappears, the scene fades to white. If you want a vision, you’ll have to project it yourself. Occasionally, people take steps to counter the emptying by making things that will endure into the distant future. At a Hindu monastery in Hawaii, the Iraivan Temple is being built to last 1,000 years, using special concrete construction techniques. Carmelite monks plan to build a gothic monastery in the Rocky Mountains of Wyoming that will stand equally long. Norway’s National Library is expected to preserve documents for a 1,000-year span. The Long Now Foundation dwarfs these ambitions by an order of magnitude with its project to build a clock, inside a Nevada mountain, that will work for 10,000 years. And underground waste disposal plans for the Olkiluoto nuclear power plant in Finland have been reviewed for the next 250,000 years; the spent fuel will be held in copper canisters promised to last for millions of years. A project can also reach out to the distant future even if it doesn’t have a figure placed on its lifespan. 
How many blueprints for great works, such as Gaudí’s Sagrada Família cathedral in Barcelona, or Haussmann’s Paris boulevards, or even Bazalgette’s London sewers, were drawn with the distant future in the corner of the architect’s or the engineer’s eye? The value of longevity is widely taken for granted: the 1,000-year targets for the Iraivan Temple, the new Mount Carmel monastery and the National Library of Norway are declared with little explanation as to why that particular round number has been chosen. Instead, they play to intuition. A 1,000-year span has an intuitive symmetry for nations such as Norway that have a millennium of history behind them: it alludes to the depth of the nation’s heritage while suggesting that the country has at least as much history yet to come. For spiritual institutions, 1,000 years is short enough to be credible — England, for example, is dotted with Norman churches approaching their millennium — and long enough to refer to a timescale that extends beyond normal human capacities, thus pointing to the divine and the eternal. People don’t generally reach out to the distant future for the future’s sake. Often what they chiefly want to reach is a contemporary audience. Going to extreme lengths to prevent vestigial nuclear hazards the other side of the next ice age is a demonstration of capacity, commitment to safety, and attention to detail. If this is what we’re doing for the distant future, it says to an uneasy public, you can be absolutely sure that we’ve got every possible near-term risk covered, too. At the ultimate extreme, the Voyager space probes are carrying samplers of human culture, on golden disks, out of the solar system and on into infinite space. The notional beneficiaries are life forms that are not known to exist, from planets not yet detected, at distances the probes will not reach for millions of years. 
But the real beneficiaries were the people who reflected on our species and its place in the universe as they assembled the records and their content. The golden disks were mirrors of the culture that made them. Any project with a distant time-horizon can be explained away as an exercise that invokes the future in the pursuit of immediate goals. But even if such a project is all about us, that doesn’t mean it’s not about the future too. The Long Now Foundation is an attempt to cultivate a consciousness that expands the horizons of the present. (Its name emerged from Brian Eno’s observation that in New York what people meant by ‘now’ was markedly shorter than what people meant by it in Europe.) By expanding ‘now’ to multi-millennial proportions, it makes us part of the future, and the future part of us. Building with an eye on the far horizon: Antoni Gaudí’s Sagrada Família. Photo by Luis Andrei Photography. A conceptual foundation that is building a 10,000-year clock may at first glance appear to have little in common with an Australian public highway operator and its new road bridge. But both have found ways to integrate the present with the distant future. Queensland Motorways chose a design lifespan of 300 years for its second Sir Leo Hielscher Bridge, rather than the 100 years typical for such projects, on the grounds that this would represent a better return for the community on the investment. It was worth spending public money now to benefit people in 300 years’ time, and it was justified because those people will be part of the community that exists today. The idea of ‘us’ that this practical, cost-conscious engineering project factored into its calculations is unconditionally generous, implicitly uniting everybody who resides in the area for the next 300 years at least, regardless of how they are related to each other, or whether they have contributed to the investment themselves. 
A significant factor in the Queensland authorities’ calculations was that building a bridge to last for three centuries doesn’t cost three times as much as building it to last for a single century. One study has estimated that building a structure to last 300 years might be only 10 per cent more expensive than building it to last 30 years. Another consideration was that a long design life might keep maintenance costs down. These considerations are generally applicable. The prospects for reaching a distant time horizon will be improved if a long reach offers near-term benefits, and if the near-term costs of reaching out to the distant future are modest. In other words, the present and the future should have shared interests in the project, and the conflicts between the interests they do not share should be minor. The sense of common interest will be heightened if the benefactors and future beneficiaries are felt to be members of the same group, ‘us’ rather than ‘us and them’. Any project that succeeds in establishing itself, whether by building a physical structure or by making an explicit commitment to deep posterity, will create a point on the empty horizon towards which people can gaze. It may be no more than a single pixel, but that’s better than a blank screen. Even a tenuous image of the distant future is of value at a geohistorical moment in which human actions may determine planetary conditions on a millennial scale. There are good scientific reasons to believe that if the world’s temperature goes up, it will stay up for the rest of the millennium and longer, while sea levels will continue to rise as the oceans slowly warm through and expand. We may well be on the verge of causing profound, irreversible disturbance to the earth’s systems. That places moral responsibilities on us at least to consider with a corresponding seriousness what our responsibilities to the future may be, and how we might make those responsibilities hold our attention. 
That’s why an empty horizon matters. How can you care about something you can’t imagine? For all but the most rigorous moral philosophers, caring requires more than a logical reckoning of duty. People need visions of things they feel attached to, or find beautiful, or moving. They have to be able to imagine a future the failure of which to materialise would feel like a loss. Points on the horizon that help people to see something in the far future may help them feel connected to it. They may also encourage people to believe that there actually will be a future. After you have systematically cleared the horizon of time and it has faded to white, imagine what is likely to happen if you let someone else get their hands on your vacant landscape. Like as not, they will strew apocalypse all over it: ruins, mutants, scattered bands armed against each other. People seem irresistibly drawn to the end of the world — but if they catch glimpses of a future in which spiritual edifices or ancient documents endure, they might be more inclined to help secure it, and less inclined towards nihilistic fantasy. They don’t have to have a view of the far horizon in order to factor the distant future’s interests into their actions. The interests of their children and grandchildren will be more alive in their minds: serving them may well serve those of more distant generations, too. But at this possibly critical moment, when our imaginative sympathies need all the help they can get, it’s worth trying to focus a 1,000-year stare.
Marek Kohn
https://aeon.co//essays/who-do-we-care-about-when-we-care-about-the-future
https://images.aeonmedia…y=75&format=auto
Human rights and justice
The ruling that Anders Breivik is sane leaves his ideas unchallenged. We need a new verdict for crimes of vainglory
Anders Behring Breivik has been declared sane and criminally responsible for the murder of 77 innocents. But the debate about our proper response to him and other such killers will continue, for the simple reason that they present us with a grim dilemma. This can perhaps best be seen in the terrible irony that the surviving victims and victims’ families welcomed the verdict, and so did Breivik himself: the victims because it made him accountable; Breivik because he believes it shows his odious ideology to be legitimate. To resolve this dilemma, I suggest a new category of criminal act, which ascribes full responsibility to a killer for his crimes while making clear that the perpetrator is none the less risible, self-regarding and delusional. We could call it the Herostratic crime. Herostratus was a young man about whom we know only one fact: that on the night of 21 July 356 BCE, he set fire to the Temple of Artemis at Ephesus. This temple, 120 years in the building, was one of the seven wonders of the world. Visited by pilgrims, kings and tourists, it was gargantuan: some 400 feet long, 180 feet wide and 40 feet high — the size of a football stadium. By all accounts, it was sublime. The fire destroyed it utterly. Herostratus did nothing to hide his guilt, but gave himself up freely and, like Breivik, admitted his crime. When asked why he had committed this terrible act, he replied: to become famous. To discourage copycats, Herostratus was not only tortured and executed. He was also subjected to a damnatio memoriae — the damnation of a man’s memory through banning (on pain of death) all mention of his name. This is a stark contrast to the worldwide blanket media coverage that has made Breivik one of the most talked about men on earth.
The coyness of some ancient commentators with regard to the fire suggests that the damnatio memoriae was widely respected for centuries afterwards. Nonetheless, Herostratus’s name was not entirely forgotten, and has lived on as a byword for the destructive pursuit of notoriety. The Greeks understood that the importance of the hero-cult in their society risked fostering anti-heroes such as Herostratus. And they understood that those who would choose this route did not fear death, but rather obscurity and ridicule. Hence the damnatio memoriae was not only the most fitting punishment, but the best deterrent to would-be copycats. Our society is every bit as obsessed with fame as the ancient Greeks were with glory. Yet it is not clear that we understand nearly so well how to deal with those who seek celebrity through some wicked but dramatic act: assassinating a president or a pop star, blowing up a building, or gunning down high-school students. Many modern commentators struggle with the dilemma of feeling bound to report on a dreadful event with wide cultural significance, and at the same time knowing that through doing so they are giving Breivik and his ilk just the attention they seek. The reality is that modern technologies and mores make a damnatio memoriae inconceivable today. Obscurity, therefore, cannot be imposed on present-day Herostratuses. Ridicule and contempt, however, can. Breivik, of course, denies that his motivation is purely narcissistic, even though according to one of the psychiatrists’ reports he described himself as an ‘attention-seeking whore’. Instead, he is keen to foreground his ideology — a world view in which he takes on the role of the crusading knight battling the forces of darkness. Though an intellectually bankrupt patchwork of conspiracies and half-truths, this ideology is crucial to him and to those who might be tempted to emulate him. Without it, he is not the martyr to justice he wants to be, but a murderer of innocents. 
It enables him to sit in the courtroom across from the bereaved relatives of his victims and make the hideous claim that he would do it all again: that his actions were, as he put it, ‘based on goodness not evil’. That his ideology legitimates, for Breivik, his actions and ascribes him the role of hero is why he is afraid of being dismissed as a raving lunatic. It is also why it is crucial, both as the proper punishment and as the proper deterrent to would-be emulators, that his ideology is indeed dismissed as the self-justifying rantings of a narcissist. The leader of the far-right English Defence League, Stephen Lennon, has already been reported as saying that the Norwegian court’s ruling ‘gives a certain credibility’ to Breivik’s ideology. The dilemma that the judges faced was that they could only dismiss his mad 1,500-page manifesto and self-serving world-view by deciding that Breivik was suffering from a serious mental illness such as psychosis, which would have resulted in him not being criminally responsible for his actions. The judges rightly decided that this was unacceptable: just as satisfaction for the victims and their families requires that Breivik be held fully accountable for his crimes, so does the possibility of regarding him with the appropriate contempt. It is manifestly not right to mock or scorn those who are genuinely mentally ill. So let Breivik be sane — he does not, in any case, quite fit any of the current diagnoses that would prevent him from being accountable for his actions. Though he demonstrates some symptoms of narcissism, delusion, and perhaps schizophrenia and psychopathy, it is none the less clear that he meticulously and rationally planned his crime. Though he shows no remorse, he shows every sign of being aware of its seriousness (indeed, he revels in it). He is rightly, therefore, held to be answerable. Utoya Island, Norway. 
Photo by Vegard Groett/Corbis. At the same time, the verdict needs to communicate that his ideological grandstanding is not a rational political agenda, but a contrivance of perverted vanity. This would be the function of categorising this and similar killings as a Herostratic crime. Rather loosely, this would be a serious, premeditated crime committed in full knowledge of its gravity, motivated by narcissism or attention-seeking, often in the name of some supporting ideology with delusional elements. The first part of the definition — that the crime be premeditated and done in full knowledge of its gravity — is intended to establish criminal responsibility; the second part, the narcissistic motivation, should make clear that the claimed justification is unserious, self-serving and rather makes the crime more, not less, heinous. One might hope it would be clear to all that there is nothing more despicable, less heroic and less honourable than shooting unarmed children on a small island from which they could not escape. But Breivik’s ideology is an attempt to make exactly this unclear; to make his abominable act seem noble. The point of branding it a Herostratic crime would be to wipe away this obfuscation and reveal it for what it is: the most ignominious of offences. Progressive societies such as Norway would be unlikely to include old-style public humiliation — stocks and pillories, say — in the punishment, but a conviction for a Herostratic crime should at least send the signal to the media that the murderer ought to be mocked for his vainglory. Lawyers and psychiatrists would be needed to work out the details (I’m neither). But as an overarching category that ascribes a particular viciousness to a range of crimes when committed under certain circumstances, it has a precedent: the idea of the ‘crime against humanity’.
This also covers a range of serious offences — not just mass murder, but also, for example, the systematic use of rape or torture — that fulfil certain additional requirements, such as being planned, being widespread, and making an affront to human dignity. Though open to interpretation, such a category makes clear that not every killing is the same; that differences in scale or motivation demand different responses from society. It would not always be easy to decide whether a crime had a narcissistic motivation or whether its supporting ideology was delusional (for example, with respect to Islamic suicide bombers). Judges, however, are used to making such nuanced decisions. As we have just seen, it was not easy to decide if Breivik was sane. And there is actually a fairly clear pattern to Herostratus-style cases: they are, for example, invariably perpetrated by marginalised young men (between the ages of 18 and 40) with no prior history of serious mental illness, and they are well planned. But most importantly, they act out some kind of culturally established script of the hero or anti-hero, whether the knight templar, or the Joker from the Batman movies, or the Rambo-style ‘lone wolf’ gunman. Therefore, the point of convicting someone of a Herostratic crime would be to say: we hold you accountable, but also find you risible. It should send the message that we as a society have seen through your attempt to hide your villainy with a veil of ideology; that we regard your views as nothing more than the product of a pathetic craving for attention that is so self-regarding that it has no compunction about sacrificing the lives of others. If Breivik was sentenced as a Herostratus, his victims — and would-be emulators — would know that he would not only be locked up for a very long time, but that he was universally regarded as nothing but a vain and deluded brute.
Stephen Cave
https://aeon.co//essays/could-the-law-make-us-forget-anders-breivik
https://images.aeonmedia…y=75&format=auto
Demography and migration
Despite its turmoil, ever more people are risking their lives to enter Greece. Welcome to Europe’s most porous border
It’s 9.30pm and I am in a police station in Didimoticho, a Greco-Turkish border town on the far northeastern edge of mainland Greece. The border is new, Thrace historically having been carved up between Turkey, Greece and Bulgaria. Outside in the dark the rainstorm is easing, so the night patrol might happen after all. At 10pm, Dimitri, my police escort, arrives. He’s a hulk: navy trousers tucked into black boots, handgun slung round his waist, thick black hair. He dismisses my feeble hired Citroën and my trainers — we’re going to need a 4x4 where we’re going, and boots like his — so he offers a lift in his police jeep instead. A green jeep swings out ahead of us. According to the Cyrillic on the side, it belongs to the Bulgarian police. Together, we power north along the Ignatia Highway, then turn up a dirt track into the hills. We park on the edge of an escarpment and gaze out across the silence. The stars have emerged, which is good, because rain or river fog will obscure the camera’s view. There are lights on the plain below, too. Dimitri whispers and points: the orange street lights are Turkey, the white ones are Greece. Twisting invisibly between them is the Evros river, which runs for 205km from where easternmost Greece meets Bulgaria and Turkey, going south through flat cotton and wheat fields to emerge in a delta on the Aegean. More than a frontier between Greece and Turkey, the Evros divides west from east, Europe from not-Europe, rich from poor: a full stomach from an empty one. For many it offers sanctuary from war. I join the two Bulgarian officers. Armbands worn over their green national police uniforms bear the EU circle of stars and the word ‘Frontex’. Frontex is a pan-European police agency deployed here since 2010 in response to the dramatic escalation of illegal immigration, both by sea to the Aegean Islands and from Turkey across the Evros. In 2011, 54,232 immigrants were arrested on this border, up from 8,800 in 2009.
That morning, Giorgios Salamangas, police chief of the Evros region, told me that illegal immigration here is ‘out of control’, his underfunded force swamped by an unceasing human tide. It’s another headache for poor, beleaguered Greece. Frontex calls this ‘Joint Operation Poseidon Land’. It is well-named. In Hellenic mythology, Poseidon was god of both sea and rivers and when he struck land with his trident he caused earthquakes and drownings. The Evros is deep and dangerous, particularly after this year’s harsh winter and wet spring. Most migrants struggle across in overloaded inflatables, 10 or more squashed into a cheap vessel made for two. In 2010, 48 migrants drowned and 14 died of hypothermia; this year at least 18 have already been killed by water or cold, and these are just the ones the police have found. More vanish without trace. The migrants are often nameless — they travel without documents — so the police guess their origins by the colour of their skin. Since most immigrants are from Bangladesh, Pakistan, Afghanistan, Iraq, Syria and Somalia, it is assumed that the majority are Muslim, so they’re given makeshift Muslim burials. A machine whirrs, the jeep roof slides open and a structure grows out of it, silhouetted against the stars like a totem pole. It is a thermal imaging camera, able to see to a distance of up to 6km in the dark and detect the infrared radiation emitted by humans and other warm-blooded animals. It stops, rotates, turns its eye on Turkey. We zoom in on a stretch of river and, beyond it, a Turkish village. I imagine villagers tucked up in bed, unaware of being surveilled. And I imagine groups of men, sometimes with women, children and grandparents, creeping through undergrowth, exhausted, hungry and frightened, yet relieved to be ending a journey that may have taken two years.
Ghostly figures: police thermal imaging cameras reveal migrants descending the bank of the Evros into a waiting boat. Photo: courtesy Evros Police. The aim of Frontex is to spot the migrants before they cross, and to alert the Turkish border guards who are supposed to return them home. Rajah, a Sri Lankan Tamil refugee I met in Turkey, had been arrested on the Turkish side seven times. Each time he was deported to Iran, through which he had travelled en route, only for the Iranian police to bundle him straight back to Turkey, like a human football. These days most migrants take cheap flights to Istanbul, buy a 90-day tourist visa, then access the European frontier from there. The thermal camera refocuses on woods and fields and transforms them on screen into a picturesque snow scene. Something black moves across the white wilderness. The camera zooms: a ghostly figure leaves a house. I hate my own feeling of excitement, as if I’m hunting. We wait to see if more figures emerge — it could be a safe house — but no, just an ordinary person going about his private business. Police chief Salamangas had shown me recent footage from this camera. He had pointed out groups of cartoonlike figures pouring down the bank, climbing into a boat, launching themselves across the current. I saw them being pushed out of a van and beaten by traffickers wielding guns, then left on the roadside. In the Bulgarian jeep, the migrants are like computer game targets. In chief Salamangas’s office in Orestiada, they are a burden. But that afternoon outside the Filakio detention centre, I met the real thing: 33 Bangladeshis who had crossed the river the night before. Several carried plastic bags containing their few possessions. Most had just a handful of euros, along with youthful energy and hope. They seemed anxious but nonetheless proud of getting here. After the Council of Europe condemned Filakio as inhumane, the Greek government rebuilt it and rebranded it a ‘reception centre’.
I was not allowed inside, but chief Salamangas was keen to stress the government’s new concern for migrants’ rights. The Bangladeshis had been detained for only one night; they said they had been treated well, fed and given medical checks. Chief Salamangas has arrested 16 traffickers so far this year, but the penalties are small — a brief prison term — and the potential for profit a continual lure. For the overland section from Istanbul to the Greek border, traffickers charge €500-€1,000 per person — down from €3,500, since word on the global network is that Greece is a soft touch. Rich Syrians, however, can be charged as much as €3,500 all the way to Athens. Rajah told me that in 11 years of trying to enter Europe he’d spent a total of €19,000 and still hadn’t succeeded. People-smuggling is now more profitable than drugs. It’s cold, past midnight. Dimitri and I climb back into his police car, leaving the Frontex officers to their lonely vigil. They’ll be there for eight hours, staring at the screen. The next morning I spot Asian men outside Orestiada police station being herded into a van: last night’s pickings, being driven to Filakio. Some had been caught. Others turned themselves in. They’ll be fingerprinted, photographed, registered, then released with a paper entitling them to 30 days’ stay in Greece. Their struggle to reach a country almost as chaotic as their own seems ironic, until you discover that most plan to move elsewhere in the EU’s Schengen zone, where there are no internal borders. However, according to the EU’s Dublin II Regulation, the country in which a migrant arrives is responsible for his asylum application. As 80 per cent of Europe’s illegal immigrants arrive through this gateway, many get returned to Greece. Life in the balance: a group of migrants on a road outside Evros hoping to get a 30-day visa. 
Photo by Helena Drysdale. Greece is currently pressuring Turkey to tighten border security and accept the return of illegal immigrants in accordance with an agreement signed in 2005. Government contractors have also begun erecting a razor-wire fence, at a cost of €3.1 million, along the particularly porous 12.5km stretch of land border where the Evros river kinks, ignoring critics who claim the fence will only drive migrants towards the dangerous river route. Meanwhile, immigrants who outstay their 30 days remain trapped in Athens in neighbourhoods that have become squalid ghettos. Their numbers are estimated at 450,000 to 1 million — or a tenth of the population. Athenians complain of property prices collapsing, businesses closing, people scared to leave their homes for fear of violence. They feel alienated, besieged. With unemployment at 24 per cent, they also fear for jobs, but immigrants without papers have no legal right to work, so they survive on black market menial labour, or scavenging for scrap metal that they trundle in shopping trolleys for smelting in the Athens rubbish dump. Others are forced into crime. A stroll around the archaeological museum, beside the grand old university building and the formerly middle-class district of Metaxourgio — all in Athens city centre — reveals streets of immigrants selling drugs, the mostly Greek junkies cooking their potions at their feet and shooting up in broad daylight. Just before the May elections, the ruling coalition parties New Democracy and PASOK realised that this has become a vote-winning issue, more potent even than the economy, and the streets were cleared. Frontex told me that since April deportations had also dramatically increased. But it’s not enough for Greek voters, 7 per cent of whom — including half the Greek police — elected 21 Golden Dawn neo-Nazis to parliament on their promise to ‘cleanse’ the country of immigrants.
No matter that many Greeks are immigrants themselves, most recently from Russia and Armenia, and further back from Asia Minor. Over an exceptionally cold winter, immigrant numbers were slightly down, but during high summer the Evros got safer to ford, and so they rose. So, too, did the number of black shirts visible on the streets of Athens. Not surprisingly, there has been a tightening of border control; under the aegis of Operation Shield, 1,000 extra Greek police officers have been detailed along the border, with instructions to inform the Turkish Army if they spot anyone trying to cross. An Algerian migrant rescued by Evros police after a freezing night on the river. Photo by Helena Drysdale. At the westernmost end of Greece, on the coast of Corfu, a cluster of Africans and Asians who crossed from Turkey last year await a boat to Italy, but with dwindling hopes. Ahmed, a Mauritian who took five years to get here and lives in an abandoned factory, recognises there’s no work for him anywhere in Europe. Meanwhile, as the Greek crisis deepens, the British government prepares to deal with an anticipated flood of immigrants into the United Kingdom — and they’re Greek.
Helena Drysdale
https://aeon.co//essays/greeces-other-crisis-coping-with-a-rising-tide-of-immigrants
https://images.aeonmedia…y=75&format=auto
Ecology and environmental sciences
We picture ancient Britain as a land of enchanted forests. That’s a fantasy: axes have been ringing for a very long time
As part of a recent walk across England, I entered the Chilterns from their western edge, above the Thames Valley. Because the beech trees were climbing with me up the side of the hill, they had to grow even higher to reach the sunlight. The effect was spectacular: the tall beeches disappearing for nigh-on 100ft up into the canopy, the great height of the tree trunks accentuated by the delicacy and smallness of the beech leaves floating like maidenhair. With the large ferns guarding the entrance to the wood, the effect was Amazonian. Not for the first time, I reflected on how exotic we would find a horse chestnut in flower, or a beech forest in spring, if we came across them in Brazil rather than Buckinghamshire. A forester I met in the woods told me that we might be the last generation to enjoy these Chiltern beeches. Grey squirrels have become so common, and attack the young saplings so viciously, he said, that there is virtually no regrowth. When the last of the great trees reach the end of their natural span and come tumbling down, the change will have far more impact than any high-speed train. The loss of woodland will not just be a physical one. The beeches of the lower Chilterns were the Wild Wood into which Mole and Ratty strayed. They remind us of an older English past that has become heavily mythologised and distorted — like the knights on the Grail Quest who periodically disappeared and were lost in the trees — and yet depends on the perception that much of England was forested for the greater part of its history. A perception that is wrong. Before consulting the archaeological research, my assumption — widely shared, I suspect — was that England was largely wooded until the arrival of the Romans. Prehistoric Britons might have made a few inroads into the densely forested valleys, but preferred the wide-open expanses of Salisbury Plain or other high, treeless places such as Dartmoor or the Berkshire Downs. 
The Romans cleared some lowland areas for their settlements and built connecting roads. With the arrival and gradual domination of the Anglo-Saxons during the Dark Ages, more woodland was slowly lost, and a pattern of villages emerged, ready for the Domesday Book to record after the Norman conquest. This understanding has now been shown to be wholly inaccurate. Much of England had been cleared as early as 1000 BCE, some two millennia beforehand. The Bronze Age saw intensive farming on a scale that we are only just beginning to appreciate. As Oliver Rackham puts it in The History of the Countryside: ‘It can no longer be maintained, as used to be supposed even 20 years ago, that Roman Britain was a frontier province, with boundless wild woods surrounding occasional precarious clearings on the best land. On the contrary, even in supposedly backward counties such as Essex, villa abutted on villa for mile after mile, and most of the gaps were filled by small towns and the lands of British farmsteads.’ Rackham describes the immense clearance undertaken during the Bronze Age, boldly claiming that ‘to convert millions of acres of wildwood into farmland was unquestionably the greatest achievement of any of our ancestors’. He reminds us how difficult it was to clear the woodland, as most British species are difficult to kill: they will not burn and they grow again after felling. Moreover, in his dry phrase, ‘a log of more than 10 inches in diameter is almost fireproof and is a most uncooperative object’. The one exception was pine, which burns well and, perhaps as a consequence, disappeared almost completely from southern Britain, the presumption being that prehistoric man could easily burn the trees where they stood: the image of pine trees burning like beacons across the countryside is a strong one. Only with the Forestry Commission in the 20th century were large numbers of conifers reintroduced.
Some Bronze Age woodland was naturally kept and managed for what it could provide: timber for building materials, smaller wood and shrubs for fuel, acorns for pigs (which were often turned loose into the woods in autumn), hazel and other trees suitable for coppicing. But this was small-scale. When the Domesday scribes recorded a relatively low level of English woodland — a much lower proportion than, say, modern France enjoys today — this was not a recent development, but the way it had been for millennia. Axe wielding: a Bronze Age rock painting from Tanum, Bohuslan, Sweden. Photo by Ernst Haas/Getty Images. The idea that England 3,000 years ago was already as suburban as the outskirts of Basildon has not been absorbed into the popular consciousness. Nor will it ever be readily, for we suffer from what might be called Sherwood Syndrome: the need to believe that much of England — most of England — was both wild and wooded until modern history ‘began’ in 1066, or indeed stayed so until much later; and that these ancient forests were the repository of ‘a spirit of England’, the Green Man, that could be summoned at times when we needed to be reminded of our national identity; where Robin Hoods of all subsequent generations could escape, where the Druids gathered their mistletoe from the trees, where the oak that built our battleships came from. The myth panders to our need for a sense of loss. There is an undercurrent of regret running through our history. A nostalgia for what could have been: the unicorn disappearing into the trees; the loss of Roman Britain; the loss of Albion; the loss of Empire. We are forever constructing prelapsarian narratives in which a golden sunlit time — the Pax Romana, the Elizabethan golden age, that Edwardian summer before the First World War, a brief moment in the mid-1960s with the Beatles — prefigures anarchy and decay. Or the cutting down of the forest.
One only need look at the near-ecstatic reception given to Danny Boyle’s Olympic rendition of our ‘green and pleasant land’, complete with shire culture and hobbit mounds, to see how easily history elides with mythology. Britons are supremely comfortable with that blurring — with a mythic dimension that adds gravitas to our self-understanding, and that imbues the land with a kind of enchantment, a magical aspect that is echoed in our narratives of how we came to be a nation, but is as illusory as the Arthurian lake from which the Lady’s hand emerges to grasp the sword. The idea of England as a wild and wooded land until the arrival of the Romans is a powerful one. Of course, the landscape that Bronze Age travellers surveyed in 1000 BCE, as they travelled the Icknield Way along the Chilterns ridge line, around the time the White Horse of Uffington was carved, would have been different. No Swindon, Didcot Power Station or M4 motorway, for a start. But a cultivated system of fields and pastureland was already there, albeit in a different formation. What the work of archaeologists over the past few decades suggests is that we possessed the land very early — that England was shaped long before the arrival of the Romans, whose occupation can be seen as a brief, anomalous interlude interrupting the continuity of British history. How appropriate then that the Bronze Age should be defined by its bronze axes, which were both the principal units of currency (large stashes of them have been found buried in fields) and the means by which the trees were felled. When the oak tree at the centre of Seahenge was examined and found to have been cut down in 2049 BCE, during the early Bronze Age, the marks from some 50 different bronze axe-heads could be distinguished. What we have always been good at is not hiding in the forest but cutting it down. Hugh Thomson’s The Green Road into the Trees: An Exploration of England is published by Preface.
Hugh Thomson
https://aeon.co//essays/who-chopped-down-britains-ancient-forests
https://images.aeonmedia…y=75&format=auto
Art
As the boundaries between digital and physical dissolve, can the New Aesthetic help us see things more clearly?
Shoreditch, east London, is home to a remarkable cluster of technology start-ups. Dozens of web and ‘new media’ firms, including Last.fm and TweetDeck, set up shop here and make this grimy but beguiling district their home. No one decided this would happen. It was urban alchemy — the chance intersection of multifarious factors of money, technology, education, culture, time and space, which locked into a temporary virtuous feedback loop, long before the politicians arrived and tried to take credit. Although bathed in broadband glow and old-media hype, what these companies actually do is ancient, primal: they build tools. Within a few blocks are hundreds of people devising new applications for digital technologies — new ways of bringing the electronic global network to bear on life. It’s in these streets that the boundary between the digital and the physical is at its most porous — in the devices and the minds of a far-seeing local population who are among the first to understand that there might not be a boundary at all. And strange phenomena can arise in this technological crucible. In May 2011, a British writer and technologist called James Bridle set up a blog on the social networking service Tumblr.com to document a few of the phenomena he had seen. Tumblr favours images and snippets of video and text, which is exactly what Bridle posted, a stream of images, screenshots and video, backed up with an occasional quote or sentence of commentary. Among the images posted on the first day were: a photo of the screen of a cathode-ray television at the moment it is switched off; examples of make-up that could be used to defeat face-recognition software; Osama bin Laden’s hideout in Pakistan as it appears on Google Maps; and fighter jets with a camouflage of blocky patterns suggestive of pixels. Join the dots. Find the thread. What links those images? They share a veneer of digital modernity, perhaps, but what else? 
Bridle, an expert on digital publishing and a member of the Shoreditch-based design partnership the Really Interesting Group, or RIG, called it a ‘mood board for unknown products’ in a post on the group’s blog. Mood boards are collages made by designers to piece together the visual inspiration behind a project. Here, though, there was no project beyond the collage itself. Bridle called the Tumblr, and the undefined product it represented, the New Aesthetic. And he continued in the same vein, trawling through the crackling, accelerating networked world. Satellite images of server farms, those giant, anonymous sheds where machines crunch out the internet. People dressed in costumes as low-resolution characters from the computer game Minecraft. A label on a pair of jeans bearing nothing more than a line of Excel code, a bug in a clothing factory manifesting at the other end of the supply chain. Oddities from Google Street View — seagulls, prostitutes, security black spots. Scenes from the drone age. Rough-edged, lo-fi objects produced by 3D scanning and 3D printing. Converging, leapfrogging technologies evoke new emotional responses within us, responses that do not yet have names. The spectral outline of something swims into view: the way machines see. The spread of robot eyes — on phones, on drones, on buildings, on satellites, on Google vehicles, on us. The way machines show us things, and the spread of screens. The rise of augmented reality, the insertion of digital constructs into real landscapes. The way things appear to us when they are mediated, somehow, by digital machines — the data centres, the mechanised production lines and logistics centres, the LCD-bathed outlets. Oddities and idiosyncrasies brought into view by this machine mediation of the stuff around us — the robot inflection, the accent of media devices. Bridle’s one-sentence summary of it all is ‘an eruption of the digital into the physical’. Intuitively, one feels that this could be important.
Smartphones, tablet computers, drones, CCTV cameras, LCD screens, e-readers, GPS, social networking, recognition algorithms and scores of allied technologies and concepts are rising to super-ubiquity around us. They are wreaking untold changes on the behaviour of nation states, corporations and individuals. Yet all this is happening in a cultural environment broadly evacuated of ideology, apart from the exhausted fairytales of neoliberal consumer capitalism. At least the New Aesthetic could be new; really, truly new. Indeed its outer contours suggested frightening, exhilarating novelty, something rare in this paradoxical age of astonishing technology set against jaded and reflexive nostalgia.

At this stage, the New Aesthetic was just those images and a few lines of text — and that feeling, that hunch. Still, it wasn’t just Bridle’s hunch. ‘People responded really strongly,’ he said when I spoke to him a year on from his first blog on the New Aesthetic. ‘But a very small number of people — people I know.’ Those people tended to occupy the same inventive media-technology niche as Bridle — a small community concentrated in east London and on the West Coast of the United States.

‘As soon as you declare something a movement, everyone either wants to be a part of it or wants to destroy it’: James Bridle. Photo by liftconferencephotos

Matt Jones, one of the three principals at the celebrated creative studio BERG, which shares premises with RIG in Shoreditch, wrote a long blog entry trying to define the New Aesthetic as ‘sensor vernacular’ and ‘an aesthetic born of the grain of seeing/computation … Of computer-vision, of 3D-printing; of optimised, algorithmic sensor sweeps and compression artefacts’. Warren Ellis, an author who occupies a curious role as intellectual patron for this corner of the London design and technology scene, linked to the post from his well-trafficked website, sending thousands of interested eyes in Bridle’s direction.
This first small flurry of interest encouraged Bridle to persist with the New Aesthetic project, despite its lack of a clear identity, aims or boundaries. But the Tumblr was also an effort at what Bridle calls ‘self-correction’ — refining his terms, adjusting and clarifying a still-inexpressible concept. ‘It’s not a prescriptive thing,’ said Bridle. ‘It’s a gradual collection in order to find the edges of something … Not all the examples definitely belong to it, and that’s OK.’ The New Aesthetic was a Rorschach blot. Its low resolution meant that all observers could project their own meaning onto it. But before Bridle was able to establish what it all really meant, the project was snatched out of his hands.

I met Bridle in Allpress, a coffee shop in Shoreditch (of course), on 4 May 2012. After the interview we walked down Redchurch Street towards Shoreditch High Street. Facing us, a building was under construction, its scaffolding wrapped in mesh. The mesh was decorated by a QR code — the blocky black-and-white patches that, using a recognition algorithm available as a free app, will connect your smartphone to a website. Still fairly novel, this technology is being used with mindless abandon by advertisers and developers. Often the code is useless — because it’s on a poster underground, where phones usually can’t connect to the internet, or because the code has been badly positioned or is split or partially obscured. A blog cataloguing these QR-code foul-ups was linked from the New Aesthetic Tumblr. This code on Shoreditch High Street was only half visible, and thus unusable — but this appeared deliberate. Rather than just being badly applied, the code was being used purely as a decorative device. Very New Aesthetic.

We paused and Bridle took a photo on his phone. Later, the picture went up on the New Aesthetic Tumblr. It was the last post. On 6 May 2012, its first birthday, the account was shut down. What happened?
This March, Bridle chaired a panel on the New Aesthetic at South by Southwest (more commonly, SXSW), a voguish music and technology festival held each year in Austin, Texas. His own presentation on the evolving project was reinforced by contributions from Russell Davies, founder of RIG; Joanne McNeil, the editor of the technological arts website Rhizome; the artist Aaron Straup Cope; and the designer Ben Terrett. The phenomenon was to be set in its proper historical context and differentiated from mere neophilia. ‘Yes, everything has always been new and different, and everything has always been the same,’ Bridle wrote in the summary of his presentation, ‘but we can perform an end-run around this endless back-and-forth …’ The novel ways of seeing suggested by the New Aesthetic promised, ‘if not a new world … then new sensations, which are the medium by which we appreciate a new world.’

Converging, leapfrogging technologies were evoking genuinely new emotional responses within us, responses that do not yet have names. That frisson of wonder when we use Google Street View to scout out a place we haven’t been yet or maybe never will; that shiver at the thought of lethal strikes by unmanned drone aircraft or an end to privacy. The New Aesthetic was setting out to map those reactions. ‘Meaning is emergent in the network, it is the apophatic silence at the heart of everything, that-which-can-be-pointed-to,’ Bridle said in his presentation. ‘And that is what the New Aesthetic, in part, is an attempt to do … to point at these things and go “but what does it mean?” ’

The new things we are not seeing: poster for the New Aesthetic presentation at South by Southwest in Austin, TX. Image courtesy of new-aesthetic.tumblr.com

The group took as its logo a pixelated predator drone supported by balloons, ‘a bright cluster … tied to some huge, dark and lethal weight’, as the science fiction author Bruce Sterling put it. He was watching the panel.
Via his Beyond the Beyond blog for Wired magazine and his appearances at conferences and symposia worldwide, Sterling has carved out a reputation as one of the most admired thinkers on contemporary technology and society. Because of his status as a high prince in the techno-intelligentsia, he was invited to deliver the closing address at SXSW. He singled out the New Aesthetic for specific mention, and the following month published a 5,000-word essay on it:

The New Aesthetic is one thing among a kind: it’s like early photography for the French Impressionists, or like silent film for the Russian Constructivists, or like abstract-dynamics for Italian Futurists. … we have every reason to take it, and its prospects, seriously. … This is one of those moments when the art world sidles over toward a visual technology and tries to get all metaphysical. This is the attempted imposition on the public of a new way of perceiving reality. … Above all, the New Aesthetic is telling the truth.

Bridle was ‘master of that salon’, wrote Sterling. ‘[He] has never yet claimed to be the André Breton-style Pope of the New Aesthetic, but in practice, nobody ever asks the central questions of anybody else but him. So, Bridle’s the guru there. Fine.’

The first Bridle heard of Sterling’s intervention was when ‘everyone on Twitter started talking about it’. Wired’s own metrics show more than 1,000 tweets directly linking to the piece. From those, scores of conversations started. The New Aesthetic is not the easiest idea to communicate, yet all of a sudden thousands of people were energetically generating opinions on it. ‘Initially,’ Bridle said, ‘I was quite flattered.’

Pixel perfect: new vision or retro throwback? Image courtesy of booktwo.org

The day Sterling’s essay went live, all the existing literature on the New Aesthetic could be consumed in an afternoon. Within a couple of weeks, blog posts were being generated faster than they could be read.
It seemed everyone had something to say about the New Aesthetic. Naturally enough, there were parodies — a doppelganger blog called the New New Aesthetic, which lasted all of two posts, and a cat-based lampoon called the Mew Aesthetic. And of the more serious responses, not all were positive. A blog called The Creators Project published a series of essays on the subject. One claimed that the New Aesthetic wasn’t new, had very little shock factor and had a ‘disappointingly stuffy’ name. This was tame compared to a contribution from the curator Hrag Vartanian, which stated that not only was the New Aesthetic not new, it wasn’t an aesthetic: it was a style without any meaning, a hopelessly retro throwback to the bits of recent past that were ‘easily Googled’, and blind to broader cultural history. Maybe, Vartanian mused, Bridle had suffered a head injury. ‘Quite quickly,’ Bridle said, ‘I stopped reading anything about it.’

The fundamental problem was that Sterling’s essay had described the New Aesthetic as a movement. The word suggests membership, doctrine, methods and goals. An agenda. And this movement, said Sterling, was moving into its ‘evangelical, podium-pounding phase’. This was death. ‘As soon as you declare something a movement, everyone either wants to be a part of it or wants to destroy it,’ Bridle said. ‘I couldn’t even look at Twitter, because there were people @-ing me, saying “What is this bullshit?” ’ The storm of controversy resembled a denial-of-service attack. ‘It rendered my social networks almost unusable. I couldn’t continue to talk about it because anything I said about it was lost in that mass.’ Two days later, he shut down the Tumblr.

But in a way, Bridle was vindicated: the volume of the response showed he had struck something fundamental — as well as communicable and highly provocative. ‘I would deny it’s a movement because that’s not what I intended it to be,’ he said, ‘but there’s a vacuum for the movement that people think it is.
Which is something genuinely of the network. They want the new modernism, and it has to come out of the network.’ Sterling had hinted at this, in a line from his first essay that now feels prophetic: ‘Everybody who attempts [to explain the project] seems to hope and feel that the New Aesthetic must be a private solution to their own personal creative problems.’ Bridle had given a name to a mythical beast that no one has seen directly but that, it turns out, lots of people have been hunting for: the JPEGwocky.

Looking at some of the mistaken impressions about Bridle’s project — what it isn’t — does help to clarify what it is. As well as not being a movement, it is very much not an art movement, although some artists are creating work that fits into it. Sterling is responsible for this confusion, calling it ‘a typical avant-garde art movement that has arisen within a modern network society’. This set up the New Aesthetic to be denigrated by the art world (which saw it as nothing of the kind) and to be pigeonholed as art by everyone else. This is unfortunate, not least because Sterling’s essay also furnished us with some helpful terms for what the New Aesthetic actually is. He called it a ‘gaudy, networked heap’, and better than that, a wunderkammer: a cabinet of curiosities — geodes, two-headed lambs, bits of coral — of the kind assembled by hungry minds in Enlightenment Europe. The wunderkammer is part of the first act of modern science — astonishment at the oddities of the natural world, which whets the appetite for inquiry. ‘A heap of eye-catching curiosities don’t constitute a compelling world-view,’ Sterling wrote. Perhaps not, but it’s a start.

Another recurring charge was that the New Aesthetic was somehow nostalgic or retro. This originates in the recurring appearance of pixelation and low-resolution graphics as a New Aesthetic motif.
A roadside sign resembling a giant Facebook ‘Like’ button, pixels the size of dinner plates, or sculptures composed of hundreds of coloured cubes, looking like something that has stumbled out of a computer game — this kind of thing was a regular feature on the Tumblr. And Bridle also had an eye for less knowing appearances of digital textures, such as in the pattern given to the tail fin of a German jet. This is what happens, he says, when a generation raised on the distinctive, blocky 8-bit graphics of the 1980s grows up and starts addressing the world around it.

True enough, but this observation contributes to an impression of the New Aesthetic as being generated by 30-something boy-men hankering for the Super Mario of their lost innocence. Sterling was scornful: ‘Sentimental fluff for modern adults’. Pixelation might rupture the interface between the digital and the physical, but it’s ‘a cute, backward-looking rupture’.

Well, only up to a point. Firstly, this criticism fails to distinguish between the people who make that sort of work and the curators of the New Aesthetic, who are looking for it. Designing a pixelated shop sign while lost in a reverie about the video game Castlevania is nostalgic. It is far less nostalgic to ask why people are decorating their world in this way. In a sense, what the New Aesthetic truly represents is the eruption of a new kind of banality. It is the arrival of digital motifs, glitches and artefacts in the realm of the commonplace and the trivial, the advent of a world where it is thoroughly normal to see a Windows crash screen in place of an advertisement on the Underground, or for trading algorithms to cause a stock market crash. So it’s not truly new — the newness is beside the point. The fascinating thing about the New Aesthetic could be that it was never new — it went from being unknown to being ubiquitous and thoroughly banal with barely a blink.
The frisson of shock or wonder one experienced at seeing an aspect of the New Aesthetic out in the wild comes because that is the only time it will be noticed; afterwards it will pass unobserved. The New Aesthetic is not about seeing something new — it is about the new things we are not seeing. It is an effort to truly observe and note emergent digital visual phenomena before they become invisible.

Is, or was? After Bridle shut down the Tumblr, it was hard to see whether the New Aesthetic would continue as a phenomenon at all. There was a steady stream of blog posts and online discussion, and there are occasional outbursts of activity, such as a ‘sprint book’ on the subject produced by V2, a Rotterdam centre for art and media technology. But without the tick-tock of Tumblr posts, a crucial element of momentum seemed lacking. And then, on 20 August, without fanfare, the Tumblr resumed. Images of the disintegrating holographic ghost of Freddie Mercury from the Olympic closing ceremony; Twitter fatwas by Islamic clerics; an automated car park closed down by a software licensing dispute; the news that 51 per cent of Americans believe stormy weather can interfere with cloud computing. The New Aesthetic was back in business.

This matters because it was a kind of warning. Digital technologies are transforming social, economic and political relations. If these transformations take place invisibly, if they become banal too fast, we are placed in danger. The infrastructure of the internet — its servers, its logistic hubs — is housed in anonymous sheds, yet we are encouraged to think of this immense deployment of capital, equipment and expertise as a ‘cloud’, an image that has obviously had an influence on that 51 per cent of Americans.
Great value is placed on ‘seamlessness’ — bigger, better LCD displays that connect better with their surroundings; connections between our bank accounts and mobile phone accounts and loyalty cards; personal devices that talk with each other and compare notes on our data; augmented reality. The virtual world is being integrated with the physical world, and this seamlessness is presented as inherently good. No harm may be intended: it’s natural for a designer to want to smooth away the edges and conceal the joins. But in making these connections invisible and silent, the status quo is hard-wired into place, consent is bypassed and alternatives are deleted. This is, if you will, the New Anaesthetic.

Instances of the New Aesthetic are often places where a glitch has exposed the underlying structure — the hardware and software. Or it is an oddity that has the unintended side effect of causing us to consider that structure. Part of a plane appearing in Google Maps makes us realise that we are looking at a mosaic of images taken by cameras far above us. We knew that already, right? Maybe we did. But a reminder may still be salutary.

This is political. The New Aesthetic was accused of being apolitical — fascinated by the oddities and wonders being thrown up by drones and surveillance cameras without thinking about the politics behind them. This is plain wrong: politics seeps from nearly every pore of the New Aesthetic. It was often hard to see, but that’s what Bridle wanted to expose. The question is one of viewpoint. ‘As soon as you get CCTV, drones, satellite views and maps and all that kind of stuff,’ Bridle said, ‘you’re setting up an inherent inequality in how things are seen, and between the position of the viewer and the viewed. There are inherent power relations in that and technology makes them invisible. When you have a man in a watchtower, you look up at him, and that’s an obvious vision of power.
When the man is in a bunker far away and you have just a little camera on a stalk … most people seem to be fine with that.’

The New Aesthetic is about seeing, then. And to see and be seen is to engage in those power relations. It might be that the New Aesthetic, as the writers Madeline Ashby and Rahel Aima have both said, is simply men waking up to the sensation of being under surveillance, something women have long experienced in the form of the ‘male gaze’. The riotous spread of new technologies of seeing — not just thousands of new cameras, but also recognition algorithms and other software and hardware that can detect a presence or identify an individual — is subjecting these power relations to constant flux, and the outcomes could be dangerous. The New Aesthetic was the outer froth of this silent social turmoil — the project was to make it legible. ‘By legibility I mean our own ability to read these systems, how much they can affect the way we see and act in the world, and the differing positions of power we have in the world based on how legible those systems are,’ Bridle said. ‘The programmers have a huge amount of agency in the world, because they can deconstruct, reverse engineer and write and construct and create these systems. People who can’t, don’t, and they have less power in the world because of it.’

Observers of the New Aesthetic from outside the worlds of media technology and programming might be tempted to pass off the phenomenon as something only of importance within that world. In doing so, they disenfranchise themselves. Yes, the New Aesthetic is the product of a small, specialised media-technology niche. But it’s a huge mistake to ignore it. The people inside are trying to get a message out.
Will Wiles
https://aeon.co//essays/what-do-we-uncover-when-we-look-through-digital-eyes